Mathematical Modeling I


1 Mathematical Modeling I Dr. Zachariah Sinkala Department of Mathematical Sciences Middle Tennessee State University Murfreesboro Tennessee 37132, USA February 14, 2017


3 Introduction When we choose to model the change of discrete entities using a formalism that is inherently continuous in nature (differential calculus), it is clear that there will be modeling errors. This means differential equation models are always inaccurate when they describe discrete systems. The relative importance of that inaccuracy depends on the size of the system.

4 Introduction A relevant measure of size is the number of particles in the system. The higher this number, the better deterministic models perform. For very small systems, deterministic approaches are often too inaccurate to be useful.

5 What law do systems follow? More often than not, real systems do not follow a deterministic law; their behavior has elements of randomness. On average, these systems may often follow a deterministic law and be accurately described by differential equations. Fundamentally, their apparent deterministic behavior is the emergent result of random effects. In very large systems, the stochastic nature of these systems may not be noticeable.

6 What law do systems follow? For small systems, these fluctuations become increasingly dominant. The actual behavior of these small systems will significantly deviate from their mean behavior. This deviation of the actual behavior of the system from the mean behavior is referred to as noise, and is the result of stochastic fluctuations.

7 Advantages of using differential equation models Small, discrete systems with stochastic fluctuations are typically not very well described by differential equation models. Nevertheless, there are significant arguments in favor of using differential equations. Theoretically, differential equations are well understood, and statements of considerable generality can often be deduced. Computationally, they are relatively cheap to analyze numerically, particularly when compared to explicit simulation methods.

8 How useful is the model? Even if a system has significant stochastic effects and only approximately follows a deterministic path, one may still prefer differential equations to explicit stochastic models. In modeling, the question is never whether or not a model is wrong, but always how useful it is.

9 How do you choose a useful model? If a system consists of millions of molecules that interact with one another, then it is still a system of discrete entities, but for all practical purposes it behaves very much like a continuous system. In these cases differential equation approaches remain very useful. The problems start when the number of particles is not very high, and the discrete nature of the particles starts to manifest itself in the phenomenology of the system.

10 Biological example: Transcription factors Transcription factors are often present in copy numbers of thousands, or maybe only hundreds, of molecules. In these cases, the deviation of the actual behavior from the deterministic behavior predicted by a differential equation can be quite substantial.

11 Gene expression A system where random fluctuations are often taken into account is gene expression. There are many layers of stochasticity in the system. One: The gene may be controlled by a stochastic signal (for example, a fluctuating transcription factor concentration). Two: Binding of any regulatory molecules to their sites is stochastic. Three: Even under constant external conditions, the actual expression of the protein is a stochastic process, in the sense that the number of proteins produced will fluctuate from one time interval to the next. Four: The breakdown or loss of proteins in the cell is also a stochastic process.

12 Gene expression Taking into account that the total molecule number may be very low (hundreds of molecules rather than millions), the fluctuations of the protein concentration may dominate the system and cannot be ignored.

13 Protein expression Protein expression is another example where stochastic effects are important.

14 Mathematical techniques As always, mathematical approaches are preferable to simulations. Unfortunately, when it comes to stochastic systems, the mathematics involved is rather difficult. The focus of this lecture is on developing stochastic modeling skills.

15 Stochastic (1) The basis of many mathematical models of stochastic systems is the so-called master equation. (2) It is a differential equation, or set of differential equations, that formulates how the probabilities associated with a particular system change over time. (3) Assume that a system can be in a number of states, $s = 1, 2, 3, \dots, n$, each with a certain probability $P_s$. (4) Concretely, $P_s$ could represent the probability that there are $s$ proteins in the cell.

16 Master equation (1) The master equation is a set of differential equations, each of which describes how the probability of being in a particular state $s$ changes over time:
$$\frac{\partial P_s}{\partial t} = f_s(P_1, \dots, P_n).$$
(2) We assume that the states are labeled from 1 to $n$.

17 Problems of solving the master equation (1) In principle, the solutions to the master equation give us the probabilities of being in the various states as a function of time, which is the information one normally wishes to have in stochastic modeling applications. (2) The problem at the moment is that we do not know the $f_s$. (3) Another problem is that even if we find the $f_s$, in most cases the master equation cannot be solved exactly. (4) In some cases good approximations can be found, but this is nearly always difficult.

18 Random walker The basic setting is an infinite number of squares lined up such that each square has exactly two neighbors. The random walker occupies, at any time, exactly one of these squares. From its current position the walker can move to either of the two adjacent squares with a rate of $k$, or remain where it is. (We assume transition rates and hence continuous time.) (1) The relevant states of this problem are the occupied squares; that is, the system is in state $s$ at time $t$ if the random walker is on the square labeled $s$ at time $t$.

19 Formulate the master equation for the random walker (1) Let $P(s, t \mid 0, 0)$ denote the probability that the random walker is on square $s$ at time $t$, given that it was at square 0 at time 0. (2) We cannot write down this probability outright. (3) We can, however, write down how this probability changes. (4) If the random walker is at square $s$ at time $t$, then we know that, with a rate of $k$, it will move to the left or to the right. (5) Altogether, it moves away with a rate of $2k$.

20 Formulate the master equation for the random walker (1) The random walker moves to one of the adjacent squares. (2) Our focal square $s$ is itself a neighbor of its two adjacent squares. (3) From each of the adjacent squares, state $s$ experiences an influx with rate $k$. (4) The master equation of the random walk is
$$\frac{\partial P(s, t \mid 0, 0)}{\partial t} = k\,P(s-1, t \mid 0, 0) + k\,P(s+1, t \mid 0, 0) - 2k\,P(s, t \mid 0, 0).$$
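This master equation is an infinite system of coupled linear ODEs, one per square, so it can be integrated numerically once the lattice is truncated. The following is a minimal sketch; the rate k, the lattice size, and the forward Euler time step are assumptions chosen for illustration. It checks that the variance of the walker's position grows like 2kt.

```python
import numpy as np

# Integrate dP_s/dt = k P_{s-1} + k P_{s+1} - 2k P_s with forward Euler
# on a truncated lattice; squares far from the origin are cut off, which
# is harmless for short times.
k, dt, steps = 1.0, 0.001, 5000
S = 101                       # squares labeled -50..50
P = np.zeros(S)
P[S // 2] = 1.0               # walker starts at square 0

for _ in range(steps):
    left = np.roll(P, 1);  left[0] = 0.0     # P_{s-1}
    right = np.roll(P, -1); right[-1] = 0.0  # P_{s+1}
    P = P + dt * k * (left + right - 2.0 * P)

print("total probability :", P.sum())        # ~1 (mass leaks only at edges)
print("variance of walker:", ((np.arange(S) - S // 2) ** 2 * P).sum())  # ~2*k*t = 10
```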

21 The probability of an event is proportional to some waiting time Example: If we sit on a molecule in a well stirred vessel, then the probability $P_c^{\Delta t}$ of colliding with another molecule within the next time interval $\Delta t$ is proportional to $\Delta t$. (1) Using $m$ as the proportionality constant, $P_c^{\Delta t} = m\,\Delta t$. (2) Assume we are interested in the number of collisions since time $t = 0$. (3) Let $P(k, t)$ denote the probability that at time $t$ there have been exactly $k$ collisions between the molecule we are sitting on and others.

22 Finding the probability P(k, t) (1) We do not know $P(k, t)$, but we can formulate how it changes. (2)
$$P(k, t + \Delta t) = P(k, t)\bigl(1 - P_c^{\Delta t}\bigr) + P(k-1, t)\,P_c^{\Delta t} = P(k, t)(1 - m\,\Delta t) + P(k-1, t)\,m\,\Delta t.$$
(3) The right-hand side is the sum of: the probability that there have already been exactly $k$ collisions by time $t$ (given by $P(k, t)$) and another does not follow within $\Delta t$ (given by $1 - P_c^{\Delta t}$); and the probability that there have been $k-1$ collisions by time $t$ (given by $P(k-1, t)$) and that a single collision follows within the next $\Delta t$.

23 Finding the probability P(k, t) (1)
$$\frac{P(k, t + \Delta t) - P(k, t)}{\Delta t} = m\,\bigl(P(k-1, t) - P(k, t)\bigr).$$
(2) Taking the limit $\Delta t \to 0$ we obtain the master equation
$$\frac{\partial P(k, t)}{\partial t} = m\,\bigl(P(k-1, t) - P(k, t)\bigr).$$

24 Solving the master equation (1) We use the probability generating function
$$G(x, t) = \sum_{k=0}^{\infty} x^k P(k, t).$$
(2)
$$\frac{\partial G(x, t)}{\partial t} = \sum_{k=0}^{\infty} x^k \frac{\partial P(k, t)}{\partial t}.$$
(3) Multiplying the master equation by $x^k$ and then taking the sum over all $k$, we obtain
$$\sum_{k=0}^{\infty} x^k \frac{\partial P(k, t)}{\partial t} = m\Bigl(\sum_{k=0}^{\infty} x^k P(k-1, t) - \sum_{k=0}^{\infty} x^k P(k, t)\Bigr).$$

25 Solving the master equation Thus,
$$\frac{\partial G(x, t)}{\partial t} = m\Bigl(\sum_{k=0}^{\infty} x^k P(k-1, t) - \sum_{k=0}^{\infty} x^k P(k, t)\Bigr) = m\Bigl(x \sum_{k=1}^{\infty} x^{k-1} P(k-1, t) - G(x, t)\Bigr) = m\bigl(x\,G(x, t) - G(x, t)\bigr).$$
Solving this differential equation in $t$ gives
$$G(x, t) = G(x, 0)\,\exp\bigl(m(x-1)t\bigr).$$

26 Solving the master equation (1) We determine the value of $G(x, 0) = \sum_{k=0}^{\infty} x^k P(k, 0)$. (2) Given the meaning of $P$, we must assume that at time $t = 0$ there have been no collisions at all yet; we reset the counter at $t = 0$. (3) Thus, $G(x, 0) = P(0, 0) = 1$. (4) Then
$$G(x, t) = \exp\bigl(m(x-1)t\bigr) = \exp(mxt)\exp(-mt).$$

27 Solving the master equation Expanding the exponential as a power series in $x$,
$$\sum_{k=0}^{\infty} x^k P(k, t) = G(x, t) = \exp(-mt) \sum_{k=0}^{\infty} \frac{(mt)^k x^k}{k!}.$$
Comparing coefficients of $x^k$ gives
$$P(k, t) = \frac{(mt)^k \exp(-mt)}{k!},$$
that is, the number of collisions up to time $t$ is Poisson distributed with mean $mt$.
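This Poisson result can be checked by direct simulation: at a constant collision rate m, the waiting times between collisions are exponential with mean 1/m. A minimal sketch, with assumed values of m and t:

```python
import numpy as np

# Check P(k,t) = (m t)^k e^{-m t} / k! by simulating the collision process:
# the mean and variance of the collision count should both equal m*t.
rng = np.random.default_rng(0)
m, t = 2.0, 3.0

def collisions_by(t, m, rng):
    """Count collisions in [0, t] by summing exponential waiting times."""
    n, clock = 0, 0.0
    while True:
        clock += rng.exponential(1.0 / m)
        if clock > t:
            return n
        n += 1

sim = np.array([collisions_by(t, m, rng) for _ in range(10_000)])
print("simulated mean:", sim.mean(), " theory (m t):", m * t)
print("simulated var :", sim.var(),  " theory (m t):", m * t)
```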

28 Chemical species We are concerned with a process that involves N different types of molecules, or chemical species. These molecules may take part in one or more of M types of chemical reactions; for example, we may know that a molecule of species A and a molecule of species B can combine to create a molecule of species C. In principle, we could start with a position and a velocity for each molecule and let the system evolve under appropriate laws of physics, keeping track of collisions between molecules and the resulting interactions.

29 Chemical species However, this molecular dynamics approach is typically too expensive, computationally, when the overall number of molecules is large or the dynamics over a long time interval are of interest. Instead, our approach is to ignore spatial information and simply keep track of the number of molecules of each type.

30 Chemical species This simplification is valid if we have a well-stirred system, so that molecules of each type are spread uniformly throughout the spatial domain. We also assume that the system is in thermal equilibrium and that the volume of the spatial domain is constant. Suppose that initially, at time t = 0, we know how many molecules of each species are present and our aim is to describe how these numbers evolve as time increases.

31 Description of a well stirred chemical system Consider a well stirred system of $N$ molecular species $S_1, S_2, S_3, \dots, S_N$ interacting through $M$ chemical reaction channels $\{R_1, R_2, \dots, R_M\}$. The state of the system is described by the vector $X(t) = (X_1(t), X_2(t), \dots, X_N(t))$. Each reaction channel $R_j$ is characterized by its propensity function $a_j(x)$ and its change vector $\nu_j = (\nu_j^1, \dots, \nu_j^N)$, where $a_j(x) \ge 0$ for physical states. Here $a_j(x)\,dt$ gives the probability that the system will experience an $R_j$ reaction in the next infinitesimal time $dt$ when the current state is $X(t) = x$, and $\nu_j^i$ is the change in the number of $S_i$ molecules caused by one $R_j$ reaction.

32 Chemical master equation The state vector X (t) can change whenever one of the M types of reaction takes place. In this setup, because we do not have spatial information, we think in terms of the probability of a reaction taking place, based on the current state of the system. It is therefore natural to talk about the probability of the system being in a particular state at time t and to describe the evolution of these probabilities. This leads to the chemical master equation (CME), a set of ordinary differential equations (ODEs), one ODE for each possible state of the system.

33 Chemical master equation At time t the kth equation gives the probability of the system being in the kth state. The important point here is that the dimension of the ODE is not given by the number of species, N, but by the number of possible states of the system. The latter quantity depends upon the total number of molecules present and the precise form of the chemical reactions, and is usually very large.

34 Example Suppose there are $N = 4$ species (molecule types) $A, B, C, D$, and $M = 2$ types of reaction: $A + B \to C + D$, and the reverse reaction $C + D \to A + B$.

35 Example If we start with 100 molecules of type A and 100 molecules of type B, with no molecules of C or D, so that
$$X(0) = \begin{pmatrix} 100 \\ 100 \\ 0 \\ 0 \end{pmatrix},$$
then the state vector $X(t)$ takes the possible values

36 Example
$$\begin{pmatrix} 100 \\ 100 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 99 \\ 99 \\ 1 \\ 1 \end{pmatrix}, \dots, \begin{pmatrix} 0 \\ 0 \\ 100 \\ 100 \end{pmatrix},$$
so there are 101 different states.

37 Example In general, the CME has such extremely high dimension that it cannot be handled analytically or computationally.

38 Michaelis-Menten system A motivating example is the chemical Michaelis-Menten system involving four species: a substrate, $S_1$, an enzyme, $S_2$, a complex, $S_3$, and a product, $S_4$.

39 Associated reaction channels The reactions may be written
$$S_1 + S_2 \xrightarrow{c_1} S_3, \tag{1}$$
$$S_3 \xrightarrow{c_2} S_1 + S_2, \tag{2}$$
$$S_3 \xrightarrow{c_3} S_4 + S_2. \tag{3}$$
The Michaelis-Menten system deals with an extremely important mechanism and hence has been very widely studied. Almost everything that happens in life boils down to enzymatic catalysis and biochemical kinetics.

40 In the system (1)-(3), a molecule of the product $S_4$ is never involved in any further reactions. If we were not interested in the amount of product, then we could ignore $S_4$ and use a state vector that records only the amounts of $S_1$, $S_2$, and $S_3$ (since any change of state depends only on these three species). Also, we could recover the state of $S_4$ from the states of $S_1$ and $S_3$ by noting that $S_1 + S_3 + S_4 = \text{constant}$. However, in this illustration we will work with the full state vector $X(t) \in \mathbb{R}^4$. If reaction (1) takes place, then $X_1(t)$ and $X_2(t)$ decrease by one and $X_3(t)$ increases by one.

41 So $X(t)$ becomes $X(t) + \nu_1$, where
$$\nu_1 = \begin{pmatrix} -1 \\ -1 \\ 1 \\ 0 \end{pmatrix}.$$
Similarly, for reactions (2) and (3) we introduce

42
$$\nu_2 = \begin{pmatrix} 1 \\ 1 \\ -1 \\ 0 \end{pmatrix} \quad\text{and}\quad \nu_3 = \begin{pmatrix} 0 \\ 1 \\ -1 \\ 1 \end{pmatrix},$$
respectively.

43 Now a type (1) reaction can arise only when an $S_1$ molecule meets an $S_2$ molecule. Intuitively, if there are very few $S_1$ or $S_2$ molecules present at some time, then a reaction of this type is less likely to take place than if there were many $S_1$ and $S_2$ molecules present. Using this type of argument, we may talk about the probability of this reaction taking place, and we will assume this probability to be proportional to the product of the numbers of $S_1$ and $S_2$ molecules present.

44 More precisely, we assume that the probability of this reaction taking place in the infinitesimal time interval $[t, t + dt)$ is given by $c_1 X_1(t) X_2(t)\,dt$. Here the product $X_1(t) X_2(t)$ relates to the likelihood of two appropriate molecules coming into contact, and the constant $c_1$ is a scale factor that, among other things, allows for the fact that not every such collision will result in a reaction.

45 For the second type of reaction, (2), only $S_3$ has an active role. Hence, we take the corresponding probability to be $c_2 X_3(t)\,dt$, proportional to the amount of $S_3$ present. Similarly, we use the probability $c_3 X_3(t)\,dt$ for the third type of reaction, (3). This setup generalizes to any collection of unimolecular (one molecule on the left-hand side) or bimolecular (two molecules on the left-hand side) reactions.

46 We generalize the example observations Let $S_1, S_2, \dots, S_N$ be chemical species that take part in $M$ different types of chemical reaction, or reaction channels. For $1 \le j \le M$, the $j$th type of reaction has an associated stoichiometric, or state-change, vector $\nu_j \in \mathbb{R}^N$, whose $i$th component is the change in the number of $S_i$ molecules caused by the $j$th reaction. So one reaction of type $j$ has the effect of changing the state vector from $X(t)$ to $X(t) + \nu_j$.

47 Also associated with the $j$th reaction is the propensity function, $a_j(X(t))$, which has the property that the probability of this reaction taking place in the infinitesimal time interval $[t, t + dt)$ is given by $a_j(X(t))\,dt$.

48 The propensity functions are constructed as follows. Second order: $S_m + S_n \xrightarrow{c_j} \text{something}$, with $m \ne n$, has $a_j(X(t)) = c_j X_m(t) X_n(t)$. Dimerization: $S_m + S_m \xrightarrow{c_j} \text{something}$ has $a_j(X(t)) = \frac{1}{2} c_j X_m(t)\,(X_m(t) - 1)$. First order: $S_m \xrightarrow{c_j} \text{something}$ has $a_j(X(t)) = c_j X_m(t)$. The conditions on the propensity functions $a_j(x)$ must be supplemented to prevent physically meaningless negative populations of molecules.
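As an illustration, these rules applied to the Michaelis-Menten reactions (1)-(3) give one second-order and two first-order propensities. A minimal sketch with assumed rate constants c1, c2, c3:

```python
import numpy as np

# Propensity functions and state-change vectors for reactions (1)-(3).
c = np.array([1e-3, 1e-4, 0.1])          # assumed values of c1, c2, c3

# rows: nu_1, nu_2, nu_3; columns: S1, S2, S3, S4
nu = np.array([[-1, -1,  1, 0],
               [ 1,  1, -1, 0],
               [ 0,  1, -1, 1]])

def propensities(x, c):
    """a_j(x): (1) is second order, (2) and (3) are first order in S3."""
    return np.array([c[0] * x[0] * x[1],  # S1 + S2 -> S3
                     c[1] * x[2],         # S3 -> S1 + S2
                     c[2] * x[2]])        # S3 -> S4 + S2

x0 = np.array([100, 100, 0, 0])
print(propensities(x0, c))               # -> [10.  0.  0.]
```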

49 We are now in a position to study the quantity $P(x, t) = P(x, t \mid x_0, t_0)$, which we define to be the probability that $X(t) = x$, given that $X(t_0) = x_0$. Given that we know the probability of being in any of the possible states at time $t$, we will work out the probability of being in state $x$ at time $t + dt$, assuming $dt$ is so small that at most one reaction can take place over $[t, t + dt)$.

50 The first step is to notice that to be in state $x$ at time $t + dt$ there are only two basic scenarios for time $t$: either the system was already in state $x$ at time $t$ and no reaction took place over $[t, t + dt)$, or for some $1 \le j \le M$ the system was in state $x - \nu_j$ at time $t$ and the $j$th reaction fired over $[t, t + dt)$, thereby bringing the system into state $x$. We need to apply a standard result from probability theory known as the law of total probability. Suppose $A$ is the event of interest and suppose that the events $H_0, H_1, H_2, \dots, H_M, H_{M+1}$ are (a) disjoint (no more than one can happen) and (b) exhaustive (one of them must happen).

51 Law of total probability Then the law of total probability says that
$$P(A) = \sum_{j=0}^{M+1} P(A \mid H_j)\,P(H_j). \tag{4}$$

52 $P(A \mid H_j)$ means the probability that $A$ happens, given that $H_j$ happens (a conditional probability). $A$ is the event that the system is in state $x$ at time $t + dt$. We can let $H_0$ be the event that the system is in state $x$ at time $t$, let $H_j$ for $1 \le j \le M$ be the event that the system is in state $x - \nu_j$ at time $t$, and let $H_{M+1}$ be the event that the system is in any other state at time $t$. Now, for $1 \le j \le M$, $P(A \mid H_j)$ is simply the probability of the $j$th reaction firing over $[t, t + dt)$.


54 From the definition of the propensity functions this means
$$P(A \mid H_j) = a_j(x - \nu_j)\,dt, \quad 1 \le j \le M. \tag{5}$$
Similarly, $P(A \mid H_0)$ is the probability of no reaction firing over $[t, t + dt)$. This must equal 1 minus the probability of any reaction firing, so
$$P(A \mid H_0) = 1 - \sum_{j=1}^{M} a_j(x)\,dt. \tag{6}$$

55 Finally,
$$P(A \mid H_{M+1}) = 0, \tag{7}$$
because $H_{M+1}$ contains all the states that are more than one reaction away from $x$. Using (5), (6), and (7) in (4), along with the definition of $P(x, t)$, we find that
$$P(x, t + dt) = \Bigl(1 - \sum_{j=1}^{M} a_j(x)\,dt\Bigr) P(x, t) + \sum_{j=1}^{M} a_j(x - \nu_j)\,dt\; P(x - \nu_j, t).$$

56 This equation can be rearranged to
$$\frac{P(x, t + dt) - P(x, t)}{dt} = \sum_{j=1}^{M} \bigl(a_j(x - \nu_j)\,P(x - \nu_j, t) - a_j(x)\,P(x, t)\bigr).$$
Letting $dt \to 0$ we see that the left-hand side of this equation becomes a time derivative, leading to the CME
$$\frac{\partial P(x, t)}{\partial t} = \sum_{j=1}^{M} \bigl(a_j(x - \nu_j)\,P(x - \nu_j, t) - a_j(x)\,P(x, t)\bigr). \tag{8}$$
This is also the forward Kolmogorov equation, or Fokker-Planck equation, for the transition probability function. We emphasize here that the state vector $x$ ranges over a (large) discrete set of values, and the CME (8) is a linear ODE system with one ODE for each possible state.

57 Matrix representation In this master equation representation there is one equation for each configuration $x$ of the state space, and if $P(t)$ denotes the vector of probabilities, then the master equation becomes
$$\dot P(t) = A\,P(t), \qquad P(0) = P_0,$$
with an appropriate matrix $A$. This is simply a linear ODE, so
$$P(t) = e^{At} P(0).$$
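For a state space small enough to enumerate, $P(t) = e^{At}P(0)$ can be evaluated directly with a matrix exponential. A minimal sketch for the decay reaction S → 0, an assumed example with an assumed rate c and initial count n0, whose exact solution is a binomial distribution:

```python
import numpy as np
from scipy.linalg import expm

# CME for S -> 0 with propensity a(x) = c*x, truncated to states x = 0..n0.
# Column x of A holds the outflow/inflow rates for state x.
c, n0, t = 0.5, 10, 1.0
A = np.zeros((n0 + 1, n0 + 1))
for x in range(n0 + 1):
    A[x, x] -= c * x          # probability flows out of state x at rate c*x
    if x >= 1:
        A[x - 1, x] += c * x  # ...and into state x-1

P0 = np.zeros(n0 + 1); P0[n0] = 1.0   # start with exactly n0 molecules
P = expm(A * t) @ P0                   # P(t) = exp(A t) P(0)
print(P)                               # binomial(n0, exp(-c*t)) distribution
print("mean:", (np.arange(n0 + 1) * P).sum(), " theory:", n0 * np.exp(-c * t))
```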

58 Infinite linear system The size of this system grows very quickly with $N$. The number of states depends on the upper bounds on each molecular population (very often such upper bounds are impossible to establish, in which case this system of ODEs is infinite).

59 Computing the first and second moments of the master equation Read the following paper: Chemical master equation and Langevin regimes for a gene transcription model.

60 Gillespie's algorithm The stochastic simulation algorithm (SSA), also called Gillespie's algorithm, gives one approach to computing indirectly with the CME. Here, rather than solving the full set of ODEs to get a probability distribution over all possible states for each time $t$, we compute samples from these distributions; that is, we compute realizations of the state vector $(t, X(t))$ in such a way that the chance of a particular realization being computed reflects the corresponding probability given by the CME.

61 Although straightforward to implement, the SSA can be impractically slow when reactions occur frequently. We can try to speed up the SSA by lumping together reactions and only updating the state vector after many reactions have fired. This tau leaping approximation introduces errors that will be small as long as the state vector updates are relatively small.

62 Typically, the CME is too high-dimensional to deal with computationally. The SSA gets around this issue by computing single realizations of the state vector rather than an entire probability distribution.

63 For SSA simulation, define $p(\tau, j \mid x, t)$ to be the probability density, given $X(t) = x$, that the next reaction in the system will occur at time $t + \tau$ and will be an $R_j$ reaction. We will also need $P_0(\tau \mid x, t)$: given $X(t) = x$, the probability that no reaction takes place in the time interval $[t, t + \tau)$.

64 Consider the time interval $[t, t + \tau + d\tau)$. We assume that what happens over $[t, t + \tau)$ is independent of what happens over $[t + \tau, t + \tau + d\tau)$, so that "and" translates into a product of probabilities.

65 In words, we have
Prob. no reaction over $[t, t + \tau + d\tau)$
= Prob. no reaction over $[t, t + \tau)$ and no reaction over $[t + \tau, t + \tau + d\tau)$
= Prob. no reaction over $[t, t + \tau)$ $\times$ Prob. no reaction over $[t + \tau, t + \tau + d\tau)$
= Prob. no reaction over $[t, t + \tau)$ $\times$ (1 $-$ sum of prob. of each reaction over $[t + \tau, t + \tau + d\tau)$).

66 Propensity function Using the definition of the propensity function:
$$P_0(\tau + d\tau \mid x, t) = P_0(\tau \mid x, t)\Bigl(1 - \sum_{k=1}^{M} a_k(x)\,d\tau\Bigr),$$
so
$$\frac{P_0(\tau + d\tau \mid x, t) - P_0(\tau \mid x, t)}{d\tau} = -a_{\mathrm{sum}}(x)\,P_0(\tau \mid x, t),$$
where
$$a_{\mathrm{sum}}(x) = \sum_{k=1}^{M} a_k(x).$$

67 Linear scalar ODE Passing to the limit as $d\tau \to 0$ leads to a linear scalar ODE, which, by definition, has initial condition $P_0(0 \mid x, t) = 1$. Solving the ODE gives
$$P_0(\tau \mid x, t) = e^{-a_{\mathrm{sum}}(x)\tau}. \tag{9}$$

68 The key quantity for the SSA is $p(\tau, j \mid x, t)$, which is defined by: given $X(t) = x$, $p(\tau, j \mid x, t)\,d\tau$ is the probability that the next reaction (a) will be the $j$th reaction and (b) will occur in the time interval $[t + \tau, t + \tau + d\tau)$. In words, with "and" becoming a product again, we have
Prob. (a) and (b) = Prob. no reaction took place over $[t, t + \tau)$ $\times$ Prob. $j$th reaction took place over $[t + \tau, t + \tau + d\tau)$.
(Here, we assume that $d\tau$ is so small that at most one reaction can take place over that length of time.)

69 Using the definitions of $P_0$ and $a_j$, this translates to
$$p(\tau, j \mid x, t)\,d\tau = P_0(\tau \mid x, t)\,a_j(x)\,d\tau,$$
so, from (9),
$$p(\tau, j \mid x, t) = a_j(x)\,e^{-a_{\mathrm{sum}}(x)\tau}.$$
This is conveniently rewritten as
$$p(\tau, j \mid x, t) = \frac{a_j(x)}{a_{\mathrm{sum}}(x)} \cdot a_{\mathrm{sum}}(x)\,e^{-a_{\mathrm{sum}}(x)\tau}. \tag{10}$$

70 Formally, $p(\tau, j \mid x, t)$ is the joint density function for the two random variables "next reaction index" and "time until next reaction", and (10) shows that it may be written as the product of two individual density functions.

71 Next reaction index The factor $a_j(x)/a_{\mathrm{sum}}(x)$ corresponds to a discrete random variable: pick one of the reactions with the rule that the chance of picking the $j$th reaction is proportional to $a_j(x)$.

72 Time until next reaction The factor $a_{\mathrm{sum}}(x)\,e^{-a_{\mathrm{sum}}(x)\tau}$ is the density function for a continuous random variable with an exponential distribution. Such exponential random variables arise universally in descriptions of the time elapsing between unpredictable events.

73 Computational perspective From a computational perspective this is a very important observation. It allows us to simulate independently a reaction index and a reaction time. Each can be computed via a uniform (0, 1) sample.

74 Pseudocode The resulting algorithm can be summarized very simply in the following pseudocode, where an initial state $X(0)$ is given.
1. Evaluate $\{a_k(X(t))\}_{k=1}^{M}$ and $a_{\mathrm{sum}}(X(t)) = \sum_{k=1}^{M} a_k(X(t))$.
2. Draw two independent uniform $(0, 1)$ random numbers, $\xi_1$ and $\xi_2$.
3. Set $j$ to be the smallest integer satisfying $\sum_{k=1}^{j} a_k(X(t)) > \xi_1\, a_{\mathrm{sum}}(X(t))$.
4. Set $\tau = \ln(1/\xi_2)/a_{\mathrm{sum}}(X(t))$.
5. Set $X(t + \tau) = X(t) + \nu_j$ and update $t$ to $t + \tau$.
6. Return to step 1.
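A minimal runnable sketch of this pseudocode for the Michaelis-Menten system; the rate constants, initial state, and stopping time are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
c = np.array([1e-3, 1e-4, 0.1])
nu = np.array([[-1, -1, 1, 0], [1, 1, -1, 0], [0, 1, -1, 1]])

def propensities(x):
    return np.array([c[0] * x[0] * x[1], c[1] * x[2], c[2] * x[2]])

x, t, T = np.array([100, 100, 0, 0]), 0.0, 50.0
while t < T:
    a = propensities(x)                  # step 1: propensities
    a_sum = a.sum()
    if a_sum == 0.0:                     # no reaction can fire: stop
        break
    xi1, xi2 = rng.random(2)             # step 2: two uniform samples
    j = np.searchsorted(np.cumsum(a), xi1 * a_sum)  # step 3: reaction index
    tau = np.log(1.0 / xi2) / a_sum                 # step 4: waiting time
    x, t = x + nu[j], t + tau                       # step 5: update

print("final time:", t, " final state:", x)
```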

75 In practice In practice, of course, step 6 would also include a termination condition; for example, stop when t has passed a specified value, when some species exceeds a specified upper or lower bound, or when a specified number of iterations have been taken.

76 Matlab code: ssaplot.m Read and implement the Matlab code ssaplot.m. It can be found at the website http://personal.strath.ac.uk/d.j.higham/algfiles.html. Write C++ or Fortran code to implement the above.

77 Modeling gene expression Read the paper by Cai and Wang: Stochastic modeling and simulation of gene networks. Implement the SSA algorithm in Matlab and C++ (or Fortran).

78 Problem with SSA The SSA is exact, in the sense that the statistics from the CME are reproduced precisely. However, this exactness comes at a high computational price. At each iteration, a reaction time and reaction index must be drawn, and then the state vector and propensity functions must be updated. If there are many molecules in the system and/or some fast reactions, so that $a_{\mathrm{sum}}(X(t))$ is large and hence the time $\tau$ to the next reaction is typically small, then this can result in a huge amount of random number generation and bookkeeping.

79 Problems with SSA It is therefore attractive to consider a fixed time interval of length $\tau$ and to fire simultaneously all reactions that would have taken place. More precisely, given $X(t)$, we could freeze the propensity functions $a_j(X(t))$ and, using these values, fire an appropriate number of reactions of each type. This gives the tau-leaping method.

80 Tau method
$$X^{(et)}(t + \tau) = X(t) + \sum_{j=1}^{M} \nu_j\,K_j^{(et)}(x, \tau), \tag{11}$$
where the random variables $\{K_j(x, \tau) = P_j(a_j(X(t)), \tau)\}_{j=1}^{M}$ must now be determined. In order for this approximation to the SSA to be valid, we require that $\tau$ is sufficiently small that relatively few reactions take place, in the sense that the propensity functions $a_j(X(t))$ would not have changed very much if we had taken the effort to update after each reaction.

81 Poisson random variable Now if $a_j(X(t))$ were to stay exactly constant over $[t, t + \tau)$, then the number of type $j$ reactions to fire would be given by a simple counting process: we know that the probability of the $j$th reaction firing over a small time interval of length $d\tau$ is given by $a_j(X(t))\,d\tau$, and we need to count how many of these events arise over $[t, t + \tau)$. It follows that the number of reactions, $P_j(a_j(X(t)), \tau)$, will have what is known as a Poisson distribution with parameter $a_j(X(t))\,\tau$. (Generally, a Poisson random variable $P$ with parameter $\lambda > 0$ takes possible values $\{0, 1, 2, 3, \dots\}$ such that $P(P = i) = e^{-\lambda}\frac{\lambda^i}{i!}$.)

82 A path from the tau-leaping method with leap time $\tau$ can be computed as follows.
1. Draw samples $\{p_j\}_{j=1}^{M}$ from the distributions of the independent Poisson random variables $\{P_j(a_j(X(t)), \tau)\}_{j=1}^{M}$.
2. Set $X(t + \tau) = X(t) + \sum_{j=1}^{M} \nu_j\,p_j$ and update $t$ to $t + \tau$.
3. Return to step 1.
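A minimal sketch of this tau-leaping loop for the Michaelis-Menten system, with assumed rate constants and an assumed fixed leap time. Note that this naive version can, with small probability, drive populations negative, which is exactly the weakness discussed next:

```python
import numpy as np

rng = np.random.default_rng(2)
c = np.array([1e-3, 1e-4, 0.1])
nu = np.array([[-1, -1, 1, 0], [1, 1, -1, 0], [0, 1, -1, 1]])

def propensities(x):
    return np.array([c[0] * x[0] * x[1], c[1] * x[2], c[2] * x[2]])

x, t, T, tau = np.array([100, 100, 0, 0]), 0.0, 50.0, 0.05
while t < T:
    a = propensities(x)
    k = rng.poisson(a * tau)        # number of firings of each channel
    x, t = x + nu.T @ k, t + tau    # fire all reactions simultaneously

print("final state:", x)
```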

83 Weaknesses of the tau method For many situations tau-leaping significantly speeds up the Monte Carlo simulation, but: Poisson random variables are unbounded; propensity functions may change dramatically over small time intervals; and a leap may result in negative populations.

84 Limitations of tau-leaping Note that these concerns are most important when the populations of some species are very small, which is precisely the circumstance where stochastic models are most important.

85 The explicit and implicit tau methods For details and Matlab code go to http://theses.gla.ac.uk/844/01/2009marbamayamsc.pdf. The explicit tau-leaping method has been described above. What conditions should the step size $\tau$ satisfy?

86 Leap condition
$$\tau = \min_{j \in [1, M]} \left\{ \frac{\epsilon\, a_0(x)}{|\mu_j(x)|},\ \frac{\epsilon^2 a_0^2(x)}{\sigma_j^2(x)} \right\},$$
where
$$f_{jk}(x) = \sum_{i=1}^{N} \frac{\partial a_j(x)}{\partial x_i}\,\nu_{ik}, \quad j, k = 1, \dots, M,$$
$$\mu_j(x) = \sum_{k=1}^{M} f_{jk}(x)\,a_k(x), \quad j = 1, \dots, M,$$
$$\sigma_j^2(x) = \sum_{k=1}^{M} f_{jk}^2(x)\,a_k(x), \quad j = 1, \dots, M.$$

87 Leap condition attempts This procedure attempts to ensure that the change in each propensity function during a leap of size $\tau$ will be no larger than $\epsilon\,a_0(x)$, where $\epsilon$ is a prescribed error control parameter ($0 < \epsilon \ll 1$). If the determined step size $\tau$ is only a few multiples of $1/a_0(x)$ (the mean time to the next reaction), leaping gains little, and one abandons the leap in favor of exact SSA steps.
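A minimal sketch of this step-size selection. The partial-derivative sums $f_{jk}$ are approximated here by the finite differences $a_j(x + \nu_k) - a_j(x)$, and the value of $\epsilon$ is an assumption:

```python
import numpy as np

def select_tau(x, propensities, nu, eps=0.03):
    a = propensities(x)
    a0 = a.sum()
    M = len(a)
    # f[j,k] ~ sum_i (da_j/dx_i) nu_ik, via a first-order finite difference
    f = np.array([[propensities(x + nu[k])[j] - a[j] for k in range(M)]
                  for j in range(M)])
    mu = f @ a                               # mu_j     = sum_k f_jk a_k
    sigma2 = (f ** 2) @ a                    # sigma_j^2 = sum_k f_jk^2 a_k
    with np.errstate(divide="ignore"):
        bounds = np.minimum(eps * a0 / np.abs(mu), (eps * a0) ** 2 / sigma2)
    finite = bounds[np.isfinite(bounds)]
    return finite.min() if finite.size else np.inf

# Example with the Michaelis-Menten system (assumed rate constants):
c = np.array([1e-3, 1e-4, 0.1])
nu = np.array([[-1, -1, 1, 0], [1, 1, -1, 0], [0, 1, -1, 1]])
a_mm = lambda x: np.array([c[0] * x[0] * x[1], c[1] * x[2], c[2] * x[2]])
print(select_tau(np.array([100, 100, 0, 0]), a_mm, nu))
```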

88 Poisson midpoint tau-leap method A predicted state at the midpoint $t + \tau/2$ is defined by
$$\bar x = x + \frac{\tau}{2} \sum_{j=1}^{M} a_j(x)\,\nu_j.$$
The system is then updated by
$$K_j = P\bigl(a_j(\bar x)\,\tau\bigr), \qquad x(t + \tau) = x(t) + \sum_{j=1}^{M} \nu_j\,K_j.$$
What is the condition on $\tau$? Look at this paper: D. T. Gillespie, J. Chem. Phys. 115, 1716 (2001).

89 Implicit tau (unrounded version) method Given that $x(t) = x$ is the current state, the state at time $t + \tau$ ($\tau > 0$) is defined to be
$$x(t + \tau) = x + \sum_{j=1}^{M} \nu_j\,a_j(x(t + \tau))\,\tau + \sum_{j=1}^{M} \nu_j\bigl(P_j(a_j(x), \tau) - a_j(x)\,\tau\bigr). \tag{12}$$
The unrounded implicit tau method has the disadvantage that it leads to state values that are not integers.

90 Rounded implicit tau method Suppose that at time $t$ we have the state $X^{(itr)}(t) = x$ (the superscript itr stands for rounded implicit tau). First compute the intermediate state value $X$ according to equation (12):
$$X = x + \sum_{j=1}^{M} \nu_j\,a_j(X)\,\tau + \sum_{j=1}^{M} \nu_j\bigl(P_j(a_j(x), \tau) - a_j(x)\,\tau\bigr). \tag{13}$$

91 Rounded implicit tau method Then approximate the actual number of firings $K_j(x, \tau)$ of reaction channel $R_j$ in the time interval $(t, t + \tau]$ by the integer-valued random variable $K_j^{(itr)}$ defined by
$$K_j^{(itr)} = \bigl[\,a_j(X)\,\tau + P_j(a_j(x), \tau) - a_j(x)\,\tau\,\bigr], \tag{14}$$
where $[z]$ denotes the nearest nonnegative integer to $z$. Then
$$X^{(itr)}(t + \tau) = x + \sum_{j=1}^{M} \nu_j\,K_j^{(itr)}(x, \tau). \tag{15}$$
If $X^{(itr)}(t) = x$ is an integer vector, then so is $X^{(itr)}(t + \tau)$.
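A minimal sketch of one rounded implicit-tau step, eqs. (13)-(15), for the Michaelis-Menten system. The rate constants and leap time are assumptions, a generic nonlinear solver stands in for the Newton iteration used in practice, and the propensities are clamped at zero in the spirit of the extension to real states discussed below:

```python
import numpy as np
from scipy.optimize import fsolve

rng = np.random.default_rng(3)
c = np.array([1e-3, 1e-4, 0.1])
nu = np.array([[-1, -1, 1, 0], [1, 1, -1, 0], [0, 1, -1, 1]])

def propensities(x):
    a = np.array([c[0] * x[0] * x[1], c[1] * x[2], c[2] * x[2]])
    return np.maximum(a, 0.0)    # extension: negative values are set to zero

def rounded_implicit_tau_step(x, tau):
    a = propensities(x)
    p = rng.poisson(a * tau)                      # Poisson part, frozen at x
    noise = nu.T @ (p - a * tau)                  # sum_j nu_j (P_j - a_j tau)
    F = lambda X: X - x - tau * (nu.T @ propensities(X)) - noise
    X = fsolve(F, x.astype(float))                # intermediate state, eq. (13)
    K = np.maximum(np.rint(tau * propensities(X) + p - a * tau), 0)  # eq. (14)
    return x + nu.T @ K.astype(int)               # eq. (15)

print(rounded_implicit_tau_step(np.array([100, 100, 0, 0]), 0.1))
```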

92 Rounded implicit tau method Throughout the remainder of these notes, implicit tau shall mean the unrounded implicit tau method, unless specified otherwise.

93 Extension of the propensity functions to real states For the purpose of analysis, as well as for practical computation of the implicit tau method, it is important to extend the definition of the propensity functions to nonnegative real states. The implicit tau method can produce states that are in $\mathbb{R}_+^N$ but not necessarily integers. If one or more propensity functions become negative, then the process becomes ill defined. This can be fixed by setting to zero any propensity function $a_j(x)$ that evaluates to a negative value.

94 Example Consider the example of a single reaction, single species case $S_1 + S_1 \to 0$. The propensity function $a(x) = \frac{c_1}{2}\,x(x - 1)$ will be negative if $0 < x < 1$: even when the state is positive, the propensity can be negative. This does not happen with integer states: when $x$ is an integer, $a(x) = 0$ for both $x = 0$ and $x = 1$, and $a(x) > 0$ for $x \ge 2$.

95 Example Thus it makes sense to set $a(x) = 0$ for $0 < x < 1$. When $1 < x < 2$, $a(x) > 0$; however, the occurrence of a reaction would lead to the unrealistic negative state $x - 2$. This may be avoided by defining $a(x) = 0$ for $1 < x < 2$ as well. But this leads to a discontinuous propensity function $a(x)$.

96 Analysis point of view From the point of view of the analysis, it is far more convenient to have continuity of $a(x)$, to ensure consistency results.

97 Extension of the propensity functions to nonnegative real states Definition Given a polynomial propensity function $a_j : \mathbb{Z}_+^N \to \mathbb{R}$, its extension $a_j : \mathbb{R}_+^N \to \mathbb{R}$ to any nonnegative real vector $x$ is defined as follows. If the value of $a_j(x)$ according to the natural extension is nonnegative, then $a_j(x)$ is given by the natural extension; otherwise $a_j(x) = 0$.

98 Lipschitz extension lemma Lemma If the original propensity functions $a_j : \mathbb{Z}_+^N \to \mathbb{R}$ are polynomials, then their extensions $a_j : \mathbb{R}_+^N \to \mathbb{R}$ as defined above are Lipschitz continuous on any bounded subdomain of $\mathbb{R}_+^N$.

99 Bounding of the Poisson random variables All the tau-leaping methods are based on generating Poisson random numbers given the mean (and variance). The Poisson distribution with nonzero mean assigns nonzero probability to arbitrarily large numbers. This can cause practical problems as well as theoretical problems.

100 Bounding of the Poisson random variables One problem is that it may produce negative states, which are nonphysical. Then the propensities may become negative for negative states. This can lead to a process which is mathematically and computationally ill defined, since probabilities, and hence propensities, cannot be negative.

101 Example Consider the isomerization reaction $S_1 \to 0$. The propensity function is $a(x) = cx$, where $c > 0$ is a constant. Starting at an initial state $x_0$, there is a nonzero probability that the leap fires more than $x_0$ reactions, leaving a negative state, so the first step of explicit tau becomes ill defined.

102 Bounding the $K_j$ to avoid negative states Suppose the state before the tau step is $x$ (nonnegative), the state $x_m$ reached by the leap has negative components, and the number of reactions that occurred according to the leap is $K_j$ for $j = 1, \dots, M$. Then:
while $x_m$ has negative components
  for $l = 1$ to $M$
    $K_l \leftarrow K_l - 1$;
    $x_m = x + \sum_{j=1}^{M} \nu_j K_j$;
    if $x_m$ is nonnegative, then break;
  end for
end while
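A minimal sketch of this bounding procedure; skipping channels whose count is already zero is our own safeguard, not part of the pseudocode above:

```python
import numpy as np

def bound_update(x, nu, K):
    """Decrement reaction counts until x + sum_j nu_j K_j is nonnegative."""
    K = K.copy()
    x_m = x + nu.T @ K
    while (x_m < 0).any():
        for l in range(len(K)):
            if K[l] > 0:                 # only undo reactions that fired
                K[l] -= 1
                x_m = x + nu.T @ K
                if (x_m >= 0).all():
                    break
    # terminates: once all K reach zero, x_m = x >= 0
    return x_m, K
```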

103 New update The resulting state $x_m$ is taken to be the new update. Also note that this procedure will terminate (it cannot result in an infinite loop), since once all the $K_l$ reach zero we recover $x_m = x \ge 0$.

104 The second problem The second problem is that even if the update $x_m$ remains nonnegative, the tau methods using Poisson random numbers will generate arbitrarily large numbers with nonzero probability. This unboundedness leads to difficulties with the implicit tau method, because the range of $\tau$ values for which the implicit equation is well defined could possibly become arbitrarily small. To avoid this problem a truncation procedure is used.

105 Bounding the $K_j$ to avoid an arbitrarily large number of reactions We choose a predefined value $K_{\max}$ such that whenever $K_j = P_j(a_j, \tau)$ exceeds $K_{\max}$ we set $K_j = K_{\max}$.

106 Local error analysis Let the multi-index $k = (k_1, \dots, k_l)$, where $k_j \in \{1, \dots, M\}$, denote a sequence of reaction events $R_{k_j}$ happening in that order, and let $|k| = l$ be the number of reaction events. Let $p(k; x, \tau)$ denote the probability that the sequence of reactions that occurred in the interval $(t, t + \tau]$ is precisely $k$, conditioned on being at state $x$ at time $t$. Then it follows that
$$p((); x, \tau) = \exp(-a_0(x)\,\tau),$$
where $()$ stands for no reactions happening (the empty sequence) and $a_0(x) = \sum_{j=1}^{M} a_j(x)$.

107 Recursive representation $p((k_1, \dots, k_l); x, \tau)$ can be written recursively in terms of $p((k_1, \dots, k_{l-1}); x, \tau)$ by
$$p((k_1, \dots, k_l); x, \tau) = \int_0^{\tau} p((k_1, \dots, k_{l-1}); x, s)\; a_{k_l}(x + \nu_{k_1} + \cdots + \nu_{k_{l-1}})\, \exp\bigl(-a_0(x + \nu_{k_1} + \cdots + \nu_{k_l})(\tau - s)\bigr)\, ds.$$
By induction, for each $x$ and $k$, $p(k; x, \tau)$ is an analytic function of $\tau$ for all $\tau \in \mathbb{R}$, and
$$p(k; x, \tau) = O(\tau^{|k|}) \quad \text{as } \tau \to 0. \tag{16}$$

108 Taylor expansion We compute $p(k; x, \tau)$ for terms up to $|k| = 2$ (i.e., terms up to $O(\tau^2)$) for general $M$ and $N$. A numerical scheme must have a Taylor expansion in $\tau$ for the transition probabilities that matches that of the true process described above for terms up to $O(\tau)$ for first-order accuracy.

109 By the recursion above, we get
$$p((j); x, \tau) = \int_0^{\tau} \exp(-a_0(x)s)\,a_j(x)\,\exp\bigl(-a_0(x + \nu_j)(\tau - s)\bigr)\,ds, \quad j = 1, \dots, M.$$
Applying the recursion again, we obtain
$$p((j_1, j_2); x, \tau) = \int_0^{\tau} p((j_1); x, s)\,a_{j_2}(x + \nu_{j_1})\,\exp\bigl(-a_0(x + \nu_{j_1} + \nu_{j_2})(\tau - s)\bigr)\,ds, \tag{17}$$
where $j, j_1, j_2 = 1, \dots, M$.

110 By Taylor expansion of $p((j); x, \tau)$ and $p((j_1, j_2); x, \tau)$ we obtain
$$p((j); x, \tau) = a_j(x)\,\tau - \frac{1}{2}\tau^2 \sum_{j_1=1}^{M} a_j(x)\{a_{j_1}(x + \nu_j) + a_{j_1}(x)\} + O(\tau^3),$$
$$p((j_1, j_2); x, \tau) = \frac{1}{2}\tau^2\,a_{j_1}(x)\,a_{j_2}(x + \nu_{j_1}) + O(\tau^3), \tag{19}$$
where $j, j_1, j_2 = 1, \dots, M$.

111 We consider the moments of the increment $X(t + \tau) - X(t)$. The conditional first moment is given by
$$E\bigl(X(t + \tau) - X(t) \mid X(t) = x\bigr) = \sum_{l=1}^{\infty} \sum_{|k| = l} p(k; x, \tau) \Bigl(\sum_{\alpha=1}^{l} \nu_{k_\alpha}\Bigr). \tag{20}$$
The conditional $r$th moment is given by
$$E\bigl((X(t + \tau) - X(t))^r \mid X(t) = x\bigr) = \sum_{l=1}^{\infty} \sum_{|k| = l} p(k; x, \tau) \Bigl(\sum_{\alpha=1}^{l} \nu_{k_\alpha}\Bigr)^r. \tag{21}$$

112 Notation For a vector $y \in \mathbb{R}^N$, $y^r$ denotes the $r$-fold tensor product, which is a tensor of rank $r$ in $N$ dimensions. The second moment may be regarded as an $N \times N$ matrix. The above moments exist only if the corresponding series converge.

113 Since for any fixed $x$ and $k$, $p(k; x, \tau)$ is analytic for all $\tau$, and if the infinite series converge uniformly for $\tau \in [0, \delta(x)]$ for any fixed $x$, then the components of $E\bigl((X(t + \tau) - X(t))^r \mid X(t) = x\bigr)$ will be analytic functions of $\tau$ for $\tau \in [0, \delta(x)]$ for some $\delta(x) > 0$. Assuming that the moments exist, we will compute them up to terms including $O(\tau^2)$.

114 Using the equations immediately above, we need only sum the terms with $l = 1$ and $l = 2$. Thus we obtain
$$E\bigl(X(t + \tau) - X(t) \mid X(t) = x\bigr) = \sum_{j=1}^{M} \nu_j\,p((j); x, \tau) + \sum_{j_1=1}^{M} \sum_{j_2=1}^{M} (\nu_{j_1} + \nu_{j_2})\,p((j_1, j_2); x, \tau) + O(\tau^3),$$
$$E\bigl((X(t + \tau) - X(t))^r \mid X(t) = x\bigr) = \sum_{j=1}^{M} \nu_j^r\,p((j); x, \tau) + \sum_{j_1=1}^{M} \sum_{j_2=1}^{M} (\nu_{j_1} + \nu_{j_2})^r\,p((j_1, j_2); x, \tau) + O(\tau^3).$$

115 Substituting (19) and simplifying, we obtain
$$E\bigl(X(t + \tau) - X(t) \mid X(t) = x\bigr) = \tau \sum_{j=1}^{M} \nu_j\,a_j(x) + \frac{1}{2}\tau^2 \sum_{j_1=1}^{M} \sum_{j_2=1}^{M} \nu_{j_1}\,a_{j_2}(x)\{a_{j_1}(x + \nu_{j_2}) - a_{j_1}(x)\} + O(\tau^3) \tag{22}$$
for the mean.

116 For the general $r$th moment,
$$E\bigl((X(t + \tau) - X(t))^r \mid X(t) = x\bigr) = \tau \sum_{j=1}^{M} \nu_j^r\,a_j(x) - \frac{1}{2}\tau^2 \sum_{j_1=1}^{M} \sum_{j_2=1}^{M} \nu_{j_1}^r\,a_{j_1}(x)\,a_{j_2}(x) - \frac{1}{2}\tau^2 \sum_{j_1=1}^{M} \sum_{j_2=1}^{M} \nu_{j_1}^r\,a_{j_1}(x)\,a_{j_2}(x + \nu_{j_1}) + \frac{1}{2}\tau^2 \sum_{j_1=1}^{M} \sum_{j_2=1}^{M} (\nu_{j_1} + \nu_{j_2})^r\,a_{j_1}(x)\,a_{j_2}(x + \nu_{j_1}) + O(\tau^3). \tag{23}$$

117 Conditional covariance The conditional covariance up to $O(\tau^2)$ is also computed using (19), together with the fact that
$$\operatorname{Cov}(X(t + \tau) \mid X(t) = x) = \operatorname{Cov}(X(t + \tau) - X(t) \mid X(t) = x) = E\bigl((X(t + \tau) - X(t))^2 \mid X(t) = x\bigr) - \bigl(E(X(t + \tau) - X(t) \mid X(t) = x)\bigr)^2.$$

118
$$\operatorname{Cov}(X(t + \tau) \mid X(t) = x) = \tau \sum_{j=1}^{M} \nu_j^2\,a_j(x) + \frac{1}{2}\tau^2 \sum_{j_1=1}^{M} \sum_{j_2=1}^{M} \nu_{j_1}^2\,a_{j_2}(x)\{a_{j_1}(x + \nu_{j_2}) - a_{j_1}(x)\} + \frac{1}{2}\tau^2 \sum_{j_1=1}^{M} \sum_{j_2=1}^{M} \nu_{j_1}\nu_{j_2}\,a_{j_1}(x)\{a_{j_2}(x + \nu_{j_1}) - a_{j_2}(x)\} + \frac{1}{2}\tau^2 \sum_{j_1=1}^{M} \sum_{j_2=1}^{M} \nu_{j_1}\nu_{j_2}\,a_{j_2}(x)\{a_{j_1}(x + \nu_{j_2}) - a_{j_1}(x)\} + O(\tau^3). \tag{24}$$

119 Explicit tau method For the explicit tau method, the $r$th moment of the increment conditioned on $X^{(et)}(t) = x$ is given by
$$E\Bigl(\bigl(X^{(et)}(t + \tau) - X^{(et)}(t)\bigr)^r \,\Big|\, X^{(et)}(t) = x\Bigr) = E\Bigl(\Bigl(\sum_{j=1}^{M} \nu_j\,P_j(a_j(x), \tau)\Bigr)^r\Bigr). \tag{25}$$

120 Summary For convenience we summarize a well known fact about the moments of a Poisson random variable with mean λ.

121 Lemma 3.1 Lemma Suppose $P$ is a Poisson random variable with mean and variance $\lambda$. Then for any integer $r \ge 2$,
$$E(P^r) = \lambda + O(\lambda^2), \quad \lambda \to 0. \tag{26}$$

122 r-fold tensor product The term inside the expectation operator on the right-hand side of (25) may be expanded as a sum of $M^r$ terms, each of which is an $r$-fold tensor product. It must be noted that tensor products do not commute in general. Since the $P_j$ are independent Poissons, of these $M^r$ terms those which involve two or more different values of $j$ (two or more different reaction channels) will have expectations that are $O(\tau^2)$ or higher.

123 One reaction channel Let us consider the terms that involve only one reaction channel. There are $M$ of these, and they are of the form
$$\nu_j^r\,P_j^r(a_j(x), \tau), \quad j = 1, \dots, M.$$
Taking expectations and retaining only terms up to $O(\tau)$, we get
$$E\bigl(\nu_j^r\,P_j^r(a_j(x), \tau)\bigr) = \tau\,\nu_j^r\,a_j(x) + O(\tau^2),$$

124 so that
$$E\Bigl(\bigl(X^{(et)}(t + \tau) - X^{(et)}(t)\bigr)^r \,\Big|\, X^{(et)}(t) = x\Bigr) = \tau \sum_{j=1}^{M} \nu_j^r\,a_j(x) + O(\tau^2). \tag{27}$$

125 Weak consistency of the explicit tau method: Theorem 3.3 Theorem The explicit tau method is weakly consistent to first order in the following sense. Consider a given initial state $x \in \mathbb{Z}_+^N$ for the explicit tau method. Then for each $r \ge 1$ there exist $C_r > 0$ and $\delta_r > 0$ (depending on $x$) such that
$$\Bigl\| E\Bigl(\bigl(X^{(et)}(t + \tau) - X^{(et)}(t)\bigr)^r \,\Big|\, X^{(et)}(t) = x\Bigr) - E\bigl((X(t + \tau) - X(t))^r \mid X(t) = x\bigr) \Bigr\| < C_r\,\tau^2 \tag{28}$$
for any $\tau \in [0, \delta_r]$.

126 Corollary 3.12 Corollary For any multivariate polynomial function $g : \mathbb{R}^N \to \mathbb{R}$ and initial state $x \in \mathbb{Z}_+^N$ there exist $C > 0$ and $\delta > 0$ such that
$$\bigl| E\bigl(g(X^{(et)}(t + \tau)) - g(X(t + \tau)) \mid X^{(et)}(t) = X(t) = x\bigr) \bigr| < C\tau^2 \tag{29}$$
for any $\tau \in [0, \delta]$.

127 Covariance matrix We now find expressions for the coefficients of the leading error terms for the mean and the covariance matrix. For the mean we get
$$E\bigl(X^{(et)}(t + \tau) - X^{(et)}(t) \mid X^{(et)}(t) = x\bigr) = E\Bigl(\sum_{j=1}^{M} \nu_j\,P_j(a_j(x), \tau)\Bigr), \tag{30}$$

128 so
$$E\bigl(X^{(et)}(t + \tau) - X^{(et)}(t) \mid X^{(et)}(t) = x\bigr) = \tau \sum_{j=1}^{M} \nu_j\,a_j(x). \tag{31}$$

129 Local error formula for the mean of explicit tau Equations (31) and (22) together provide the local error in the mean for the explicit tau method:
$$E\bigl(X^{(et)}(t + \tau) - X(t + \tau) \mid X^{(et)}(t) = X(t) = x\bigr) = -\frac{1}{2}\tau^2 \sum_{j_1=1}^{M} \sum_{j_2=1}^{M} \nu_{j_1}\,a_{j_2}(x)\{a_{j_1}(x + \nu_{j_2}) - a_{j_1}(x)\} + O(\tau^3). \tag{32}$$

130 Since the Poisson numbers $P_j(a_j(x), \tau)$ are independent, we get
$$\operatorname{Cov}\bigl(X^{(et)}(t + \tau) \mid X^{(et)}(t) = x\bigr) = \tau \sum_{j=1}^{M} \nu_j^2\,a_j(x), \tag{33}$$
where the tensors $\nu_j^2$ can be represented by the matrices $\nu_j \nu_j^T$. Equations (33) and (24) together provide us with the local error in the covariance for explicit tau.

131 Local error formula for the covariance of explicit tau
$$\operatorname{Cov}\bigl(X^{(et)}(t + \tau) \mid X^{(et)}(t) = x\bigr) - \operatorname{Cov}(X(t + \tau) \mid X(t) = x) = -\frac{1}{2}\tau^2 \sum_{j_1=1}^{M} \sum_{j_2=1}^{M} \nu_{j_1}^2\,a_{j_2}(x)\{a_{j_1}(x + \nu_{j_2}) - a_{j_1}(x)\} - \frac{1}{2}\tau^2 \sum_{j_1=1}^{M} \sum_{j_2=1}^{M} \nu_{j_1}\nu_{j_2}\,a_{j_1}(x)\{a_{j_2}(x + \nu_{j_1}) - a_{j_2}(x)\} - \frac{1}{2}\tau^2 \sum_{j_1=1}^{M} \sum_{j_2=1}^{M} \nu_{j_1}\nu_{j_2}\,a_{j_2}(x)\{a_{j_1}(x + \nu_{j_2}) - a_{j_1}(x)\} + O(\tau^3). \tag{34}$$

132 The implicit tau method Given that $X^{(it)}(t) = x$, for a step size $\tau > 0$ the implicit tau method involves finding $X^{(it)}(t + \tau)$, which is the unique solution of (12). Writing $X^{(it)}(t + \tau) = X$ and comparing (12) with (11) shows that $X$ may be written as the unique solution of
$$X = X_e + \tau \sum_{j=1}^{M} \nu_j\{a_j(X) - a_j(x)\}, \tag{35}$$
where $X_e = x + \sum_{j=1}^{M} \nu_j\,K_j^{(et)}(x, \tau)$.

133 Given an initial state $x \in \mathbb{R}_+^N$, we can think of the implicit tau method as involving first the computation of an intermediate state $X_e$ according to explicit tau, and then solving (35). We see that $X$ is a deterministic function of $X_e$.

134 Implicit function theorem We rewrite (35) for convenience as
$$F(X, X_e, x, \tau) = 0, \tag{36}$$
where the $C^\infty$-smooth function $F : \mathbb{R}^N \times \mathbb{R}^N \times \mathbb{R}^N \times \mathbb{R} \to \mathbb{R}^N$ is given by
$$F(X, X_e, x, \tau) = X - X_e - \tau \sum_{j=1}^{M} \nu_j\{a_j(X) - a_j(x)\}. \tag{37}$$

135 Implicit function theorem In order to ensure that $X$ is well defined for $\tau$ sufficiently small, we use the implicit function theorem. We note that $F(X_e, X_e, x, 0) = 0$. The Jacobian $\frac{\partial F}{\partial X}$ is given by
$$\frac{\partial F}{\partial X} = I - \tau \sum_{j=1}^{M} \nu_j\,\frac{\partial a_j(X)}{\partial x}, \tag{38}$$
where $I$ is the $N \times N$ identity matrix.

136 At $\tau = 0$ the Jacobian is the identity matrix, hence its rank equals $N$. By the implicit function theorem there exist $\delta > 0$, a region $D_e \subset \mathbb{R}^N$, and a $C^\infty$-smooth function $G : D_e \times [0, \delta] \to \mathbb{R}^N$ such that $X$ is given by
$$X = G(X_e, \tau), \quad (X_e, \tau) \in D_e \times [0, \delta]. \tag{39}$$
Note that $G(X_e, 0) = X_e$.

137 Since the Jacobian is independent of $X_e$, we may choose $D_e$ to be arbitrarily large but bounded. The size of $\delta$ may shrink as $D_e$ gets larger. Since, by the bounding procedure on the Poisson random numbers, $X_e$ takes bounded values, we can choose $D_e$ to be a bounded region that contains all possible values of $X_e$. For this choice, we are still assured of the existence of $\delta > 0$ such that (39) holds.

138 Remark 3.6 While the theoretical consideration mentioned above indicates that the larger K max is, the smaller δ may be, in practice we have never encountered problems in finding a solution to the implicit equation using the Newton iteration when using practical stepsizes. It must be noted that a similar theoretical concern exists in the application of the implicit Euler method to SDEs driven by Gaussian white noise.

139 We will compute the values of $\frac{\partial G}{\partial \tau}$ and $\frac{\partial^2 G}{\partial \tau^2}$ from (36). Differentiating (36) with respect to $\tau$, we obtain
$$\frac{\partial F}{\partial X}\,\frac{\partial G}{\partial \tau} + \frac{\partial F}{\partial \tau} = 0,$$
which upon using the expression (37) for $F$ yields
$$\Bigl(I - \tau \sum_{j=1}^{M} \nu_j\,\frac{\partial a_j}{\partial x}(X)\Bigr)\frac{\partial G}{\partial \tau} - \sum_{j=1}^{M} \nu_j\{a_j(X) - a_j(x)\} = 0. \tag{40}$$

140 Substituting $\tau = 0$ in (40), and also using (39) and the fact that $G(X_e, 0) = X_e$, yields
$$\frac{\partial G}{\partial \tau}(\tau = 0) = \sum_{j=1}^{M} \nu_j\{a_j(X_e) - a_j(x)\}. \tag{41}$$
Differentiating (40) with respect to $\tau$, we obtain
$$\frac{\partial^2 F}{\partial X^2}\Bigl(\frac{\partial G}{\partial \tau}\Bigr)^2 + \frac{\partial F}{\partial X}\,\frac{\partial^2 G}{\partial \tau^2} + 2\,\frac{\partial^2 F}{\partial \tau\,\partial X}\,\frac{\partial G}{\partial \tau} + \frac{\partial^2 F}{\partial \tau^2} = 0.$$

141 Noting that $\frac{\partial^2 F}{\partial \tau^2} = 0$ and that when $\tau = 0$, $\frac{\partial^2 F}{\partial X^2} = 0$ and $\frac{\partial F}{\partial X} = I$, we obtain
$$\frac{\partial^2 G}{\partial \tau^2}(\tau = 0) = 2 \sum_{j_1=1}^{M} \sum_{j_2=1}^{M} \nu_{j_1}\,\frac{\partial a_{j_1}}{\partial x}(X_e)\,\nu_{j_2}\{a_{j_2}(X_e) - a_{j_2}(x)\}, \tag{42}$$
where we have used the fact that $X = X_e$ at $\tau = 0$ and (41).

142 Since $G$ is jointly $C^\infty$-smooth in $X_e$ and $\tau$, we obtain, using Taylor's formula together with (41) and (42),
$$X = X_e + \tau \sum_{j=1}^{M} \nu_j\{a_j(X_e) - a_j(x)\} + \tau^2 \sum_{j_1=1}^{M} \sum_{j_2=1}^{M} \nu_{j_1}\,\frac{\partial a_{j_1}}{\partial x}(X_e)\,\nu_{j_2}\{a_{j_2}(X_e) - a_{j_2}(x)\} + O(\tau^3). \tag{43}$$

143 Consistency of the implicit tau method To show the consistency of the implicit tau method we need certain lemmas. The following lemma asserts that for sufficiently small stepsizes the rounded version of the implicit tau coincides with explicit tau.

144 Lemma 3.7 Lemma Assuming the bounding procedures, for any given initial state $x \in \mathbb{Z}_+^N$ there exists $\delta > 0$ such that $X^{(itr)}(t + \tau) = X^{(et)}(t + \tau)$ with probability 1, conditioned on $X^{(itr)}(t) = X^{(et)}(t) = x$, for all $\tau \in [0, \delta]$. Proof. It follows from (14) that for $\tau > 0$ small enough, $K_j^{(itr)} = K_j^{(et)}$ (with probability 1).

145 Consistency of rounded implicit tau: Theorem 3.8 Theorem Assuming the bounding procedures, for any multivariate polynomial function $g : \mathbb{R}^N \to \mathbb{R}$ and initial state $x \in \mathbb{Z}_+^N$ there exist $C > 0$ and $\delta > 0$ such that
$$\bigl| E\bigl(g(X^{(itr)}(t + \tau)) - g(X(t + \tau)) \mid X^{(itr)}(t) = X(t) = x\bigr) \bigr| < C\tau^2 \quad \forall\,\tau \in [0, \delta]. \tag{44}$$

146 Consistency of unrounded implicit tau: Theorem 3.9 Theorem Assuming the bounding procedures, for any multivariate polynomial function $g : \mathbb{R}^N \to \mathbb{R}$ and initial state $x \in \mathbb{Z}_+^N$ there exist $C > 0$ and $\delta > 0$ such that
$$\bigl| E\bigl(g(X^{(it)}(t + \tau)) - g(X(t + \tau)) \mid X^{(it)}(t) = X(t) = x\bigr) \bigr| < C\tau^2 \quad \forall\,\tau \in [0, \delta]. \tag{45}$$

147 Local error formulae for implicit tau Moreover, we can derive the formula for the local error of the implicit tau method.

148 Local error formulae for implicit tau We note that the term with the double summation in (43) is $O(\tau^3)$. Hence the following equation may be used to relate the local error of implicit tau to that of explicit tau:
$$X^{(it)}(t + \tau) = X^{(et)}(t + \tau) + \tau \sum_{j=1}^{M} \nu_j\{a_j(X^{(et)}(t + \tau)) - a_j(x)\} + O(\tau^3). \tag{46}$$

149 Note that this equation assumes $X^{(it)}(t) = X^{(et)}(t) = x$. In order to compute the $r$th moment of $X^{(it)}(t + \tau)$, one has to raise this equation to the power $r$ and take expectations of both sides. If the $a_j$ are polynomials, the expectation on the right-hand side will involve taking expectations of various powers of Poisson random variables, the formulae for which are well known.

150 Stability and convergence for systems with linear propensity functions We will investigate the stability properties of the explicit and implicit tau methods for systems with linear propensity functions and prove that they converge with first-order accuracy. We focus on chemical reaction systems where $N$ and $M$ are arbitrary, but the propensity functions $a_j(x)$ take the form
$$a_j(x) = c_j^T x, \quad x \in \mathbb{R}^N,\ j = 1, \dots, M, \tag{47}$$
where $c_j \in \mathbb{R}^N$ are constant vectors. An important form of stability relevant for convergence is that of 0-stability.

151 0-stability definition Definition Denote by $\hat X$ the discrete-time numerical approximation of a stochastic process. We shall say that the numerical method is 0-stable up to $r$ moments if, for a fixed time interval $[0, T]$, there exist $\delta > 0$ and $K_{lj} > 0$ for $l = 1, \dots, r$ and $j = 1, \dots, l$, such that
$$\bigl\| E\bigl((\hat X_1(t))^l\bigr) - E\bigl((\hat X_2(t))^l\bigr) \bigr\| \le \sum_{j=1}^{l} K_{lj}\,\bigl\| E\bigl((\hat X_1(0))^j\bigr) - E\bigl((\hat X_2(0))^j\bigr) \bigr\|$$

152 Definition for all $l = 1, \dots, r$, and $t = \sum_{i=1}^{n} \tau_i \le T$, where $n$ is arbitrary, $0 < \tau_i \le \delta$ for $i = 1, \dots, n$, and $\hat X_1$, $\hat X_2$ correspond, respectively, to the numerical solutions obtained from any pair of arbitrary initial conditions $\hat X_1(0)$, $\hat X_2(0)$, which are random variables assumed to have finite moments.

153 Linear propensities For systems with linear propensities of the form (47), the explicit tau method is given by
$$X^{(et)}(t + \tau) = X^{(et)}(t) + \sum_{j=1}^{M} \nu_j\,P_j\bigl(c_j^T X^{(et)}(t), \tau\bigr). \tag{48}$$
Taking the expectation conditioned on $X^{(et)}(t)$, we obtain
$$E\bigl(X^{(et)}(t + \tau) \mid X^{(et)}(t)\bigr) = X^{(et)}(t) + \tau \sum_{j=1}^{M} \nu_j\,c_j^T\,X^{(et)}(t), \tag{49}$$

154 which may be written as
$$E\bigl(X^{(et)}(t + \tau) \mid X^{(et)}(t)\bigr) = X^{(et)}(t) + \tau A\,X^{(et)}(t), \tag{50}$$
where the $N \times N$ matrix $A$ is given by
$$A = \sum_{j=1}^{M} \nu_j\,c_j^T. \tag{51}$$

155 Remark 4.2 If $X$ and $Y$ are random variables, then $E(Y \mid X)$ is the conditional expectation of $Y$ conditioned on $X$: a random variable which is a deterministic function of $X$ and takes the value $E(Y \mid X = x)$ when $X$ takes the value $x$. It follows that $E(E(Y \mid X)) = E(Y)$.

156 Taking the expectation of (50), we obtain
$$E\bigl(X^{(et)}(t + \tau)\bigr) = (I + \tau A)\,E\bigl(X^{(et)}(t)\bigr). \tag{52}$$
Similarly we obtain the following for implicit tau:
$$X^{(it)}(t + \tau) = X^{(it)}(t) + \tau \sum_{j=1}^{M} \nu_j\,c_j^T\,X^{(it)}(t + \tau) + \sum_{j=1}^{M} \nu_j\,P_j\bigl(c_j^T X^{(it)}(t), \tau\bigr) - \tau \sum_{j=1}^{M} \nu_j\,c_j^T\,X^{(it)}(t), \tag{53}$$

157 which leads to
$$E\bigl(X^{(it)}(t + \tau)\bigr) = (I - \tau A)^{-1}\,E\bigl(X^{(it)}(t)\bigr), \tag{54}$$
where $A$ is the same matrix given by (51).

158 Asymptotic stability of the mean For a constant stepsize $\tau$,
$$E\bigl(X^{(et)}(t + n\tau)\bigr) = (I + \tau A)^n\,E\bigl(X^{(et)}(t)\bigr) \tag{55}$$
and
$$E\bigl(X^{(it)}(t + n\tau)\bigr) = (I - \tau A)^{-n}\,E\bigl(X^{(it)}(t)\bigr). \tag{56}$$

159 Geometric progression Thus the mean of the explicit tau simulation evolves in a geometric progression, just the same as an explicit Euler simulation of the system of equations $\dot x = Ax$, while the mean of the implicit tau method evolves the same as an implicit Euler simulation. Thus the mean of the explicit tau simulation is asymptotically stable if
$$|1 + \lambda_i(A)\,\tau| < 1, \quad i = 1, \dots, N, \tag{57}$$
while the mean of the implicit tau simulation is asymptotically stable if
$$1/|1 - \lambda_i(A)\,\tau| < 1, \quad i = 1, \dots, N, \tag{58}$$
where $\lambda_i(A)$ is the $i$th largest eigenvalue of $A$.
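A minimal sketch checking conditions (57) and (58) for an assumed linear system S1 → S2 → 0 with unit rate constants:

```python
import numpy as np

# Build A = sum_j nu_j c_j^T for S1 -> S2 (a_1(x) = x_1) and S2 -> 0
# (a_2(x) = x_2), then test the mean stability conditions at a given tau.
nu = np.array([[-1.0, 1.0],   # S1 -> S2
               [0.0, -1.0]])  # S2 -> 0
c = np.array([[1.0, 0.0],     # a_1(x) = x_1
              [0.0, 1.0]])    # a_2(x) = x_2
A = sum(np.outer(nu[j], c[j]) for j in range(2))

tau = 0.5
lam = np.linalg.eigvals(A)
print("explicit tau stable:", np.all(np.abs(1 + lam * tau) < 1))   # condition (57)
print("implicit tau stable:", np.all(1 / np.abs(1 - lam * tau) < 1))  # condition (58)
```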

160 Remark 4.3 Note that implicit tau is unconditionally stable in the mean for stable systems (i.e., systems all of whose eigenvalues lie in the left complex half-plane). This clearly confirms its advantage over explicit tau for stiff systems. A detailed study of the asymptotic behavior of the covariance and higher-order moments is necessary in order to assess the suitability of implicit tau for stiff systems.

161 Lemma 4.4: 0-stability of the mean Lemma Both the explicit and the implicit tau methods are 0-stable in the mean. Proof. This follows from standard procedures known in the numerical analysis of ODEs: essentially from the bound $\|I + \tau A\| \le e^{\tau \|A\|}$ for explicit tau, and the bound $\|(I - \tau A)^{-1}\| \le e^{K\tau}$, which holds for sufficiently small $\tau$, for implicit tau. Here $K$ is any number greater than $\|A\|$.

162 Weak convergence We derive equations for the evolution of the $r$th moment $E\bigl((X^{(et)}(t + n\tau))^r\bigr)$. From
$$X^{(et)}(t + \tau) = X^{(et)}(t) + \sum_{j=1}^{M} \nu_j\,P_j\bigl(c_j^T X^{(et)}(t), \tau\bigr), \tag{59}$$
it follows that
$$E\bigl((X^{(et)}(t + \tau))^r \mid X^{(et)}(t)\bigr) = \bigl(X^{(et)}(t)\bigr)^r + \sum_{k=1}^{r} \frac{r!}{k!\,(r-k)!}\,\bigl(X^{(et)}(t)\bigr)^{(r-k)} \sum_{j=1}^{M} \nu_j^k\,\bigl\{c_j^T X^{(et)}(t)\,\tau + O(\tau^2)\bigr\},$$
where we have used Lemma 3.1.

Reference

M. Rathinam et al., Consistency and Stability of Tau-Leaping Schemes for Chemical Reaction Systems, Multiscale Model. Simul., Vol. 4, No. 3, pp. 867-895. © 2005 Society for Industrial and Applied Mathematics.

More information

Methods of Data Analysis Random numbers, Monte Carlo integration, and Stochastic Simulation Algorithm (SSA / Gillespie)

Methods of Data Analysis Random numbers, Monte Carlo integration, and Stochastic Simulation Algorithm (SSA / Gillespie) Methods of Data Analysis Random numbers, Monte Carlo integration, and Stochastic Simulation Algorithm (SSA / Gillespie) Week 1 1 Motivation Random numbers (RNs) are of course only pseudo-random when generated

More information

Chapter III. Stability of Linear Systems

Chapter III. Stability of Linear Systems 1 Chapter III Stability of Linear Systems 1. Stability and state transition matrix 2. Time-varying (non-autonomous) systems 3. Time-invariant systems 1 STABILITY AND STATE TRANSITION MATRIX 2 In this chapter,

More information

Ordinary Differential Equations

Ordinary Differential Equations Chapter 13 Ordinary Differential Equations We motivated the problem of interpolation in Chapter 11 by transitioning from analzying to finding functions. That is, in problems like interpolation and regression,

More information

Lecture 4: Numerical Solution of SDEs, Itô Taylor Series, Gaussian Approximations

Lecture 4: Numerical Solution of SDEs, Itô Taylor Series, Gaussian Approximations Lecture 4: Numerical Solution of SDEs, Itô Taylor Series, Gaussian Approximations Simo Särkkä Aalto University, Finland November 18, 2014 Simo Särkkä (Aalto) Lecture 4: Numerical Solution of SDEs November

More information

Lecture 11: Non-Equilibrium Statistical Physics

Lecture 11: Non-Equilibrium Statistical Physics Massachusetts Institute of Technology MITES 2018 Physics III Lecture 11: Non-Equilibrium Statistical Physics In the previous notes, we studied how systems in thermal equilibrium can change in time even

More information

Stochastic Simulation Methods for Solving Systems with Multi-State Species

Stochastic Simulation Methods for Solving Systems with Multi-State Species Stochastic Simulation Methods for Solving Systems with Multi-State Species Zhen Liu Thesis submitted to the Faculty of the Virginia Polytechnic Institute and State University in partial fulfillment of

More information

2 Discrete Dynamical Systems (DDS)

2 Discrete Dynamical Systems (DDS) 2 Discrete Dynamical Systems (DDS) 2.1 Basics A Discrete Dynamical System (DDS) models the change (or dynamics) of single or multiple populations or quantities in which the change occurs deterministically

More information

4 Sums of Independent Random Variables

4 Sums of Independent Random Variables 4 Sums of Independent Random Variables Standing Assumptions: Assume throughout this section that (,F,P) is a fixed probability space and that X 1, X 2, X 3,... are independent real-valued random variables

More information

6. Brownian Motion. Q(A) = P [ ω : x(, ω) A )

6. Brownian Motion. Q(A) = P [ ω : x(, ω) A ) 6. Brownian Motion. stochastic process can be thought of in one of many equivalent ways. We can begin with an underlying probability space (Ω, Σ, P) and a real valued stochastic process can be defined

More information

Lecture 4: Numerical Solution of SDEs, Itô Taylor Series, Gaussian Process Approximations

Lecture 4: Numerical Solution of SDEs, Itô Taylor Series, Gaussian Process Approximations Lecture 4: Numerical Solution of SDEs, Itô Taylor Series, Gaussian Process Approximations Simo Särkkä Aalto University Tampere University of Technology Lappeenranta University of Technology Finland November

More information

Linear Algebra, Summer 2011, pt. 2

Linear Algebra, Summer 2011, pt. 2 Linear Algebra, Summer 2, pt. 2 June 8, 2 Contents Inverses. 2 Vector Spaces. 3 2. Examples of vector spaces..................... 3 2.2 The column space......................... 6 2.3 The null space...........................

More information

Introduction to the Numerical Solution of IVP for ODE

Introduction to the Numerical Solution of IVP for ODE Introduction to the Numerical Solution of IVP for ODE 45 Introduction to the Numerical Solution of IVP for ODE Consider the IVP: DE x = f(t, x), IC x(a) = x a. For simplicity, we will assume here that

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

Regulation of metabolism

Regulation of metabolism Regulation of metabolism So far in this course we have assumed that the metabolic system is in steady state For the rest of the course, we will abandon this assumption, and look at techniques for analyzing

More information

Accelerated Stochastic Simulation of the Stiff Enzyme-Substrate Reaction

Accelerated Stochastic Simulation of the Stiff Enzyme-Substrate Reaction JCP A5.05.047 1 Accelerated Stochastic Simulation of the Stiff Enzyme-Substrate Reaction Yang Cao a) Dept. of Computer Science, Univ. of California, Santa Barbara, Santa Barbara, CA 9106 Daniel T. Gillespie

More information

Extending the multi-level method for the simulation of stochastic biological systems

Extending the multi-level method for the simulation of stochastic biological systems Extending the multi-level method for the simulation of stochastic biological systems Christopher Lester Ruth E. Baker Michael B. Giles Christian A. Yates 29 February 216 Abstract The multi-level method

More information

Stochastic chemical kinetics on an FPGA: Bruce R Land. Introduction

Stochastic chemical kinetics on an FPGA: Bruce R Land. Introduction Stochastic chemical kinetics on an FPGA: Bruce R Land Introduction As you read this, there are thousands of chemical reactions going on in your body. Some are very fast, for instance, the binding of neurotransmitters

More information

Stochastic Structural Dynamics Prof. Dr. C. S. Manohar Department of Civil Engineering Indian Institute of Science, Bangalore

Stochastic Structural Dynamics Prof. Dr. C. S. Manohar Department of Civil Engineering Indian Institute of Science, Bangalore Stochastic Structural Dynamics Prof. Dr. C. S. Manohar Department of Civil Engineering Indian Institute of Science, Bangalore Lecture No. # 33 Probabilistic methods in earthquake engineering-2 So, we have

More information

Linear Programming and its Extensions Prof. Prabha Shrama Department of Mathematics and Statistics Indian Institute of Technology, Kanpur

Linear Programming and its Extensions Prof. Prabha Shrama Department of Mathematics and Statistics Indian Institute of Technology, Kanpur Linear Programming and its Extensions Prof. Prabha Shrama Department of Mathematics and Statistics Indian Institute of Technology, Kanpur Lecture No. # 03 Moving from one basic feasible solution to another,

More information

ORF 522. Linear Programming and Convex Analysis

ORF 522. Linear Programming and Convex Analysis ORF 5 Linear Programming and Convex Analysis Initial solution and particular cases Marco Cuturi Princeton ORF-5 Reminder: Tableaux At each iteration, a tableau for an LP in standard form keeps track of....................

More information

Numerical solution of stochastic models of biochemical kinetics

Numerical solution of stochastic models of biochemical kinetics Numerical solution of stochastic models of biochemical kinetics SILVANA ILIE a WAYNE H. ENRIGHT b KENNETH R. JACKSON b a Department of Mathematics, Ryerson University, Toronto, ON, M5B 2K3, Canada email:

More information

Eco517 Fall 2004 C. Sims MIDTERM EXAM

Eco517 Fall 2004 C. Sims MIDTERM EXAM Eco517 Fall 2004 C. Sims MIDTERM EXAM Answer all four questions. Each is worth 23 points. Do not devote disproportionate time to any one question unless you have answered all the others. (1) We are considering

More information

Quantitative Understanding in Biology Module IV: ODEs Lecture I: Introduction to ODEs

Quantitative Understanding in Biology Module IV: ODEs Lecture I: Introduction to ODEs Quantitative Understanding in Biology Module IV: ODEs Lecture I: Introduction to ODEs Ordinary Differential Equations We began our exploration of dynamic systems with a look at linear difference equations.

More information

Polynomial Chaos and Karhunen-Loeve Expansion

Polynomial Chaos and Karhunen-Loeve Expansion Polynomial Chaos and Karhunen-Loeve Expansion 1) Random Variables Consider a system that is modeled by R = M(x, t, X) where X is a random variable. We are interested in determining the probability of the

More information

Partial Differential Equations (PDEs)

Partial Differential Equations (PDEs) 10.34 Numerical Methods Applied to Chemical Engineering Fall 2015 Final Exam Review Partial Differential Equations (PDEs) 1. Classification (a) Many PDEs encountered by chemical engineers are second order

More information

Biomolecular Feedback Systems

Biomolecular Feedback Systems Biomolecular Feedback Systems Domitilla Del Vecchio MIT Richard M. Murray Caltech DRAFT v0.4a, January 16, 2011 c California Institute of Technology All rights reserved. This manuscript is for review purposes

More information

Accurate hybrid stochastic simulation of a system of coupled chemical or biochemical reactions

Accurate hybrid stochastic simulation of a system of coupled chemical or biochemical reactions THE JOURNAL OF CHEMICAL PHYSICS 122, 054103 2005 Accurate hybrid schastic simulation of a system of coupled chemical or biochemical reactions Howard Salis and Yiannis Kaznessis a) Department of Chemical

More information

CS 6820 Fall 2014 Lectures, October 3-20, 2014

CS 6820 Fall 2014 Lectures, October 3-20, 2014 Analysis of Algorithms Linear Programming Notes CS 6820 Fall 2014 Lectures, October 3-20, 2014 1 Linear programming The linear programming (LP) problem is the following optimization problem. We are given

More information

Introduction to Initial Value Problems

Introduction to Initial Value Problems Chapter 2 Introduction to Initial Value Problems The purpose of this chapter is to study the simplest numerical methods for approximating the solution to a first order initial value problem (IVP). Because

More information

Formal Modeling of Biological Systems with Delays

Formal Modeling of Biological Systems with Delays Universita degli Studi di Pisa Dipartimento di Informatica Dottorato di Ricerca in Informatica Ph.D. Thesis Proposal Formal Modeling of Biological Systems with Delays Giulio Caravagna caravagn@di.unipi.it

More information

1. Introductory Examples

1. Introductory Examples 1. Introductory Examples We introduce the concept of the deterministic and stochastic simulation methods. Two problems are provided to explain the methods: the percolation problem, providing an example

More information

Discretization of SDEs: Euler Methods and Beyond

Discretization of SDEs: Euler Methods and Beyond Discretization of SDEs: Euler Methods and Beyond 09-26-2006 / PRisMa 2006 Workshop Outline Introduction 1 Introduction Motivation Stochastic Differential Equations 2 The Time Discretization of SDEs Monte-Carlo

More information

Notes on generating functions in automata theory

Notes on generating functions in automata theory Notes on generating functions in automata theory Benjamin Steinberg December 5, 2009 Contents Introduction: Calculus can count 2 Formal power series 5 3 Rational power series 9 3. Rational power series

More information

Special Theory of Relativity Prof. Shiva Prasad Department of Physics Indian Institute of Technology, Bombay. Lecture - 15 Momentum Energy Four Vector

Special Theory of Relativity Prof. Shiva Prasad Department of Physics Indian Institute of Technology, Bombay. Lecture - 15 Momentum Energy Four Vector Special Theory of Relativity Prof. Shiva Prasad Department of Physics Indian Institute of Technology, Bombay Lecture - 15 Momentum Energy Four Vector We had started discussing the concept of four vectors.

More information

If we want to analyze experimental or simulated data we might encounter the following tasks:

If we want to analyze experimental or simulated data we might encounter the following tasks: Chapter 1 Introduction If we want to analyze experimental or simulated data we might encounter the following tasks: Characterization of the source of the signal and diagnosis Studying dependencies Prediction

More information

Quantitative Understanding in Biology Module IV: ODEs Lecture I: Introduction to ODEs

Quantitative Understanding in Biology Module IV: ODEs Lecture I: Introduction to ODEs Quantitative Understanding in Biology Module IV: ODEs Lecture I: Introduction to ODEs Ordinary Differential Equations We began our exploration of dynamic systems with a look at linear difference equations.

More information

2tdt 1 y = t2 + C y = which implies C = 1 and the solution is y = 1

2tdt 1 y = t2 + C y = which implies C = 1 and the solution is y = 1 Lectures - Week 11 General First Order ODEs & Numerical Methods for IVPs In general, nonlinear problems are much more difficult to solve than linear ones. Unfortunately many phenomena exhibit nonlinear

More information

RECURSIVE ESTIMATION AND KALMAN FILTERING

RECURSIVE ESTIMATION AND KALMAN FILTERING Chapter 3 RECURSIVE ESTIMATION AND KALMAN FILTERING 3. The Discrete Time Kalman Filter Consider the following estimation problem. Given the stochastic system with x k+ = Ax k + Gw k (3.) y k = Cx k + Hv

More information

Extending the Tools of Chemical Reaction Engineering to the Molecular Scale

Extending the Tools of Chemical Reaction Engineering to the Molecular Scale Extending the Tools of Chemical Reaction Engineering to the Molecular Scale Multiple-time-scale order reduction for stochastic kinetics James B. Rawlings Department of Chemical and Biological Engineering

More information

Langevin Methods. Burkhard Dünweg Max Planck Institute for Polymer Research Ackermannweg 10 D Mainz Germany

Langevin Methods. Burkhard Dünweg Max Planck Institute for Polymer Research Ackermannweg 10 D Mainz Germany Langevin Methods Burkhard Dünweg Max Planck Institute for Polymer Research Ackermannweg 1 D 55128 Mainz Germany Motivation Original idea: Fast and slow degrees of freedom Example: Brownian motion Replace

More information

Time Series 2. Robert Almgren. Sept. 21, 2009

Time Series 2. Robert Almgren. Sept. 21, 2009 Time Series 2 Robert Almgren Sept. 21, 2009 This week we will talk about linear time series models: AR, MA, ARMA, ARIMA, etc. First we will talk about theory and after we will talk about fitting the models

More information

Efficient step size selection for the tau-leaping simulation method

Efficient step size selection for the tau-leaping simulation method THE JOURNAL OF CHEMICAL PHYSICS 24, 04409 2006 Efficient step size selection for the tau-leaping simulation method Yang Cao a Department of Computer Science, Virginia Tech, Blacksburg, Virginia 2406 Daniel

More information

September Math Course: First Order Derivative

September Math Course: First Order Derivative September Math Course: First Order Derivative Arina Nikandrova Functions Function y = f (x), where x is either be a scalar or a vector of several variables (x,..., x n ), can be thought of as a rule which

More information

Approximate accelerated stochastic simulation of chemically reacting systems

Approximate accelerated stochastic simulation of chemically reacting systems JOURNAL OF CHEICAL PHYSICS VOLUE 115, NUBER 4 22 JULY 2001 Approximate accelerated stochastic simulation of chemically reacting systems Daniel T. Gillespie a) Research Department, Code 4T4100D, Naval Air

More information

Gaussian processes. Chuong B. Do (updated by Honglak Lee) November 22, 2008

Gaussian processes. Chuong B. Do (updated by Honglak Lee) November 22, 2008 Gaussian processes Chuong B Do (updated by Honglak Lee) November 22, 2008 Many of the classical machine learning algorithms that we talked about during the first half of this course fit the following pattern:

More information

RENEWAL THEORY STEVEN P. LALLEY UNIVERSITY OF CHICAGO. X i

RENEWAL THEORY STEVEN P. LALLEY UNIVERSITY OF CHICAGO. X i RENEWAL THEORY STEVEN P. LALLEY UNIVERSITY OF CHICAGO 1. RENEWAL PROCESSES A renewal process is the increasing sequence of random nonnegative numbers S 0,S 1,S 2,... gotten by adding i.i.d. positive random

More information

min f(x). (2.1) Objectives consisting of a smooth convex term plus a nonconvex regularization term;

min f(x). (2.1) Objectives consisting of a smooth convex term plus a nonconvex regularization term; Chapter 2 Gradient Methods The gradient method forms the foundation of all of the schemes studied in this book. We will provide several complementary perspectives on this algorithm that highlight the many

More information

Numerical methods for stochastic simulation of biochemical systems

Numerical methods for stochastic simulation of biochemical systems International Journal of Sciences and Techniques of Automatic control & computer engineering IJ-STA, Volume 4, N 2, December 2010, pp. 1268 1283. Numerical methods for stochastic simulation of biochemical

More information

Recursive Estimation

Recursive Estimation Recursive Estimation Raffaello D Andrea Spring 08 Problem Set 3: Extracting Estimates from Probability Distributions Last updated: April 9, 08 Notes: Notation: Unless otherwise noted, x, y, and z denote

More information

Random Eigenvalue Problems Revisited

Random Eigenvalue Problems Revisited Random Eigenvalue Problems Revisited S Adhikari Department of Aerospace Engineering, University of Bristol, Bristol, U.K. Email: S.Adhikari@bristol.ac.uk URL: http://www.aer.bris.ac.uk/contact/academic/adhikari/home.html

More information

FIRST-ORDER SYSTEMS OF ORDINARY DIFFERENTIAL EQUATIONS III: Autonomous Planar Systems David Levermore Department of Mathematics University of Maryland

FIRST-ORDER SYSTEMS OF ORDINARY DIFFERENTIAL EQUATIONS III: Autonomous Planar Systems David Levermore Department of Mathematics University of Maryland FIRST-ORDER SYSTEMS OF ORDINARY DIFFERENTIAL EQUATIONS III: Autonomous Planar Systems David Levermore Department of Mathematics University of Maryland 4 May 2012 Because the presentation of this material

More information

CHAPTER 5: Linear Multistep Methods

CHAPTER 5: Linear Multistep Methods CHAPTER 5: Linear Multistep Methods Multistep: use information from many steps Higher order possible with fewer function evaluations than with RK. Convenient error estimates. Changing stepsize or order

More information

Random Matrix Eigenvalue Problems in Probabilistic Structural Mechanics

Random Matrix Eigenvalue Problems in Probabilistic Structural Mechanics Random Matrix Eigenvalue Problems in Probabilistic Structural Mechanics S Adhikari Department of Aerospace Engineering, University of Bristol, Bristol, U.K. URL: http://www.aer.bris.ac.uk/contact/academic/adhikari/home.html

More information

18.440: Lecture 28 Lectures Review

18.440: Lecture 28 Lectures Review 18.440: Lecture 28 Lectures 18-27 Review Scott Sheffield MIT Outline Outline It s the coins, stupid Much of what we have done in this course can be motivated by the i.i.d. sequence X i where each X i is

More information

Lecture 7. Root finding I. 1 Introduction. 2 Graphical solution

Lecture 7. Root finding I. 1 Introduction. 2 Graphical solution 1 Introduction Lecture 7 Root finding I For our present purposes, root finding is the process of finding a real value of x which solves the equation f (x)=0. Since the equation g x =h x can be rewritten

More information

p(t)dt a p(τ)dτ , c R.

p(t)dt a p(τ)dτ , c R. 11 3. Solutions of first order linear ODEs 3.1. Homogeneous and inhomogeneous; superposition. A first order linear equation is homogeneous if the right hand side is zero: (1) ẋ + p(t)x = 0. Homogeneous

More information

Stability of Stochastic Differential Equations

Stability of Stochastic Differential Equations Lyapunov stability theory for ODEs s Stability of Stochastic Differential Equations Part 1: Introduction Department of Mathematics and Statistics University of Strathclyde Glasgow, G1 1XH December 2010

More information

2008 Hotelling Lectures

2008 Hotelling Lectures First Prev Next Go To Go Back Full Screen Close Quit 1 28 Hotelling Lectures 1. Stochastic models for chemical reactions 2. Identifying separated time scales in stochastic models of reaction networks 3.

More information

Algebra Exam. Solutions and Grading Guide

Algebra Exam. Solutions and Grading Guide Algebra Exam Solutions and Grading Guide You should use this grading guide to carefully grade your own exam, trying to be as objective as possible about what score the TAs would give your responses. Full

More information

P i [B k ] = lim. n=1 p(n) ii <. n=1. V i :=

P i [B k ] = lim. n=1 p(n) ii <. n=1. V i := 2.7. Recurrence and transience Consider a Markov chain {X n : n N 0 } on state space E with transition matrix P. Definition 2.7.1. A state i E is called recurrent if P i [X n = i for infinitely many n]

More information

CS181 Midterm 2 Practice Solutions

CS181 Midterm 2 Practice Solutions CS181 Midterm 2 Practice Solutions 1. Convergence of -Means Consider Lloyd s algorithm for finding a -Means clustering of N data, i.e., minimizing the distortion measure objective function J({r n } N n=1,

More information

Ordinary Differential Equations

Ordinary Differential Equations Ordinary Differential Equations We call Ordinary Differential Equation (ODE) of nth order in the variable x, a relation of the kind: where L is an operator. If it is a linear operator, we call the equation

More information

Statistics 992 Continuous-time Markov Chains Spring 2004

Statistics 992 Continuous-time Markov Chains Spring 2004 Summary Continuous-time finite-state-space Markov chains are stochastic processes that are widely used to model the process of nucleotide substitution. This chapter aims to present much of the mathematics

More information

Simulating stochastic epidemics

Simulating stochastic epidemics Simulating stochastic epidemics John M. Drake & Pejman Rohani 1 Introduction This course will use the R language programming environment for computer modeling. The purpose of this exercise is to introduce

More information

Stochastic Differential Equations.

Stochastic Differential Equations. Chapter 3 Stochastic Differential Equations. 3.1 Existence and Uniqueness. One of the ways of constructing a Diffusion process is to solve the stochastic differential equation dx(t) = σ(t, x(t)) dβ(t)

More information

Handbook of Stochastic Methods

Handbook of Stochastic Methods C. W. Gardiner Handbook of Stochastic Methods for Physics, Chemistry and the Natural Sciences Third Edition With 30 Figures Springer Contents 1. A Historical Introduction 1 1.1 Motivation I 1.2 Some Historical

More information

Ordinary differential equations. Phys 420/580 Lecture 8

Ordinary differential equations. Phys 420/580 Lecture 8 Ordinary differential equations Phys 420/580 Lecture 8 Most physical laws are expressed as differential equations These come in three flavours: initial-value problems boundary-value problems eigenvalue

More information

Page 52. Lecture 3: Inner Product Spaces Dual Spaces, Dirac Notation, and Adjoints Date Revised: 2008/10/03 Date Given: 2008/10/03

Page 52. Lecture 3: Inner Product Spaces Dual Spaces, Dirac Notation, and Adjoints Date Revised: 2008/10/03 Date Given: 2008/10/03 Page 5 Lecture : Inner Product Spaces Dual Spaces, Dirac Notation, and Adjoints Date Revised: 008/10/0 Date Given: 008/10/0 Inner Product Spaces: Definitions Section. Mathematical Preliminaries: Inner

More information

Modern Discrete Probability Branching processes

Modern Discrete Probability Branching processes Modern Discrete Probability IV - Branching processes Review Sébastien Roch UW Madison Mathematics November 15, 2014 1 Basic definitions 2 3 4 Galton-Watson branching processes I Definition A Galton-Watson

More information

Jim Lambers MAT 610 Summer Session Lecture 2 Notes

Jim Lambers MAT 610 Summer Session Lecture 2 Notes Jim Lambers MAT 610 Summer Session 2009-10 Lecture 2 Notes These notes correspond to Sections 2.2-2.4 in the text. Vector Norms Given vectors x and y of length one, which are simply scalars x and y, the

More information