Computational methods for continuous time Markov chains with applications to biological processes


1 Computational methods for continuous time Markov chains with applications to biological processes David F. Anderson, Department of Mathematics, University of Wisconsin - Madison. Penn. State, January 13th, 2012

2 Stochastic Models of Biochemical Reaction Systems The most common stochastic models of biochemical reaction systems are continuous time Markov chains, often called chemical master equation models in the biosciences. Common examples include: 1. Gene regulatory networks. 2. Models of viral infection. 3. General population models (epidemic, predator-prey, etc.).

3 Stochastic Models of Biochemical Reaction Systems The most common stochastic models of biochemical reaction systems are continuous time Markov chains, often called chemical master equation models in the biosciences. Common examples include: 1. Gene regulatory networks. 2. Models of viral infection. 3. General population models (epidemic, predator-prey, etc.). Path-wise simulation methods include (language in biology / language in math):
Gillespie's algorithm   /  simulation of the embedded DTMC
Next reaction method    /  simulation of the random time change representation of Tom Kurtz
First reaction method   /  simulation using exponential alarm clocks
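As a point of reference, here is a minimal sketch (not from the talk) of Gillespie's direct method for a network specified by reaction vectors ζ_k and propensity functions λ_k; the function and argument names are illustrative.

```python
import numpy as np

def gillespie(x0, zetas, propensities, T, rng=np.random.default_rng()):
    """Exact simulation (Gillespie's direct method) of a CTMC up to time T.

    x0           : initial state (array of species counts)
    zetas        : list of reaction vectors zeta_k
    propensities : list of functions lambda_k(x) returning nonnegative rates
    Returns the state at time T."""
    x = np.array(x0, dtype=float)
    t = 0.0
    while True:
        rates = np.array([lam(x) for lam in propensities])
        total = rates.sum()
        if total == 0.0:                      # absorbed: nothing can fire
            return x
        t += rng.exponential(1.0 / total)     # exponential time to next reaction
        if t > T:
            return x
        k = rng.choice(len(zetas), p=rates / total)   # which reaction fires
        x += zetas[k]
```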

4 Stochastic Models of Biochemical Reaction Systems Path-wise methods can approximate values such as Ef(X(t)). For example: 1. Means: f(x) = x_i. 2. Moments/variances: f(x) = x_i^2. 3. Probabilities: f(x) = 1_{x ∈ A}. Or compute sensitivities: (d/dκ) Ef(X^κ(t)).

5 Stochastic Models of Biochemical Reaction Systems Path-wise methods can approximate values such as Ef(X(t)). For example: 1. Means: f(x) = x_i. 2. Moments/variances: f(x) = x_i^2. 3. Probabilities: f(x) = 1_{x ∈ A}. Or compute sensitivities: (d/dκ) Ef(X^κ(t)). Problem: solving using these algorithms can be computationally expensive.

6 First problem: joint with Des Higham Our first problem: approximate Ef(X(T)) to some desired tolerance ε > 0.

7 First problem: joint with Des Higham Our first problem: approximate Ef(X(T)) to some desired tolerance ε > 0. Easy! Simulate the CTMC exactly, generate independent paths X^{[i]}(t), use the unbiased estimator µ_n = (1/n) Σ_{i=1}^n f(X^{[i]}(t)), and stop when the desired confidence interval is ± ε.
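A minimal sketch of this crude Monte Carlo loop with a 95% confidence-interval stopping rule; `simulate_exact` stands in for any exact path simulator (such as the Gillespie sketch above), and the batch-wise stopping is an illustrative choice, not the talk's.

```python
import numpy as np

def crude_mc(simulate_exact, f, eps, batch=1000):
    """Estimate E f(X(T)) to within +/- eps (95% confidence half-width).

    simulate_exact() returns one independent realization X^{[i]}(T)."""
    samples = []
    while True:
        samples.extend(f(simulate_exact()) for _ in range(batch))
        n = len(samples)
        mu = float(np.mean(samples))
        half_width = 1.96 * np.std(samples, ddof=1) / np.sqrt(n)
        if half_width <= eps:     # mu_n = (1/n) sum_i f(X^{[i]}(T)); stop at +/- eps
            return mu, half_width, n
```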

8 What is the computational cost? Recall, µ_n = (1/n) Σ_{i=1}^n f(X^{[i]}(t)). Thus, Var(µ_n) = O(1/n).

9 What is the computational cost? Recall, µ_n = (1/n) Σ_{i=1}^n f(X^{[i]}(t)). Thus, Var(µ_n) = O(1/n). So, if we want σ_n = O(ε), we need 1/√n = O(ε), i.e. n = O(ε^-2).

10 What is the computational cost? Recall, µ_n = (1/n) Σ_{i=1}^n f(X^{[i]}(t)). Thus, Var(µ_n) = O(1/n). So, if we want σ_n = O(ε), we need 1/√n = O(ε), i.e. n = O(ε^-2). If N gives the average cost (steps) of a path using the exact algorithm: Total computational complexity = (cost per path) × (# paths) = O(N ε^-2). Can be bad if (i) N is large, or (ii) ε is small.

11 Benefits/drawbacks Benefits: 1. Easy to implement. 2. The estimator µ_n = (1/n) Σ_{i=1}^n f(X^{[i]}(t)) is unbiased.

12 Benefits/drawbacks Benefits: 1. Easy to implement. 2. The estimator µ_n = (1/n) Σ_{i=1}^n f(X^{[i]}(t)) is unbiased. Drawbacks: 1. The cost of O(N ε^-2) could be prohibitively large. 2. For our models, we often have that N is very large. We need to develop the model for better ideas...

13 Build up model: Random time change representation of Tom Kurtz Consider the simple system A + B → C, where one molecule each of A and B is being converted to one of C.

14 Build up model: Random time change representation of Tom Kurtz Consider the simple system A + B → C, where one molecule each of A and B is being converted to one of C. Simple book-keeping: if X(t) = (X_A(t), X_B(t), X_C(t))^T gives the state at time t, then X(t) = X(0) + R(t) (-1, -1, 1)^T, where R(t) is the # of times the reaction has occurred by time t, and X(0) is the initial condition.

15 Build up model: Random time change representation of Tom Kurtz Assuming the intensity or propensity of the reaction is κ X_A(s) X_B(s),

16 Build up model: Random time change representation of Tom Kurtz Assuming the intensity or propensity of the reaction is κ X_A(s) X_B(s), we can model R(t) = Y(∫_0^t κ X_A(s) X_B(s) ds), where Y is a unit-rate Poisson point process.

17 Build up model: Random time change representation of Tom Kurtz Assuming the intensity or propensity of the reaction is κ X_A(s) X_B(s), we can model R(t) = Y(∫_0^t κ X_A(s) X_B(s) ds), where Y is a unit-rate Poisson point process. Hence,
X(t) = (X_A(t), X_B(t), X_C(t))^T = X(0) + Y(∫_0^t κ X_A(s) X_B(s) ds) (-1, -1, 1)^T.

18 Build up model: Random time change representation of Tom Kurtz Now consider a network of reactions involving d chemical species, S_1, ..., S_d: Σ_{i=1}^d ν_{ik} S_i → Σ_{i=1}^d ν'_{ik} S_i. Denote the reaction vector as ζ_k = ν'_k - ν_k,

19 Build up model: Random time change representation of Tom Kurtz Now consider a network of reactions involving d chemical species, S_1, ..., S_d: Σ_{i=1}^d ν_{ik} S_i → Σ_{i=1}^d ν'_{ik} S_i. Denote the reaction vector as ζ_k = ν'_k - ν_k. The intensity (or propensity) of the kth reaction is λ_k : Z^d → R. By analogy with before,
X(t) = X(0) + Σ_k Y_k(∫_0^t λ_k(X(s)) ds) ζ_k,
where the Y_k are independent, unit-rate Poisson processes.

20 Example Consider a model of gene transcription and translation: G →(25) G + M (transcription), M →(1000) M + P (translation), P + P →(0.001) D (dimerization), M →(0.1) ∅ (degradation of mRNA), P →(1) ∅ (degradation of protein). Then, if X = [X_M, X_P, X_D]^T,

21 Example Consider a model of gene transcription and translation: G →(25) G + M (transcription), M →(1000) M + P (translation), P + P →(0.001) D (dimerization), M →(0.1) ∅ (degradation of mRNA), P →(1) ∅ (degradation of protein). Then, if X = [X_M, X_P, X_D]^T,
X(t) = X(0) + Y_1(25 t) [1, 0, 0]^T + Y_2(1000 ∫_0^t X_M(s) ds) [0, 1, 0]^T + Y_3(0.001 ∫_0^t X_P(s)(X_P(s) - 1) ds) [0, -2, 1]^T + Y_4(0.1 ∫_0^t X_M(s) ds) [-1, 0, 0]^T + Y_5(1 ∫_0^t X_P(s) ds) [0, -1, 0]^T.
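For concreteness, this model can be encoded as reaction vectors and propensity functions and fed to the Gillespie sketch above; this is an illustrative encoding that assumes the rate constants as listed on the slide and holds the gene count fixed at one.

```python
import numpy as np

# State ordering: x = [M, P, D] (mRNA, protein, dimer); the gene G is held constant.
zetas = [np.array([ 1,  0, 0]),   # G -> G + M   (transcription)
         np.array([ 0,  1, 0]),   # M -> M + P   (translation)
         np.array([ 0, -2, 1]),   # P + P -> D   (dimerization)
         np.array([-1,  0, 0]),   # M -> 0       (mRNA degradation)
         np.array([ 0, -1, 0])]   # P -> 0       (protein degradation)

propensities = [lambda x: 25.0,
                lambda x: 1000.0 * x[0],
                lambda x: 0.001 * x[1] * (x[1] - 1),
                lambda x: 0.1 * x[0],
                lambda x: 1.0 * x[1]]

# Illustrative usage with the Gillespie sketch above:
# x_T = gillespie(x0=[0, 0, 0], zetas=zetas, propensities=propensities, T=1.0)
```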

22 Back to our problem Recall: Benefits: 1. Easy to implement. 2. The estimator µ_n = (1/n) Σ_{i=1}^n f(X^{[i]}(t)) is unbiased. Drawbacks: 1. The cost of O(N ε^-2) could be prohibitively large. 2. For our models, we often have that N is very large. Let's try an approximate scheme.

23 Tau-leaping: Euler's method Explicit tau-leaping¹, or Euler's method, was first formulated by Dan Gillespie in this setting. Tau-leaping is essentially an Euler approximation of ∫_0^t λ_k(X(s)) ds:
Z(h) = Z(0) + Σ_k Y_k(∫_0^h λ_k(Z(s)) ds) ζ_k
     ≈ Z(0) + Σ_k Y_k(λ_k(Z(0)) h) ζ_k,
which is equal in distribution to Z(0) + Σ_k Poisson(λ_k(Z(0)) h) ζ_k.
¹ D. T. Gillespie, J. Chem. Phys., 115, 2001.

24 Euler's method The path-wise representation for Z(t) generated by Euler's method is
Z(t) = X(0) + Σ_k Y_k(∫_0^t λ_k(Z ∘ η(s)) ds) ζ_k,
where η(s) = t_n if t_n ≤ s < t_{n+1} = t_n + h is a step function giving the left endpoints of the time discretization.
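A minimal sketch of explicit tau-leaping in this form, assuming a fixed step h that divides T evenly and ignoring the negativity issue discussed later; names are illustrative.

```python
import numpy as np

def tau_leap(x0, zetas, propensities, T, h, rng=np.random.default_rng()):
    """Explicit (Euler) tau-leaping: propensities are frozen at the left endpoint
    of each step of length h, and each channel fires a Poisson number of times."""
    zetas = [np.asarray(z) for z in zetas]
    x = np.array(x0, dtype=float)
    n_steps = int(np.ceil(T / h))            # assumes T is (roughly) a multiple of h
    for _ in range(n_steps):
        rates = np.array([max(lam(x), 0.0) for lam in propensities])
        jumps = rng.poisson(rates * h)       # number of firings per reaction channel
        x += sum(n_k * zeta for n_k, zeta in zip(jumps, zetas))
    return x
```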

25 Return to approximating Ef(X(T)) Let Z_L denote an approximate process generated with time discretization step h_L. Let µ_n = (1/n) Σ_{i=1}^n f(Z_L^{[i]}(t)).

26 Return to approximating Ef(X(T)) Let Z_L denote an approximate process generated with time discretization step h_L. Let µ_n = (1/n) Σ_{i=1}^n f(Z_L^{[i]}(t)). We note
Ef(X(t)) - µ_n = [Ef(X(t)) - Ef(Z_L(t))] + Ef(Z_L(t)) - µ_n.

27 Return to approximating Ef(X(T)) Let Z_L denote an approximate process generated with time discretization step h_L. Let µ_n = (1/n) Σ_{i=1}^n f(Z_L^{[i]}(t)). We note
Ef(X(t)) - µ_n = [Ef(X(t)) - Ef(Z_L(t))] + Ef(Z_L(t)) - µ_n.
Suppose we have an order one method: Ef(X(t)) - Ef(Z_L(t)) = O(h_L).

28 Return to approximating Ef(X(T)) Let Z_L denote an approximate process generated with time discretization step h_L. Let µ_n = (1/n) Σ_{i=1}^n f(Z_L^{[i]}(t)). We note
Ef(X(t)) - µ_n = [Ef(X(t)) - Ef(Z_L(t))] + Ef(Z_L(t)) - µ_n.
Suppose we have an order one method: Ef(X(t)) - Ef(Z_L(t)) = O(h_L). We need: 1. h_L = O(ε). 2. n = O(ε^-2).

29 Return to approximating Ef(X(T)) Let Z_L denote an approximate process generated with time discretization step h_L. Let µ_n = (1/n) Σ_{i=1}^n f(Z_L^{[i]}(t)). We note
Ef(X(t)) - µ_n = [Ef(X(t)) - Ef(Z_L(t))] + Ef(Z_L(t)) - µ_n.
Suppose we have an order one method: Ef(X(t)) - Ef(Z_L(t)) = O(h_L). We need: 1. h_L = O(ε). 2. n = O(ε^-2). Suppose a path costs O(ε^-1) steps. Then Total computational complexity = (# paths) × (cost per path) = O(ε^-3).

30 Benefits/drawbacks Benefits: 1. Can drastically lower the computational complexity of a problem if ε^-1 ≪ N: CC of using exact = N ε^-2; CC of using approximate = ε^-1 ε^-2 = ε^-3.

31 Benefits/drawbacks Benefits: 1. Can drastically lower the computational complexity of a problem if ε^-1 ≪ N: CC of using exact = N ε^-2; CC of using approximate = ε^-1 ε^-2 = ε^-3. Drawbacks: 1. Convergence results usually give only an order of convergence; they can't give a precise h_L, so bias is a problem. 2. Tau-leaping has problems: what happens if you go negative? 3. We have gone away from an unbiased estimator.

32 Multi-level Monte Carlo and control variates Suppose I want EX ≈ (1/n) Σ_{i=1}^n X^{[i]}, but realizations of X are expensive.

33 Multi-level Monte Carlo and control variates Suppose I want EX ≈ (1/n) Σ_{i=1}^n X^{[i]}, but realizations of X are expensive. Suppose X ≈ Ẑ_L, and Ẑ_L is cheap.

34 Multi-level Monte Carlo and control variates Suppose I want EX ≈ (1/n) Σ_{i=1}^n X^{[i]}, but realizations of X are expensive. Suppose X ≈ Ẑ_L, and Ẑ_L is cheap. Suppose X, Ẑ_L can be generated simultaneously so that Var(X - Ẑ_L) is small.

35 Multi-level Monte Carlo and control variates Suppose I want EX ≈ (1/n) Σ_{i=1}^n X^{[i]}, but realizations of X are expensive. Suppose X ≈ Ẑ_L, and Ẑ_L is cheap. Suppose X, Ẑ_L can be generated simultaneously so that Var(X - Ẑ_L) is small. Then use EX = E[X - Ẑ_L] + EẐ_L

36 Multi-level Monte Carlo and control variates Suppose I want EX ≈ (1/n) Σ_{i=1}^n X^{[i]}, but realizations of X are expensive. Suppose X ≈ Ẑ_L, and Ẑ_L is cheap. Suppose X, Ẑ_L can be generated simultaneously so that Var(X - Ẑ_L) is small. Then use
EX = E[X - Ẑ_L] + EẐ_L ≈ (1/n_1) Σ_{i=1}^{n_1} (X^{[i]} - Ẑ_L^{[i]}) + (1/n_2) Σ_{i=1}^{n_2} Ẑ_L^{[i]}.

37 Multi-level Monte Carlo and control variates Suppose I want EX ≈ (1/n) Σ_{i=1}^n X^{[i]}, but realizations of X are expensive. Suppose X ≈ Ẑ_L, and Ẑ_L is cheap. Suppose X, Ẑ_L can be generated simultaneously so that Var(X - Ẑ_L) is small. Then use
EX = E[X - Ẑ_L] + EẐ_L ≈ (1/n_1) Σ_{i=1}^{n_1} (X^{[i]} - Ẑ_L^{[i]}) + (1/n_2) Σ_{i=1}^{n_2} Ẑ_L^{[i]}.
Multi-level Monte Carlo (Mike Giles, Stefan Heinrich) = keep going:
EX = E(X - Ẑ_L) + EẐ_L =

38 Multi-level Monte Carlo and control variates Suppose I want EX ≈ (1/n) Σ_{i=1}^n X^{[i]}, but realizations of X are expensive. Suppose X ≈ Ẑ_L, and Ẑ_L is cheap. Suppose X, Ẑ_L can be generated simultaneously so that Var(X - Ẑ_L) is small. Then use
EX = E[X - Ẑ_L] + EẐ_L ≈ (1/n_1) Σ_{i=1}^{n_1} (X^{[i]} - Ẑ_L^{[i]}) + (1/n_2) Σ_{i=1}^{n_2} Ẑ_L^{[i]}.
Multi-level Monte Carlo (Mike Giles, Stefan Heinrich) = keep going:
EX = E(X - Ẑ_L) + EẐ_L = E(X - Ẑ_L) + E(Ẑ_L - Ẑ_{L-1}) + EẐ_{L-1} = ...

39 Multi-level Monte Carlo: an unbiased estimator In our setting: Ef(X(t)) =

40 Multi-level Monte Carlo: an unbiased estimator In our setting:
Ef(X(t)) = E[f(X(t)) - f(Z_L(t))] + Σ_{l=l_0+1}^{L} E[f(Z_l(t)) - f(Z_{l-1}(t))] + Ef(Z_{l_0}(t)).

41 Multi-level Monte Carlo: an unbiased estimator In our setting:
Ef(X(t)) = E[f(X(t)) - f(Z_L(t))] + Σ_{l=l_0+1}^{L} E[f(Z_l(t)) - f(Z_{l-1}(t))] + Ef(Z_{l_0}(t)).
For appropriate choices of n_0, n_l, and n_E, we define the estimators for the three terms above via
Q_E := (1/n_E) Σ_{i=1}^{n_E} (f(X^{[i]}(T)) - f(Z_L^{[i]}(T))),

42 Multi-level Monte Carlo: an unbiased estimator In our setting:
Ef(X(t)) = E[f(X(t)) - f(Z_L(t))] + Σ_{l=l_0+1}^{L} E[f(Z_l(t)) - f(Z_{l-1}(t))] + Ef(Z_{l_0}(t)).
For appropriate choices of n_0, n_l, and n_E, we define the estimators for the three terms above via
Q_E := (1/n_E) Σ_{i=1}^{n_E} (f(X^{[i]}(T)) - f(Z_L^{[i]}(T))),
Q_l := (1/n_l) Σ_{i=1}^{n_l} (f(Z_l^{[i]}(T)) - f(Z_{l-1}^{[i]}(T))), for l ∈ {l_0 + 1, ..., L},

43 Multi-level Monte Carlo: an unbiased estimator In our setting:
Ef(X(t)) = E[f(X(t)) - f(Z_L(t))] + Σ_{l=l_0+1}^{L} E[f(Z_l(t)) - f(Z_{l-1}(t))] + Ef(Z_{l_0}(t)).
For appropriate choices of n_0, n_l, and n_E, we define the estimators for the three terms above via
Q_E := (1/n_E) Σ_{i=1}^{n_E} (f(X^{[i]}(T)) - f(Z_L^{[i]}(T))),
Q_l := (1/n_l) Σ_{i=1}^{n_l} (f(Z_l^{[i]}(T)) - f(Z_{l-1}^{[i]}(T))), for l ∈ {l_0 + 1, ..., L},
Q_{l_0} := (1/n_0) Σ_{i=1}^{n_0} f(Z_{l_0}^{[i]}(T)),
and note that Q := Q_E + Σ_{l=l_0+1}^{L} Q_l + Q_{l_0} is an unbiased estimator for Ef(X(T)).

44 Multi-level Monte Carlo: an unbiased estimator In our setting:
Ef(X(t)) = E[f(X(t)) - f(Z_L(t))] + Σ_{l=l_0+1}^{L} E[f(Z_l(t)) - f(Z_{l-1}(t))] + Ef(Z_{l_0}(t)).
For appropriate choices of n_0, n_l, and n_E, we define the estimators for the three terms above via
Q_E := (1/n_E) Σ_{i=1}^{n_E} (f(X^{[i]}(T)) - f(Z_L^{[i]}(T))),
Q_l := (1/n_l) Σ_{i=1}^{n_l} (f(Z_l^{[i]}(T)) - f(Z_{l-1}^{[i]}(T))), for l ∈ {l_0 + 1, ..., L},
Q_{l_0} := (1/n_0) Σ_{i=1}^{n_0} f(Z_{l_0}^{[i]}(T)),
and note that Q := Q_E + Σ_{l=l_0+1}^{L} Q_l + Q_{l_0} is an unbiased estimator for Ef(X(T)). So what is the coupling and the variance of the estimator?
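For intuition on how the sample sizes n_l might be chosen, here is the standard multi-level allocation in the style of Giles (not necessarily the exact choice used in the paper): with V_l and C_l denoting the variance and expected cost of one sample of the level-l correction, minimizing total cost subject to a variance budget of order ε² gives

```latex
% Standard MLMC allocation (Giles-style). Minimize \sum_l n_l C_l subject to
% \sum_l V_l / n_l \le \epsilon^2; Lagrange multipliers give
\[
  n_l \;\propto\; \sqrt{V_l / C_l},
  \qquad\text{so that}\qquad
  \operatorname{Var}(Q) \;=\; \sum_l \frac{V_l}{n_l} \;=\; O(\epsilon^2).
\]
```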

45 How do we generate processes simultaneously Suppose I want to generate: 1. A Poisson process with intensity 13.1. 2. A Poisson process with intensity 13.

46 How do we generate processes simultaneously Suppose I want to generate: 1. A Poisson process with intensity 13.1. 2. A Poisson process with intensity 13. We could let Y_1 and Y_2 be independent, unit-rate Poisson processes, and set Z_13.1(t) = Y_1(13.1 t), Z_13(t) = Y_2(13 t). Using this representation, these processes are independent and, hence, not coupled.

47 How do we generate processes simultaneously Suppose I want to generate: 1. A Poisson process with intensity 13.1. 2. A Poisson process with intensity 13. We could let Y_1 and Y_2 be independent, unit-rate Poisson processes, and set Z_13.1(t) = Y_1(13.1 t), Z_13(t) = Y_2(13 t). Using this representation, these processes are independent and, hence, not coupled. The variance of the difference is large:
Var(Z_13.1(t) - Z_13(t)) = Var(Y_1(13.1 t)) + Var(Y_2(13 t)) = 26.1 t.

48 How do we generate processes simultaneously Suppose I want to generate: 1. A Poisson process with intensity 13.1. 2. A Poisson process with intensity 13.

49 How do we generate processes simultaneously Suppose I want to generate: 1. A Poisson process with intensity 13.1. 2. A Poisson process with intensity 13. We could let Y_1 and Y_2 be independent unit-rate Poisson processes, and set Z_13.1(t) = Y_1(13 t) + Y_2(0.1 t), Z_13(t) = Y_1(13 t). The variance of the difference is much smaller:
Var(Z_13.1(t) - Z_13(t)) = Var(Y_2(0.1 t)) = 0.1 t.
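A small simulation (illustrative, not from the talk) that checks the two constructions above numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
t, n = 1.0, 100_000

# Independent construction: Z_13.1 = Y1(13.1 t), Z_13 = Y2(13 t).
indep_diff = rng.poisson(13.1 * t, size=n) - rng.poisson(13.0 * t, size=n)

# Coupled construction: share Y1(13 t); add an independent Y2(0.1 t) to the faster process.
common = rng.poisson(13.0 * t, size=n)
coupled_diff = (common + rng.poisson(0.1 * t, size=n)) - common   # = Y2(0.1 t)

print(np.var(indep_diff))    # approximately 26.1 * t
print(np.var(coupled_diff))  # approximately 0.1 * t
```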

50 How do we generate processes simultaneously More generally, suppose we want 1. a non-homogeneous Poisson process with intensity f(t), and 2. a non-homogeneous Poisson process with intensity g(t).

51 How do we generate processes simultaneously More generally, suppose we want 1. a non-homogeneous Poisson process with intensity f(t), and 2. a non-homogeneous Poisson process with intensity g(t). We can let Y_1, Y_2, and Y_3 be independent, unit-rate Poisson processes and define (writing a ∧ b = min{a, b})
Z_f(t) = Y_1(∫_0^t f(s) ∧ g(s) ds) + Y_2(∫_0^t [f(s) - f(s) ∧ g(s)] ds),
Z_g(t) = Y_1(∫_0^t f(s) ∧ g(s) ds) + Y_3(∫_0^t [g(s) - f(s) ∧ g(s)] ds),

52 How do we generate processes simultaneously More generally, suppose we want 1. a non-homogeneous Poisson process with intensity f(t), and 2. a non-homogeneous Poisson process with intensity g(t). We can let Y_1, Y_2, and Y_3 be independent, unit-rate Poisson processes and define (writing a ∧ b = min{a, b})
Z_f(t) = Y_1(∫_0^t f(s) ∧ g(s) ds) + Y_2(∫_0^t [f(s) - f(s) ∧ g(s)] ds),
Z_g(t) = Y_1(∫_0^t f(s) ∧ g(s) ds) + Y_3(∫_0^t [g(s) - f(s) ∧ g(s)] ds),
where we are using that, for example, Y_1(∫_0^t f(s) ∧ g(s) ds) + Y_2(∫_0^t [f(s) - f(s) ∧ g(s)] ds) is equal in distribution to Y(∫_0^t f(s) ds), where Y is a unit-rate Poisson process.

53 Back to our processes
X(t) = X(0) + Σ_k Y_k(∫_0^t λ_k(X(s)) ds) ζ_k,
Z(t) = X(0) + Σ_k Y_k(∫_0^t λ_k(Z ∘ η(s)) ds) ζ_k.

54 Back to our processes
X(t) = X(0) + Σ_k Y_k(∫_0^t λ_k(X(s)) ds) ζ_k,
Z(t) = X(0) + Σ_k Y_k(∫_0^t λ_k(Z ∘ η(s)) ds) ζ_k.
Now couple:
X(t) = X(0) + Σ_k Y_{k,1}(∫_0^t λ_k(X(s)) ∧ λ_k(Z_l ∘ η_l(s)) ds) ζ_k + Σ_k Y_{k,2}(∫_0^t [λ_k(X(s)) - λ_k(X(s)) ∧ λ_k(Z_l ∘ η_l(s))] ds) ζ_k,
Z_l(t) = Z_l(0) + Σ_k Y_{k,1}(∫_0^t λ_k(X(s)) ∧ λ_k(Z_l ∘ η_l(s)) ds) ζ_k + Σ_k Y_{k,3}(∫_0^t [λ_k(Z_l ∘ η_l(s)) - λ_k(X(s)) ∧ λ_k(Z_l ∘ η_l(s))] ds) ζ_k.

55 Back to our processes
X(t) = X(0) + Σ_k Y_k(∫_0^t λ_k(X(s)) ds) ζ_k,
Z(t) = X(0) + Σ_k Y_k(∫_0^t λ_k(Z ∘ η(s)) ds) ζ_k.
Now couple:
X(t) = X(0) + Σ_k Y_{k,1}(∫_0^t λ_k(X(s)) ∧ λ_k(Z_l ∘ η_l(s)) ds) ζ_k + Σ_k Y_{k,2}(∫_0^t [λ_k(X(s)) - λ_k(X(s)) ∧ λ_k(Z_l ∘ η_l(s))] ds) ζ_k,
Z_l(t) = Z_l(0) + Σ_k Y_{k,1}(∫_0^t λ_k(X(s)) ∧ λ_k(Z_l ∘ η_l(s)) ds) ζ_k + Σ_k Y_{k,3}(∫_0^t [λ_k(Z_l ∘ η_l(s)) - λ_k(X(s)) ∧ λ_k(Z_l ∘ η_l(s))] ds) ζ_k.
Algorithm for simulation is equivalent to next reaction method or Gillespie.

56 For approximate processes
Z_l(t) = Z_l(0) + Σ_k Y_{k,1}(∫_0^t λ_k(Z_l ∘ η_l(s)) ∧ λ_k(Z_{l-1} ∘ η_{l-1}(s)) ds) ζ_k + Σ_k Y_{k,2}(∫_0^t [λ_k(Z_l ∘ η_l(s)) - λ_k(Z_l ∘ η_l(s)) ∧ λ_k(Z_{l-1} ∘ η_{l-1}(s))] ds) ζ_k,
Z_{l-1}(t) = Z_{l-1}(0) + Σ_k Y_{k,1}(∫_0^t λ_k(Z_l ∘ η_l(s)) ∧ λ_k(Z_{l-1} ∘ η_{l-1}(s)) ds) ζ_k + Σ_k Y_{k,3}(∫_0^t [λ_k(Z_{l-1} ∘ η_{l-1}(s)) - λ_k(Z_l ∘ η_l(s)) ∧ λ_k(Z_{l-1} ∘ η_{l-1}(s))] ds) ζ_k.
Algorithm for simulation is equivalent to τ-leaping.
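A minimal sketch of this coupling between two tau-leaped levels, assuming nested grids with h_{l-1} = M h_l; boundary handling (a final partial step, negative copy numbers) is ignored and the names are illustrative.

```python
import numpy as np

def coupled_tau_leap(x0, zetas, propensities, T, h_fine, M=3,
                     rng=np.random.default_rng()):
    """Simulate the coupled pair (Z_l, Z_{l-1}) of tau-leaped paths.

    The fine level uses step h_fine, the coarse level uses M * h_fine.  Each channel
    is split into a common part, driven by the minimum of the two frozen propensities,
    plus independent residual parts, as in the coupling above."""
    zetas = [np.asarray(z) for z in zetas]
    zf = np.array(x0, dtype=float)          # fine-level state   Z_l
    zc = np.array(x0, dtype=float)          # coarse-level state Z_{l-1}
    n_coarse = int(np.ceil(T / (M * h_fine)))
    for _ in range(n_coarse):
        rates_c = np.array([max(l(zc), 0.0) for l in propensities])  # frozen on coarse step
        for _ in range(M):                  # M fine steps per coarse step
            rates_f = np.array([max(l(zf), 0.0) for l in propensities])
            common = np.minimum(rates_f, rates_c)
            n_common = rng.poisson(common * h_fine)            # fires in both paths
            n_f_only = rng.poisson((rates_f - common) * h_fine)  # fine-only residual
            n_c_only = rng.poisson((rates_c - common) * h_fine)  # coarse-only residual
            for k, zeta in enumerate(zetas):
                zf += (n_common[k] + n_f_only[k]) * zeta
                zc += (n_common[k] + n_c_only[k]) * zeta
    return zf, zc
```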

57 Multi-level Monte Carlo: chemical kinetic setting Can prove:
Theorem (Anderson, Higham 2011)¹. Suppose (X, Z_l) satisfy the coupling. Then
sup_{t ≤ T} E|X(t) - Z_l(t)|² ≤ C_1(T) N^ρ h_l + C_2(T) h_l².
¹ David F. Anderson and Desmond J. Higham, Multi-level Monte Carlo for stochastically modeled chemical kinetic systems. To appear in SIAM: Modeling and Simulation. Available at arxiv.org and on the author's website.

58 Multi-level Monte Carlo: chemical kinetic setting Can prove:
Theorem (Anderson, Higham 2011)¹. Suppose (X, Z_l) satisfy the coupling. Then
sup_{t ≤ T} E|X(t) - Z_l(t)|² ≤ C_1(T) N^ρ h_l + C_2(T) h_l².
Theorem (Anderson, Higham 2011). Suppose (Z_l, Z_{l-1}) satisfy the coupling. Then
sup_{t ≤ T} E|Z_l(t) - Z_{l-1}(t)|² ≤ C_1(T) N^ρ h_l + C_2(T) h_l².
¹ David F. Anderson and Desmond J. Higham, Multi-level Monte Carlo for stochastically modeled chemical kinetic systems. To appear in SIAM: Modeling and Simulation. Available at arxiv.org and on the author's website.

59 Multi-level Monte Carlo: an unbiased estimator For well-chosen n_0, n_l, and n_E, we have
Var(Q) = Var(Q_E + Σ_{l=l_0+1}^L Q_l + Q_{l_0}) = O(ε²),
with Comp. cost = ε^-2 (N^ρ h_L + h_L²) N + ε^-2 h_{l_0}^-1, plus lower-order terms involving ln(ε), N^ρ, M, and h_{l_0}.

60 Multi-level Monte Carlo: an unbiased estimator Some observations: 1. Weak error plays no role in the analysis: free to choose h_L.

61 Multi-level Monte Carlo: an unbiased estimator Some observations: 1. Weak error plays no role in the analysis: free to choose h_L. 2. Common problems associated with tau-leaping, such as negativity of species numbers, do not matter: just define the process in a sensible way.

62 Multi-level Monte Carlo: an unbiased estimator Some observations: 1. Weak error plays no role in the analysis: free to choose h_L. 2. Common problems associated with tau-leaping, such as negativity of species numbers, do not matter: just define the process in a sensible way. 3. The method is unbiased.

63 Example Consider a model of gene transcription and translation: G →(25) G + M (transcription), M →(1000) M + P (translation), P + P →(0.001) D (dimerization), M →(0.1) ∅ (degradation of mRNA), P →(1) ∅ (degradation of protein). Suppose we: 1. initialize with G = 1, M = 0, P = 0, D = 0, 2. want to estimate the expected number of dimers at time T = 1, 3. to an accuracy of ± 1.0 with 95% confidence.

64 Example Method: Exact algorithm with crude Monte Carlo.
Approximation      # paths       CPU time
3,714.2 ± 1.0      4,740,000     149,000 CPU s (41 hours!)

65 Example Method: Exact algorithm with crude Monte Carlo.
Approximation      # paths       CPU time
3,714.2 ± 1.0      4,740,000     149,000 CPU s (41 hours!)
Method: Euler tau-leaping with crude Monte Carlo.
Step-size    Approximation     # paths       CPU time
h = 3^-7     3,712.3 ± 1.0     4,750,000     13,374.6 s
h = 3^-6     3,707.5 ± 1.0     4,750,000     6,27.9 s
h = 3^-5     3,693.4 ± 1.0     4,700,000     2,83.9 s
h = 3^-4     3,654.6 ± 1.0     4,650,000     1,219.0 s

66 Example Method: Exact algorithm with crude Monte Carlo.
Approximation      # paths       CPU time
3,714.2 ± 1.0      4,740,000     149,000 CPU s (41 hours!)
Method: Euler tau-leaping with crude Monte Carlo.
Step-size    Approximation     # paths       CPU time
h = 3^-7     3,712.3 ± 1.0     4,750,000     13,374.6 s
h = 3^-6     3,707.5 ± 1.0     4,750,000     6,27.9 s
h = 3^-5     3,693.4 ± 1.0     4,700,000     2,83.9 s
h = 3^-4     3,654.6 ± 1.0     4,650,000     1,219.0 s
Method: unbiased MLMC with l_0 = 2, and M and L detailed below.
Step-size parameters    Approximation     CPU time
M = 3, L = 6            3,713.9 ± 1.0     1,63.3 s
M = 3, L = 5            3,714.7 ± 1.0     1,114.9 s
M = 3, L = 4            3,714.2 ± 1.0     1,656.6 s
M = 4, L =              ± 1.0             1,334.8 s
M = 4, L = 5            3,713.8 ± 1.0     1,14.9 s

67 Example Method: Exact algorithm with crude Monte Carlo.
Approximation      # paths       CPU time
3,714.2 ± 1.0      4,740,000     149,000 CPU s (41 hours!)
Method: Euler tau-leaping with crude Monte Carlo.
Step-size    Approximation     # paths       CPU time
h = 3^-7     3,712.3 ± 1.0     4,750,000     13,374.6 s
h = 3^-6     3,707.5 ± 1.0     4,750,000     6,27.9 s
h = 3^-5     3,693.4 ± 1.0     4,700,000     2,83.9 s
h = 3^-4     3,654.6 ± 1.0     4,650,000     1,219.0 s
Method: unbiased MLMC with l_0 = 2, and M and L detailed below.
Step-size parameters    Approximation     CPU time
M = 3, L = 6            3,713.9 ± 1.0     1,63.3 s
M = 3, L = 5            3,714.7 ± 1.0     1,114.9 s
M = 3, L = 4            3,714.2 ± 1.0     1,656.6 s
M = 4, L =              ± 1.0             1,334.8 s
M = 4, L = 5            3,713.8 ± 1.0     1,14.9 s
The exact algorithm with crude Monte Carlo demanded 140 times more CPU time than our unbiased MLMC estimator!

68 Example Method: Exact algorithm with crude Monte Carlo.
Approximation      # paths       CPU time
3,714.2 ± 1.0      4,740,000     149,000 CPU s (41 hours!)
Unbiased multi-level Monte Carlo with M = 3, L = 5, and l_0 = 2, broken down by level over the coupled pairs (X, Z_{3^-5}), (Z_{3^-5}, Z_{3^-4}), (Z_{3^-4}, Z_{3^-3}), (Z_{3^-3}, Z_{3^-2}), together with a tau-leap estimator at h = 3^-2, reporting the # of paths, the CPU time, the variance of the estimator, and the # of updates at each level, and the totals.

69 Some conclusions about this method 1. Gillespie's algorithm is by far the most common way to compute expectations: 1.1 Means. 1.2 Variances. 1.3 Probabilities. 2. The new method (MLMC) also performs this task with no bias (exact).

70 Some conclusions about this method 1. Gillespie's algorithm is by far the most common way to compute expectations: 1.1 Means. 1.2 Variances. 1.3 Probabilities. 2. The new method (MLMC) also performs this task with no bias (exact). 3. Will be at worst the same speed as Gillespie (exact algorithm + crude Monte Carlo). 4. Will commonly be many orders of magnitude faster.

71 Some conclusions about this method 1. Gillespie's algorithm is by far the most common way to compute expectations: 1.1 Means. 1.2 Variances. 1.3 Probabilities. 2. The new method (MLMC) also performs this task with no bias (exact). 3. Will be at worst the same speed as Gillespie (exact algorithm + crude Monte Carlo). 4. Will commonly be many orders of magnitude faster. 5. Applicable to essentially all continuous time Markov chains:
X(t) = X(0) + Σ_k Y_k(∫_0^t λ_k(X(s)) ds) ζ_k.

72 Some conclusions about this method 1. Gillespie's algorithm is by far the most common way to compute expectations: 1.1 Means. 1.2 Variances. 1.3 Probabilities. 2. The new method (MLMC) also performs this task with no bias (exact). 3. Will be at worst the same speed as Gillespie (exact algorithm + crude Monte Carlo). 4. Will commonly be many orders of magnitude faster. 5. Applicable to essentially all continuous time Markov chains:
X(t) = X(0) + Σ_k Y_k(∫_0^t λ_k(X(s)) ds) ζ_k.
6. Con: is substantially harder to implement; good software is needed.

73 Some conclusions about this method 1. Gillespie's algorithm is by far the most common way to compute expectations: 1.1 Means. 1.2 Variances. 1.3 Probabilities. 2. The new method (MLMC) also performs this task with no bias (exact). 3. Will be at worst the same speed as Gillespie (exact algorithm + crude Monte Carlo). 4. Will commonly be many orders of magnitude faster. 5. Applicable to essentially all continuous time Markov chains:
X(t) = X(0) + Σ_k Y_k(∫_0^t λ_k(X(s)) ds) ζ_k.
6. Con: is substantially harder to implement; good software is needed. 7. Makes no use of any specific structure or scaling in the problem.

74 Another example: Viral infection Let 1. T = viral template. 2. G = viral genome. 3. S = viral structure. 4. V = virus. Reactions:
R1  T + stuff →(κ_1) T + G    κ_1 = 1
R2  G →(κ_2) T                κ_2 = 0.025
R3  T + stuff →(κ_3) T + S    κ_3 = 1000
R4  T →(κ_4) ∅                κ_4 = 0.25
R5  S →(κ_5) ∅                κ_5 = 2
R6  G + S →(κ_6) V            κ_6 =
R. Srivastava, L. You, J. Summers, and J. Yin, J. Theoret. Biol., 2002. E. Haseltine and J. Rawlings, J. Chem. Phys., 2002. K. Ball, T. Kurtz, L. Popovic, and G. Rempala, Annals of Applied Probability, 2006. W. E, D. Liu, and E. Vanden-Eijnden, J. Comput. Phys., 2006.

75 Another example: Viral infection Stochastic equations for X = (X_G, X_S, X_T, X_V) are
X_1(t) = X_1(0) + Y_1(∫_0^t X_3(s) ds) - Y_2(0.025 ∫_0^t X_1(s) ds) - Y_6(κ_6 ∫_0^t X_1(s) X_2(s) ds)
X_2(t) = X_2(0) + Y_3(1000 ∫_0^t X_3(s) ds) - Y_5(2 ∫_0^t X_2(s) ds) - Y_6(κ_6 ∫_0^t X_1(s) X_2(s) ds)
X_3(t) = X_3(0) + Y_2(0.025 ∫_0^t X_1(s) ds) - Y_4(0.25 ∫_0^t X_3(s) ds)
X_4(t) = X_4(0) + Y_6(κ_6 ∫_0^t X_1(s) X_2(s) ds).

76 Another example: Viral infection Reactions:
R1  T + stuff →(κ_1) T + G    κ_1 = 1
R2  G →(κ_2) T                κ_2 = 0.025
R3  T + stuff →(κ_3) T + S    κ_3 = 1000
R4  T →(κ_4) ∅                κ_4 = 0.25
R5  S →(κ_5) ∅                κ_5 = 2
R6  G + S →(κ_6) V            κ_6 =
If T > 0, reactions 3 and 5 are much faster than the others. It looks like S is approximately Poisson(500 T).

77 Another example: Viral infection Reactions:
R1  T + stuff →(κ_1) T + G    κ_1 = 1
R2  G →(κ_2) T                κ_2 = 0.025
R3  T + stuff →(κ_3) T + S    κ_3 = 1000
R4  T →(κ_4) ∅                κ_4 = 0.25
R5  S →(κ_5) ∅                κ_5 = 2
R6  G + S →(κ_6) V            κ_6 =
If T > 0, reactions 3 and 5 are much faster than the others. It looks like S is approximately Poisson(500 T). Can average out to get an approximate process Z(t).

78 Another example: Viral infection The approximate process satisfies
Z_1(t) = X_1(0) + Y_1(∫_0^t Z_3(s) ds) - Y_2(0.025 ∫_0^t Z_1(s) ds) - Y_6(∫_0^t Λ_6(Z(s)) ds)
Z_3(t) = X_3(0) + Y_2(0.025 ∫_0^t Z_1(s) ds) - Y_4(0.25 ∫_0^t Z_3(s) ds)
Z_4(t) = X_4(0) + Y_6(∫_0^t Λ_6(Z(s)) ds),    (1)
where Λ_6(Z(s)) is the averaged intensity of reaction 6 (proportional to Z_1(s) Z_3(s)). Now use
Ef(X(t)) = E[f(X(t)) - f(Z(t))] + Ef(Z(t)).

79 Another example: Viral infection
X(t) = X(0) + Y_{1,1}(∫_0^t min{X_3(s), Z_3(s)} ds) ζ_1 + Y_{1,2}(∫_0^t [X_3(s) - min{X_3(s), Z_3(s)}] ds) ζ_1
 + Y_{2,1}(0.025 ∫_0^t min{X_1(s), Z_1(s)} ds) ζ_2 + Y_{2,2}(0.025 ∫_0^t [X_1(s) - min{X_1(s), Z_1(s)}] ds) ζ_2
 + Y_3(1000 ∫_0^t X_3(s) ds) ζ_3
 + Y_{4,1}(0.25 ∫_0^t min{X_3(s), Z_3(s)} ds) ζ_4 + Y_{4,2}(0.25 ∫_0^t [X_3(s) - min{X_3(s), Z_3(s)}] ds) ζ_4
 + Y_5(2 ∫_0^t X_2(s) ds) ζ_5
 + Y_{6,1}(∫_0^t min{λ_6(X(s)), Λ_6(Z(s))} ds) ζ_6 + Y_{6,2}(∫_0^t [λ_6(X(s)) - min{λ_6(X(s)), Λ_6(Z(s))}] ds) ζ_6,
Z(t) = Z(0) + Y_{1,1}(∫_0^t min{X_3(s), Z_3(s)} ds) ζ_1 + Y_{1,3}(∫_0^t [Z_3(s) - min{X_3(s), Z_3(s)}] ds) ζ_1
 + Y_{2,1}(0.025 ∫_0^t min{X_1(s), Z_1(s)} ds) ζ_2 + Y_{2,3}(0.025 ∫_0^t [Z_1(s) - min{X_1(s), Z_1(s)}] ds) ζ_2
 + Y_{4,1}(0.25 ∫_0^t min{X_3(s), Z_3(s)} ds) ζ_4 + Y_{4,3}(0.25 ∫_0^t [Z_3(s) - min{X_3(s), Z_3(s)}] ds) ζ_4
 + Y_{6,1}(∫_0^t min{λ_6(X(s)), Λ_6(Z(s))} ds) ζ_6 + Y_{6,3}(∫_0^t [Λ_6(Z(s)) - min{λ_6(X(s)), Λ_6(Z(s))}] ds) ζ_6.

80 Another example: Viral infection Suppose we want E X_virus(2), given T(0) = 1 and all others zero.
Method: Exact algorithm with crude Monte Carlo.
Approximation    # paths     CPU time
± 0.7            750,000     24,800 CPU s

81 Another example: Viral infection Suppose we want E X_virus(2), given T(0) = 1 and all others zero.
Method: Exact algorithm with crude Monte Carlo.
Approximation    # paths     CPU time
± 0.7            750,000     24,800 CPU s
Method: Ef(X(t)) = E[f(X(t)) - f(Z(t))] + Ef(Z(t)).
Approximation    CPU time
± 0.7            1,118.5 CPU s
Exact + crude Monte Carlo used: 1. 6 times more total steps. 2. More than 20 times more CPU time.

82 Mathematical Analysis We had X(t) = X(0) + Σ_k Y_k(∫_0^t λ_k(X(s)) ds) ζ_k. Assumed Σ_k λ_k(X(0)) ≈ N ≫ 1. There are therefore two extreme parameters floating around our models: 1. Some parameter N ≫ 1, causing Σ_k λ_k ≫ 1 (inherent to the model). 2. h, the stepsize (inherent to the approximation). To quantify errors, we need to account for both.

83 Mathematical Analysis: Scaling in style of Thomas Kurtz For each species i, define the normalized abundance X_i^N(t) = N^{-α_i} X_i(t), where α_i should be selected so that X_i^N = O(1).

84 Mathematical Analysis: Scaling in style of Thomas Kurtz For each species i, define the normalized abundance X_i^N(t) = N^{-α_i} X_i(t), where α_i should be selected so that X_i^N = O(1). Rate constants κ_k may also vary over several orders of magnitude. We write κ_k = κ'_k N^{β_k}, where the β_k are selected so that κ'_k = O(1).

85 Mathematical Analysis: Scaling in style of Thomas Kurtz For each species i, define the normalized abundance X_i^N(t) = N^{-α_i} X_i(t), where α_i should be selected so that X_i^N = O(1). Rate constants κ_k may also vary over several orders of magnitude. We write κ_k = κ'_k N^{β_k}, where the β_k are selected so that κ'_k = O(1). Eventually this leads to the scaled model
X^N(t) = X^N(0) + Σ_k Y_k(N^γ ∫_0^t N^{β_k + α·ν_k - γ} λ_k(X^N(s)) ds) ζ_k^N.
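As a concrete illustration (not from the slides), consider the binary reaction A + B → C under the classical scaling, where all abundances are of order N (take α_i = 1) and the rate constant scales as κ = κ' N^{-1} (take β = -1), with γ = 0:

```latex
% With X_i = N X_i^N (alpha_i = 1) and kappa = kappa' N^{-1} (beta = -1),
% the mass-action propensity kappa X_A X_B becomes
\[
  \kappa X_A X_B \;=\; \kappa' N^{-1}\,(N X_A^N)(N X_B^N) \;=\; N\,\kappa' X_A^N X_B^N ,
\]
% so jumps of size N^{-1} occur at a rate of order N: this is the classical scaling
% under which X^N converges to the deterministic mass-action ODE as N grows.
```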

86 Results Let the ρ_k satisfy |ζ_k^N| ≈ N^{ρ_k}, set ρ = min{ρ_k}, and write
X^N(t) = X^N(0) + Σ_k Y_k(N^γ ∫_0^t N^{c_k} λ_k(X^N(s)) ds) ζ_k^N.
Theorem (A., Higham 2011). Suppose (Z_l^N, Z_{l-1}^N) satisfy the coupling with Z_l^N(0) = Z_{l-1}^N(0). Then
sup_{t ≤ T} E|Z_l^N(t) - Z_{l-1}^N(t)|² ≤ C_1(T, N, γ) N^ρ h_l + C_2(T, N, γ) h_l².

87 Results Let the ρ_k satisfy |ζ_k^N| ≈ N^{ρ_k}, set ρ = min{ρ_k}, and write
X^N(t) = X^N(0) + Σ_k Y_k(N^γ ∫_0^t N^{c_k} λ_k(X^N(s)) ds) ζ_k^N.
Theorem (A., Higham 2011). Suppose (X^N, Z_l^N) satisfy the coupling with X^N(0) = Z_l^N(0). Then
sup_{t ≤ T} E|X^N(t) - Z_l^N(t)|² ≤ C_1(T, N, γ) N^ρ h_l + C_2(T, N, γ) h_l².

88 Flavor of Proof Theorem (A., Higham 2011). Suppose (X^N, Z_l^N) satisfy the coupling with X^N(0) = Z_l^N(0). Then
sup_{t ≤ T} E|X^N(t) - Z_l^N(t)|² ≤ C_1(T, N, γ) N^ρ h_l + C_2(T, N, γ) h_l².
X^N(t) = X^N(0) + Σ_k Y_{k,1}(N^γ ∫_0^t N^{c_k} [λ_k(X^N(s)) ∧ λ_k(Z_l^N ∘ η_l(s))] ds) ζ_k^N + Σ_k Y_{k,2}(N^γ ∫_0^t N^{c_k} [λ_k(X^N(s)) - λ_k(X^N(s)) ∧ λ_k(Z_l^N ∘ η_l(s))] ds) ζ_k^N
Z_l^N(t) = Z_l^N(0) + Σ_k Y_{k,1}(N^γ ∫_0^t N^{c_k} [λ_k(X^N(s)) ∧ λ_k(Z_l^N ∘ η_l(s))] ds) ζ_k^N + Σ_k Y_{k,3}(N^γ ∫_0^t N^{c_k} [λ_k(Z_l^N ∘ η_l(s)) - λ_k(X^N(s)) ∧ λ_k(Z_l^N ∘ η_l(s))] ds) ζ_k^N

89 Flavor of Proof So,
X^N(t) - Z_l^N(t) = Σ_k Y_{k,2}(N^γ ∫_0^t N^{c_k} [λ_k(X^N(s)) - λ_k(X^N(s)) ∧ λ_k(Z_l^N ∘ η_l(s))] ds) ζ_k^N - Σ_k Y_{k,3}(N^γ ∫_0^t N^{c_k} [λ_k(Z_l^N ∘ η_l(s)) - λ_k(X^N(s)) ∧ λ_k(Z_l^N ∘ η_l(s))] ds) ζ_k^N.

90 Flavor of Proof So,
X^N(t) - Z_l^N(t) = Σ_k Y_{k,2}(N^γ ∫_0^t N^{c_k} [λ_k(X^N(s)) - λ_k(X^N(s)) ∧ λ_k(Z_l^N ∘ η_l(s))] ds) ζ_k^N - Σ_k Y_{k,3}(N^γ ∫_0^t N^{c_k} [λ_k(Z_l^N ∘ η_l(s)) - λ_k(X^N(s)) ∧ λ_k(Z_l^N ∘ η_l(s))] ds) ζ_k^N.
Hence,
X^N(t) - Z_l^N(t) = M^N(t) + Σ_k N^γ N^{c_k} ζ_k^N ∫_0^t (λ_k(X^N(s)) - λ_k(Z_l^N ∘ η_l(s))) ds.
Now work.

91 Next problem: parameter sensitivities. Motivated by Jim Rawlings.

92 Next problem: parameter sensitivities. Motivated by Jim Rawlings. We have X^θ(t) = X^θ(0) + Σ_k Y_k(∫_0^t λ_k^θ(X^θ(s)) ds) ζ_k, and we define J(θ) = E[f(X^θ(t))].

93 Next problem: parameter sensitivities. Motivated by Jim Rawlings. We have X^θ(t) = X^θ(0) + Σ_k Y_k(∫_0^t λ_k^θ(X^θ(s)) ds) ζ_k, and we define J(θ) = E[f(X^θ(t))]. We want J'(θ) = (d/dθ) Ef(X^θ(t)).

94 Next problem: parameter sensitivities. Motivated by Jim Rawlings. We have X^θ(t) = X^θ(0) + Σ_k Y_k(∫_0^t λ_k^θ(X^θ(s)) ds) ζ_k, and we define J(θ) = E[f(X^θ(t))]. We want J'(θ) = (d/dθ) Ef(X^θ(t)). There are multiple methods. We consider finite differences: J'(θ) = (J(θ + ε) - J(θ))/ε + O(ε).

95 Next problem: parameter sensitivities. Noting that J'(θ) = (d/dθ) Ef(X^θ(t)) = (Ef(X^{θ+ε}(t)) - Ef(X^θ(t)))/ε + O(ε), the usual finite difference estimator is
D_R(ε) = ε^-1 [ (1/R) Σ_{i=1}^R f(X^{θ+ε}_{[i]}(t)) - (1/R) Σ_{j=1}^R f(X^θ_{[j]}(t)) ].

96 Next problem: parameter sensitivities. Noting that J'(θ) = (d/dθ) Ef(X^θ(t)) = (Ef(X^{θ+ε}(t)) - Ef(X^θ(t)))/ε + O(ε), the usual finite difference estimator is
D_R(ε) = ε^-1 [ (1/R) Σ_{i=1}^R f(X^{θ+ε}_{[i]}(t)) - (1/R) Σ_{j=1}^R f(X^θ_{[j]}(t)) ].
If generated independently, then Var(D_R(ε)) = O(R^-1 ε^-2).

97 Next problem: parameter sensitivities. Couple the processes:
X^{θ+ε}(t) = X^{θ+ε}(0) + Σ_k Y_{k,1}(∫_0^t λ_k^{θ+ε}(X^{θ+ε}(s)) ∧ λ_k^θ(X^θ(s)) ds) ζ_k + Σ_k Y_{k,2}(∫_0^t [λ_k^{θ+ε}(X^{θ+ε}(s)) - λ_k^{θ+ε}(X^{θ+ε}(s)) ∧ λ_k^θ(X^θ(s))] ds) ζ_k,
X^θ(t) = X^θ(0) + Σ_k Y_{k,1}(∫_0^t λ_k^{θ+ε}(X^{θ+ε}(s)) ∧ λ_k^θ(X^θ(s)) ds) ζ_k + Σ_k Y_{k,3}(∫_0^t [λ_k^θ(X^θ(s)) - λ_k^{θ+ε}(X^{θ+ε}(s)) ∧ λ_k^θ(X^θ(s))] ds) ζ_k.
Use: D_R(ε) = ε^-1 (1/R) Σ_{i=1}^R [ f(X^{θ+ε}_{[i]}(t)) - f(X^θ_{[i]}(t)) ].
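A minimal sketch of exact simulation of this coupled pair via Gillespie's method over the split reaction channels; lam(x, theta, k) stands in for the parameter-dependent propensities and all names are illustrative.

```python
import numpy as np

def coupled_pair(x0, zetas, lam, theta, eps, T, rng=np.random.default_rng()):
    """Exactly simulate the coupled pair (X^{theta+eps}(T), X^{theta}(T)).

    lam(x, theta, k) is the propensity of reaction k at state x and parameter theta.
    Each reaction k is split into a shared channel, firing at the minimum of the two
    propensities (it moves both chains), and two residual channels (one chain each)."""
    xp = np.array(x0, dtype=float)   # X^{theta+eps}
    xm = np.array(x0, dtype=float)   # X^{theta}
    t, K = 0.0, len(zetas)
    while True:
        rp = np.array([lam(xp, theta + eps, k) for k in range(K)])
        rm = np.array([lam(xm, theta, k) for k in range(K)])
        shared = np.minimum(rp, rm)
        rates = np.concatenate([shared, rp - shared, rm - shared])
        total = rates.sum()
        if total == 0.0:
            return xp, xm
        t += rng.exponential(1.0 / total)
        if t > T:
            return xp, xm
        j = rng.choice(3 * K, p=rates / total)
        k = j % K
        if j < 2 * K:                  # shared (j < K) or residual channel for theta+eps
            xp += zetas[k]
        if j < K or j >= 2 * K:        # shared (j < K) or residual channel for theta
            xm += zetas[k]
```

D_R(ε) is then ε^-1 times the sample mean of f(X^{θ+ε}_{[i]}(T)) - f(X^θ_{[i]}(T)) over R such coupled pairs.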

98 Next problem: parameter sensitivities. Theorem (Anderson, 2011)¹. Suppose (X^{θ+ε}, X^θ) satisfy the coupling. Then, for any T > 0 there is a C_{T,f} > 0 for which
sup_{t ≤ T} E[ (f(X^{θ+ε}(t)) - f(X^θ(t)))² ] ≤ C_{T,f} ε.
¹ David F. Anderson, An efficient Finite Difference Method for Parameter Sensitivities of Continuous Time Markov Chains. Submitted. Available at arxiv.org and on the author's website.

99 Next problem: parameter sensitivities. Theorem (Anderson, 2011)¹. Suppose (X^{θ+ε}, X^θ) satisfy the coupling. Then, for any T > 0 there is a C_{T,f} > 0 for which
sup_{t ≤ T} E[ (f(X^{θ+ε}(t)) - f(X^θ(t)))² ] ≤ C_{T,f} ε.
This lowers the variance of the estimator from O(R^-1 ε^-2) to O(R^-1 ε^-1): lowered by an order of magnitude (in ε).
¹ David F. Anderson, An efficient Finite Difference Method for Parameter Sensitivities of Continuous Time Markov Chains. Submitted. Available at arxiv.org and on the author's website.

100 Parameter Sensitivities G →(2) G + M, M →(1) M + P, M →(k) ∅, P →(1) ∅. We want (∂/∂k) E[X^k_protein(3)], with k ≈ 1/4.

101 Parameter Sensitivities G →(2) G + M, M →(1) M + P, M →(k) ∅, P →(1) ∅. We want (∂/∂k) E[X^k_protein(3)], with k ≈ 1/4. Methods compared (each reporting # paths, approximation, # updates, and CPU time): Likelihood ratio (689,000 paths), Exact/Naive finite differences (246,000 paths), CRP (26,000 paths), and Coupled (4,000 paths).

102 Analysis Theorem. Suppose (X^{θ+ε}, X^θ) satisfy the coupling. Then, for any T > 0 there is a C_{T,f} > 0 for which
E[ sup_{t ≤ T} (f(X^{θ+ε}(t)) - f(X^θ(t)))² ] ≤ C_{T,f} ε.
Proof:

103 Analysis Theorem. Suppose (X^{θ+ε}, X^θ) satisfy the coupling. Then, for any T > 0 there is a C_{T,f} > 0 for which
E[ sup_{t ≤ T} (f(X^{θ+ε}(t)) - f(X^θ(t)))² ] ≤ C_{T,f} ε.
Proof: Key observation of the proof:
X^{θ+ε}(t) - X^θ(t) = M^{θ,ε}(t) + ∫_0^t [ F^{θ+ε}(X^{θ+ε}(s)) - F^θ(X^θ(s)) ] ds.
Now work on the martingale and the absolutely continuous part.

104 Thanks! References: 1. David F. Anderson and Desmond J. Higham, Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics, to appear in SIAM: Multiscale Modeling and Simulation. Available at arxiv.org and on my website. 2. David F. Anderson, Efficient Finite Difference Method for Parameter Sensitivities of Continuous time Markov Chains, submitted. Available at arxiv.org and on my website. Funding: NSF-DMS.


More information

FEEDBACK CONTROL OF GROWTH RATE AND SURFACE ROUGHNESS IN THIN FILM GROWTH. Yiming Lou and Panagiotis D. Christofides

FEEDBACK CONTROL OF GROWTH RATE AND SURFACE ROUGHNESS IN THIN FILM GROWTH. Yiming Lou and Panagiotis D. Christofides FEEDBACK CONTROL OF GROWTH RATE AND SURFACE ROUGHNESS IN THIN FILM GROWTH Yiming Lou and Panagiotis D. Christofides Department of Chemical Engineering University of California, Los Angeles IEEE 2003 Conference

More information

Lecture 8. October 22, Department of Biostatistics Johns Hopkins Bloomberg School of Public Health Johns Hopkins University.

Lecture 8. October 22, Department of Biostatistics Johns Hopkins Bloomberg School of Public Health Johns Hopkins University. Lecture 8 Department of Biostatistics Johns Hopkins Bloomberg School of Public Health Johns Hopkins University October 22, 2007 1 2 3 4 5 6 1 Define convergent series 2 Define the Law of Large Numbers

More information

Simulating ribosome biogenesis in replicating whole cells

Simulating ribosome biogenesis in replicating whole cells Simulating ribosome biogenesis in replicating whole cells Tyler Earnest / PI: Zan Luthey-Schulten (UIUC) NCSA Blue Waters Symposium for Petascale Science and Beyond: June 15, 2016 Multi-scale modeling

More information

The square root rule for adaptive importance sampling

The square root rule for adaptive importance sampling The square root rule for adaptive importance sampling Art B. Owen Stanford University Yi Zhou January 2019 Abstract In adaptive importance sampling, and other contexts, we have unbiased and uncorrelated

More information

dans les modèles à vraisemblance non explicite par des algorithmes gradient-proximaux perturbés

dans les modèles à vraisemblance non explicite par des algorithmes gradient-proximaux perturbés Inférence pénalisée dans les modèles à vraisemblance non explicite par des algorithmes gradient-proximaux perturbés Gersende Fort Institut de Mathématiques de Toulouse, CNRS and Univ. Paul Sabatier Toulouse,

More information

EXAMINATIONS OF THE HONG KONG STATISTICAL SOCIETY

EXAMINATIONS OF THE HONG KONG STATISTICAL SOCIETY EXAMINATIONS OF THE HONG KONG STATISTICAL SOCIETY HIGHER CERTIFICATE IN STATISTICS, 2013 MODULE 5 : Further probability and inference Time allowed: One and a half hours Candidates should answer THREE questions.

More information

1Non Linear mixed effects ordinary differential equations models. M. Prague - SISTM - NLME-ODE September 27,

1Non Linear mixed effects ordinary differential equations models. M. Prague - SISTM - NLME-ODE September 27, GDR MaMoVi 2017 Parameter estimation in Models with Random effects based on Ordinary Differential Equations: A bayesian maximum a posteriori approach. Mélanie PRAGUE, Daniel COMMENGES & Rodolphe THIÉBAUT

More information

Multilevel Sequential 2 Monte Carlo for Bayesian Inverse Problems

Multilevel Sequential 2 Monte Carlo for Bayesian Inverse Problems Jonas Latz 1 Multilevel Sequential 2 Monte Carlo for Bayesian Inverse Problems Jonas Latz Technische Universität München Fakultät für Mathematik Lehrstuhl für Numerische Mathematik jonas.latz@tum.de November

More information

Single cell experiments and Gillespie s algorithm

Single cell experiments and Gillespie s algorithm Single cell experiments and Gillespie s algorithm Fernand Hayot Department of Neurology, Mount Sinai School of Medicine, New York, NY, 10029 email: fernand.hayot@mssm.edu August 18, 2008 1 Introduction

More information

The Monte Carlo EM method for the parameter estimation of biological models

The Monte Carlo EM method for the parameter estimation of biological models PASM 2 The Monte Carlo EM method for the parameter estimation of biological models Alessio Angius Department of Computer Science, University of Torino, Torino, Italy András Horváth 2 Department of Computer

More information

When do diffusion-limited trajectories become memoryless?

When do diffusion-limited trajectories become memoryless? When do diffusion-limited trajectories become memoryless? Maciej Dobrzyński CWI (Center for Mathematics and Computer Science) Kruislaan 413, 1098 SJ Amsterdam, The Netherlands Abstract Stochastic description

More information

AARMS Homework Exercises

AARMS Homework Exercises 1 For the gamma distribution, AARMS Homework Exercises (a) Show that the mgf is M(t) = (1 βt) α for t < 1/β (b) Use the mgf to find the mean and variance of the gamma distribution 2 A well-known inequality

More information

System Biology - Deterministic & Stochastic Dynamical Systems

System Biology - Deterministic & Stochastic Dynamical Systems System Biology - Deterministic & Stochastic Dynamical Systems System Biology - Deterministic & Stochastic Dynamical Systems 1 The Cell System Biology - Deterministic & Stochastic Dynamical Systems 2 The

More information

Phenomena in high dimensions in geometric analysis, random matrices, and computational geometry Roscoff, France, June 25-29, 2012

Phenomena in high dimensions in geometric analysis, random matrices, and computational geometry Roscoff, France, June 25-29, 2012 Phenomena in high dimensions in geometric analysis, random matrices, and computational geometry Roscoff, France, June 25-29, 202 BOUNDS AND ASYMPTOTICS FOR FISHER INFORMATION IN THE CENTRAL LIMIT THEOREM

More information

Derivations for order reduction of the chemical master equation

Derivations for order reduction of the chemical master equation 2 TWMCC Texas-Wisconsin Modeling and Control Consortium 1 Technical report number 2006-02 Derivations for order reduction of the chemical master equation Ethan A. Mastny, Eric L. Haseltine, and James B.

More information

Multilevel Monte Carlo for Stochastic McKean-Vlasov Equations

Multilevel Monte Carlo for Stochastic McKean-Vlasov Equations Multilevel Monte Carlo for Stochastic McKean-Vlasov Equations Lukasz Szpruch School of Mathemtics University of Edinburgh joint work with Shuren Tan and Alvin Tse (Edinburgh) Lukasz Szpruch (University

More information

Accurate hybrid stochastic simulation of a system of coupled chemical or biochemical reactions

Accurate hybrid stochastic simulation of a system of coupled chemical or biochemical reactions THE JOURNAL OF CHEMICAL PHYSICS 122, 054103 2005 Accurate hybrid schastic simulation of a system of coupled chemical or biochemical reactions Howard Salis and Yiannis Kaznessis a) Department of Chemical

More information

CONTENTS. Preface List of Symbols and Notation

CONTENTS. Preface List of Symbols and Notation CONTENTS Preface List of Symbols and Notation xi xv 1 Introduction and Review 1 1.1 Deterministic and Stochastic Models 1 1.2 What is a Stochastic Process? 5 1.3 Monte Carlo Simulation 10 1.4 Conditional

More information

Stochastic Analogues to Deterministic Optimizers

Stochastic Analogues to Deterministic Optimizers Stochastic Analogues to Deterministic Optimizers ISMP 2018 Bordeaux, France Vivak Patel Presented by: Mihai Anitescu July 6, 2018 1 Apology I apologize for not being here to give this talk myself. I injured

More information

1. Introduction Systems evolving on widely separated time-scales represent a challenge for numerical simulations. A generic example is

1. Introduction Systems evolving on widely separated time-scales represent a challenge for numerical simulations. A generic example is COMM. MATH. SCI. Vol. 1, No. 2, pp. 385 391 c 23 International Press FAST COMMUNICATIONS NUMERICAL TECHNIQUES FOR MULTI-SCALE DYNAMICAL SYSTEMS WITH STOCHASTIC EFFECTS ERIC VANDEN-EIJNDEN Abstract. Numerical

More information

Approximate Bayesian computation: methods and applications for complex systems

Approximate Bayesian computation: methods and applications for complex systems Approximate Bayesian computation: methods and applications for complex systems Mark A. Beaumont, School of Biological Sciences, The University of Bristol, Bristol, UK 11 November 2015 Structure of Talk

More information