Limit theorems for discrete-time metapopulation models


1 MASCOS AustMS Meeting, October Page 1. Limit theorems for discrete-time metapopulation models. Phil Pollett (pkp), Department of Mathematics, The University of Queensland. AUSTRALIAN RESEARCH COUNCIL Centre of Excellence for Mathematics and Statistics of Complex Systems.

2 Collaborator: Fionnuala Buckley (MASCOS), Department of Mathematics, The University of Queensland.

3–11 Metapopulations. [Sequence of figures illustrating a metapopulation: occupied and empty habitat patches, colonization of empty patches, local extinction of occupied patches, and eventual total extinction.]

12–13 Mainland-island configuration. [Figures: a mainland-island patch configuration, with colonization from the mainland.]

14–15 Patch-occupancy models. We record the number n_t of occupied patches at each time t. A typical approach is to suppose that (n_t, t ≥ 0) is Markovian. Suppose that there are J patches. Each occupied patch becomes empty at rate e (the local extinction rate), colonization of empty patches occurs at rate c/J for each suitable pair (c is the colonization rate), and immigration from the mainland occurs at rate ν (the immigration rate).

16–17 A continuous-time stochastic model. The state space of the Markov chain (n_t, t ≥ 0) is S = {0, 1, ..., J} and the transitions are:

n → n + 1 at rate (ν + cn/J)(J − n)
n → n − 1 at rate en

This is an example of Feller's stochastic logistic (SL) model, studied in detail by J.V. Ross.

Ross, J.V. (2006) Stochastic models for mainland-island metapopulations in static and dynamic landscapes. Bulletin of Mathematical Biology 68.

Feller, W. (1939) Die Grundlagen der Volterraschen Theorie des Kampfes ums Dasein in wahrscheinlichkeitstheoretischer Behandlung. Acta Biotheoretica 5.
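The two transition rates above can be simulated directly with the standard Gillespie algorithm; the following is a minimal sketch (the function name and parameter values are ours, chosen only for illustration):

```python
import random

def simulate_sl(J, nu, c, e, n0, t_max, rng=random.Random(1)):
    """Gillespie simulation of Feller's stochastic logistic model.

    n -> n + 1 at rate (nu + c*n/J) * (J - n)   (colonization + immigration)
    n -> n - 1 at rate e * n                    (local extinction)
    Returns the sequence of (time, state) pairs up to time t_max.
    """
    t, n = 0.0, n0
    path = [(t, n)]
    while t < t_max:
        up = (nu + c * n / J) * (J - n)
        down = e * n
        total = up + down
        if total == 0:          # only possible if n = 0 and nu = 0: absorption
            break
        t += rng.expovariate(total)          # exponential holding time
        n += 1 if rng.random() < up / total else -1
        path.append((t, n))
    return path

path = simulate_sl(J=20, nu=0.1, c=1.0, e=0.5, n0=5, t_max=50.0)
assert all(0 <= n <= 20 for _, n in path)
```

With ν > 0 the chain keeps bouncing off 0, which is the qualitative difference between the mainland-island and island-only settings discussed later.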

18–19 Accounting for life cycle. Many species have life cycles (often annual) that consist of distinct phases, and the propensity for colonization and local extinction is different in each phase. Examples: the vernal pool fairy shrimp (Branchinecta lynchi) and the California linderiella (Linderiella occidentalis), both listed under the Endangered Species Act (USA); the Jasper Ridge population of the Bay checkerspot butterfly (Euphydryas editha bayensis), now extinct.

20–23 Colonization and extinction phases. For the butterfly, colonization is restricted to the adult phase and there is a greater propensity for local extinction in the non-adult phases. We will assume that colonization (C) and extinction (E) occur in separate, distinct phases. There are several ways to model this: a quasi-birth-death process with two phases; a non-homogeneous continuous-time Markov chain (cycling between two sets of transition rates); a discrete-time Markov chain.

24–25 A discrete-time Markovian model. Recall that there are J patches and that n_t is the number of occupied patches at time t. We suppose that (n_t, t = 0, 1, ...) is a discrete-time Markov chain taking values in S = {0, 1, ..., J} with 1-step transition matrix P = (p_ij) constructed as follows. The extinction and colonization phases are governed by their own transition matrices, E = (e_ij) and C = (c_ij). We let P = EC if the census is taken after the colonization phase, or P = CE if the census is taken after the extinction phase.

26 EC versus CE. [Timeline diagram contrasting P = EC (census after the colonization phase) with P = CE (census after the extinction phase), over times t − 1, t, t + 1, t + 2.]

27 Assumptions. The number of extinctions when there are i patches occupied follows a Bin(i, e) law (0 < e < 1):

e_{i,i−k} = (i choose k) e^k (1 − e)^{i−k},  k = 0, 1, ..., i

(with e_{ij} = 0 if j > i). The number of colonizations when there are i patches occupied follows a Bin(J − i, c_i) law:

c_{i,i+k} = (J − i choose k) c_i^k (1 − c_i)^{J−i−k},  k = 0, 1, ..., J − i

(with c_{ij} = 0 if j < i).
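The two phase matrices are simple binomial kernels and can be built directly; a sketch (function names are ours, and the island-model choice c_i = (i/J)c is used only as an example):

```python
from math import comb

def extinction_matrix(J, e):
    """E[i][j] = P(j patches occupied after extinction phase | i occupied)."""
    return [[comb(i, i - j) * e**(i - j) * (1 - e)**j if j <= i else 0.0
             for j in range(J + 1)] for i in range(J + 1)]

def colonization_matrix(J, ci):
    """C[i][j] = P(j occupied after colonization phase | i occupied)."""
    return [[comb(J - i, j - i) * ci(i)**(j - i) * (1 - ci(i))**(J - j)
             if j >= i else 0.0
             for j in range(J + 1)] for i in range(J + 1)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

J, e, c = 10, 0.3, 0.8
E = extinction_matrix(J, e)
C = colonization_matrix(J, lambda i: (i / J) * c)   # island model: c_i = (i/J)c
P = matmul(E, C)                                    # census after colonization
assert all(abs(sum(row) - 1) < 1e-12 for row in (E + C + P))
```

Each row of E, C and P sums to one, since each phase is a genuine (sub)population update.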

28–30 Examples of c_i.

c_i = (i/J)c, where c ∈ (0, 1] is the maximum colonization potential. (This entails c_{0j} = δ_{0j}, so that 0 is an absorbing state and {1, ..., J} is a communicating class.)

c_i = c, where c ∈ (0, 1] is a fixed colonization potential (mainland colonization dominant). (Now {0, 1, ..., J} is irreducible.)

Other possibilities include c_i = c(1 − (1 − c_1/c)^i), c_i = 1 − exp(−iβ/J) and c_i = d + (i/J)c, where c + d ∈ (0, 1] (mainland and island colonization).
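The reducibility claims can be checked directly on a small transition matrix; a sketch (our code, with illustrative parameters) comparing the island choice c_i = (i/J)c with the mainland choice c_i = c:

```python
from math import comb

def transition_matrix(J, e, ci):
    """P = EC: Bin(i, e) extinctions, then Bin(J - m, c_m) colonizations."""
    E = [[comb(i, i - j) * e**(i - j) * (1 - e)**j if j <= i else 0.0
          for j in range(J + 1)] for i in range(J + 1)]
    C = [[comb(J - i, j - i) * ci(i)**(j - i) * (1 - ci(i))**(J - j)
          if j >= i else 0.0
          for j in range(J + 1)] for i in range(J + 1)]
    return [[sum(E[i][m] * C[m][j] for m in range(J + 1))
             for j in range(J + 1)] for i in range(J + 1)]

J, e, c = 8, 0.3, 0.8
P_island = transition_matrix(J, e, lambda i: (i / J) * c)  # c_i = (i/J)c
P_main = transition_matrix(J, e, lambda i: c)              # c_i = c

assert abs(P_island[0][0] - 1) < 1e-12   # state 0 is absorbing: no recolonization
assert P_main[0][1] > 0                  # mainland model can leave state 0
```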

31 The proportion of occupied patches. Henceforth we shall be concerned with X_t^{(J)} = n_t/J, the proportion of occupied patches at time t.

32 Simulation: P = EC with c_i = c. [Plot of X_t^{(J)} against t for a mainland-island simulation with P = EC; J = 100, x_0 = 0.05, e = 0.01, c = 0.05.]

33–34 The proportion of occupied patches. Henceforth we shall be concerned with X_t^{(J)} = n_t/J, the proportion of occupied patches at time t. In the case c_i = c, where the distribution of n_t can be evaluated explicitly, we have established large-J deterministic and Gaussian approximations for (X_t^{(J)}).

Buckley, F.M. and Pollett, P.K. (2009) Analytical methods for a stochastic mainland-island metapopulation model. In: Anderssen, R.S., Braddock, R.D. and Newham, L.T.H. (eds) Proceedings of the 18th World IMACS Congress and MODSIM09 International Congress on Modelling and Simulation, Modelling and Simulation Society of Australia and New Zealand and International Association for Mathematics and Computers in Simulation, July 2009.

35 Simulation: P = EC with c_i = c. [The same simulation as slide 32: X_t^{(J)} against t; J = 100, x_0 = 0.05, e = 0.01, c = 0.05.]

36 Simulation: P = EC (deterministic path). [The same simulation with the deterministic path overlaid; J = 100, x_0 = 0.05, e = 0.01, c = 0.05.]

37 Simulation: P = EC (Gaussian approx.). [The same simulation with the deterministic path ± two standard deviations overlaid; J = 100, x_0 = 0.05, e = 0.01, c = 0.05.]

38–39 Gaussian approximations. Can we establish deterministic and Gaussian approximations for the basic J-patch models (where the distribution of n_t is not known explicitly)? Is there a general theory of convergence for discrete-time Markov chains that share the salient features of the patch-occupancy models presented here?

40 Simulation: P = EC with c_i = (i/J)c. [Plot of n_t against t for a metapopulation simulation with P = EC; J = 100, n_0 = 95, e = 0.3, c = 0.8.]

41–44 General structure: density dependence. We have a sequence of Markov chains (n_t^{(J)}) indexed by J, together with a function f such that

E(n_{t+1}^{(J)} | n_t^{(J)}) = J f(n_t^{(J)}/J),

or, more generally, a sequence of functions (f^{(J)}) such that E(n_{t+1}^{(J)} | n_t^{(J)}) = J f^{(J)}(n_t^{(J)}/J) and f^{(J)} converges uniformly to f. We then define (X_t^{(J)}) by X_t^{(J)} = n_t^{(J)}/J, so that equivalently

E(X_{t+1}^{(J)} | X_t^{(J)}) = f(X_t^{(J)}).

We hope that if X_0^{(J)} → x_0 as J → ∞, then (X_t^{(J)}) →_FDD (x_t), where (x_t) satisfies x_{t+1} = f(x_t) (the limiting deterministic model).
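Density dependence can be verified numerically for the EC model with c_i = (i/J)c: the exact conditional mean E(n_{t+1} | n_t = i), computed from P = EC, stays within a bounded distance of J f(i/J), where f is the EC-model map stated on slide 58, so f^{(J)} → f uniformly. A sketch (our code; parameters illustrative):

```python
from math import comb

J, e, c = 50, 0.3, 0.8
f = lambda x: (1 - e) * (1 + c - c * (1 - e) * x) * x   # EC-model limit map

E = [[comb(i, i - j) * e**(i - j) * (1 - e)**j if j <= i else 0.0
      for j in range(J + 1)] for i in range(J + 1)]
C = [[comb(J - i, j - i) * ((i / J) * c)**(j - i) * (1 - (i / J) * c)**(J - j)
      if j >= i else 0.0 for j in range(J + 1)] for i in range(J + 1)]
P = [[sum(E[i][m] * C[m][j] for m in range(J + 1)) for j in range(J + 1)]
     for i in range(J + 1)]

# exact conditional mean E(n_{t+1} | n_t = i) versus the density-dependent J f(i/J)
gap = max(abs(sum(j * P[i][j] for j in range(J + 1)) - J * f(i / J))
          for i in range(J + 1))
assert gap < 0.5   # f^{(J)} differs from f only by an O(1/J) correction
```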

45–49 General structure: density dependence. Next we suppose that there is a function s such that

Var(n_{t+1}^{(J)} | n_t^{(J)}) = J s(n_t^{(J)}/J),

or, more generally, a sequence of functions (s^{(J)}) such that Var(n_{t+1}^{(J)} | n_t^{(J)}) = J s^{(J)}(n_t^{(J)}/J) and s^{(J)} converges uniformly to s; equivalently, in terms of the proportions, J Var(X_{t+1}^{(J)} | X_t^{(J)}) = s(X_t^{(J)}). We then define (Z_t^{(J)}) by

Z_t^{(J)} = √J (X_t^{(J)} − x_t).

We hope that if √J (X_0^{(J)} − x_0) → z_0, then (Z_t^{(J)}) →_FDD (Z_t), where (Z_t) is a Gaussian Markov chain with Z_0 = z_0.

50–52 General structure: density dependence. What will be the form of this chain? Consider the simplest case, f^{(J)} = f and s^{(J)} = s. Formally, by Taylor's theorem,

f(X_t^{(J)}) − f(x_t) = (X_t^{(J)} − x_t) f′(x_t) + ⋯,

and so, since E(X_{t+1}^{(J)} | X_t^{(J)}) = f(X_t^{(J)}) and x_{t+1} = f(x_t),

E(Z_{t+1}^{(J)}) = √J (E(X_{t+1}^{(J)}) − f(x_t)) = f′(x_t) E(Z_t^{(J)}) + ⋯,

suggesting that E(Z_{t+1}) = a_t E(Z_t), where a_t = f′(x_t).

53–54 General structure: density dependence. We have

Var(X_{t+1}^{(J)}) = Var(E(X_{t+1}^{(J)} | X_t^{(J)})) + E(Var(X_{t+1}^{(J)} | X_t^{(J)})).

So, since J Var(X_{t+1}^{(J)} | X_t^{(J)}) = s(X_t^{(J)}),

Var(Z_{t+1}^{(J)}) = J Var(X_{t+1}^{(J)}) = J Var(f(X_t^{(J)})) + E(s(X_t^{(J)}))
                  ≈ a_t² J Var(X_t^{(J)}) + E(s(X_t^{(J)}))   (where a_t = f′(x_t))
                  = a_t² Var(Z_t^{(J)}) + E(s(X_t^{(J)})),

suggesting that Var(Z_{t+1}) = a_t² Var(Z_t) + s(x_t). And, since (Z_t) will be Markovian, ...

55–56 General structure: density dependence. And, since (Z_t) will be Markovian, we might hope that

Z_{t+1} = a_t Z_t + E_t   (Z_0 = z_0),

where a_t = f′(x_t) and the E_t (t = 0, 1, ...) are independent Gaussian random variables with E_t ~ N(0, s(x_t)). If x_eq is a fixed point of f, and √J (X_0^{(J)} − x_eq) → z_0, then we might hope that (Z_t^{(J)}) →_FDD (Z_t), where (Z_t) is the AR-1 process defined by Z_{t+1} = a Z_t + E_t, Z_0 = z_0, where a = f′(x_eq) and the E_t (t = 0, 1, ...) are iid Gaussian N(0, s(x_eq)) random variables.

57 Convergence of Markov chains. We can adapt results of Alan Karr for our purpose.

Karr, A.F. (1975) Weak convergence of a sequence of Markov chains. Probability Theory and Related Fields 33.

He considered a sequence of time-homogeneous Markov chains (X_t^{(n)}) on a general state space (Ω, F) = (E, 𝓔)^ℕ with transition kernels (K_n(x, A), x ∈ E, A ∈ 𝓔) and initial distributions (π_n(A), A ∈ 𝓔). He proved that if (i) π_n ⇒ π and (ii) x_n → x in E implies K_n(x_n, ·) ⇒ K(x, ·), then the corresponding probability measures (P_n^{π_n}) on (Ω, F) also converge: P_n^{π_n} ⇒ P^π.

58 J-patch models: convergence. Theorem. For the J-patch models with c_i = (i/J)c, if X_0^{(J)} → x_0 as J → ∞, then

(X_{t_1}^{(J)}, X_{t_2}^{(J)}, ..., X_{t_n}^{(J)}) →_P (x_{t_1}, x_{t_2}, ..., x_{t_n}),

for any finite sequence of times t_1, t_2, ..., t_n, where (x_t) is defined by the recursion x_{t+1} = f(x_t) with

EC-model: f(x) = (1 − e)(1 + c − c(1 − e)x)x
CE-model: f(x) = (1 − e)(1 + c − cx)x
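The deterministic recursion is easy to iterate; a short sketch (our code, with the parameter values used on the later simulation slides) checking that both maps settle on the positive equilibrium x* whose closed forms are given on slides 63–64:

```python
e, c = 0.3, 0.8

f_ec = lambda x: (1 - e) * (1 + c - c * (1 - e) * x) * x   # EC-model map
f_ce = lambda x: (1 - e) * (1 + c - c * x) * x             # CE-model map

def iterate(f, x0, n=500):
    """Run the recursion x_{t+1} = f(x_t) for n steps."""
    x = x0
    for _ in range(n):
        x = f(x)
    return x

x_ec = iterate(f_ec, 0.95)
x_ce = iterate(f_ce, 0.95)

# closed-form fixed points (slides 63-64)
assert abs(x_ec - (1 - e / (c * (1 - e))) / (1 - e)) < 1e-10
assert abs(x_ce - (1 - e / (c * (1 - e)))) < 1e-10
```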

59 J-patch models: convergence. Theorem. If, additionally, √J (X_0^{(J)} − x_0) → z_0, then (Z_t^{(J)}) →_FDD (Z_t), where (Z_t) is the Gaussian Markov chain defined by

Z_{t+1} = f′(x_t) Z_t + E_t   (Z_0 = z_0),

where the E_t (t = 0, 1, ...) are independent Gaussian random variables with E_t ~ N(0, s(x_t)) and

EC-model: s(x) = (1 − e)[c(1 − (1 − e)x)(1 − c(1 − e)x) + e(1 + c − 2c(1 − e)x)²]x
CE-model: s(x) = (1 − e)[e + c(1 − x)(1 − c(1 − e)x)]x

60 Simulation: P = EC. [Plot of X_t^{(J)} against t for a metapopulation simulation with P = EC; J = 100, x_0 = 0.95, e = 0.4, c = 0.8.]

61 Simulation: P = EC (deterministic path). [The same simulation with the deterministic path overlaid; J = 100, x_0 = 0.95, e = 0.4, c = 0.8.]

62 Simulation: P = EC (Gaussian approx.). [The same simulation with the deterministic path ± two standard deviations overlaid; J = 100, x_0 = 0.95, e = 0.4, c = 0.8.]

63–64 J-patch models: convergence. In both cases (EC and CE) the deterministic model has two equilibria, x = 0 and x = x*, given by

EC-model: x* = (1/(1 − e))(1 − e/(c(1 − e)))
CE-model: x* = 1 − e/(c(1 − e))

Indeed, we may write

f(x) = x(1 + r(1 − x/x*)),   r = c(1 − e) − e,

for both models (the form of the discrete-time logistic model), and we obtain the condition c > e/(1 − e) for x* to be positive and then stable.
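As a consistency check (our algebra, not on the slides), the logistic form can be verified for the CE model by direct expansion:

```latex
f(x) = (1-e)(1+c-cx)\,x
     = (1-e)(1+c)\,x - c(1-e)\,x^2
     = x\bigl(1 + r(1 - x/x^*)\bigr),
```

since 1 + r = (1 − e)(1 + c) and r/x* = c(1 − e) when x* = 1 − e/(c(1 − e)). From the logistic form, f′(x*) = 1 − r, so x* is stable precisely when 0 < r < 2; here r = c(1 − e) − e < 1 always, so positivity of r (that is, c > e/(1 − e)) already implies stability.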

65 J-patch models: convergence. Corollary. If c > e/(1 − e), so that x* given above is stable, and √J (X_0^{(J)} − x*) → z_0, then (Z_t^{(J)}) →_FDD (Z_t), where (Z_t) is the AR-1 process defined by

Z_{t+1} = (1 + e − c(1 − e)) Z_t + E_t   (Z_0 = z_0),

where the E_t (t = 0, 1, ...) are independent Gaussian N(0, σ²) random variables with

EC-model: σ² = (1 − e)[c(1 − (1 − e)x*)(1 − c(1 − e)x*) + e(1 + c − 2c(1 − e)x*)²]x*
CE-model: σ² = (1 − e)[e + c(1 − x*)(1 − c(1 − e)x*)]x*
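Since |a| < 1 at a stable equilibrium, the AR-1 variance recursion Var(Z_{t+1}) = a² Var(Z_t) + σ² converges to the standard stationary value σ²/(1 − a²); a quick numerical check (our sketch, EC-model parameters from the simulation slides):

```python
e, c = 0.3, 0.8
a = 1 + e - c * (1 - e)                      # AR-1 coefficient f'(x*)
x_star = (1 - e / (c * (1 - e))) / (1 - e)   # EC-model equilibrium

# EC-model noise variance sigma^2 = s(x*)
sigma2 = (1 - e) * (c * (1 - (1 - e) * x_star) * (1 - c * (1 - e) * x_star)
                    + e * (1 + c - 2 * c * (1 - e) * x_star)**2) * x_star

# iterate Var(Z_{t+1}) = a^2 Var(Z_t) + sigma^2 from Var(Z_0) = 0
v = 0.0
for _ in range(1000):
    v = a * a * v + sigma2

assert abs(v - sigma2 / (1 - a * a)) < 1e-12   # stationary AR-1 variance
```

Dividing by J recovers the approximate stationary variance of X_t^{(J)} about x* seen in the simulation plots.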

66 Simulation: P = EC. [Plot of X_t^{(J)} against t, with the equilibrium x* marked; J = 100, x_0 = 0.95, e = 0.3, c = 0.8.]

67 Simulation: P = EC (AR-1 approx.). [The same simulation with the AR-1 approximation about x* overlaid; J = 100, x_0 = 0.95, e = 0.3, c = 0.8.]

68 AR-1 simulation: P = EC. [A simulated path of the AR-1 approximation itself, about x*; J = 100, e = 0.3, c = 0.8.]


More information

Lecture Notes 5 Convergence and Limit Theorems. Convergence with Probability 1. Convergence in Mean Square. Convergence in Probability, WLLN

Lecture Notes 5 Convergence and Limit Theorems. Convergence with Probability 1. Convergence in Mean Square. Convergence in Probability, WLLN Lecture Notes 5 Convergence and Limit Theorems Motivation Convergence with Probability Convergence in Mean Square Convergence in Probability, WLLN Convergence in Distribution, CLT EE 278: Convergence and

More information

Basic concepts of probability theory

Basic concepts of probability theory Basic concepts of probability theory Random variable discrete/continuous random variable Transform Z transform, Laplace transform Distribution Geometric, mixed-geometric, Binomial, Poisson, exponential,

More information

Lecture Notes 3 Convergence (Chapter 5)

Lecture Notes 3 Convergence (Chapter 5) Lecture Notes 3 Convergence (Chapter 5) 1 Convergence of Random Variables Let X 1, X 2,... be a sequence of random variables and let X be another random variable. Let F n denote the cdf of X n and let

More information

On a class of stochastic differential equations in a financial network model

On a class of stochastic differential equations in a financial network model 1 On a class of stochastic differential equations in a financial network model Tomoyuki Ichiba Department of Statistics & Applied Probability, Center for Financial Mathematics and Actuarial Research, University

More information

Renewal theory and its applications

Renewal theory and its applications Renewal theory and its applications Stella Kapodistria and Jacques Resing September 11th, 212 ISP Definition of a Renewal process Renewal theory and its applications If we substitute the Exponentially

More information

Dynamic models. Dependent data The AR(p) model The MA(q) model Hidden Markov models. 6 Dynamic models

Dynamic models. Dependent data The AR(p) model The MA(q) model Hidden Markov models. 6 Dynamic models 6 Dependent data The AR(p) model The MA(q) model Hidden Markov models Dependent data Dependent data Huge portion of real-life data involving dependent datapoints Example (Capture-recapture) capture histories

More information

A Note on the Central Limit Theorem for a Class of Linear Systems 1

A Note on the Central Limit Theorem for a Class of Linear Systems 1 A Note on the Central Limit Theorem for a Class of Linear Systems 1 Contents Yukio Nagahata Department of Mathematics, Graduate School of Engineering Science Osaka University, Toyonaka 560-8531, Japan.

More information

Elementary Applications of Probability Theory

Elementary Applications of Probability Theory Elementary Applications of Probability Theory With an introduction to stochastic differential equations Second edition Henry C. Tuckwell Senior Research Fellow Stochastic Analysis Group of the Centre for

More information

STAT STOCHASTIC PROCESSES. Contents

STAT STOCHASTIC PROCESSES. Contents STAT 3911 - STOCHASTIC PROCESSES ANDREW TULLOCH Contents 1. Stochastic Processes 2 2. Classification of states 2 3. Limit theorems for Markov chains 4 4. First step analysis 5 5. Branching processes 5

More information

A fixed-point approximation accounting for link interactions in a loss network

A fixed-point approximation accounting for link interactions in a loss network A fixed-point approximation accounting for link interactions in a loss network MR Thompson PK Pollett Department of Mathematics The University of Queensland Abstract This paper is concerned with evaluating

More information

Ecology Regulation, Fluctuations and Metapopulations

Ecology Regulation, Fluctuations and Metapopulations Ecology Regulation, Fluctuations and Metapopulations The Influence of Density on Population Growth and Consideration of Geographic Structure in Populations Predictions of Logistic Growth The reality of

More information

Lecture 2: Review of Basic Probability Theory

Lecture 2: Review of Basic Probability Theory ECE 830 Fall 2010 Statistical Signal Processing instructor: R. Nowak, scribe: R. Nowak Lecture 2: Review of Basic Probability Theory Probabilistic models will be used throughout the course to represent

More information

Positive and null recurrent-branching Process

Positive and null recurrent-branching Process December 15, 2011 In last discussion we studied the transience and recurrence of Markov chains There are 2 other closely related issues about Markov chains that we address Is there an invariant distribution?

More information

3. Probability and Statistics

3. Probability and Statistics FE661 - Statistical Methods for Financial Engineering 3. Probability and Statistics Jitkomut Songsiri definitions, probability measures conditional expectations correlation and covariance some important

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 218. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

Recap. Probability, stochastic processes, Markov chains. ELEC-C7210 Modeling and analysis of communication networks

Recap. Probability, stochastic processes, Markov chains. ELEC-C7210 Modeling and analysis of communication networks Recap Probability, stochastic processes, Markov chains ELEC-C7210 Modeling and analysis of communication networks 1 Recap: Probability theory important distributions Discrete distributions Geometric distribution

More information

Statistics 3657 : Moment Generating Functions

Statistics 3657 : Moment Generating Functions Statistics 3657 : Moment Generating Functions A useful tool for studying sums of independent random variables is generating functions. course we consider moment generating functions. In this Definition

More information

Stochastic Processes

Stochastic Processes Stochastic Processes 8.445 MIT, fall 20 Mid Term Exam Solutions October 27, 20 Your Name: Alberto De Sole Exercise Max Grade Grade 5 5 2 5 5 3 5 5 4 5 5 5 5 5 6 5 5 Total 30 30 Problem :. True / False

More information

Problem 1: (Chernoff Bounds via Negative Dependence - from MU Ex 5.15)

Problem 1: (Chernoff Bounds via Negative Dependence - from MU Ex 5.15) Problem 1: Chernoff Bounds via Negative Dependence - from MU Ex 5.15) While deriving lower bounds on the load of the maximum loaded bin when n balls are thrown in n bins, we saw the use of negative dependence.

More information

Lectures on Markov Chains

Lectures on Markov Chains Lectures on Markov Chains David M. McClendon Department of Mathematics Ferris State University 2016 edition 1 Contents Contents 2 1 Markov chains 4 1.1 The definition of a Markov chain.....................

More information

I forgot to mention last time: in the Ito formula for two standard processes, putting

I forgot to mention last time: in the Ito formula for two standard processes, putting I forgot to mention last time: in the Ito formula for two standard processes, putting dx t = a t dt + b t db t dy t = α t dt + β t db t, and taking f(x, y = xy, one has f x = y, f y = x, and f xx = f yy

More information

LECTURE NOTES: Discrete time Markov Chains (2/24/03; SG)

LECTURE NOTES: Discrete time Markov Chains (2/24/03; SG) LECTURE NOTES: Discrete time Markov Chains (2/24/03; SG) A discrete time Markov Chains is a stochastic dynamical system in which the probability of arriving in a particular state at a particular time moment

More information

Spectral representations and ergodic theorems for stationary stochastic processes

Spectral representations and ergodic theorems for stationary stochastic processes AMS 263 Stochastic Processes (Fall 2005) Instructor: Athanasios Kottas Spectral representations and ergodic theorems for stationary stochastic processes Stationary stochastic processes Theory and methods

More information

Simultaneous drift conditions for Adaptive Markov Chain Monte Carlo algorithms

Simultaneous drift conditions for Adaptive Markov Chain Monte Carlo algorithms Simultaneous drift conditions for Adaptive Markov Chain Monte Carlo algorithms Yan Bai Feb 2009; Revised Nov 2009 Abstract In the paper, we mainly study ergodicity of adaptive MCMC algorithms. Assume that

More information

Stochastic process. X, a series of random variables indexed by t

Stochastic process. X, a series of random variables indexed by t Stochastic process X, a series of random variables indexed by t X={X(t), t 0} is a continuous time stochastic process X={X(t), t=0,1, } is a discrete time stochastic process X(t) is the state at time t,

More information

Estimation of arrival and service rates for M/M/c queue system

Estimation of arrival and service rates for M/M/c queue system Estimation of arrival and service rates for M/M/c queue system Katarína Starinská starinskak@gmail.com Charles University Faculty of Mathematics and Physics Department of Probability and Mathematical Statistics

More information

A Primer of Ecology. Sinauer Associates, Inc. Publishers Sunderland, Massachusetts

A Primer of Ecology. Sinauer Associates, Inc. Publishers Sunderland, Massachusetts A Primer of Ecology Fourth Edition NICHOLAS J. GOTELLI University of Vermont Sinauer Associates, Inc. Publishers Sunderland, Massachusetts Table of Contents PREFACE TO THE FOURTH EDITION PREFACE TO THE

More information

Part I Stochastic variables and Markov chains

Part I Stochastic variables and Markov chains Part I Stochastic variables and Markov chains Random variables describe the behaviour of a phenomenon independent of any specific sample space Distribution function (cdf, cumulative distribution function)

More information

1.1 Review of Probability Theory

1.1 Review of Probability Theory 1.1 Review of Probability Theory Angela Peace Biomathemtics II MATH 5355 Spring 2017 Lecture notes follow: Allen, Linda JS. An introduction to stochastic processes with applications to biology. CRC Press,

More information

Math Homework 5 Solutions

Math Homework 5 Solutions Math 45 - Homework 5 Solutions. Exercise.3., textbook. The stochastic matrix for the gambler problem has the following form, where the states are ordered as (,, 4, 6, 8, ): P = The corresponding diagram

More information

CDA6530: Performance Models of Computers and Networks. Chapter 3: Review of Practical Stochastic Processes

CDA6530: Performance Models of Computers and Networks. Chapter 3: Review of Practical Stochastic Processes CDA6530: Performance Models of Computers and Networks Chapter 3: Review of Practical Stochastic Processes Definition Stochastic process X = {X(t), t2 T} is a collection of random variables (rvs); one rv

More information

Practical conditions on Markov chains for weak convergence of tail empirical processes

Practical conditions on Markov chains for weak convergence of tail empirical processes Practical conditions on Markov chains for weak convergence of tail empirical processes Olivier Wintenberger University of Copenhagen and Paris VI Joint work with Rafa l Kulik and Philippe Soulier Toronto,

More information

Chapter 2 SOME ANALYTICAL TOOLS USED IN THE THESIS

Chapter 2 SOME ANALYTICAL TOOLS USED IN THE THESIS Chapter 2 SOME ANALYTICAL TOOLS USED IN THE THESIS 63 2.1 Introduction In this chapter we describe the analytical tools used in this thesis. They are Markov Decision Processes(MDP), Markov Renewal process

More information

= P{X 0. = i} (1) If the MC has stationary transition probabilities then, = i} = P{X n+1

= P{X 0. = i} (1) If the MC has stationary transition probabilities then, = i} = P{X n+1 Properties of Markov Chains and Evaluation of Steady State Transition Matrix P ss V. Krishnan - 3/9/2 Property 1 Let X be a Markov Chain (MC) where X {X n : n, 1, }. The state space is E {i, j, k, }. The

More information

Problem set 2 The central limit theorem.

Problem set 2 The central limit theorem. Problem set 2 The central limit theorem. Math 22a September 6, 204 Due Sept. 23 The purpose of this problem set is to walk through the proof of the central limit theorem of probability theory. Roughly

More information

Basic concepts of probability theory

Basic concepts of probability theory Basic concepts of probability theory Random variable discrete/continuous random variable Transform Z transform, Laplace transform Distribution Geometric, mixed-geometric, Binomial, Poisson, exponential,

More information

. Find E(V ) and var(v ).

. Find E(V ) and var(v ). Math 6382/6383: Probability Models and Mathematical Statistics Sample Preliminary Exam Questions 1. A person tosses a fair coin until she obtains 2 heads in a row. She then tosses a fair die the same number

More information

Example: physical systems. If the state space. Example: speech recognition. Context can be. Example: epidemics. Suppose each infected

Example: physical systems. If the state space. Example: speech recognition. Context can be. Example: epidemics. Suppose each infected 4. Markov Chains A discrete time process {X n,n = 0,1,2,...} with discrete state space X n {0,1,2,...} is a Markov chain if it has the Markov property: P[X n+1 =j X n =i,x n 1 =i n 1,...,X 0 =i 0 ] = P[X

More information

STOCHASTIC PROCESSES Basic notions

STOCHASTIC PROCESSES Basic notions J. Virtamo 38.3143 Queueing Theory / Stochastic processes 1 STOCHASTIC PROCESSES Basic notions Often the systems we consider evolve in time and we are interested in their dynamic behaviour, usually involving

More information

3. Review of Probability and Statistics

3. Review of Probability and Statistics 3. Review of Probability and Statistics ECE 830, Spring 2014 Probabilistic models will be used throughout the course to represent noise, errors, and uncertainty in signal processing problems. This lecture

More information

Stochastic analysis of biochemical reaction networks with absolute concentration robustness

Stochastic analysis of biochemical reaction networks with absolute concentration robustness Stochastic analysis of biochemical reaction networks with absolute concentration robustness David F. Anderson anderson@math.wisc.edu Department of Mathematics University of Wisconsin - Madison Boston University

More information

Manual for SOA Exam MLC.

Manual for SOA Exam MLC. Chapter 4. Life Insurance. Section 4.9. Computing APV s from a life table. Extract from: Arcones Manual for the SOA Exam MLC. Fall 2009 Edition. available at http://www.actexmadriver.com/ 1/29 Computing

More information

Stochastic models in biology and their deterministic analogues

Stochastic models in biology and their deterministic analogues Stochastic models in biology and their deterministic analogues Alan McKane Theory Group, School of Physics and Astronomy, University of Manchester Newton Institute, May 2, 2006 Stochastic models in biology

More information

CALCULUS JIA-MING (FRANK) LIOU

CALCULUS JIA-MING (FRANK) LIOU CALCULUS JIA-MING (FRANK) LIOU Abstract. Contents. Power Series.. Polynomials and Formal Power Series.2. Radius of Convergence 2.3. Derivative and Antiderivative of Power Series 4.4. Power Series Expansion

More information

Uncertainty Quantification in Computational Science

Uncertainty Quantification in Computational Science DTU 2010 - Lecture I Uncertainty Quantification in Computational Science Jan S Hesthaven Brown University Jan.Hesthaven@Brown.edu Objective of lectures The main objective of these lectures are To offer

More information

At the boundary states, we take the same rules except we forbid leaving the state space, so,.

At the boundary states, we take the same rules except we forbid leaving the state space, so,. Birth-death chains Monday, October 19, 2015 2:22 PM Example: Birth-Death Chain State space From any state we allow the following transitions: with probability (birth) with probability (death) with probability

More information

Solution: (Course X071570: Stochastic Processes)

Solution: (Course X071570: Stochastic Processes) Solution I (Course X071570: Stochastic Processes) October 24, 2013 Exercise 1.1: Find all functions f from the integers to the real numbers satisfying f(n) = 1 2 f(n + 1) + 1 f(n 1) 1. 2 A secial solution

More information

Kalman Filter. Predict: Update: x k k 1 = F k x k 1 k 1 + B k u k P k k 1 = F k P k 1 k 1 F T k + Q

Kalman Filter. Predict: Update: x k k 1 = F k x k 1 k 1 + B k u k P k k 1 = F k P k 1 k 1 F T k + Q Kalman Filter Kalman Filter Predict: x k k 1 = F k x k 1 k 1 + B k u k P k k 1 = F k P k 1 k 1 F T k + Q Update: K = P k k 1 Hk T (H k P k k 1 Hk T + R) 1 x k k = x k k 1 + K(z k H k x k k 1 ) P k k =(I

More information

8 Laws of large numbers

8 Laws of large numbers 8 Laws of large numbers 8.1 Introduction We first start with the idea of standardizing a random variable. Let X be a random variable with mean µ and variance σ 2. Then Z = (X µ)/σ will be a random variable

More information

Overload Analysis of the PH/PH/1/K Queue and the Queue of M/G/1/K Type with Very Large K

Overload Analysis of the PH/PH/1/K Queue and the Queue of M/G/1/K Type with Very Large K Overload Analysis of the PH/PH/1/K Queue and the Queue of M/G/1/K Type with Very Large K Attahiru Sule Alfa Department of Mechanical and Industrial Engineering University of Manitoba Winnipeg, Manitoba

More information

Statistics 150: Spring 2007

Statistics 150: Spring 2007 Statistics 150: Spring 2007 April 23, 2008 0-1 1 Limiting Probabilities If the discrete-time Markov chain with transition probabilities p ij is irreducible and positive recurrent; then the limiting probabilities

More information

UCSD ECE153 Handout #30 Prof. Young-Han Kim Thursday, May 15, Homework Set #6 Due: Thursday, May 22, 2011

UCSD ECE153 Handout #30 Prof. Young-Han Kim Thursday, May 15, Homework Set #6 Due: Thursday, May 22, 2011 UCSD ECE153 Handout #30 Prof. Young-Han Kim Thursday, May 15, 2014 Homework Set #6 Due: Thursday, May 22, 2011 1. Linear estimator. Consider a channel with the observation Y = XZ, where the signal X and

More information

Examples of Countable State Markov Chains Thursday, October 16, :12 PM

Examples of Countable State Markov Chains Thursday, October 16, :12 PM stochnotes101608 Page 1 Examples of Countable State Markov Chains Thursday, October 16, 2008 12:12 PM Homework 2 solutions will be posted later today. A couple of quick examples. Queueing model (without

More information

SIMILAR MARKOV CHAINS

SIMILAR MARKOV CHAINS SIMILAR MARKOV CHAINS by Phil Pollett The University of Queensland MAIN REFERENCES Convergence of Markov transition probabilities and their spectral properties 1. Vere-Jones, D. Geometric ergodicity in

More information

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3 Brownian Motion Contents 1 Definition 2 1.1 Brownian Motion................................. 2 1.2 Wiener measure.................................. 3 2 Construction 4 2.1 Gaussian process.................................

More information

Quick Tour of Basic Probability Theory and Linear Algebra

Quick Tour of Basic Probability Theory and Linear Algebra Quick Tour of and Linear Algebra Quick Tour of and Linear Algebra CS224w: Social and Information Network Analysis Fall 2011 Quick Tour of and Linear Algebra Quick Tour of and Linear Algebra Outline Definitions

More information

Application of the Cross-Entropy Method to Clustering and Vector Quantization

Application of the Cross-Entropy Method to Clustering and Vector Quantization Application of the Cross-Entropy Method to Clustering and Vector Quantization Dirk P. Kroese and Reuven Y. Rubinstein and Thomas Taimre Faculty of Industrial Engineering and Management, Technion, Haifa,

More information

DO NOT OPEN THIS QUESTION BOOKLET UNTIL YOU ARE TOLD TO DO SO

DO NOT OPEN THIS QUESTION BOOKLET UNTIL YOU ARE TOLD TO DO SO QUESTION BOOKLET EE 126 Spring 26 Midterm #2 Thursday, April 13, 11:1-12:3pm DO NOT OPEN THIS QUESTION BOOKLET UNTIL YOU ARE TOLD TO DO SO You have 8 minutes to complete the midterm. The midterm consists

More information

Chapter 5. Continuous-Time Markov Chains. Prof. Shun-Ren Yang Department of Computer Science, National Tsing Hua University, Taiwan

Chapter 5. Continuous-Time Markov Chains. Prof. Shun-Ren Yang Department of Computer Science, National Tsing Hua University, Taiwan Chapter 5. Continuous-Time Markov Chains Prof. Shun-Ren Yang Department of Computer Science, National Tsing Hua University, Taiwan Continuous-Time Markov Chains Consider a continuous-time stochastic process

More information