Stochastic Petri Net

1 Stochastic Petri Net
Serge Haddad
LSV, ENS Cachan, Université Paris-Saclay & CNRS & INRIA
Petri Nets 2016, June 20th

1. Stochastic Petri Net
2. Markov Chain
3. Markovian Stochastic Petri Net
4. Generalized Markovian Stochastic Petri Net (GSPN)
5. Product-form Petri Nets

2 Plan
1. Stochastic Petri Net  (this part)
2. Markov Chain
3. Markovian Stochastic Petri Net
4. Generalized Markovian Stochastic Petri Net (GSPN)
5. Product-form Petri Nets

3 Stochastic Petri Net versus Time Petri Net
In a Time Petri Net (TPN), the delays are chosen non-deterministically. In a Stochastic Petri Net (SPN), the delays are chosen randomly, by sampling distributions associated with the transitions... but these distributions are not sufficient to eliminate non-determinism.

Policies for a net. One needs to define:
- The choice policy: what is the next transition to fire?
- The service policy: what is the influence of the enabling degree of a transition on the process?
- The memory policy: what becomes of the samplings of distributions that have not been used?

5 Choice Policy
In the net, associate a distribution D_i and a weight w_i with every transition t_i.

Preselection w.r.t. a marking m and enabled transitions T_m:
- Normalize the weights of the enabled transitions: w'_i = w_i / Σ_{t_j ∈ T_m} w_j.
- Sample the distribution defined by the w'_i; let t_i be the selected transition.
- Sample D_i, giving the delay d_i.

versus Race policy with postselection w.r.t. a marking m:
- For every t_i ∈ T_m, sample D_i, giving the delay d_i.
- Let T' be the subset of T_m with the smallest delays.
- Normalize the weights of the transitions of T': w'_i = w_i / Σ_{t_j ∈ T'} w_j.
- Sample the distribution defined by the w'_i, yielding some t_i.

Priorities between transitions could be added to refine the selection.

7 Choice Policy: Illustration
Three enabled transitions t_1 (D_1, w_1), t_2 (D_2, w_2), t_3 (D_3, w_3), with w_1 = 1, w_2 = 2, w_3 = 2.

Preselection: sample (1/5, 2/5, 2/5), outcome t_1; then sample D_1, outcome 4.2.
Race policy: sample (D_1, D_2, D_3), outcome (3.2, 6.5, 3.2); the smallest delay is shared by t_1 and t_3, so sample (1/3, -, 2/3), outcome t_3.
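Both policies can be sketched in a few lines of Python. This is a minimal illustration under assumed conventions: the transition names, the chosen distributions and the tie tolerance are not part of the lecture.

```python
import random

# Hypothetical transitions: name -> (sampler for the delay distribution, weight).
transitions = {
    "t1": (lambda: random.expovariate(1.0), 1.0),
    "t2": (lambda: random.uniform(0.0, 10.0), 2.0),
    "t3": (lambda: random.expovariate(0.5), 2.0),
}

def preselection(enabled):
    """Pick the transition by normalized weights, then sample its delay."""
    names = sorted(enabled)
    weights = [transitions[t][1] for t in names]
    t = random.choices(names, weights=weights)[0]
    return t, transitions[t][0]()

def race_postselection(enabled, eps=1e-12):
    """Sample every delay, keep the transitions with the smallest delay,
    then break ties by normalized weights."""
    delays = {t: transitions[t][0]() for t in enabled}
    d_min = min(delays.values())
    winners = [t for t, d in delays.items() if d - d_min <= eps]
    weights = [transitions[t][1] for t in winners]
    return random.choices(winners, weights=weights)[0], d_min

print(preselection({"t1", "t2", "t3"}))
print(race_postselection({"t1", "t2", "t3"}))
```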

8 Server Policy
A transition t can be viewed as a server for firings:
- A single-server t allows a single instance of firing in m if m[t⟩.
- An infinite-server t allows d instances of firings in m, where d = min_{p ∈ •t} ⌊m(p)/Pre(p, t)⌋ is the enabling degree.
- A multiple-server t with bound b allows min(b, d) instances of firings in m.

Example (t_3 has enabling degree 3 in the marking of the figure):
- t_3 single server: sample (D_1, D_2, D_3)
- t_3 infinite server: sample (D_1, D_2, D_3, D_3, D_3)
- t_3 2-server: sample (D_1, D_2, D_3, D_3)

This can be generalised by marking-dependent services.
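A small sketch of the enabling degree and of the number of firing instances allowed by each server policy; the encoding of the marking and of Pre as dictionaries is an assumption of this example.

```python
def enabling_degree(marking, pre_t):
    """d = min over input places p of floor(m(p) / Pre(p, t))."""
    return min(marking[p] // need for p, need in pre_t.items())

def allowed_instances(marking, pre_t, policy="single", bound=1):
    d = enabling_degree(marking, pre_t)
    if policy == "single":
        return min(1, d)
    if policy == "infinite":
        return d
    return min(bound, d)       # multiple server with bound b

m, pre_t3 = {"p": 3}, {"p": 1}
print(allowed_instances(m, pre_t3, "single"),               # 1
      allowed_instances(m, pre_t3, "infinite"),             # 3
      allowed_instances(m, pre_t3, "multiple", bound=2))    # 2
```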

9 Memory Policy (1)
Assume d_1 < min(d_2, d_3), so t_1 fires. What happens to d_2 and d_3?

Resampling Memory: every sampling not used is forgotten. This could correspond to a crash transition.

10 Memory Policy (2)
Assume d_1 < min(d_2, d_3), so t_1 fires. What happens to d_2 and d_3?

Enabling Memory: the samplings associated with still enabled transitions are kept and decremented (d'_3 = d_3 - d_1); the samplings associated with disabled transitions are forgotten (like d_2). Disabling a transition could correspond to aborting a service.

11 Memory Policy (3)
Assume d_1 < min(d_2, d_3), so t_1 fires. What happens to d_2 and d_3?

Age Memory: all the samplings are kept and decremented (d'_3 = d_3 - d_1 and d'_2 = d_2 - d_1). The sampling associated with a disabled transition is frozen until the transition becomes enabled again (like d'_2). Disabling a transition could correspond to suspending a service.

12 Memory Policy (4)
Specification of the memory policy: to be fully expressive, it should be defined w.r.t. any pair of transitions. For instance, assume d_2 < min(d_1, d'_1, d''_1), so t_2 fires: what happens to d_1, d'_1 and d''_1?

Interaction between memory policy and service policy: assume enabling memory for t_1 when firing t_2, and an infinite-server policy for t_1. Which sample should be forgotten?
- the last sample performed,
- the first sample performed,
- the greatest sample, etc.

Warning: this choice may have a critical impact on the complexity of analysis.

13 Plan
1. Stochastic Petri Net
2. Markov Chain  (this part)
3. Markovian Stochastic Petri Net
4. Generalized Markovian Stochastic Petri Net (GSPN)
5. Product-form Petri Nets

14 Discrete Time Markov Chain (DTMC)
A DTMC is a stochastic process which fulfills:
- For all n, T_n is the constant 1.
- The process is memoryless:
  Pr(S_{n+1} = s_j | S_0 = s_{i_0}, ..., S_{n-1} = s_{i_{n-1}}, S_n = s_i) = Pr(S_{n+1} = s_j | S_n = s_i) ≡ P[i, j]

A DTMC is defined by (the distribution of) S_0 and the transition matrix P.

15 Analysis: the State Status
The transient analysis is easy and effective in the finite case: π_n = π_0 · P^n, with π_n the distribution of S_n.
The steady-state analysis (does lim_{n→∞} π_n exist?) requires theoretical developments.

Classification of states w.r.t. the asymptotic behaviour of the DTMC:
- A state is transient if the probability of a return after a visit is less than one. Hence the probability of its occurrence goes to zero.
- A state is recurrent null if the probability of a return after a visit is one but the mean time of this return is infinite. Hence the probability of its occurrence still goes to zero.
- A state is recurrent non null if the probability of a return after a visit is one and the mean time of this return is finite.

Illustration: in the random walk on {0, 1, 2, ...} where p is the probability of moving towards 0, the states are transient when p < 1/2, recurrent null when p = 1/2 and recurrent non null when p > 1/2.
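The transient formula π_n = π_0 · P^n translates directly into numpy; a minimal sketch with an illustrative 3-state matrix (not an example from the lecture).

```python
import numpy as np

# Illustrative 3-state transition matrix (each row sums to 1).
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])
pi0 = np.array([1.0, 0.0, 0.0])     # initial distribution: all mass in state 0

def transient(pi0, P, n):
    """pi_n = pi_0 . P^n, computed by repeated vector-matrix products."""
    pi = pi0.copy()
    for _ in range(n):
        pi = pi @ P
    return pi

print(transient(pi0, P, 10))
```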

16 State Status in a Finite DTMC
The status of a state only depends on the graph associated with the chain:
- A state is transient iff it belongs to a non-terminal strongly connected component (scc) of the graph.
- A state is recurrent non null iff it belongs to a terminal scc.

Example of the figure: T = {1,2,3} is a non-terminal scc (transient states), while C_1 = {4,5} and C_2 = {6,7,8} are terminal sccs (recurrent states).
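This characterization is easy to implement with any SCC library; a sketch using networkx on an 8-state graph whose SCC structure matches the slide's example (the individual arcs are an assumption).

```python
import networkx as nx

# Illustrative graph: T = {1,2,3} non-terminal, C1 = {4,5} and C2 = {6,7,8} terminal.
G = nx.DiGraph([(1, 2), (2, 3), (3, 1), (2, 4), (3, 6),
                (4, 5), (5, 4), (6, 7), (7, 8), (8, 6)])

def classify(G):
    """Transient states lie in non-terminal SCCs, recurrent ones in terminal SCCs."""
    transient, recurrent = set(), set()
    for scc in nx.strongly_connected_components(G):
        # an SCC is terminal iff no arc leaves it
        terminal = all(v in scc for u in scc for v in G.successors(u))
        (recurrent if terminal else transient).update(scc)
    return transient, recurrent

print(classify(G))   # ({1, 2, 3}, {4, 5, 6, 7, 8})
```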

17 Analysis: Irreducibility and Periodicity
A chain is irreducible if its graph is strongly connected.
The periodicity of an irreducible chain is the greatest integer p such that the set of states can be partitioned into p subsets S_0, ..., S_{p-1} where every transition goes from some S_i to S_{(i+1) mod p}.

Computation of the periodicity: assign a height to every state (e.g. along a breadth-first search) and take the gcd of the values height(source) + 1 - height(target) over all transitions; in the example of the figure, periodicity = gcd(0, 2, 4) = 2.
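A sketch of this computation under the stated assumption that heights come from a breadth-first search; the example graphs are illustrative, not the figure of the slide.

```python
from math import gcd
from collections import deque

def periodicity(succ, root):
    """Periodicity of an irreducible graph given as {state: [successors]}:
    gcd over all arcs (u, v) of height(u) + 1 - height(v), heights from a BFS."""
    height, queue = {root: 0}, deque([root])
    while queue:
        u = queue.popleft()
        for v in succ[u]:
            if v not in height:
                height[v] = height[u] + 1
                queue.append(v)
    p = 0
    for u in succ:
        for v in succ[u]:
            p = gcd(p, height[u] + 1 - height[v])
    return p

print(periodicity({0: [1], 1: [2], 2: [3], 3: [0]}, 0))       # plain 4-cycle -> 4
print(periodicity({0: [1], 1: [2, 0], 2: [3], 3: [0]}, 0))    # adds a 2-cycle -> 2
```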

18 Analysis of a DTMC: a Particular Case
A particular case: the chain is irreducible and aperiodic (i.e. its periodicity is 1). Then π_∞ = lim_{n→∞} π_n exists and its value is independent from π_0.
π_∞ is the unique solution of
  X = X · P,  X · 1 = 1
where one can omit an arbitrary equation of the first system.

Example of the figure: solving the system for the three-state chain yields π_∞ = (1/8, 7/16, 7/16).
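Solving X = X·P together with X·1 = 1 is a small linear-algebra exercise; a numpy sketch (the matrix is illustrative and is not the chain of the slide).

```python
import numpy as np

def stationary(P):
    """Solve X = X.P, X.1 = 1: replace one balance equation by the
    normalization constraint and solve the resulting linear system."""
    n = P.shape[0]
    A = P.T - np.eye(n)     # balance equations (P^T - I) x = 0
    A[-1, :] = 1.0          # replace the last one by sum(x) = 1
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# Illustrative irreducible, aperiodic 3-state matrix.
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
print(stationary(P))        # -> [0.25, 0.5, 0.25]
```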

19 Analysis of a DTMC: the General Case
Almost general case: every terminal scc is aperiodic. Then π_∞ exists and
  π_∞ = Σ_{s ∈ S} π_0(s) Σ_{i ∈ I} preach_i[s] · π_i
where:
1. S is the set of states,
2. {C_i}_{i ∈ I} is the set of terminal sccs,
3. π_i is the steady-state distribution of C_i,
4. and preach_i[s] is the probability to reach C_i starting from s.

Computation of the reachability probabilities for transient states:
- Let T be the set of transient states (i.e. not belonging to a terminal scc).
- Let P_{T,T} be the submatrix of P restricted to transient states.
- Let P_{T,i} be the submatrix of P of transitions from T to C_i.
- Then preach_i = (Σ_{n ∈ N} (P_{T,T})^n) P_{T,i} 1 = (Id - P_{T,T})^{-1} P_{T,i} 1.
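The closed form (Id - P_{T,T})^{-1} P_{T,i} 1 translates directly into numpy; the 8-state matrix below is illustrative (the slide's numerical entries were lost), with T = {0,1,2}, C_1 = {3,4} and C_2 = {5,6,7}.

```python
import numpy as np

# Illustrative chain: states 0-2 transient, C1 = {3,4} and C2 = {5,6,7} terminal.
P = np.zeros((8, 8))
P[0, [1, 2]] = 0.5, 0.5
P[1, [0, 3]] = 0.3, 0.7
P[2, [0, 5]] = 0.4, 0.6
P[3, [3, 4]] = 0.5, 0.5
P[4, [3, 4]] = 0.5, 0.5
P[5, [5, 6]] = 0.4, 0.6
P[6, [6, 7]] = 0.4, 0.6
P[7, [5, 7]] = 0.6, 0.4

T, C1, C2 = [0, 1, 2], [3, 4], [5, 6, 7]
N = np.linalg.inv(np.eye(len(T)) - P[np.ix_(T, T)])      # (Id - P_TT)^-1
preach1 = N @ P[np.ix_(T, C1)] @ np.ones(len(C1))        # prob. of reaching C1 from each t in T
preach2 = N @ P[np.ix_(T, C2)] @ np.ones(len(C2))
print(preach1, preach2, preach1 + preach2)               # the two vectors sum to 1
```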

21 Illustration: SCC and Matrices
For the chain with T = {1,2,3}, C_1 = {4,5} and C_2 = {6,7,8}, the figure gives the submatrix P_{T,T} and the vectors P_{T,1}·1 and P_{T,2}·1 used in the computation of the reachability probabilities (the numerical entries are only shown in the figure).

22 Continuous Time Markov Chain (CTMC)
A CTMC is a stochastic process which fulfills:
- Memoryless state change:
  Pr(S_{n+1} = s_j | S_0 = s_{i_0}, ..., S_{n-1} = s_{i_{n-1}}, T_0 < τ_0, ..., T_n < τ_n, S_n = s_i) = Pr(S_{n+1} = s_j | S_n = s_i) ≡ P[i, j]
- Memoryless transition delay:
  Pr(T_n < τ | S_0 = s_{i_0}, ..., S_{n-1} = s_{i_{n-1}}, T_0 < τ_0, ..., T_{n-1} < τ_{n-1}, S_n = s_i) = Pr(T_n < τ | S_n = s_i) = 1 - e^{-λ_i τ}

Notations and properties:
- P defines an embedded DTMC (the chain of state changes).
- Let π(τ) be the distribution of X(τ); for δ going to 0 it holds that:
  π(τ + δ)(s_i) ≈ π(τ)(s_i)(1 - λ_i δ) + Σ_j π(τ)(s_j) λ_j δ P[j, i]
- Hence, let Q be the infinitesimal generator defined by Q[i, j] = λ_i P[i, j] for j ≠ i and Q[i, i] = -Σ_{j ≠ i} Q[i, j].
  Then: dπ/dτ = π · Q

24 The Exponential Distribution
Let F be defined by F(τ) = 1 - e^{-λτ}. Then F is the exponential distribution with rate λ > 0.

The exponential distribution is memoryless. Let X be a random variable with a λ-exponential distribution; for τ' ≥ τ:
  Pr(X > τ' | X > τ) = Pr(X > τ') / Pr(X > τ) = e^{-λτ'} / e^{-λτ} = e^{-λ(τ'-τ)} = Pr(X > τ' - τ)

The minimum of independent exponential distributions is an exponential distribution. Let Y be independent from X with a µ-exponential distribution:
  Pr(min(X, Y) > τ) = e^{-λτ} · e^{-µτ} = e^{-(λ+µ)τ}

The minimal variable is selected proportionally to its rate:
  Pr(X < Y) = ∫_0^∞ Pr(Y > τ) f_X(dτ) = ∫_0^∞ e^{-µτ} λ e^{-λτ} dτ = λ / (λ + µ)
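These three properties are easy to check numerically; a small Monte Carlo sanity check (the rates and sample size are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(0)
lam, mu, n = 2.0, 3.0, 200_000
X = rng.exponential(1 / lam, n)      # numpy parametrizes by the mean 1/rate
Y = rng.exponential(1 / mu, n)

# min(X, Y) is exponential with rate lam + mu: compare empirical and exact means.
print(np.mean(np.minimum(X, Y)), 1 / (lam + mu))
# X wins the race with probability lam / (lam + mu).
print(np.mean(X < Y), lam / (lam + mu))
# Memorylessness: P(X > 1.5 | X > 1.0) should equal P(X > 0.5) = e^{-lam * 0.5}.
print(np.mean(X[X > 1.0] > 1.5), np.exp(-lam * 0.5))
```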

25 Convoluting the Exponential Distribution
The n-th convolution of a distribution F is defined by F^{*n} = F * ... * F (n times).
Let f_n (resp. F_n) be the density (resp. distribution) of the n-th convolution of the λ-exponential distribution. Then:
  f_n(x) = λ e^{-λx} (λx)^{n-1} / (n-1)!   and   F_n(x) = 1 - e^{-λx} Σ_{0 ≤ m < n} (λx)^m / m!

Sketch of proof. Recall that f_1(x) = λ e^{-λx}. Then
  f_{n+1}(x) = ∫_0^x f_n(x-u) f_1(u) du = ∫_0^x λ e^{-λ(x-u)} (λ(x-u))^{n-1}/(n-1)! · λ e^{-λu} du = λ e^{-λx} (λx)^n / n!
Deduce F_{n+1} by checking that
  d/dx ( 1 - e^{-λx} Σ_{0 ≤ m ≤ n} (λx)^m / m! ) = λ e^{-λx} ( Σ_{0 ≤ m ≤ n} (λx)^m / m! - Σ_{0 ≤ m ≤ n-1} (λx)^m / m! ) = λ e^{-λx} (λx)^n / n! = f_{n+1}(x)

26 CTMC: Illustration and Uniformization
The figure shows a CTMC given by (λ, P), a uniform version (λ', P') of the CTMC (equivalent w.r.t. the state distribution), and the corresponding infinitesimal generator Q.

27 Analysis of a CTMC
Transient analysis:
- Construct a uniform version (λ, P) of the CTMC such that P[i, i] > 0 for all i.
- Compute by case decomposition w.r.t. the number of transitions:
  π(τ) = π(0) Σ_{n ∈ N} e^{-λτ} (λτ)^n / n! · P^n

Steady-state analysis:
- The steady-state distribution of visits is given by the steady-state distribution of (λ, P) (by construction, the terminal sccs are aperiodic)...
- ... and it is equal to the steady-state distribution, since the sojourn times follow the same distribution.
- A particular case: P irreducible. The steady-state distribution π_∞ is the unique solution of
  X · Q = 0,  X · 1 = 1
  where one can omit an arbitrary equation of the first system.
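Uniformization turns the transient computation into the truncated series above; a numpy sketch (the generator is illustrative, and the fixed truncation bound is a crude assumption rather than a proper error bound).

```python
import numpy as np
from math import exp

def transient_ctmc(Q, pi0, tau, n_max=200):
    """pi(tau) = pi(0) * sum_n e^{-lam*tau} (lam*tau)^n / n! * P^n  (uniformization)."""
    lam = max(-Q[i, i] for i in range(Q.shape[0])) * 1.001   # uniform rate > max exit rate
    P = np.eye(Q.shape[0]) + Q / lam                         # uniformized DTMC, P[i, i] > 0
    pi = np.zeros_like(pi0, dtype=float)
    term, weight = pi0.astype(float), exp(-lam * tau)        # pi(0) P^n and its Poisson weight
    for n in range(n_max):
        pi += weight * term
        term = term @ P
        weight *= lam * tau / (n + 1)
    return pi

# Illustrative 3-state generator (each row sums to 0).
Q = np.array([[-2.0,  2.0,  0.0],
              [ 1.0, -3.0,  2.0],
              [ 0.0,  1.0, -1.0]])
print(transient_ctmc(Q, np.array([1.0, 0.0, 0.0]), tau=0.5))
```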

29 Plan
1. Stochastic Petri Net
2. Markov Chain
3. Markovian Stochastic Petri Net  (this part)
4. Generalized Markovian Stochastic Petri Net (GSPN)
5. Product-form Petri Nets

30 Markovian Stochastic Petri Net
Hypotheses:
- The distribution of every transition t_i is exponential, with density λ_i e^{-λ_i τ}, where the parameter λ_i is called the rate of the transition.
- For simplicity reasons, the server policy is single server.

First observations:
- The weights for the choice policy are no longer required, since equality of two samples has null probability (due to the continuity of the distributions).
- The residual delay d_j - d_i of transition t_j, knowing that t_i has fired (i.e. d_i is the shortest delay), has the same distribution as the initial delay. Thus the memory policy is irrelevant.

32 Markovian Net and Markov Chain
Key observation: given a marking m with T_m = {t_1, ..., t_k},
- the sojourn time in m follows an exponential distribution with rate Σ_i λ_i,
- the probability that t_i is the next transition to fire is λ_i / (Σ_j λ_j).

Thus the stochastic process is a CTMC whose states are the markings and whose transitions are the transitions of the reachability graph, each arc m →(t_i) m' carrying the rate λ_i.
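A minimal sketch of this construction for single-server semantics: a breadth-first exploration of the reachability graph in which every arc m →t m' gets the rate λ_t. The encoding of the net by Pre/Post dictionaries is an assumption of the example, not the lecture's notation.

```python
from collections import deque

def build_ctmc(pre, post, rates, m0):
    """Explore the reachability graph of a (bounded) Markovian SPN and return its
    rate graph {marking: {marking': rate}} (the off-diagonal part of the generator)."""
    places = sorted({p for t in pre for p in pre[t]} | {p for t in post for p in post[t]})
    def fire(m, t):
        d = dict(zip(places, m))
        if any(d[p] < n for p, n in pre[t].items()):
            return None                                    # t is not enabled in m
        for p, n in pre[t].items():
            d[p] -= n
        for p, n in post[t].items():
            d[p] += n
        return tuple(d[p] for p in places)
    graph, todo = {}, deque([m0])
    while todo:
        m = todo.popleft()
        if m in graph:
            continue
        graph[m] = {}
        for t, lam in rates.items():
            m2 = fire(m, t)
            if m2 is not None:
                graph[m][m2] = graph[m].get(m2, 0.0) + lam   # arc m -> m2 with rate lambda_t
                if m2 not in graph:
                    todo.append(m2)
    return graph

# Tiny illustrative net: one token cycling between places a and b.
pre  = {"t1": {"a": 1}, "t2": {"b": 1}}
post = {"t1": {"b": 1}, "t2": {"a": 1}}
print(build_ctmc(pre, post, {"t1": 2.0, "t2": 3.0}, m0=(1, 0)))
```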

33 Plan
1. Stochastic Petri Net
2. Markov Chain
3. Markovian Stochastic Petri Net
4. Generalized Markovian Stochastic Petri Net (GSPN)  (this part)
5. Product-form Petri Nets

34 Generalizing Distributions for Nets
Modelling delays with exponential distributions is reasonable when:
- only mean value information is known about the distributions;
- exponential distributions (or combinations of them) are enough to approximate the real distributions.

Modelling delays with exponential distributions is not reasonable when:
- the distribution of an event is known and is poorly approximated by exponential distributions (e.g. a time-out of 10 time units);
- the delays of the events have different orders of magnitude (e.g. executing an instruction versus performing a database request).

In the last case, the 0-Dirac distribution is required.

35 Generalized Markovian Stochastic Petri Net (GSPN)
Generalized Markovian Stochastic Petri Nets (GSPN) are nets in which:
- timed transitions have exponential distributions, and
- immediate transitions have 0-Dirac distributions.

Their analysis is based on Markovian Renewal Processes, a generalization of Markov chains.

36 Markovian Renewal Process
A Markovian Renewal Process (MRP) fulfills a relative memoryless property:
  Pr(S_{n+1} = s_j, T_n < τ | S_0 = s_{i_0}, ..., S_{n-1} = s_{i_{n-1}}, T_0 < τ_0, ..., S_n = s_i) = Pr(S_{n+1} = s_j, T_n < τ | S_n = s_i) ≡ Q[i, j, τ]

The embedded chain is defined by P[i, j] = lim_{τ→∞} Q[i, j, τ].
The sojourn time Soj has a distribution defined by Pr(Soj[i] < τ) = Σ_j Q[i, j, τ].

Analysis of an MRP:
- The steady-state distribution π (if it exists) is deduced from the steady-state distribution π' of the embedded chain by:
  π(s_i) = π'(s_i) E(Soj[i]) / Σ_j π'(s_j) E(Soj[j])
- Transient analysis is much harder... but the reachability probabilities only depend on the embedded chain.
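The steady-state formula only needs the stationary distribution of the embedded chain and the mean sojourn times; a small self-contained numpy sketch (all numbers illustrative).

```python
import numpy as np

def stationary(P):
    """Stationary distribution of the embedded DTMC (solve X = X.P, X.1 = 1)."""
    n = P.shape[0]
    A = P.T - np.eye(n)
    A[-1, :] = 1.0
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# Illustrative embedded chain and mean sojourn times E(Soj[i]).
P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [1.0, 0.0, 0.0]])
soj = np.array([2.0, 0.5, 1.0])

pi_emb = stationary(P)                     # steady-state distribution of the visits
pi = pi_emb * soj / np.dot(pi_emb, soj)    # pi(s_i) = pi'(s_i) E(Soj[i]) / sum_j pi'(s_j) E(Soj[j])
print(pi_emb, pi)
```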

38 A GSPN is a Markovian Renewal Process
Observations:
- Weights are required for immediate transitions.
- The restricted reachability graph corresponds to the embedded DTMC.

The figure shows the reachability graph of a GSPN, distinguishing tangible markings (only timed transitions are enabled, so the sojourn time is exponential) from vanishing markings (an immediate transition is enabled and fires in zero time).

39 Steady-State Analysis of a GSPN (1)
Standard method for an MRP:
- Build the restricted reachability graph, equivalent to the embedded DTMC.
- Deduce the probability matrix P.
- Compute π', the steady-state distribution of the visits of markings: π' = π' · P.
- Compute π, the steady-state distribution of the sojourn in tangible markings:
  π(m) = π'(m) Soj(m) / Σ_{m' tangible} π'(m') Soj(m')

How can the vanishing markings be eliminated earlier in the computation?

40 Steady-State Analysis of a GSPN (2)
An alternative method:
- As before, compute the transition probability matrix P.
- Compute the transition probability matrix P' between tangible markings.
- Compute π', the (relative) steady-state distribution of the visits of tangible markings: π' = π' · P'.
- Compute π, the steady-state distribution of the sojourn in tangible markings:
  π(m) = π'(m) Soj(m) / Σ_{m' tangible} π'(m') Soj(m')

Computation of P':
- Let P_{X,Y} be the transition probability matrix from subset X to subset Y.
- Let V (resp. T) be the set of vanishing (resp. tangible) markings.
- P' = P_{T,T} + P_{T,V} (Σ_{n ∈ N} P_{V,V}^n) P_{V,T} = P_{T,T} + P_{T,V} (Id - P_{V,V})^{-1} P_{V,T}
- Iterative (resp. direct) computations use the first (resp. second) expression.
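The direct (matrix-inverse) variant of the elimination is one expression in numpy; the 5-marking chain below is illustrative, with tangible markings {0, 1, 2} and vanishing markings {3, 4}.

```python
import numpy as np

# Illustrative embedded-DTMC matrix; states 0-2 tangible (T), 3-4 vanishing (V).
P = np.array([[0.0, 0.0, 0.0, 1.0, 0.0],
              [0.3, 0.0, 0.0, 0.0, 0.7],
              [0.0, 0.5, 0.5, 0.0, 0.0],
              [0.0, 0.6, 0.4, 0.0, 0.0],
              [0.2, 0.0, 0.3, 0.5, 0.0]])
T, V = [0, 1, 2], [3, 4]
P_TT, P_TV = P[np.ix_(T, T)], P[np.ix_(T, V)]
P_VV, P_VT = P[np.ix_(V, V)], P[np.ix_(V, T)]

# P' = P_TT + P_TV (Id - P_VV)^-1 P_VT : transition probabilities between tangible markings.
P_prime = P_TT + P_TV @ np.linalg.inv(np.eye(len(V)) - P_VV) @ P_VT
print(P_prime)
print(P_prime.sum(axis=1))   # each row sums to 1
```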

41 Steady-State Analysis: Illustration
The figure details the computation on the example net, with p_2 = w_2/(w_2 + w_3), p_3 = w_3/(w_2 + w_3), p_4 = λ_4/(λ_4 + λ_5), p_5 = λ_5/(λ_4 + λ_5), and c and d normalizing constants.

42 Plan
1. Stochastic Petri Net
2. Markov Chain
3. Markovian Stochastic Petri Net
4. Generalized Markovian Stochastic Petri Net (GSPN)
5. Product-form Petri Nets  (this part)

43 Steady-State Analysis of a Queue
A (Markovian) queue is a CTMC: a birth-death chain where client arrivals occur at rate λ and services complete at rate µ.
- Interarrival time: exponential distribution with parameter λ.
- Service time: exponential distribution with parameter µ.
- Let ρ = λ/µ be the utilization.
- The steady-state distribution π_∞ exists iff ρ < 1.
- The probability of n clients in the queue is π_∞(n) = ρ^n (1 - ρ).

45 Analysis of Two Queues in Tandem
Two queues in tandem: clients arrive at rate λ into queue 1 (service rate µ) and, once served, enter queue 2 (service rate δ).

Observation: the associated Markov chain is more complex than the one corresponding to two isolated queues. However...

Assume ρ_1 = λ/µ < 1 and ρ_2 = λ/δ < 1:
- The steady-state distribution π_∞ exists.
- The probability of n_1 clients in queue 1 and n_2 clients in queue 2 is
  π_∞(n_1, n_2) = ρ_1^{n_1}(1 - ρ_1) ρ_2^{n_2}(1 - ρ_2)
- It is the product of the steady-state distributions corresponding to two isolated queues.

47 Analysis of an Open Queuing Network
An open network of two queues: external clients arrive at rate λ into queue 1 (service rate µ); a client served by queue 1 joins queue 2 (service rate δ) with probability p and leaves with probability 1-p; a client served by queue 2 returns to queue 1 with probability q and leaves with probability 1-q.

In a steady state:
- Define the (input and output) flow through queue 1 (resp. 2) as γ_1 (resp. γ_2).
- Then γ_1 = λ + q γ_2 and γ_2 = p γ_1. Thus γ_1 = λ/(1 - pq) and γ_2 = pλ/(1 - pq).

Assume ρ_1 = γ_1/µ < 1 and ρ_2 = γ_2/δ < 1:
- The steady-state distribution π_∞ exists.
- The probability of n_1 clients in queue 1 and n_2 clients in queue 2 is
  π_∞(n_1, n_2) = ρ_1^{n_1}(1 - ρ_1) ρ_2^{n_2}(1 - ρ_2)
- It is the product of the steady-state distributions corresponding to two isolated queues.
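The traffic equations and the resulting product form fit in a few lines; the numerical values of λ, µ, δ, p, q below are arbitrary.

```python
# Illustrative parameters for the open two-queue network.
lam, mu, delta, p, q = 1.0, 3.0, 2.0, 0.5, 0.25

# Traffic equations: gamma1 = lam + q * gamma2, gamma2 = p * gamma1.
gamma1 = lam / (1 - p * q)
gamma2 = p * lam / (1 - p * q)
rho1, rho2 = gamma1 / mu, gamma2 / delta
assert rho1 < 1 and rho2 < 1          # stability condition

def pi(n1, n2):
    """Product-form steady-state probability of n1 clients in queue 1 and n2 in queue 2."""
    return rho1**n1 * (1 - rho1) * rho2**n2 * (1 - rho2)

print(gamma1, gamma2, pi(0, 0), pi(2, 1))
```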

49 Analysis of a Closed Queuing Network
A closed network: a fixed number n of clients circulate among three queues with service rates µ, δ and λ, with the same routing probabilities p and q as before.

Visit ratios (up to a constant):
- Define the visit ratio (flow) of queue i as v_i.
- Then v_1 = v_3 + q v_2, v_2 = p v_1 and v_3 = (1-p) v_1 + (1-q) v_2. Thus v_1 = 1, v_2 = p and v_3 = 1 - pq.

Define ρ_1 = v_1/µ, ρ_2 = v_2/δ and ρ_3 = v_3/λ:
- The steady-state probability of n_i clients in queue i is
  π_∞(n_1, n_2, n_3) = (1/G) ρ_1^{n_1} ρ_2^{n_2} ρ_3^{n_3}   (with n_1 + n_2 + n_3 = n)
- where G, the normalizing constant, can be efficiently computed by dynamic programming.
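One classical dynamic program for G is the convolution recursion (Buzen's algorithm) for single-server queues; a sketch with arbitrary rates, using the visit ratios v_1 = 1, v_2 = p, v_3 = 1 - pq from the slide.

```python
# Illustrative parameters; visit ratios v1 = 1, v2 = p, v3 = 1 - pq as on the slide.
mu, delta, lam, p, q, n = 3.0, 2.0, 1.0, 0.5, 0.25, 4
v = [1.0, p, 1 - p * q]
rho = [v[0] / mu, v[1] / delta, v[2] / lam]

def normalizing_constant(rho, n):
    """Buzen's convolution: G(k, j) = G(k-1, j) + rho_k * G(k, j-1), computed in place."""
    g = [1.0] + [0.0] * n            # G(0, j): 1 for j = 0, else 0
    for r in rho:
        for j in range(1, n + 1):
            g[j] += r * g[j - 1]
    return g[n]

G = normalizing_constant(rho, n)

def pi(n1, n2, n3):
    """Product-form probability of the population vector (n1, n2, n3)."""
    assert n1 + n2 + n3 == n
    return rho[0]**n1 * rho[1]**n2 * rho[2]**n3 / G

# The probabilities over all population vectors sum to 1.
print(G, sum(pi(a, b, n - a - b) for a in range(n + 1) for b in range(n + 1 - a)))
```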

51 Queuing Networks and Petri Nets
Observations:
- A (single client class) queuing network can easily be represented by a Petri net.
- Such a Petri net is a state machine: every transition has at most a single input and a single output place.

Can we define a more general subclass of Petri nets with a product form for the steady-state distribution?

52 Product Form Stochastic Petri Nets (PFSPN)
Principles (illustrated on a net with transitions t_1, ..., t_7 and places p_1, ..., p_6):
- Transitions can be partitioned into subsets corresponding to several classes of clients with their specific activities.
- Places model resources shared between the clients.
- Client states are implicitly represented.

53 Bags and Transitions in PFSPN
The resource graph:
- The vertices are the input and the output bags of the transitions.
- Every transition t of the net yields a graph transition from its input bag to its output bag.
- Client classes correspond to the connected components of the graph.

First requirement: the connected components of the graph must be strongly connected.

54 Witnesses in PFSPN
Example: the vector -p_2 - p_3 is a witness for the bag p_1 + p_4:
  (-p_2 - p_3)·W(t_3) = 1,  (-p_2 - p_3)·W(t_1) = -1,  (-p_2 - p_3)·W(t) = 0 for every other t.

Witness for a bag b (where W is the incidence matrix):
- Let In(b) (resp. Out(b)) be the transitions with input (resp. output) bag b.
- Let v be a place vector; v is a witness for b if:
  - v·W(t) = -1 for every t ∈ In(b) (where W(t) is the incidence of t),
  - v·W(t) = 1 for every t ∈ Out(b),
  - v·W(t) = 0 for every t ∉ In(b) ∪ Out(b).

Second requirement: every bag must have a witness.

55 Steady-State Distributions of PFSPN
The reachability space of the example is characterized by:
  m(p_1) + m(p_2) + m(p_3) = 2  and  m(p_4) + m(p_5) + m(p_6) = m(p_1) + 1

Steady-state distribution. Assume the requirements are fulfilled, with w(b) the witness for bag b:
- Compute the visit ratios v(b) of the bags on the resource graph.
- The output rate of a bag b is µ(b) = Σ_{t : •t = b} µ(t), with µ(t) the rate of t.
- Then:
  π_∞(m) = (1/G) Π_b ( v(b) / µ(b) )^{w(b)·m}

Observation: the normalizing constant can be efficiently computed if the reachability space is characterized by linear place invariants.

56 Some References
- M. Ajmone Marsan, G. Balbo, G. Conte, S. Donatelli, G. Franceschinis. Modelling with Generalized Stochastic Petri Nets. Wiley Series in Parallel Computing, John Wiley & Sons, 1995. Freely available on the Web site of GreatSPN.
- S. Haddad, P. Moreaux. Chapter 7: Stochastic Petri Nets. In Petri Nets: Fundamental Models and Applications, Wiley, 2009.
- S. Haddad, P. Moreaux, M. Sereno, M. Silva. Product-form and stochastic Petri nets: a structural approach. Performance Evaluation, 59.
- S. Haddad, J. Mairesse, H.-T. Nguyen. Synthesis and Analysis of Product-form Petri Nets. Fundamenta Informaticae, 122(1-2).
