Stochastic Petri Net
Stochastic Petri Net
Serge Haddad, LSV, ENS Cachan & CNRS & INRIA
Petri Nets 2013, June 24th

Plan
1 Stochastic Petri Net
2 Markov Chain
3 Markovian Stochastic Petri Net
4 Generalized Markovian Stochastic Petri Net (GSPN)
Stochastic Petri Net versus Time Petri Net

In a TPN, the delays are chosen non-deterministically. In a Stochastic Petri Net (SPN), the delays are chosen randomly, by sampling distributions associated with the transitions... but these distributions are not sufficient to eliminate non-determinism.

Policies for a net
One needs to define:
- The choice policy: what is the next transition to fire?
- The service policy: what is the influence of the enabling degree of a transition on the process?
- The memory policy: what becomes of the samplings of distributions that have not been used?
Choice Policy

In the net, associate a distribution D_i and a weight w_i with every transition t_i.

Preselection w.r.t. a marking m and enabled transitions T_m:
- Normalize the weights of the enabled transitions: w'_i = w_i / Σ_{t_j ∈ T_m} w_j.
- Sample the discrete distribution defined by the w'_i. Let t_i be the selected transition; sample D_i, giving the delay d_i.

versus

Race policy with postselection w.r.t. a marking m:
- For every t_i ∈ T_m, sample D_i, giving the delay d_i.
- Let T* be the subset of T_m with the smallest delays.
- Normalize the weights of the transitions of T*: w'_i = w_i / Σ_{t_j ∈ T*} w_j.
- Sample the discrete distribution defined by the w'_i, yielding some t_i.

(Postselection can also be handled with priorities.)
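The two choice policies can be sketched in a few lines. This is a minimal simulation, not a tool API: the transition names, weights (1, 2, 2 as in the illustration that follows) and exponential delay distributions are assumed for the example.

```python
import random

random.seed(42)

weights = {"t1": 1.0, "t2": 2.0, "t3": 2.0}
rates = {"t1": 1.0, "t2": 1.0, "t3": 1.0}   # parameters of the delay distributions

def preselection(enabled):
    """First pick the transition by normalized weights, then sample its delay."""
    total = sum(weights[t] for t in enabled)
    r, acc = random.random() * total, 0.0
    for t in enabled:
        acc += weights[t]
        if r <= acc:
            return t, random.expovariate(rates[t])

def race_postselection(enabled):
    """Sample every delay first; a minimal delay wins (ties broken by weights)."""
    samples = {t: random.expovariate(rates[t]) for t in enabled}
    dmin = min(samples.values())
    winners = [t for t in enabled if samples[t] == dmin]
    total = sum(weights[t] for t in winners)
    r, acc = random.random() * total, 0.0
    for t in winners:
        acc += weights[t]
        if r <= acc:
            return t, dmin

# Under preselection with weights (1, 2, 2), t1 is chosen with probability 1/5.
n = 20000
freq_t1 = sum(1 for _ in range(n) if preselection(["t1", "t2", "t3"])[0] == "t1") / n
print(round(freq_t1, 2))
```

With continuous distributions the tie-breaking branch of the race policy almost never triggers; it matters exactly when 0-Dirac (immediate) transitions are introduced later.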
Choice Policy: Illustration

Three enabled transitions: t1 (D1, w1 = 1), t2 (D2, w2 = 2), t3 (D3, w3 = 2).

Preselection: sample (1/5, 2/5, 2/5), outcome t1; then sample D1, outcome 4.2.
Race policy: sample (D1, D2, D3), outcome (3.2, 6.5, 3.2); postselect among the tied transitions by sampling (1/3, -, 2/3), outcome t3.
Server Policy

A transition t can be viewed as a server for firings:
- A single-server t allows a single instance of firing in m if m[t⟩.
- An infinite-server t allows d instances of firings in m, where d = min{ ⌊m(p)/Pre(p,t)⌋ : p ∈ •t } is the enabling degree.
- A multiple-server t with bound b allows min(b, d) instances of firings in m.

Example with 3 tokens and enabled transitions t1, t2, t3, where t3 has enabling degree 3:
- t3 single server: sample (D1, D2, D3).
- t3 infinite server: sample (D1, D2, D3, D3, D3).
- t3 2-server: sample (D1, D2, D3, D3).

This can be generalised by marking-dependent rates.
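The enabling degree and the three server policies amount to a small computation. A minimal sketch, with an assumed marking and arc weights matching the example above:

```python
# Enabling degree and number of concurrent firing instances per server policy.

def enabling_degree(marking, pre, t):
    """d = min over input places p of floor(m(p) / Pre(p, t))."""
    return min(marking[p] // w for p, w in pre[t].items())

def firing_instances(marking, pre, t, policy):
    """Instances of firing allowed for t in marking m under a server policy."""
    d = enabling_degree(marking, pre, t)
    if policy == "single":
        return min(1, d)
    if policy == "infinite":
        return d
    _, b = policy                     # ("multiple", b): bounded multi-server
    return min(b, d)

pre = {"t3": {"p": 1}}                # t3 consumes one token from place p
marking = {"p": 3}                    # three tokens: enabling degree 3

print(firing_instances(marking, pre, "t3", "single"))        # 1
print(firing_instances(marking, pre, "t3", "infinite"))      # 3
print(firing_instances(marking, pre, "t3", ("multiple", 2))) # 2
```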
Memory Policy (1)

Assume d1 < min(d2, d3), so t1 fires first. What happens to d2 and d3?

Resampling Memory
Every sampling not used is forgotten. This could correspond to a crash transition.
Memory Policy (2)

Assume d1 < min(d2, d3), so t1 fires first. What happens to d2 and d3?

Enabling Memory
The samplings associated with still-enabled transitions are kept and decremented (d3' = d3 − d1). The samplings associated with disabled transitions are forgotten (like d2). Disabling a transition could correspond to aborting a service.
Memory Policy (3)

Assume d1 < min(d2, d3), so t1 fires first. What happens to d2 and d3?

Age Memory
All the samplings are kept and decremented (d3' = d3 − d1 and d2' = d2 − d1). The sampling associated with a disabled transition is frozen until the transition becomes enabled again (like d2'). Disabling a transition could correspond to suspending a service.
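The three memory policies can be contrasted on one residual-delay computation. A minimal sketch; the transition names, delays and the set of still-enabled transitions are assumed toy values (t2 becomes disabled after t1 fires, t3 stays enabled):

```python
# Residual delay samples after the winning transition fires, per memory policy.

def apply_memory(samples, fired, still_enabled, policy):
    """samples: transition -> sampled delay; policy: 'resampling'|'enabling'|'age'."""
    d_fired = samples[fired]
    residual = {}
    for t, d in samples.items():
        if t == fired:
            continue
        if policy == "resampling":
            continue                          # every unused sampling is forgotten
        if policy == "enabling":
            if t in still_enabled:
                residual[t] = d - d_fired     # kept and decremented
            # disabled transitions: forgotten
        elif policy == "age":
            # kept and decremented; frozen at this value while t is disabled
            residual[t] = d - d_fired
    return residual

samples = {"t1": 1.0, "t2": 2.5, "t3": 4.0}   # d1 < min(d2, d3)
print(apply_memory(samples, "t1", {"t3"}, "resampling"))
print(apply_memory(samples, "t1", {"t3"}, "enabling"))
print(apply_memory(samples, "t1", {"t3"}, "age"))
```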
Memory Policy (4)

Specification of the memory policy
To be fully expressive, it should be defined w.r.t. every pair of transitions.

Assume d2 < min(d1, d1', d1''), so t2 fires first. What happens to d1, d1' and d1''?

Interaction between memory policy and service policy
Assume enabling memory for t1 when firing t2, and infinite-server policy for t1. Which sample should be forgotten?
- The last sample performed,
- the first sample performed,
- the greatest sample, etc.

Warning: this choice may have a critical impact on the complexity of the analysis.
Plan
1 Stochastic Petri Net
2 Markov Chain
3 Markovian Stochastic Petri Net
4 Generalized Markovian Stochastic Petri Net (GSPN)
Discrete Time Markov Chain (DTMC)

A DTMC is a stochastic process which fulfills:
- For all n, T_n is the constant 1 (one transition per time unit).
- The process is memoryless:
  Pr(S_{n+1} = s_j | S_0 = s_{i_0}, ..., S_{n-1} = s_{i_{n-1}}, S_n = s_i) = Pr(S_{n+1} = s_j | S_n = s_i) ≜ P[i, j]

A DTMC is defined by (the initial distribution of) S_0 and P.
Analysis: the State Status

The transient analysis is easy and effective in the finite case: π_n = π_0 · P^n, with π_n the distribution of S_n.

The steady-state analysis (does lim_{n→∞} π_n exist?) requires theoretical developments.

Classification of states w.r.t. the asymptotic behaviour of the DTMC
- A state is transient if the probability of a return after a visit is less than one. Hence the probability of its occurrence goes to zero.
- A state is recurrent null if the probability of a return after a visit is one but the mean time of this return is infinite. Hence the probability of its occurrence still goes to zero.
- A state is recurrent non-null if the probability of a return after a visit is one and the mean time of this return is finite.

Illustration: the random walk on {0, 1, 2, ...} with parameter p; the three cases above correspond to p < 1/2, p = 1/2 and p > 1/2 respectively in the slide's figure.
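The transient analysis above is just repeated vector-matrix products. A minimal sketch on an assumed 2-state chain (not one from the slides):

```python
# Transient DTMC analysis: pi_n = pi_0 * P^n, by repeated products.

def step(pi, P):
    n = len(P)
    return [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

def transient(pi0, P, n):
    pi = list(pi0)
    for _ in range(n):
        pi = step(pi, P)
    return pi

P = [[0.9, 0.1],
     [0.5, 0.5]]
pi1 = transient([1.0, 0.0], P, 1)
print(pi1)                              # [0.9, 0.1]
pi100 = transient([1.0, 0.0], P, 100)
# this chain is irreducible and aperiodic, so pi_n converges to (5/6, 1/6)
print([round(x, 4) for x in pi100])
```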
State Status in a Finite DTMC

In a finite DTMC, the status of a state only depends on the graph associated with the chain:
- A state is transient iff it belongs to a non-terminal strongly connected component (SCC) of the graph.
- A state is recurrent non-null iff it belongs to a terminal SCC.

Example: T = {1,2,3} is a non-terminal SCC; C1 = {4,5} and C2 = {6,7,8} are terminal SCCs.
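The graph criterion above is directly implementable: compute the SCCs, mark an SCC terminal when no edge leaves it, and classify its states. A sketch using Kosaraju's algorithm; the 8-state edge set is assumed for illustration, mirroring T = {1,2,3}, C1 = {4,5}, C2 = {6,7,8}:

```python
# Classify states of a finite DTMC as transient or recurrent via SCCs.

def sccs(graph):
    order, seen = [], set()
    def dfs(u):                        # first pass: record finish order
        seen.add(u)
        for v in graph[u]:
            if v not in seen:
                dfs(v)
        order.append(u)
    for u in graph:
        if u not in seen:
            dfs(u)
    rev = {u: [] for u in graph}       # reversed graph
    for u in graph:
        for v in graph[u]:
            rev[v].append(u)
    comp, comps = {}, []
    for u in reversed(order):          # second pass on the reversed graph
        if u in comp:
            continue
        stack, members = [u], set()
        comp[u] = len(comps)
        while stack:
            x = stack.pop()
            members.add(x)
            for y in rev[x]:
                if y not in comp:
                    comp[y] = len(comps)
                    stack.append(y)
        comps.append(members)
    return comp, comps

def classify(graph):
    comp, comps = sccs(graph)
    terminal = [all(comp[v] == i for u in c for v in graph[u])
                for i, c in enumerate(comps)]
    return {u: ("recurrent" if terminal[comp[u]] else "transient")
            for u in graph}

graph = {1: [2], 2: [3, 4], 3: [1, 6], 4: [5], 5: [4],
         6: [7], 7: [8], 8: [6]}
status = classify(graph)
print(status[1], status[4], status[6])
```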
Analysis: Irreducibility and Periodicity

A chain is irreducible if its graph is strongly connected.
The periodicity of an irreducible chain is the greatest integer p such that the set of states can be partitioned into p subsets S_0, ..., S_{p-1} where every transition goes from some S_i to S_{(i+1) mod p}.

Computation of the periodicity: assign a height to every state by a breadth-first search from an arbitrary state; the periodicity is the gcd of the quantities height(u) + 1 − height(v) over all transitions (u, v). In the slide's example: periodicity = gcd(0, 2, 4) = 2.
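The height-based computation can be sketched directly. The 4-state example graph is assumed: a cycle 0 → 1 → 2 → 3 → 0 plus a chord 1 → 0, giving cycle lengths 4 and 2, hence period 2:

```python
# Periodicity of an irreducible chain: BFS heights, then
# gcd of height(u) + 1 - height(v) over all edges (u, v).
from math import gcd
from collections import deque

def periodicity(graph, root):
    height = {root: 0}
    q = deque([root])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in height:
                height[v] = height[u] + 1
                q.append(v)
    p = 0
    for u in graph:
        for v in graph[u]:
            p = gcd(p, height[u] + 1 - height[v])
    return p

graph = {0: [1], 1: [2, 0], 2: [3], 3: [0]}
print(periodicity(graph, 0))    # 2
```

A self-loop anywhere forces the gcd to 1, which is the usual quick argument that a chain with a self-loop is aperiodic.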
Analysis of a DTMC: a Particular Case

A particular case: the chain is irreducible and aperiodic (i.e. its periodicity is 1).
Then π ≜ lim_{n→∞} π_n exists and its value is independent of π_0.
π is the unique solution of { X = X·P, X·1 = 1 }, where one can omit an arbitrary equation of the first system.

Example (3-state chain): the balance equations
π_1 = 0.3π_1 + 0.2π_2, π_2 = 0.7π_1 + 0.8π_3, π_3 = 0.8π_2 + 0.2π_3 (hence π_3 = π_2),
together with π_1 + π_2 + π_3 = 1, give π = (1/8, 7/16, 7/16).
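For an irreducible aperiodic chain, power iteration converges to the unique solution of X = X·P, X·1 = 1. A sketch on a 3-state matrix reconstructed to be consistent with the (1/8, 7/16, 7/16) solution above; the matrix itself is an assumption:

```python
# Steady-state distribution of an irreducible aperiodic DTMC by power iteration.

def steady_state(P, iters=10000):
    pi = [1.0 / len(P)] * len(P)
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(len(P)))
              for j in range(len(P))]
    return pi

P = [[0.3, 0.7, 0.0],
     [0.2, 0.0, 0.8],
     [0.0, 0.8, 0.2]]
pi = steady_state(P)
print([round(x, 4) for x in pi])    # [0.125, 0.4375, 0.4375]
```

In practice one would solve the linear system directly (dropping one balance equation and adding the normalization), but the iteration makes the "limit of π_n" reading explicit.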
Analysis of a DTMC: the General Case

Almost general case: every terminal SCC is aperiodic. Then π exists and
π = Σ_{s∈S} π_0(s) Σ_{i∈I} preach_i[s] · π^i, where:
1. S is the set of states,
2. {C_i}_{i∈I} is the set of terminal SCCs,
3. π^i is the steady-state distribution of C_i,
4. and preach_i[s] is the probability to reach C_i starting from s.

Computation of the reachability probabilities for transient states
Let T be the set of transient states (i.e. those not belonging to a terminal SCC).
Let P_{T,T} be the submatrix of P restricted to the transient states.
Let P_{T,i} be the submatrix of P of transitions from T to C_i.
Then preach_i = (Σ_{n∈N} (P_{T,T})^n) · P_{T,i} · 1 = (Id − P_{T,T})^{-1} · P_{T,i} · 1.
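The reachability probabilities can be computed without inverting a matrix, by iterating the fixpoint r = P_{T,i}·1 + P_{T,T}·r (the Neumann-series form of (Id − P_{T,T})^{-1} P_{T,i} 1). A sketch on an assumed example with three transient states:

```python
# preach_i for transient states, via the fixpoint r = b + P_TT r
# with b = P_Ti * 1 (the one-step probability of entering C_i).

def preach(P_TT, P_Ti, iters=2000):
    b = [sum(row) for row in P_Ti]
    r = [0.0] * len(b)
    for _ in range(iters):
        r = [b[k] + sum(P_TT[k][j] * r[j] for j in range(len(r)))
             for k in range(len(r))]
    return r

# transient states {0, 1, 2}; the remaining mass leaks to terminal SCCs
P_TT = [[0.0, 0.5, 0.0],
        [0.0, 0.0, 0.5],
        [0.5, 0.0, 0.0]]
P_T1 = [[0.5], [0.0], [0.0]]        # one-step transitions into C_1
r = preach(P_TT, P_T1)
print([round(x, 4) for x in r])     # (4/7, 1/7, 2/7)
```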
Illustration: SCCs and Matrices

(The slide shows, for the example with T = {1,2,3}, C1 = {4,5} and C2 = {6,7,8}, the matrix P_{T,T} and the vectors P_{T,1}·1 and P_{T,2}·1 used in the reachability computation; the numeric entries are not recoverable from the transcription.)
Continuous Time Markov Chain (CTMC)

A CTMC is a stochastic process which fulfills:
- Memoryless state change:
  Pr(S_{n+1} = s_j | S_0 = s_{i_0}, ..., S_{n-1} = s_{i_{n-1}}, T_0 < τ_0, ..., T_n < τ_n, S_n = s_i) = Pr(S_{n+1} = s_j | S_n = s_i) ≜ P[i, j]
- Memoryless transition delay:
  Pr(T_n < τ | S_0 = s_{i_0}, ..., S_{n-1} = s_{i_{n-1}}, T_0 < τ_0, ..., T_{n-1} < τ_{n-1}, S_n = s_i) = Pr(T_n < τ | S_n = s_i) = 1 − e^{−λ_i τ}

Notations and properties
- P defines an embedded DTMC (the chain of state changes).
- Let π(τ) be the distribution of X(τ); for δ going to 0 it holds that:
  π(τ + δ)(s_i) ≈ π(τ)(s_i)(1 − λ_i δ) + Σ_{j≠i} π(τ)(s_j) λ_j δ P[j, i]
- Hence, with Q the infinitesimal generator defined by Q[i, j] ≜ λ_i P[i, j] for j ≠ i and Q[i, i] ≜ −Σ_{j≠i} Q[i, j]:
  dπ/dτ = π · Q
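The construction of Q from the exit rates λ_i and the embedded matrix P is mechanical. A minimal sketch with assumed toy rates:

```python
# Infinitesimal generator from exit rates and embedded jump matrix:
# Q[i][j] = lambda_i * P[i][j] for j != i, diagonal = minus the row sum.

def generator(lam, P):
    n = len(lam)
    Q = [[lam[i] * P[i][j] if i != j else 0.0 for j in range(n)]
         for i in range(n)]
    for i in range(n):
        Q[i][i] = -sum(Q[i][j] for j in range(n) if j != i)
    return Q

lam = [2.0, 1.0]
P = [[0.0, 1.0],
     [1.0, 0.0]]
Q = generator(lam, P)
print(Q)    # [[-2.0, 2.0], [1.0, -1.0]]
```

Every row of Q sums to zero, which is the matrix form of conservation of probability in dπ/dτ = π·Q.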
CTMC: Illustration and Uniformization

The slide shows a CTMC given by (λ, P) and a uniform version (λ', P') of it, equivalent w.r.t. the states: both have the same generator Q. Uniformization rescales the exit rate of every state to a single rate λ' ≥ max_i λ_i and compensates by adding self-loops in P', without changing the stochastic behaviour of the states.
Analysis of a CTMC

Transient analysis
- Build a uniform version (λ, P) of the CTMC such that P[i, i] > 0 for all i.
- Compute by case decomposition w.r.t. the number of transitions:
  π(τ) = π(0) Σ_{n∈N} e^{−λτ} ((λτ)^n / n!) P^n

Steady-state analysis
The steady-state distribution of visits is given by the steady-state distribution of (λ, P) (by construction, the terminal SCCs are aperiodic)... and it equals the steady-state distribution of the CTMC, since the sojourn times all follow the same distribution.

A particular case: P irreducible. The steady-state distribution π is the unique solution of { X·Q = 0, X·1 = 1 }, where one can omit an arbitrary equation of the first system.
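The uniformization series truncates cleanly because the Poisson weights decay fast. A sketch on an assumed 2-state CTMC with Q = [[-1, 1], [1, -1]]; uniformizing at rate λ = 2 gives P = Id + Q/λ = [[0.5, 0.5], [0.5, 0.5]]:

```python
# Transient CTMC analysis by uniformization:
# pi(tau) = pi(0) * sum_n e^{-lam tau} (lam tau)^n / n! * P^n.
from math import exp

def uniformized_transient(pi0, lam, P, tau, terms=200):
    n_states = len(P)
    pi, result = list(pi0), [0.0] * n_states
    weight = exp(-lam * tau)             # Poisson(lam*tau) mass at n = 0
    for n in range(terms):
        for j in range(n_states):
            result[j] += weight * pi[j]
        pi = [sum(pi[i] * P[i][j] for i in range(n_states))
              for j in range(n_states)]  # pi(0) * P^{n+1}
        weight *= lam * tau / (n + 1)    # next Poisson mass
    return result

lam = 2.0
P = [[0.5, 0.5], [0.5, 0.5]]
pi_t = uniformized_transient([1.0, 0.0], lam, P, tau=1.0)
print([round(x, 4) for x in pi_t])
```

For this symmetric chain the analytic answer is known, p_0(τ) = 1/2 + (1/2)e^{−2τ}, which makes a convenient cross-check.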
Plan
1 Stochastic Petri Net
2 Markov Chain
3 Markovian Stochastic Petri Net
4 Generalized Markovian Stochastic Petri Net (GSPN)
Markovian Stochastic Petri Net

Hypotheses
- The delay of every transition t_i is exponentially distributed, with density λ_i e^{−λ_i τ}, where the parameter λ_i is called the rate of the transition.
- For simplicity, the server policy is single-server.

First observations
- The weights for the choice policy are no longer required, since the equality of two samples has null probability (due to the continuity of the distributions).
- The residual delay d_j − d_i of transition t_j, knowing that t_i has fired (i.e. d_i is the shortest delay), has the same distribution as the initial delay. Thus the memory policy is irrelevant.
Markovian Net and Markov Chain

Key observation: given a marking m with T_m = {t_1, ..., t_k}:
- The sojourn time in m is exponentially distributed with rate Σ_i λ_i.
- The probability that t_i is the next transition to fire is λ_i / (Σ_j λ_j).

Thus the stochastic process is a CTMC whose states are the reachable markings and whose transitions are the transitions of the reachability graph, labelled with the corresponding rates.
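Both facts in the key observation (minimum of independent exponentials, and winner probability proportional to rate) can be checked by simulation. A sketch with two assumed rates:

```python
# Race of independent exponential delays: sojourn ~ Exp(sum of rates),
# transition i wins with probability lam_i / sum_j lam_j.
import random

random.seed(7)
rates = {"t1": 1.0, "t2": 3.0}
total = sum(rates.values())           # 4.0

n = 50000
wins_t1 = 0
sojourns = 0.0
for _ in range(n):
    samples = {t: random.expovariate(r) for t, r in rates.items()}
    winner = min(samples, key=samples.get)
    wins_t1 += (winner == "t1")
    sojourns += samples[winner]

print(round(wins_t1 / n, 2))          # close to lam_1/total = 1/4
print(round(sojourns / n, 2))         # close to 1/total = 1/4
```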
Plan
1 Stochastic Petri Net
2 Markov Chain
3 Markovian Stochastic Petri Net
4 Generalized Markovian Stochastic Petri Net (GSPN)
Generalizing Distributions for Nets

Modelling delays with exponential distributions is reasonable when:
- only mean-value information is known about the distributions, or
- exponential distributions (or combinations of them) are enough to approximate the real distributions.

Modelling delays with exponential distributions is not reasonable when:
- the distribution of an event is known and is poorly approximable with exponential distributions (e.g. a time-out of 10 time units), or
- the delays of the events have different orders of magnitude (e.g. executing an instruction versus performing a database request).

In the latter case, the 0-Dirac distribution is required.
Generalized Markovian Stochastic Petri Net (GSPN)

Generalized Markovian Stochastic Petri Nets (GSPN) are nets whose:
- timed transitions have exponential distributions, and
- immediate transitions have 0-Dirac distributions.

Their analysis is based on Markovian Renewal Processes, a generalization of Markov chains.
Markovian Renewal Process

A Markovian Renewal Process (MRP) fulfills a relative memoryless property:
Pr(S_{n+1} = s_j, T_n < τ | S_0 = s_{i_0}, ..., S_{n-1} = s_{i_{n-1}}, T_0 < τ_0, ..., S_n = s_i) = Pr(S_{n+1} = s_j, T_n < τ | S_n = s_i) ≜ Q[i, j, τ]

- The embedded chain is defined by P[i, j] = lim_{τ→∞} Q[i, j, τ].
- The sojourn time Soj has a distribution defined by Pr(Soj[i] < τ) = Σ_j Q[i, j, τ].

Analysis of an MRP
The steady-state distribution π (if it exists) is deduced from the steady-state distribution π' of the embedded chain by:
π(s_i) = π'(s_i) E(Soj[i]) / Σ_j π'(s_j) E(Soj[j])

Transient analysis is much harder... but the reachability probabilities only depend on the embedded chain.
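The MRP steady-state formula is a one-line reweighting of the embedded chain's distribution by the expected sojourn times. A minimal sketch with assumed toy values:

```python
# MRP steady state: pi(s_i) = pi'(s_i) E(Soj[i]) / sum_j pi'(s_j) E(Soj[j]).

def mrp_steady_state(pi_embedded, mean_sojourn):
    weighted = [p * s for p, s in zip(pi_embedded, mean_sojourn)]
    total = sum(weighted)
    return [w / total for w in weighted]

pi_embedded = [0.5, 0.5]        # steady state of the embedded DTMC
mean_sojourn = [3.0, 1.0]       # E(Soj[i]) per state
print(mrp_steady_state(pi_embedded, mean_sojourn))   # [0.75, 0.25]
```

Intuitively, a state visited as often as another but held three times longer carries three times its probability mass in continuous time.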
A GSPN is a Markovian Renewal Process

Observations
- Weights are required for immediate transitions.
- The restricted reachability graph corresponds to the embedded DTMC.

The slide's reachability graph distinguishes tangible markings (only timed transitions are enabled; the sojourn time is exponential) from vanishing markings (at least one immediate transition is enabled; the sojourn time is null).
Steady-State Analysis of a GSPN (1)

Standard method for an MRP:
- Build the restricted reachability graph, equivalent to the embedded DTMC.
- Deduce the probability matrix P.
- Compute π', the steady-state distribution of the visits of markings: π' = π'·P.
- Compute π, the steady-state distribution of the sojourn in tangible markings:
  π(m) = π'(m) Soj(m) / Σ_{m' tangible} π'(m') Soj(m')

How can the vanishing markings be eliminated earlier in the computation?
Steady-State Analysis of a GSPN (2)

An alternative method:
- As before, compute the transition probability matrix P.
- Compute the transition probability matrix P' between tangible markings.
- Compute π'', the (relative) steady-state distribution of the visits of tangible markings: π'' = π''·P'.
- Compute π, the steady-state distribution of the sojourn in tangible markings:
  π(m) = π''(m) Soj(m) / Σ_{m' tangible} π''(m') Soj(m')

Computation of P'
Let P_{X,Y} be the probability transition matrix from subset X to subset Y, and let V (resp. T) be the set of vanishing (resp. tangible) markings. Then:
P' = P_{T,T} + P_{T,V} (Σ_{n∈N} P_{V,V}^n) P_{V,T} = P_{T,T} + P_{T,V} (Id − P_{V,V})^{-1} P_{V,T}
Iterative (resp. direct) computations use the first (resp. second) expression.
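The iterative expression for P' can be sketched with plain matrix arithmetic. The example is an assumption: two tangible markings, one vanishing marking that loops on itself with probability 0.5 before resolving to a tangible one:

```python
# Vanishing-marking elimination: P' = P_TT + P_TV (sum_n P_VV^n) P_VT,
# with the Neumann series truncated (valid since P_VV is substochastic).

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matadd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def eliminate_vanishing(P_TT, P_TV, P_VV, P_VT, iters=200):
    n = len(P_VV)
    S = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    power = [row[:] for row in S]
    for _ in range(iters):                 # S = Id + P_VV + P_VV^2 + ...
        power = matmul(power, P_VV)
        S = matadd(S, power)
    return matadd(P_TT, matmul(matmul(P_TV, S), P_VT))

# tangible {m1, m2}, vanishing {v}
P_TT = [[0.0, 0.5], [1.0, 0.0]]
P_TV = [[0.5], [0.0]]
P_VV = [[0.5]]
P_VT = [[0.1, 0.4]]
P_prime = eliminate_vanishing(P_TT, P_TV, P_VV, P_VT)
print([[round(x, 4) for x in row] for row in P_prime])
```

Here (Id − P_VV)^{-1} = 2, so the vanishing state forwards its incoming mass as (0.2, 0.8), and the rows of P' again sum to one, as they must for a probability matrix between tangible markings.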
Steady-State Analysis: Illustration

(The slide works out the elimination of vanishing markings on an example, with p2 = w2/(w2+w3), p3 = w3/(w2+w3), p4 = λ4/(λ4+λ5), p5 = λ5/(λ4+λ5), and c, d normalizing constants; the full expressions are not recoverable from the transcription.)
Some References

- M. Ajmone Marsan, G. Balbo, G. Conte, S. Donatelli, G. Franceschinis. Modelling with Generalized Stochastic Petri Nets. Wiley Series in Parallel Computing, John Wiley & Sons, 1995. Freely available on the GreatSPN web site.
- S. Haddad, P. Moreaux. Chapter 7: Stochastic Petri Nets. In Petri Nets: Fundamental Models and Applications, Wiley.
More informationSection Notes 9. Midterm 2 Review. Applied Math / Engineering Sciences 121. Week of December 3, 2018
Section Notes 9 Midterm 2 Review Applied Math / Engineering Sciences 121 Week of December 3, 2018 The following list of topics is an overview of the material that was covered in the lectures and sections
More informationMarkov Chains and Stochastic Sampling
Part I Markov Chains and Stochastic Sampling 1 Markov Chains and Random Walks on Graphs 1.1 Structure of Finite Markov Chains We shall only consider Markov chains with a finite, but usually very large,
More informationComparison of Different Semantics for Time Petri Nets
Comparison of Different Semantics for Time Petri Nets B. Bérard 1, F. Cassez 2, S. Haddad 1, Didier Lime 3, O.H. Roux 2 1 LAMSADE, Paris, France E-mail: {beatrice.berard serge.haddad}@lamsade.dauphine.fr
More information1 IEOR 4701: Continuous-Time Markov Chains
Copyright c 2006 by Karl Sigman 1 IEOR 4701: Continuous-Time Markov Chains A Markov chain in discrete time, {X n : n 0}, remains in any state for exactly one unit of time before making a transition (change
More informationModelling in Systems Biology
Modelling in Systems Biology Maria Grazia Vigliotti thanks to my students Anton Stefanek, Ahmed Guecioueur Imperial College Formal representation of chemical reactions precise qualitative and quantitative
More information= P{X 0. = i} (1) If the MC has stationary transition probabilities then, = i} = P{X n+1
Properties of Markov Chains and Evaluation of Steady State Transition Matrix P ss V. Krishnan - 3/9/2 Property 1 Let X be a Markov Chain (MC) where X {X n : n, 1, }. The state space is E {i, j, k, }. The
More informationLecture 9 Classification of States
Lecture 9: Classification of States of 27 Course: M32K Intro to Stochastic Processes Term: Fall 204 Instructor: Gordan Zitkovic Lecture 9 Classification of States There will be a lot of definitions and
More informationHeuristic Search Algorithms
CHAPTER 4 Heuristic Search Algorithms 59 4.1 HEURISTIC SEARCH AND SSP MDPS The methods we explored in the previous chapter have a serious practical drawback the amount of memory they require is proportional
More informationIEOR 6711, HMWK 5, Professor Sigman
IEOR 6711, HMWK 5, Professor Sigman 1. Semi-Markov processes: Consider an irreducible positive recurrent discrete-time Markov chain {X n } with transition matrix P (P i,j ), i, j S, and finite state space.
More informationLecturer: Olga Galinina
Renewal models Lecturer: Olga Galinina E-mail: olga.galinina@tut.fi Outline Reminder. Exponential models definition of renewal processes exponential interval distribution Erlang distribution hyperexponential
More informationSPN 2003 Preliminary Version. Translating Hybrid Petri Nets into Hybrid. Automata 1. Dipartimento di Informatica. Universita di Torino
SPN 2003 Preliminary Version Translating Hybrid Petri Nets into Hybrid Automata 1 Marco Gribaudo 2 and Andras Horvath 3 Dipartimento di Informatica Universita di Torino Corso Svizzera 185, 10149 Torino,
More informationµ n 1 (v )z n P (v, )
Plan More Examples (Countable-state case). Questions 1. Extended Examples 2. Ideas and Results Next Time: General-state Markov Chains Homework 4 typo Unless otherwise noted, let X be an irreducible, aperiodic
More informationRandom Walk on a Graph
IOR 67: Stochastic Models I Second Midterm xam, hapters 3 & 4, November 2, 200 SOLUTIONS Justify your answers; show your work.. Random Walk on a raph (25 points) Random Walk on a raph 2 5 F B 3 3 2 Figure
More informationINTRODUCTION TO MARKOV CHAINS AND MARKOV CHAIN MIXING
INTRODUCTION TO MARKOV CHAINS AND MARKOV CHAIN MIXING ERIC SHANG Abstract. This paper provides an introduction to Markov chains and their basic classifications and interesting properties. After establishing
More informationMarkov Chains CK eqns Classes Hitting times Rec./trans. Strong Markov Stat. distr. Reversibility * Markov Chains
Markov Chains A random process X is a family {X t : t T } of random variables indexed by some set T. When T = {0, 1, 2,... } one speaks about a discrete-time process, for T = R or T = [0, ) one has a continuous-time
More informationCDA5530: Performance Models of Computers and Networks. Chapter 3: Review of Practical
CDA5530: Performance Models of Computers and Networks Chapter 3: Review of Practical Stochastic Processes Definition Stochastic ti process X = {X(t), t T} is a collection of random variables (rvs); one
More informationDISCRETE-TIME MARKOVIAN STOCHASTIC PETRI NETS
20 DISCRETE-TIME MARKOVIAN STOCHASTIC PETRI NETS Gianfranco Ciardo 1 Department of Computer Science, College of William and Mary Williamsburg, VA 23187-8795, USA ciardo@cs.wm.edu ABSTRACT We revisit and
More informationLet (Ω, F) be a measureable space. A filtration in discrete time is a sequence of. F s F t
2.2 Filtrations Let (Ω, F) be a measureable space. A filtration in discrete time is a sequence of σ algebras {F t } such that F t F and F t F t+1 for all t = 0, 1,.... In continuous time, the second condition
More informationAN INTRODUCTION TO DISCRETE-EVENT SIMULATION
AN INTRODUCTION TO DISCRETE-EVENT SIMULATION Peter W. Glynn 1 Peter J. Haas 2 1 Dept. of Management Science and Engineering Stanford University 2 IBM Almaden Research Center San Jose, CA CAVEAT: WE ARE
More informationMarkovian techniques for performance analysis of computer and communication systems
Markovian techniques for performance analysis of computer and communication systems Miklós Telek C.Sc./Ph.D. of technical science Dissertation Department of Telecommunications Technical University of Budapest
More informationQueueing systems. Renato Lo Cigno. Simulation and Performance Evaluation Queueing systems - Renato Lo Cigno 1
Queueing systems Renato Lo Cigno Simulation and Performance Evaluation 2014-15 Queueing systems - Renato Lo Cigno 1 Queues A Birth-Death process is well modeled by a queue Indeed queues can be used to
More informationA FRAMEWORK FOR PERFORMABILITY MODELLING USING PROXELS. Sanja Lazarova-Molnar, Graham Horton
A FRAMEWORK FOR PERFORMABILITY MODELLING USING PROXELS Sanja Lazarova-Molnar, Graham Horton University of Magdeburg Department of Computer Science Universitaetsplatz 2, 39106 Magdeburg, Germany sanja@sim-md.de
More informationMarkov chains. 1 Discrete time Markov chains. c A. J. Ganesh, University of Bristol, 2015
Markov chains c A. J. Ganesh, University of Bristol, 2015 1 Discrete time Markov chains Example: A drunkard is walking home from the pub. There are n lampposts between the pub and his home, at each of
More informationLIMITING PROBABILITY TRANSITION MATRIX OF A CONDENSED FIBONACCI TREE
International Journal of Applied Mathematics Volume 31 No. 18, 41-49 ISSN: 1311-178 (printed version); ISSN: 1314-86 (on-line version) doi: http://dx.doi.org/1.173/ijam.v31i.6 LIMITING PROBABILITY TRANSITION
More informationProceedings of the 2012 Winter Simulation Conference C. Laroque, J. Himmelspach, R. Pasupathy, O. Rose, and A. M. Uhrmacher, eds.
Proceedings of the 2012 Winter Simulation Conference C. Laroque, J. Himmelspach, R. Pasupathy, O. Rose, and A. M. Uhrmacher, eds. EFFICIENT SIMULATION OF STOCHASTIC WELL-FORMED NETS THROUGH SYMMETRY EXPLOITATION
More informationRecap. Probability, stochastic processes, Markov chains. ELEC-C7210 Modeling and analysis of communication networks
Recap Probability, stochastic processes, Markov chains ELEC-C7210 Modeling and analysis of communication networks 1 Recap: Probability theory important distributions Discrete distributions Geometric distribution
More informationMarkov Processes Hamid R. Rabiee
Markov Processes Hamid R. Rabiee Overview Markov Property Markov Chains Definition Stationary Property Paths in Markov Chains Classification of States Steady States in MCs. 2 Markov Property A discrete
More informationProbabilistic Aspects of Computer Science: Probabilistic Automata
Probabilistic Aspects of Computer Science: Probabilistic Automata Serge Haddad LSV, ENS Paris-Saclay & CNRS & Inria M Jacques Herbrand Presentation 2 Properties of Stochastic Languages 3 Decidability Results
More information2. Transience and Recurrence
Virtual Laboratories > 15. Markov Chains > 1 2 3 4 5 6 7 8 9 10 11 12 2. Transience and Recurrence The study of Markov chains, particularly the limiting behavior, depends critically on the random times
More informationSolutions to Homework Discrete Stochastic Processes MIT, Spring 2011
Exercise 6.5: Solutions to Homework 0 6.262 Discrete Stochastic Processes MIT, Spring 20 Consider the Markov process illustrated below. The transitions are labelled by the rate q ij at which those transitions
More informationMQNA - Markovian Queueing Networks Analyser
FACULDADE DE INFORMÁTICA PUCRS - Brazil http://www.pucrs.br/inf/pos/ MQNA - Markovian Queueing Networks Analyser L. Brenner, P. Fernandes, A. Sales Technical Report Series Number 030 September, 2003 Contact:
More informationLecture 20 : Markov Chains
CSCI 3560 Probability and Computing Instructor: Bogdan Chlebus Lecture 0 : Markov Chains We consider stochastic processes. A process represents a system that evolves through incremental changes called
More informationIEOR 4106: Introduction to Operations Research: Stochastic Models Spring 2011, Professor Whitt Class Lecture Notes: Tuesday, March 1.
IEOR 46: Introduction to Operations Research: Stochastic Models Spring, Professor Whitt Class Lecture Notes: Tuesday, March. Continuous-Time Markov Chains, Ross Chapter 6 Problems for Discussion and Solutions.
More informationCensoring Markov Chains and Stochastic Bounds
Censoring Markov Chains and Stochastic Bounds J-M. Fourneau 1,2, N. Pekergin 2,3,4 and S. Younès 2 1 INRIA Project MESCAL, Montbonnot, France 2 PRiSM, University Versailles-Saint-Quentin, 45, Av. des Etats-Unis
More informationDiscrete time Markov chains. Discrete Time Markov Chains, Limiting. Limiting Distribution and Classification. Regular Transition Probability Matrices
Discrete time Markov chains Discrete Time Markov Chains, Limiting Distribution and Classification DTU Informatics 02407 Stochastic Processes 3, September 9 207 Today: Discrete time Markov chains - invariant
More informationQueueing Theory I Summary! Little s Law! Queueing System Notation! Stationary Analysis of Elementary Queueing Systems " M/M/1 " M/M/m " M/M/1/K "
Queueing Theory I Summary Little s Law Queueing System Notation Stationary Analysis of Elementary Queueing Systems " M/M/1 " M/M/m " M/M/1/K " Little s Law a(t): the process that counts the number of arrivals
More informationP i [B k ] = lim. n=1 p(n) ii <. n=1. V i :=
2.7. Recurrence and transience Consider a Markov chain {X n : n N 0 } on state space E with transition matrix P. Definition 2.7.1. A state i E is called recurrent if P i [X n = i for infinitely many n]
More informationThe Theory behind PageRank
The Theory behind PageRank Mauro Sozio Telecom ParisTech May 21, 2014 Mauro Sozio (LTCI TPT) The Theory behind PageRank May 21, 2014 1 / 19 A Crash Course on Discrete Probability Events and Probability
More informationLearning Automata Based Adaptive Petri Net and Its Application to Priority Assignment in Queuing Systems with Unknown Parameters
Learning Automata Based Adaptive Petri Net and Its Application to Priority Assignment in Queuing Systems with Unknown Parameters S. Mehdi Vahidipour, Mohammad Reza Meybodi and Mehdi Esnaashari Abstract
More informationLECTURE #6 BIRTH-DEATH PROCESS
LECTURE #6 BIRTH-DEATH PROCESS 204528 Queueing Theory and Applications in Networks Assoc. Prof., Ph.D. (รศ.ดร. อน นต ผลเพ ม) Computer Engineering Department, Kasetsart University Outline 2 Birth-Death
More informationMarkov chains and the number of occurrences of a word in a sequence ( , 11.1,2,4,6)
Markov chains and the number of occurrences of a word in a sequence (4.5 4.9,.,2,4,6) Prof. Tesler Math 283 Fall 208 Prof. Tesler Markov Chains Math 283 / Fall 208 / 44 Locating overlapping occurrences
More informationPowerful tool for sampling from complicated distributions. Many use Markov chains to model events that arise in nature.
Markov Chains Markov chains: 2SAT: Powerful tool for sampling from complicated distributions rely only on local moves to explore state space. Many use Markov chains to model events that arise in nature.
More informationApproximate Counting and Markov Chain Monte Carlo
Approximate Counting and Markov Chain Monte Carlo A Randomized Approach Arindam Pal Department of Computer Science and Engineering Indian Institute of Technology Delhi March 18, 2011 April 8, 2011 Arindam
More informationAggregation and Numerical Techniques for Passage Time Calculations in Large semi-markov Models
Aggregation and Numerical Techniques for Passage Time Calculations in Large semi-markov Models Marcel Christoph Günther mcg05@doc.ic.ac.uk June 18, 2009 Marker: Dr. Jeremy Bradley Second marker: Dr. William
More informationPhase-Type Distributions in Stochastic Automata Networks
Phase-Type Distributions in Stochastic Automata Networks I. Sbeity (), L. Brenner (), B. Plateau () and W.J. Stewart (2) April 2, 2007 () Informatique et Distribution - 5, Av. Jean Kuntzman, 38330 Montbonnot,
More information