Integrating synchronization with priority into a Kronecker representation


Performance Evaluation 44 (2001)

Integrating synchronization with priority into a Kronecker representation

Susanna Donatelli a, Peter Kemper b
a Dip. di Informatica, Università di Torino, Corso Svizzera 185, Torino, Italy
b Informatik IV, Universität Dortmund, Dortmund, Germany

Abstract

The compositional representation of a Markov chain using Kronecker algebra, based on a compositional model representation as a superposed generalized stochastic Petri net or a stochastic automata network, has been studied for a while. In this paper we describe a Kronecker expression and associated data structures that allow handling nets with synchronization over activities of different levels of priority. New algorithms for these structures are provided to perform an iterative solution method of Jacobi or Gauss-Seidel type. These algorithms are implemented in the APNN Toolbox. We use this implementation in combination with GreatSPN and exercise an example that illustrates characteristics of the presented algorithms. © 2001 Elsevier Science B.V. All rights reserved.

Keywords: Stochastic Petri nets; Performance evaluation tools; Numerical algorithms

1. Introduction/motivation

Kronecker-based approaches [15,17,20,23] for stochastic automata and stochastic Petri nets (SPN) are centered around the idea of compositionality. One can follow one of two perspectives: either compose a system out of subsystems by some composition operators, or decompose a complete system into a set of interacting components. We follow the latter approach. If a system can be decomposed into a set of interacting components $N^i$, we can consider the state space of the complete net as a subset of the cross-product of the state spaces of the components.
Matrix operators $\oplus$ and $\otimes$ (Kronecker sum and product) allow us to compose matrices built from the reachability graphs and/or the infinitesimal generators of the $N^i$ into a Kronecker expression that completely characterizes the reachability graph RG and the infinitesimal generator $Q$ of the complete net. Reachability and steady-state/transient probabilities can be computed using a Kronecker expression instead of explicitly storing RG or $Q$. The method results in a (usually) large saving in storage and, for matrices of the $N^i$ that are not too sparse, also in a saving in execution time [6,23].

This research is partially supported by CNR and Deutsche Forschungsgemeinschaft, SFB 559. Corresponding author. E-mail addresses: susi@di.unito.it (S. Donatelli), kemper@ls4.cs.uni-dortmund.de (P. Kemper).
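To make the role of the Kronecker operators concrete, the following is a minimal NumPy sketch (not part of the paper's toolchain; the two 2-state component matrices and the rate $W(t) = 1.5$ are invented for illustration) of how local behavior combines via a Kronecker sum while a synchronized transition contributes a Kronecker product:

```python
import numpy as np

# Hypothetical 2-state local rate matrices for two components
R1_L = np.array([[0.0, 2.0], [0.0, 0.0]])   # local timed transition of component 1
R2_L = np.array([[0.0, 3.0], [0.0, 0.0]])   # local timed transition of component 2
I2 = np.eye(2)

# Kronecker sum R1_L (+) R2_L collects the local (independent) behavior
kron_sum = np.kron(R1_L, I2) + np.kron(I2, R2_L)

# A synchronized transition t with rate W(t) = 1.5 that fires in both components
R1_t = np.array([[0.0, 0.0], [1.0, 0.0]])
R2_t = np.array([[0.0, 0.0], [1.0, 0.0]])
G = kron_sum + 1.5 * np.kron(R1_t, R2_t)

print(G.shape)  # 4x4 matrix over the product state space
```

Row/column indices of `G` range over the product state space of the two components, so `G[3, 0]` is the rate of the synchronized jump from state (1,1) to state (0,0).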

The method also applies to generalized stochastic Petri nets (GSPN) [1], with the limitation that components interact through timed transitions [17,20], or through places that are connected only to timed transitions [11]. GSPNs are SPNs where certain transitions (named immediate) can fire in zero time; these transitions have priority over timed ones, and among immediate transitions it is possible to define different priority levels. The constraint of timed synchronization is a severe limitation for a number of applications, since many activities connected to the acquisition of resources, to communications in rendezvous style, or to choices, are more appropriately modeled by immediate (zero-timed) transitions. Moreover, the assignment of different priority levels to transitions is common practice to avoid the problem of probability specification for immediate transitions known as confusion [1], or simply to implement priority of one process over another in the acquisition of a resource. The problem of synchronization over immediate transitions was tackled in [14], for the case of all immediate transitions belonging to the same priority level, but the prototype implementation presented in [24] is not available nowadays. The approach followed in [14] is to work at the level of the embedded discrete time Markov chain, and to use a diagonal matrix to remove the anomalies in the matrix due to the presence of priorities. The contribution of this paper is to define a theoretical framework for the Kronecker-based solution of GSPNs that interact through transitions that are either timed or immediate at different priority levels. We extend the result in [14], provide a new proof for it, and show how appropriate data structures for the reachability sets at the different priority levels can be used to provide an efficient solution using the Jacobi and Gauss-Seidel iteration schemes for numerical analysis.
The solutions described in this paper are now part of the APNN Toolbox [3]. The paper is organized as follows. Section 2 introduces basic definitions. Section 3 presents the extension of the Kronecker expression to the case of synchronization over immediate transitions. Sections 4 and 5 discuss algorithmic aspects, related, respectively, to the generation of a Kronecker expression for a GSPN which consists of a set of subnets, and to the analysis algorithms that perform steady-state analysis using Jacobi or Gauss-Seidel iteration schemes, exemplified through a running example. Section 6 discusses the integration of the above implementation into the APNN Toolbox [3] and GreatSPN [9] and compares the proposed approach with alternative solution methods. Section 7 summarizes the paper and discusses possible extensions.

2. Basic definitions

We assume that the reader is somewhat familiar with GSPNs and their dynamic behavior [1], and we briefly recall definitions in order to fix the notation.

Definition 1. A GSPN is an eight-tuple $(P, T, \alpha, I, O, H, W, M_0)$ where P is the set of places, T the set of transitions with $T \cap P = \emptyset$, $\alpha : T \to \mathbb{N}$ the priority function, $I, O, H : T \to Bag(P)$ the input, output, and inhibition functions, respectively, where $Bag(P)$ is the set of multisets on P, $W : T \to \mathbb{R}^+$ is called the weight function for transitions t with $\alpha(t) \geq 1$, and the rate function for transitions t with $\alpha(t) = 0$, and $M_0 : P \to \mathbb{N}_0$ is the initial marking: a function that assigns a nonnegative integer value to each place.

$T_0 = \{t \in T \mid \alpha(t) = 0\}$ is the set of timed transitions, $T_I = T \setminus T_0$ is the set of immediate transitions. A transition t such that $\alpha(t) = k$ is said to be a level-k transition, and we can partition the set T accordingly: $T = \{T_0, T_1, \ldots, T_K\}$, where $T_k = \{t \mid \alpha(t) = k\}$. We denote by $\bullet t$ ($t\bullet$ and $\circ t$) the set of input (output and

inhibitor) places of transition t. Let $I(t)(p)$ denote the number of occurrences of p in $I(t)$, and analogously for $O(t)(p)$ and $H(t)(p)$. For a marking $M : P \to \mathbb{N}_0$ we also use the vector notation ($M$ as a vector in $\mathbb{N}_0^{|P|}$). A transition t has concession in marking M iff $\forall p \in \bullet t : M(p) \geq I(t)(p)$ and $\forall p \in \circ t : M(p) < H(t)(p)$. A transition t is enabled in marking M (denoted by $M[t\rangle$) if it has concession and there is no $t'$ that has concession in M with $\alpha(t') > \alpha(t)$. The firing of t in M produces the marking $M'(p) = M(p) - I(t)(p) + O(t)(p)$ for all $p \in P$, and is denoted $M[t\rangle M'$. As a consequence of the distinction between concession and enabling, all transitions enabled in a given marking belong to the same priority level, and therefore the set RS of reachable markings can be partitioned into $K+1$ subsets $RS_k = \{M \in RS \mid \exists t \in T_k : M[t\rangle\}$. $RS_0$ is usually named the tangible reachability set, since states which enable only timed transitions are termed tangible, while all other states are vanishing. For GSPNs, well-known techniques apply to derive a transition rate matrix R from the tangible reachability graph, such that the underlying CTMC has generator matrix $Q = R - D$ with diagonal matrix D, where $D[i,j] = \sum_k R[i,k]$ if $i = j$ and 0 otherwise ($D = \mathrm{rowsum}(R)$).

Superposed GSPNs (SGSPNs) [17,20] are GSPNs where, additionally, a partition of the set of places is defined, such that an SGSPN can be seen as a set of GSPNs which are synchronized by certain transitions. The following definition differs from the classical one in [17,20] since it allows synchronization on transitions of any priority level.

Definition 2. An SGSPN is a 10-tuple $(P, T, \alpha, I, O, H, W, M_0, \Pi, TS)$ where $(P, T, \alpha, I, O, H, W, M_0)$ is a GSPN and $\Pi = \{P^1, \ldots, P^N\}$ is a partition of P with index set $IS = \{1, \ldots, N\}$.
An SGSPN contains N components $(P^i, T^i, \alpha^i, I^i, O^i, H^i, W^i, M_0^i)$ for $i \in IS$, with $T^i = \bullet(P^i) \cup (P^i)\bullet \cup \circ(P^i)$, and $\alpha^i, I^i, O^i, H^i, W^i, M_0^i$ are the functions $\alpha, I, O, H, W, M_0$ restricted to $P^i$, resp. $T^i$. $IC(t) = \{i \in IS \mid t \in T^i\}$ is the set of involved components for $t \in T$. $TS = \{t \in T : |IC(t)| > 1\}$ is the set of synchronized transitions.

Note that $\Pi$ induces a partition of transitions on $T \setminus TS$, since for $t \in T \setminus TS$ there exists a unique $i \in IS$ with $\bullet t \cup t\bullet \cup \circ t \subseteq P^i$. Consequently, transitions in $T \setminus TS$ are called local transitions. The case of SGSPNs where synchronization transitions are timed has been treated at length in the literature: the partition into components is used to represent the infinitesimal generator matrix Q by a sum of Kronecker products defined on matrices which result from the isolated components. The definitions of the Kronecker product and sum can be found, for example, in [15]; let us just recall here that there are two equivalent ways to refer to rows (or columns) of a Kronecker product/sum of N matrices of dimension $n_j \times n_j$: as an N-tuple $(s_1, \ldots, s_N)$, with $0 \leq s_i < n_i$, or as an integer s corresponding to $(s_1, \ldots, s_N)$ in the mixed-base representation $(n_1, \ldots, n_N)$.

Any component i of an SGSPN is a GSPN. It can be analyzed in isolation yielding $RS^i$, assuming it is finite. Finiteness is an important point to be taken into account for a partition into components, since it may be the case that an SGSPN with a finite state space has components that, taken in isolation, have infinite state spaces. Bounds on the marking of places, like those computable from P-semi-flows, can be used to tackle the problem [20], and from now on we shall assume that all $RS^i$ are finite. The product state space PS (also known as the potential state space) is defined as $PS = \times_{i=1}^N RS^i$, resp. $PS_0 = \times_{i=1}^N RS_0^i$ if only tangible states are considered. Synchronized transitions may cause $RS_0 \subsetneq PS_0$, as observed in [17,23], an undesired effect which is possible for PS as well.
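The mixed-base correspondence between state tuples and integer indices can be sketched as follows (a generic illustration, not code from the paper; `dims` holds the component state-space sizes $n_1, \ldots, n_N$):

```python
def tuple_to_index(state, dims):
    # Horner-style evaluation: s = (...(s_1 * n_2 + s_2) * n_3 + s_3 ...) * n_N + s_N
    s = 0
    for s_i, n_i in zip(state, dims):
        s = s * n_i + s_i
    return s

def index_to_tuple(s, dims):
    # Invert by repeated division, least significant component first
    state = []
    for n_i in reversed(dims):
        state.append(s % n_i)
        s //= n_i
    return tuple(reversed(state))

dims = (3, 4, 2)                        # three components with 3, 4, 2 local states
print(tuple_to_index((2, 1, 0), dims))  # → 18
```

Both directions are needed later, e.g., to translate between row/column indices of the Kronecker factors and positions in an iteration vector.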
Each component in isolation has a generator matrix $Q^i = R^i - D^i$. Matrix $R^i$ does not contain vanishing markings any more, i.e., the elimination of vanishing markings is applied separately in each component, since the condition that synchronized transitions have

to be timed makes the behavior of immediate transitions local to their own component, so that a global normalization is not necessary [2]. Matrix $R^i$ can be seen as a sum of matrices $R^i = \sum_{t \in T^i} W(t) R_t^i$, such that nonzero entries are separated according to the timed transition t which contributes to that entry. Matrices of unsynchronized (local) timed transitions are summed up in $R_L^i = \sum_{t \in T^i \setminus TS} W(t) R_t^i$. Subscript L will be used to denote local matrices summarizing local transitions, and subscript t to denote a specific synchronized transition.

Theorem 1 (Donatelli [17], Kemper [20]). Let G be a matrix of dimension $|PS_0| \times |PS_0|$ with $PS_0 = \times_{i=1}^N RS_0^i$, defined as follows:
$$G = \bigoplus_{i=1}^N R_L^i + \sum_{t \in TS} W(t) \bigotimes_{i=1}^N R_t^i,$$
where $R_t^i$ is the identity matrix if $t \notin T^i$. The rate matrix is the projection of G over $RS_0$, and it is impossible to move from a reachable to an unreachable state: $(G)_{RS_0, RS_0} = R$ and $(G)_{RS_0, PS_0 \setminus RS_0} = 0$.

The corresponding diagonal matrix D has a structured representation as well, but we do not show it here, since in our implementation the diagonal is precomputed and kept in memory. Despite the fact that $RS_0 \subseteq PS_0$, the advantageous aspect of the Kronecker-based approach is that the Kronecker expression can be directly used by appropriate solution algorithms [6,23], without the need to store the infinitesimal generator, and data structures for a Kronecker matrix-vector multiplication can be of size $|RS_0|$ instead of $|PS_0|$, according to [12,20]. Finally, let us recall that performance analysis of GSPNs usually takes place by considering the associated semi-Markov process, which is transformed either: (a) to an embedded discrete time Markov chain (DTMC) with a $(|RS| \times |RS|)$ matrix of transition probabilities P, or (b) to a reduced continuous time Markov chain (CTMC), by an elimination of vanishing states, with a $(|RS_0| \times |RS_0|)$ generator matrix Q.
A solution of the DTMC requires the computation of $\pi P = \pi$, while the solution of the CTMC requires computing $\pi$ with $\pi Q = 0$ and $\sum_i \pi_i = 1$. The latter is recommended due to its smaller matrix dimension, but, as we shall see in the next section, to deal with composition over immediate activities we shall be forced to work with a DTMC defined over the RS set, including both tangible and vanishing states. The computation of the embedded DTMC of the semi-Markov process defined by a GSPN is a well established procedure, and we recall it here since it will be used in the next section:
$$P = D^{-1} R, \quad \text{where } R[i,j] = \sum_{t : i[t\rangle j} W(t). \tag{1}$$
For $D^{-1}$ to be well defined, all rows of D should have a nonnull contribution, which is always the case if the system does not have any deadlocks. Moreover, it is important to observe that there is no need to distinguish vanishing from tangible states in the definition of the DTMC, since the same normalization takes place in the two cases.

3. Kronecker expression with prioritized synchronization

An extension towards synchronization over immediate transitions faces two obstacles: (a) it is necessary to keep (at least certain) vanishing states, with the disadvantages explained above, so that the solution of the GSPN amounts to the solution of the embedded DTMC, and (b) another, more serious, difficulty arises from the fact that priorities in a GSPN are global, and this may have unexpected consequences.
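Eq. (1) is simply a row normalization; a small NumPy sketch (with an invented 3-state weight matrix R) makes the construction of the embedded DTMC explicit:

```python
import numpy as np

# Hypothetical weighted firing matrix R over RS:
# R[i, j] = total weight of transitions leading from state i to state j
R = np.array([[0.0, 2.0, 1.0],
              [4.0, 0.0, 0.0],
              [0.5, 0.5, 0.0]])

d = R.sum(axis=1)        # rowsum(R); all entries nonzero iff the net is deadlock-free
P = R / d[:, None]       # P = D^{-1} R, the embedded DTMC

print(P.sum(axis=1))     # every row of P sums to 1
```

Note that the same normalization is applied to tangible and vanishing rows alike, matching the remark above.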

Fig. 1. Synchronization over immediate actions.

Observe for example the SGSPN of Fig. 1, where two components A and B are present, identified by place names starting with a and b, respectively. Transition tb1 is a synchronized immediate transition of priority $\alpha_1$, while ta, timed, is local to A, and immediate transition tb2, local to B, has priority $\alpha_2$. When places a1, b1 and b2 are all marked, all transitions have concession, but only one can fire (either tb2 or tb1, depending on the values of $\alpha_1$ and $\alpha_2$). If instead a1 is marked and b1 and b2 are not, then the timed transition ta is enabled. Clearly it is not possible to decide locally whether ta is enabled or not; indeed, if b1 is not marked and b2 is, then tb2 disables ta. We can say that concession is compositional, but enabling is global.

Let us consider an SGSPN S consisting of N components with indices $1, 2, \ldots, N$, and $K+1$ priority levels $\{0, 1, \ldots, K\}$. For each component we can build its reachability set $RS^i$, assuming it is finite, and a modified reachability set $\overline{RS}^i$, obtained using a firing rule that uses concession instead of enabling, that is to say, a transition can fire if it has concession, despite the fact that in the same marking a higher priority transition may have concession as well. As explained in [1, Chapter 2], using concession (which is equivalent to ignoring priorities) the state space generated is a superset of the original one. Observe that, even if the original system is finite, by using a firing rule based only on concession it is possible to generate an infinite state space (of course this may happen only in nets that are not covered by P-invariants). We can therefore state that $RS^i \subseteq \overline{RS}^i$; moreover, on the $\overline{RS}^i$ we can build $\overline{PS} = \times_{i=1}^N \overline{RS}^i$, which is a superset of the potential state space $PS = \times_{i=1}^N RS^i$: $PS \subseteq \overline{PS}$.
Using only concession it is also possible to define, for each component and for each transition, matrices $\overline{R}_t^i$, the analogue of $R_t^i$ for the concession case: $\overline{R}_t^i[s^i, s'^i] = 1$ iff t has concession in $s^i$ and the firing of t in $s^i$ produces state $s'^i$, and 0 otherwise; moreover, we define $\overline{R}_L^i = \sum_{t \in T^i \setminus TS} W(t) \overline{R}_t^i$, and $\overline{R}_{L_k}^i = \sum_{t \in T_k^i \setminus TS} W(t) \overline{R}_t^i$.

Although priorities are not used to build the reachability sets $\overline{RS}^i$, we are going to use them to artificially slice the set of potentially reachable states $\overline{PS}$ as follows:
$$\overline{PS} = \biguplus_{k=0}^K \overline{PS}_k, \quad \text{where } a \in \overline{PS}_k \text{ iff } a \in \overline{PS} \text{ and } \exists t \in T_k \text{ enabled in } a,$$
and the symbol $\uplus$ stands for disjoint union; again, "t is enabled in a" means: t has concession in a and no $t'$ of higher priority has concession in a. Additionally, we can define $\overline{PS}_{\leq k} = \biguplus_{h=0}^k \overline{PS}_h$, the set of all states for which no transition of priority higher than k is enabled. We want to use the concession-based matrices to define a Kronecker expression for the original SGSPN S, and to this aim we assume that we are able to compute the following $|\overline{PS}| \times |\overline{PS}|$ selection matrices:
$$S_k[a,b] = \begin{cases} 1 & \text{if } a = b \text{ and } a \in \overline{PS}_{\leq k}, \\ 0 & \text{otherwise.} \end{cases} \tag{2}$$
Observe that $S_k$ is a diagonal matrix and, if A is a $|\overline{PS}| \times |\overline{PS}|$ matrix, by premultiplying it with $S_k$ we get a matrix in which all rows of A that correspond to states in $\overline{PS} \setminus \overline{PS}_{\leq k}$ are set to zero. Of course $S_K$ is the identity matrix. We can then state the following theorem.

Theorem 2. Given an SGSPN S, the matrices $\overline{R}_t^i$ and $\overline{R}_{L_k}^i$ built on concession as explained above, and the selection matrices $S_k$ defined in Eq. (2), the transition probability matrix P of system S is expressed by
$$P = D^{-1} \left( \sum_{k=0}^K X_k \right)_{RS,RS},$$
where D is a normalizing diagonal matrix ensuring that all rows sum up to 1 (D is of size $|RS| \times |RS|$, and $D[i,i] = \mathrm{rowsum}(\sum_{k=0}^K X_k)[i]$), while the matrices $X_k$, for $k \in \{0, 1, \ldots, K\}$, are defined as follows:
$$X_k = S_k \left( \bigoplus_{i=1}^N \overline{R}_{L_k}^i + \sum_{t \in T_k \cap TS} W(t) \bigotimes_{i=1}^N \overline{R}_t^i \right). \tag{3}$$

Proof.
A central observation is that, if we impose all matrices $S_k$ to be identities, then, according to Theorem 1, the expression for P correctly computes the transition probability matrix of the embedded DTMC for the SGSPN obtained from S by considering a firing rule based only on concession, so that the proof amounts only to showing that, by using the selection matrices $S_k$, only the contribution of enabled transitions is taken into account.

Let us first consider the case of a reachable state a that gives concession to a transition $t \in T_K$ of highest priority (that is to say, $a \in \overline{PS}_K \cap RS$), yielding a state $a'$. The contribution due to t is included in P thanks to $X_K$ since, indeed, $S_K$ is the identity matrix, and $D^{-1}[a,a] \cdot (\bigoplus_{i=1}^N \overline{R}_{L_K}^i + \sum_t W(t) \bigotimes_{i=1}^N \overline{R}_t^i)[a,a']$ is the correct transition probability, since concession and enabling coincide in this case. Note that t is either

synchronized, and $D^{-1}[a,a] \cdot (W(t) \bigotimes_{i=1}^N \overline{R}_t^i)[a,a']$ provides the corresponding value (the $\overline{R}_{L_K}^i$ term contributes 0), or t is local, and its contribution is within $D^{-1}[a,a] \cdot (\bigoplus_{i=1}^N \overline{R}_{L_K}^i)[a,a']$. In summary, the expression is exactly that obtained according to the definition of the rate matrix given in Theorem 1, and to the formula for passing from the rate matrix to the transition probability matrix, Eq. (1).

Let us now consider the case of a reachable state a that gives concession to a transition $t \in T_k$, with $k < K$. Then the row corresponding to state a in $\bigoplus_{i=1}^N \overline{R}_{L_k}^i + \sum_t W(t) \bigotimes_{i=1}^N \overline{R}_t^i$ is nonnull but, due to the multiplication by $S_k$, there is a corresponding nonnull entry in $X_k$ only if $S_k[a,a] = 1$; this means that $a \in \overline{PS}_{\leq k}$, and therefore, since t has concession and no transition $t'$ of priority greater than k is enabled, t is actually enabled in a.

The idea of using a matrix to mask the effect of transitions that have concession but are not enabled is taken from [14]: we have here extended it to the case of multiple priority levels, and we have tried to state the theorem and the proof in a way that we believe to be more intuitive. Moreover, we shall see in Section 5 that there is actually no need for a multiplication with the selection matrices $S_k$ in the actual implementation, thanks to an appropriate choice of the data structures. Theorem 2 is not very usable in its present form, since it assumes the presence of an oracle that is able to compute the $S_k$ matrices. We now show how the $S_k$ matrices can be computed using the $X_j$ matrices, with $j > k$.

Proposition 1. The following expression correctly computes the $S_k$ matrices defined by Eq. (2):
$$S_k = I - \sum_{j=k+1}^K \delta(X_j),$$
where $\delta(A)$ is the following diagonal matrix:
$$\delta(A)[a,b] = \begin{cases} 1 & \text{if } a = b \text{ and } \exists c : A[a,c] \neq 0, \\ 0 & \text{otherwise.} \end{cases} \tag{4}$$

Proof.
Since $S_k$ is expressed as the difference of an identity matrix and a diagonal matrix whose nonnull elements are equal to 1, $S_k$ is itself a diagonal matrix whose nonnull elements are equal to 1, and we need to prove that $S_k[a,a] = 1 \Leftrightarrow a \in \overline{PS}_{\leq k}$.

($\Rightarrow$) $S_k[a,a] = 1$ implies that $\sum_{h=k+1}^K \delta(X_h)[a,a] = 0$, so there is no $h \in \{k+1, \ldots, K\}$ with $\delta(X_h)[a,a] = 1$, and therefore there does not exist a transition t of priority greater than k enabled in a, which implies that $a \in \overline{PS}_{\leq k}$.

($\Leftarrow$) $a \in \overline{PS}_{\leq k}$ implies that no $t \in T_h$, with $h \in \{k+1, \ldots, K\}$, is enabled in a, and therefore $\delta(X_h)[a,a] = 0$ for all such h, which implies that $S_k[a,a] = 1$.
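Proposition 1 suggests a simple top-down computation: start from $S_K = I$, form $X_K$, and work downwards level by level. A small NumPy sketch follows; the three-state matrices `Y[k]`, standing in for the unmasked level-k contributions (the parenthesized term of Eq. (3)), are invented for illustration:

```python
import numpy as np

def delta(A):
    """Diagonal indicator of nonzero rows: delta(A)[a,a] = 1 iff row a of A is nonnull."""
    return np.diag((np.abs(A).sum(axis=1) > 0).astype(float))

# Hypothetical unmasked per-level matrices over PS-bar (3 states, K = 2):
# Y[k] collects concession-based contributions of level-k transitions.
Y = {
    2: np.array([[0, 1, 0], [0, 0, 0], [0, 0, 0]], dtype=float),
    1: np.array([[0, 0, 1], [0, 0, 1], [0, 0, 0]], dtype=float),
    0: np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=float),
}

K, n = 2, 3
S = {K: np.eye(n)}          # S_K is the identity
X = {K: S[K] @ Y[K]}
for k in range(K - 1, -1, -1):
    # Proposition 1: S_k = I - sum_{j > k} delta(X_j)
    S[k] = np.eye(n) - sum(delta(X[j]) for j in range(k + 1, K + 1))
    X[k] = S[k] @ Y[k]      # mask out rows where a higher-priority transition is enabled
```

In this toy run, state 0 is claimed by level 2, state 1 by level 1, so only the row of state 2 survives in `X[0]` — exactly the priority-slicing the selection matrices express.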

4. Implementation: generation of the Kronecker representation

The application of Theorem 2 for analysis requires computing $\overline{R}_t^i$ and $\overline{R}_{L_k}^i$, $S_k$, and $D^{-1}$ for all transitions, priorities and components; to this aim we basically follow the approach proposed in, e.g., [10,13,21] for classical SGSPNs, with certain adjustments, so that the solution procedure includes three steps: (a) generation of matrices $\overline{R}_t^i$ for each component i in isolation, (b) a state space exploration to obtain the set of reachable states as $RS = \biguplus_{k=0}^K RS_k$, and (c) an iterative numerical method, either for steady state using, e.g., Jacobi or Gauss-Seidel type iterations, or for transient analysis using randomization. Step (b) is optional in the sense that one may use $\overline{PS} = \biguplus_{k=0}^K \overline{PS}_k$ analogously, but we recommend our approach to allow solution algorithms to work with data structures of size $|RS|$ instead of $|\overline{PS}|$.

The three steps of the solution process will be illustrated by the SGSPN of Fig. 2. This running example is inspired by models of software with hardware resource acquisition, an application domain with an intensive use of synchronization over immediate transitions [5]. The software consists of two types of processes A and B which are considered in increasing numbers of incarnations NA, NB to create increasingly large state spaces. Processes of type A and B follow the same pattern, as illustrated by the net of Fig. 2, where place names starting with A (resp. B) refer to tasks of type A (resp. B): after a fork operation into two subprocesses, one subprocess acquires a shared resource R while the other subprocess can be performed independently without resources, until a join operation is executed and a cyclic restart takes

Fig. 2. The example model.

place. Resource R is shared among processes of type A and B, but with different priority: in case of conflict, processes of type A have a higher priority than processes of type B. Resource allocation and release is performed without any delay, i.e., it is modeled by immediate transitions. After using the resource, it takes the resource a certain time, modeled by a timed transition, to recover and become available again.

4.1. Generation of matrices $\overline{R}_t^i$, $\overline{R}_{L_k}^i$

The generation of the $\overline{R}_t^i$ and $\overline{R}_{L_k}^i$ according to Theorem 2 requires performing a state space exploration for each component i in isolation, with firing based on concession instead of enabling. To gain efficiency we have pushed this idea a little further, by implementing yet another firing rule that exploits the fact that a synchronized transition t will never fire in a marking M if there is a local transition of higher priority that has concession in M, as stated by the following proposition.

Proposition 2. Let $(P^i, T^i, \alpha^i, I^i, O^i, H^i, W^i, M_0^i)$ be the ith component of an SGSPN. Let $t \in T^i$ and $t' \in T^i \setminus TS$ with $\alpha(t) < \alpha(t')$. For an arbitrary marking $M^i$ the following holds: if both t and $t'$ have concession at $M^i$, then for any markings $M^j$ of the other components $j \neq i$, t cannot fire at $M = (M^1, \ldots, M^{i-1}, M^i, M^{i+1}, \ldots, M^N)$.

Proof. By contradiction. Assume $M = (M^1, \ldots, M^{i-1}, M^i, M^{i+1}, \ldots, M^N)$ where t can fire at M and additionally $t'$ has concession at $M^i$. Since $t' \in T^i \setminus TS$, i.e., $t'$ is local, $t'$ also has concession at M. Since $\alpha(t) < \alpha(t')$ and $t'$ has concession at M, transition t cannot be enabled and cannot fire at M.

We use Proposition 2 to define the following firing rule for the local exploration of $\overline{RS}^i$:

Definition 3. A transition $t \in T^i$ can fire in a marking $M^i$ iff the conjunction of the following conditions is fulfilled:
1. $M^i(p) \geq I(t)(p)$ for all $p \in \bullet t \cap P^i$.
2. $M^i(p) < H(t)(p)$ for all $p \in \circ t \cap P^i$.
3. No $t' \in T^i \setminus TS$ with $\alpha(t') > \alpha(t)$ has concession.
Conditions (1) and (2) state that t must have concession at $M^i$ with respect to component i, which directly corresponds to the argumentation of Section 3. Condition (3) is a necessary condition for firing transition t, according to Proposition 2. The added condition improves the construction of Section 3, where the focus was on clarity, to prevent $\overline{RS}^i$ from being unnecessarily large. The firing rule above can become even stricter if one observes upper limits provided by P-invariants [20], since the computation and validity of P-invariants does not change in the presence of priorities. From the reachability graph of each component, built according to Definition 3, we obtain matrices $\overline{R}_t^i[M^i, M'^i] = 1$ if $M^i[t\rangle M'^i$ and 0 otherwise. However, the resulting $\overline{RS}^i$ is a superset of the projection of RS on $P^i$, since the firing rule for the local state space exploration has less strict conditions than the original firing rule for the complete SGSPN.

To give a feeling of the size of memory required for the Kronecker expression, let us come back to the running example. For all model configurations we consider the single resource case, and we increase the number of processes of each type A and B in steps of 1.
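The firing rule of Definition 3 can be sketched as follows (hypothetical transition records with input/inhibitor multiplicities; the names `t_sync` and `t_loc` echo the tb1/tb2 situation of Fig. 1 but are illustrative only):

```python
def has_concession(M, t):
    """Conditions (1)+(2): enough input tokens and inhibitor bounds respected."""
    return (all(M.get(p, 0) >= n for p, n in t["inputs"].items())
            and all(M.get(p, 0) < n for p, n in t["inhibitors"].items()))

def can_fire_locally(M, t, transitions):
    """Condition (3): no higher-priority *local* transition has concession."""
    if not has_concession(M, t):
        return False
    return not any(u["local"] and u["priority"] > t["priority"] and has_concession(M, u)
                   for u in transitions)

# Synchronized immediate transition of priority 1 vs. local immediate of priority 2
t_sync = {"inputs": {"b1": 1}, "inhibitors": {}, "priority": 1, "local": False}
t_loc  = {"inputs": {"b2": 1}, "inhibitors": {}, "priority": 2, "local": True}
ts = [t_sync, t_loc]

M = {"b1": 1, "b2": 1}
print(can_fire_locally(M, t_sync, ts))  # False: the local t_loc has higher priority and concession
```

Note that only *local* higher-priority transitions suppress firing here; a higher-priority synchronized transition may still be blocked by another component, which is exactly why Proposition 2 restricts condition (3) to $T^i \setminus TS$.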

Table 1. Results of example analysis: dimensions of state spaces $RS^i$ and Kronecker matrices. Columns: N; $|RS^A|$; $|RS^B|$; $|RS^R|$; $|PS|$; $|RS|$; memory for one iteration vector; memory for all Kronecker matrices. [Numeric entries missing in this transcription.]

Table 1 illustrates the dimensions of the DTMC and its Kronecker representation. The first column gives the model configuration, columns 2-5 of Table 1 provide the cardinalities of the state spaces of components A, B, and R, and of the set of potential states PS. Note that PS is significantly larger than RS in column 6. Memory requirements are given in bytes to represent a single iteration vector (double precision, 8 bytes per entry) in column 7, and to represent all Kronecker matrices in total in column 8. The memory requirements for iteration vectors are clearly dominating; they are the bottleneck which limits the numerical analysis of large DTMCs. For this example, all Kronecker representations consist of one Kronecker sum, which collects local transitions of priority 0, and four Kronecker products, one per immediate synchronized transition. Since we have synchronized transitions of two different priorities, the Kronecker products belong to different levels of priority accordingly. Note that a model with immediate local transitions would result in multiple Kronecker sums, which can be treated as well.

4.2. Exploring RS and generating $RS_0, \ldots, RS_K$

Since $\overline{PS}$ is usually a large superset of RS, it is often useful to restrict analysis algorithms to RS as, e.g., in [20]. In order to identify RS we perform a state space exploration using the Kronecker representation as in [21], to get a compact representation of RS, using, of course, the firing rule which considers priorities.

Definition 4. Firing rule for the Kronecker representation: a transition $t \in T$ can fire at marking $M = (M^1, \ldots, M^N)$ iff the conjunction of the following conditions holds:
1. t has concession in all components $i \in IC(t)$ at $M^i$, i.e., an entry $\overline{R}_t^i[M^i, \cdot] \neq 0$ exists, and
2. no $t' \in T$ with $\alpha(t') > \alpha(t)$ has concession.
Firing of a transition t at $(M^1, \ldots, M^N)$ gives $(M'^1, \ldots, M'^N)$, using the column indices of $\overline{R}_t^i[M^i, M'^i]$. The resulting set RS is represented by a decision diagram as in [12, Section 3.1], resp. a directed acyclic graph as in [7], enhanced with full vectors per node. This structure encodes a reachable state by a path of length $N-1$ visiting nodes $(M^1, \ldots, M^N)$, and its position in an index set $\{0, 1, \ldots, |RS| - 1\}$ can be computed by summation of offsets along this path. The advantages of using this data structure will be discussed together with the numerical solution algorithms, where DDs are used for two purposes: (a) to distinguish reachable states from unreachable ones, and (b) to translate row/column indices of the Kronecker representation with range $\overline{RS}^i$ uniquely to an index set $\{0, 1, \ldots, |RS| - 1\}$ and back. The latter is used, e.g., in numerical analysis to address the position in an iteration vector $\pi$ of length $|RS|$.

Fig. 3. Representation of $RS_0$, $RS_1$ by a DD.

The set RS is additionally partitioned into the sets $RS_0, \ldots, RS_K$ using a decision diagram (DD) again, which contains $K+1$ root nodes to access each set. The usefulness of this data structure will be shown in the subsequent analysis: instead of masking matrix entries using $S_k$, one can recognize any state that belongs to slice k by its membership in $RS_k$, thus avoiding the need to build the $S_k$ matrices. Overall, the implementation uses $K+2$ DDs, one for RS, denoted by DD, and one for each of $RS_0, \ldots, RS_K$, denoted by $DD_0, \ldots, DD_K$. Clearly, an implementation keeps all these DDs in a single DD with $K+2$ root nodes, in order to maximize sharing of substructures. The notation $DD_k(s_1, \ldots, s_i)$ identifies a DD node for component $i+1$ on a path along states $s_1, \ldots, s_i$ starting at the root node for $RS_k$, while the function $\mathrm{off}(node, state)$ gives the offset for a state by an O(1) index determination from the node's full vector. Fig. 3 shows an example DD with $RS_0$ and $RS_1$ for a model with three components. The path along the dotted line encodes three states due to three states in the terminal node; state $(1,2,3) \in RS_1$ on this path is shaded, and its unique position in $\{0, 1, \ldots, |RS| - 1\}$ is 6, obtained by summing the offsets along the path. The full vectors are used to determine in O(1) whether a state exists in a node and its position in the state vector of the node; see [12] for details. This data structure is related to binary and especially multi-valued decision diagrams (MDDs) [19]; however, it differs from an MDD, which contains less information per node and has an additional level for terminal nodes. MDDs and DDs share the key idea of all decision diagrams: isomorphic substructures are represented only once. Note that the sets $RS_0, \ldots, RS_K$ and RS must use the same mapping from the $\overline{RS}^i$ to $\{0, 1, \ldots, |RS| - 1\}$.
This property is not only required for the correctness of the subsequent analysis algorithms, but it is also beneficial for the sharing of isomorphic substructures. The mapping itself is arbitrary but unique with respect to $RS \subseteq \times_{i=1}^N \overline{RS}^i$; we use the one that matches the lexicographical order on the $\overline{RS}^i$ (which is also implied by the Kronecker operators). Note that knowledge of RS and its projections on the $\overline{RS}^i$ allows us to remove unused rows and columns of the matrices $\overline{R}_t^i$, $\overline{R}_L^i$.

To provide an indication of the amount of memory required by the above data structures, we come back to our running example, where we consider only a single resource to be available while increasing the number of processes of each type A and B in steps of 1. Table 2 describes the dimensions of the state spaces and the amount of memory used by DDs to represent them. The first column lists the number of processes of each type A and B, where NA = NB = N; columns 2-5 list the cardinalities of the state spaces, with $RS = \biguplus_{k=0}^K RS_k$ for K = 2; e.g., for the largest model configuration with six processes of each type A and B and one resource we obtain about 30 million tangible states in an overall state space of about 50 million states. The total number of nodes to represent $RS_0$, $RS_1$, $RS_2$ by a DD is constantly 14 over all configurations, but it takes an increasing amount of memory, given in bytes in column 6. Column 7 gives the number of bytes to represent RS by a DD, which has constantly five nodes over all configurations.

Table 2. Results of example analysis: dimensions of RS_k and DDs (columns: N; state space sizes |RS_0|, |RS_1|, |RS_2|, |RS|; memory in bytes for the DD of RS_0, RS_1, RS_2 and for the DD of RS).

Note that the space used for DDs is negligible with respect to the cardinalities of the represented sets, and a partition of RS into subsets RS_0, RS_1, RS_2 nicely retains the space-efficient structure.

5. Implementation: numerical solution

We consider iterative methods of Jacobi and Gauss–Seidel type to solve πP = π. An iteration scheme according to the method of Jacobi for a stochastic matrix P turns out to be equivalent to the power method, which gives π′ = πP. A Gauss–Seidel iteration considers a matrix splitting P = B + U + L into a lower (upper) triangular matrix L (U) and a diagonal matrix B and performs π′ = π′LB̃ + πUB̃ with B̃ = (I − B)^{−1}, where B̃ = I = B̃^{−1} for a stochastic matrix if we require that any transition fulfills O(t) − I(t) ≠ 0, such that Gauss–Seidel simplifies to π′ = π′L + πU. Applied to the Kronecker expression of Theorem 2, we get for Jacobi

π′ = πD^{−1} Σ_{k=0}^{K} S_k X_k = πD^{−1} Σ_{k=0}^{K} S_k ( ⊕_{i=1}^{N} R_i^{L_k} + Σ_{t∈TS_k} W(t) ⊗_{i=1}^{N} R_i^t ).   (5)

For Gauss–Seidel we need to distinguish between entries of π′ that are already computed and available, which can be used if the values of π′ are computed in increasing order, and the remaining entries of π. A formulation by matrices requires us to distinguish L and U matrices for the Kronecker representation as well. We assume that all matrices R_i^t ≠ 0, because otherwise the corresponding transition would be dead and the Kronecker term could safely be omitted. Furthermore, we assume that for any transition O(t) − I(t) ≠ 0. Hence, for any matrix R_i^t = L_i + U_i + B_i either B_i = 0 or L_i + U_i = 0, and there exists at least one component i for any t where L_i + U_i ≠ 0. We define L̂_i^t = L_i if i is the smallest component index for which L_i + U_i ≠ 0, and L̂_i^t = R_i^t otherwise. The matrices L̂_i^t are used below to achieve a Kronecker representation of a lower triangular matrix.
We analogously define Û_i^t, and finally, for the matrices R_i^{L_k} = L_i + U_i + B_i covering local transitions, let Û_i^{L_k} = U_i and L̂_i^{L_k} = B_i + L_i. With this notation we can formulate Gauss–Seidel as

π′ = π′D^{−1} Σ_{k=0}^{K} S_k ( ⊕_{i=1}^{N} L̂_i^{L_k} + Σ_{t∈TS_k} W(t) ⊗_{i=1}^{N} L̂_i^t )
   + πD^{−1} Σ_{k=0}^{K} S_k ( ⊕_{i=1}^{N} Û_i^{L_k} + Σ_{t∈TS_k} W(t) ⊗_{i=1}^{N} Û_i^t ).   (6)
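On a plain (non-Kronecker) stochastic matrix, the two iteration schemes can be sketched as follows; a minimal Python illustration of π′ = πP versus the single-vector Gauss–Seidel update, assuming a zero-diagonal P so that B̃ = I. This is a toy example, not the tool's data structures.

```python
import numpy as np

def jacobi_step(pi, P):
    """Jacobi/power step: pi' = pi P, all new values from the old vector."""
    return pi @ P

def gauss_seidel_step(pi, P):
    """Single-vector Gauss-Seidel: pi'[j] uses already updated entries i < j
    (the pi' L part) and old entries i > j (the pi U part); zero diagonal."""
    pi = pi.copy()
    for j in range(len(pi)):
        pi[j] = sum(pi[i] * P[i, j] for i in range(len(pi)) if i != j)
    return pi / pi.sum()                    # renormalize the iterate

# toy stochastic matrix with zero diagonal (every transition changes state)
P = np.array([[0.0, 0.5, 0.5],
              [1.0, 0.0, 0.0],
              [0.5, 0.5, 0.0]])
pi = np.full(3, 1.0 / 3.0)
for _ in range(200):
    pi = gauss_seidel_step(pi, P)
# pi now approximates the stationary vector, pi = pi P
```

The overwriting of pi[j] inside the loop is exactly the single-iteration-vector trick discussed below for the Kronecker setting.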

The separation into two terms allows us to use different algorithms for the first and the second term, which may yield an iteration scheme as, e.g., in [16]. However, an implementation of Gauss–Seidel type need not explicitly partition Kronecker matrices into L̂_i^t and Û_i^t. It is common practice to use a single iteration vector, where computed values of π′ overwrite values of π, such that an algorithm automatically selects values of π′ instead of π if available.

5.1. Algorithm for a Jacobi iteration

Fig. 4 describes a single iteration step of Jacobi type in pseudo code. The diagonal matrix D^{−1} is represented by a single vector B of length |RS|, since unreachable states need no normalization. The iteration vectors π, π′ have length |RS| as well. DDs are used to translate vector positions in π, π′, and B into the row/column indices of the Kronecker matrices R_i^t, R_i^L. The basic ideas of the algorithm are: (a) to premultiply π with D^{−1} in line 1; (b) to partition states according to priorities into DDs and to consider sets of states in an order according to priorities, such that for each state in such a set (represented by DD_k) only the Kronecker representation for transitions of the appropriate priority is selected, in lines 2 and 3; this is an algorithmic reflection of the matrix formalization S_k. Furthermore, (c) the index j of the column state s′ is obtained from the DD of RS, since s′ can belong to any of DD_0, ..., DD_K, in lines 5, 9 and 16. (d) The matrix–vector multiplication follows the scheme recognized as most efficient for Jacobi type iterations in [6]. The condition "for each R_i^t[s_i, s_i′] ≠ 0 do" in lines 4, 8 and 14 identifies more than one matrix entry only if a transition firing can result in several outputs for a component; this can be the case if elimination of vanishing states takes place (due to some optimization) or if probabilistic outputs are allowed (which is not the case here). Fig. 5 illustrates lines 3–17 of the Jacobi algorithm shown in Fig. 4 for N = 3 and a single t ∈ TS_k. Shaded parts of a DD and of the vectors π, π′ are those possibly considered for π′ := π′ + π (W(t) ⊗_{i=1}^{N} R_i^t)_{RS,RS}. Shaded parts in the Kronecker matrices R_i^t denote the currently considered rows s_i, columns s_i′ and entries R_i^t[s_i, s_i′]. Informally, for Jacobi we push probability values from left to right while we enumerate elements of DD_k in combination with entries in R_i^t for a t ∈ T_k. The matrices R_i^{L_k} of a Kronecker sum summarize the firings of possibly many local transitions, such that several entries per row can appear frequently in line 24 of Fig. 4. Note that one has to follow each level, even in the case of identity matrices, to obtain all indices i and j and to consider all relevant matrix entries; the treatment of a Kronecker sum as a sum of Kronecker products clearly illustrates this point. Reachability of states and enabling of t are checked together at each level, as in the pairs of lines 3/4, 7/8, 13/14, and 23/24, such that the failure of one condition can be recognized quickly and precomputed parts of i_1, i_2, ..., j_1, j_2, ... and x_1, x_2, ... can be reused. Matrices of the Kronecker representation are accessed by rows. Note that i_N = Σ_{i=1}^{N} off(DD_k(s_1, s_2, ..., s_{i−1}), s_i) is accumulated along the path from the root node of DD_k to the bottom node, and that j_N is computed similarly, but for DD instead of DD_k and with the column indices s_1′, ..., s_N′ obtained from the Kronecker matrices R_i^t. A distinct feature of the Jacobi iteration above is that vector π′ accumulates values in an order which does not allow us to distinguish finished entries from incomplete ones during the computation. This makes it useless for a Gauss–Seidel type iteration, which wants to profit from newer, completely calculated values in π′ instead of using current values of π.
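The push scheme for a single Kronecker product term can be sketched as follows: enumerate nonzero entries level by level, accumulating the mixed-radix row index i, column index j, and the partial product of matrix entries. This is a hypothetical dense-matrix illustration of π′ += π (W(t) ⊗ R_i^t), not the tool's DD-guided implementation.

```python
import numpy as np

def push_kron_product(pi, mats, w):
    """Compute pi' = pi * (w * (mats[0] x mats[1] x ...)) by enumerating
    nonzero entries per level, accumulating row index i, column index j and
    the partial product x, as in the Jacobi push scheme (sketch only)."""
    N = len(mats)
    res = np.zeros(int(np.prod([m.shape[1] for m in mats])))

    def rec(level, i, j, x):
        if level == N:
            res[j] += pi[i] * w * x          # push mass from row i to column j
            return
        m = mats[level]
        for si in range(m.shape[0]):         # row index s_i at this level
            for sj in np.nonzero(m[si])[0]:  # only nonzero columns s_i'
                rec(level + 1, i * m.shape[0] + si, j * m.shape[1] + sj,
                    x * m[si, sj])

    rec(0, 0, 0, 1.0)
    return res
```

Note that even an identity factor is walked entry by entry (si == sj), mirroring the remark above that every level must be followed to obtain all indices.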

Fig. 4. Jacobi iteration with matrix–vector multiplication by rows.

5.2. Gauss–Seidel iteration

A Gauss–Seidel iteration need not explicitly split matrices as in Eq. (6) to select old values from π or new values from π′, if only a single iteration vector π and one accumulator variable new is used. The reason is that newly computed values π′(i) overwrite old values π(i), such that a multiplication algorithm automatically considers π′(i) if available. This procedure is common practice for implementing Gauss–Seidel, since using a single iteration vector is a significant saving in terms of space compared to Jacobi iterations.

Fig. 5. Jacobi type multiplication with Kronecker product.

Fig. 6 describes this approach. It computes the values of π′ one after the other, in increasing lexicographical order, in line 1. Column indices are indicated by s′ = (s_1′, s_2′, ..., s_N′) and the index variable j. new is an accumulator for the value of π′(j), i.e. π′(s′), in lines 2, 13, 23 and 25. Since a state s′ can be reached by firing a transition of any priority level, the algorithm needs to consider all transitions in lines 3, 4 and 14. If a transition t with priority k may cause a state transition (s, s′), it is still necessary to check that t is enabled in s, i.e. to check whether s ∈ RS_k. Hence the algorithm considers both conditions at each component (lines 5, 7 and 9) and can detect a failure before reaching component N in line 11. Gauss–Seidel makes less use of precomputed results of higher levels than Jacobi, since it keeps the s_i′ values fixed, while Jacobi considers ranges over s_i and s_i′ at each level. The matrices R_i^t, R_i^{L_k} are accessed by columns. The influence of D^{−1} is reflected in the last line. The impact of the matrices S_k is implicit in the algorithm through selecting only states s_i in DD_k. Fig. 7 illustrates Gauss–Seidel: (a) for a certain (s_1′, ..., s_N′), denoted by a shaded path in DD at the right side of the figure; (b) for a certain k ∈ {0, ..., K}; and (c) for a certain t ∈ T_k. Note that this part of the algorithm only reads values from vector π, addressed at position i_N = Σ_{i=1}^{N} off(DD_k(s_1, s_2, ..., s_{i−1}), s_i). Position i_N is obtained by following the shaded path in DD_k at the left side of the figure. The probability value accumulated in new is written to π at position j in line 25 of the algorithm. The key idea is that Gauss–Seidel pulls values from left to right by enumerating elements of DD (i.e. the set RS, at the right side); remember that Jacobi rather pushes values from the left side by enumerating elements of DD_k. In summary, a numerical analysis of Jacobi or Gauss–Seidel type can be performed by applying the algorithms of Figs. 4 and 6 successively, starting from an arbitrary initial distribution over RS, which can, e.g., be chosen as π_0[M] = 1 if M = M_0 and 0 otherwise, or as a uniform distribution over RS, resp. over the set of recurrent states. δ is used to compute max_{i∈RS} |π′[i] − π[i]| as a simple but common criterion to detect convergence when δ falls below a given ε; alternatively one might compute residuals.
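The outer loop with the max-norm stopping test can be sketched generically; a toy Python illustration in which the step function stands in for one Jacobi or Gauss–Seidel sweep (a residual-based test would instead compare πP with π).

```python
import numpy as np

def iterate_to_convergence(step, pi0, eps=1e-8, max_iter=100000):
    """Apply step() until delta = max_i |pi'[i] - pi[i]| falls below eps."""
    pi = pi0
    for k in range(max_iter):
        new = step(pi)
        delta = np.max(np.abs(new - pi))
        pi = new
        if delta < eps:
            return pi, k + 1               # converged vector, steps used
    return pi, max_iter

# power-method (Jacobi-type) example on a small stochastic matrix
P = np.array([[0.50, 0.50],
              [0.25, 0.75]])
pi, steps = iterate_to_convergence(lambda p: p @ P, np.array([1.0, 0.0]))
# pi approximates the stationary vector of P
```

The initial vector here corresponds to π_0[M] = 1 for M = M_0; any distribution over RS would do.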

Fig. 6. Gauss–Seidel iteration with matrix–vector multiplication by rows.

5.3. Additional optimizations for Gauss–Seidel

Apart from its numerical advantages, a Gauss–Seidel multiplication as formulated above shows a significant disadvantage compared to Jacobi: for each state, all priority levels must be considered, which yields a higher number of transitions that finally turn out to be disabled due to priorities. One can improve on this situation by exploiting the fact that RS is enumerated in a fixed, typically lexicographical, order of the s′ ∈ RS; the latter is typical since it correlates with the order within a DD and a Kronecker product. If we consider s′ after s where s_1 = s_1′, ..., s_i = s_i′ up to a certain component i, 1 ≤ i ≤ N, and transition t is disabled at s for reasons located at a component j ≤ i, then clearly t is disabled at s′ for the same reason, and we can skip t ∈ TS_k until we consider a state s′′ where s_j ≠ s_j′′. In our implementation we use a table of length |TS| to memorize, at position t, the component with minimal index which disabled t. The value N + 1 is assigned to the table entry if t is enabled. If a Kronecker product results in at most one entry per column, one can also cache computed results of lines 4 to 10 in Fig. 6 in order to reuse them in the computation for a subsequent state that does not differ up to a certain component i. The precondition holds as long as we do not allow for probabilistic output bags or elimination of vanishing states, since so far a Kronecker product considers only a single transition t, and by M′ = M + C·x_t it is clear, for fixed C and M and a unit vector x_t, that M′ must be unique. Data structures based on TS and caching help to reduce the computation times reported in [18] significantly. Compared to previous work [6,13], which does not allow synchronization over immediate transitions, we use additional DDs for each priority level in order to be able to focus on states which enable transitions of a certain priority level k. The Gauss–Seidel approach can be further enhanced by matrix diagrams, as in [12], an approach which is complementary and which we do not develop further since it would change the focus of the paper.

Fig. 7. Gauss–Seidel type multiplication with Kronecker product.

6. Tools and comparisons of algorithms

The Kronecker-based solution of GSPNs with synchronizing immediate transitions of different priority levels presented in this paper has been implemented within the APNN Toolbox [3]. This toolbox joins a set of independent tools that allow the generation of a Kronecker representation, state space exploration using the approach of [7,10,21], which results in a DD of the set of reachable states, and numerical analysis of Jacobi and Gauss–Seidel type. A main motivation for this work has been to make GSPN tools like the APNN Toolbox and GreatSPN [9] more usable on real case studies. To this aim, GreatSPN has recently been extended [4] with a compositionality facility that allows the parallel composition of GSPN (and colored) models over transitions of equal label, as well as a form of buffer-like composition over labeled places. The resulting model is a single net, an SGSPN, whose components belong to different layers (a graphical facility of GreatSPN used to display only a portion of the net).
This allows automatic generation of the input for the APNN Toolbox. The current implementation performs a straightforward translation of uncolored GSPNs in GreatSPN file format to the APNN format. The additional information that makes it an SGSPN is formulated as a set of layers in GreatSPN and as a partition of places in APNN. The translating software automatically converts this additional information, such that an SGSPN specified in GreatSPN can be converted, visualized and analyzed using Kronecker methods within the APNN Toolbox.

Table 3. Results of example analysis: computation times for Jacobi and Gauss–Seidel iterations (columns: N; generation time Gen; time per iteration step (s) for J and GS; relaxation factor r and number of iterations for J and GS).

Right now the compositional facility of GreatSPN follows a straightforward syntactic rule for the assignment of priority and weight to synchronized transitions, since there is no general agreement on how these quantities should be computed from the component nets; it simply takes the priority and the weight of the first net operand. If this is not the right choice from a modeling point of view, the user has to change the priority and weight through the graphical interface of GreatSPN. Once a model is specified as a GSPN, one could perform a Markov chain analysis. However, Markov chain analysis presupposes that a model is correct from a functional point of view, i.e. that it describes what the modeler intends to describe. For complex, especially concurrent, models this need not be the case. Hence, we enhanced the available tools for functional analysis in the APNN Toolbox to consider our Kronecker representation with priorities as well. At the current stage we can check the DTMC for being recurrent and the liveness of the SGSPN.
Since model checking techniques work well with Kronecker representations [22], the Kronecker representation presented here serves this purpose as well, and the corresponding extension of the model checker for computational tree logic (CTL) in the APNN Toolbox is straightforward.

6.1. Comparisons

Using the running example we present the following comparisons, based on computations performed on a Sun Enterprise 250, 400 MHz CPU, 2 GB main memory, running SunOS 5.7:
1. Jacobi versus Gauss–Seidel implementation.
2. Proposed approach versus a straightforward sparse matrix implementation.
3. Proposed approach versus the standard Kronecker approach that does not allow synchronization over immediate transitions.

Gauss–Seidel and Jacobi were run for the given example configurations, and Table 3 gives computation times as wall clock time in seconds for the generation of the Kronecker matrices and RS exploration in column 2, for a single iteration step of Jacobi type (Algorithm 4) in column 3, and for an iteration step of Gauss–Seidel type (Algorithm 6) in column 4. The computational effort for the generation is higher than for a single iteration step, but it is of minor importance compared to the overall solution time.

Table 4. Results of second comparison: dimension of state spaces and Kronecker matrices (columns: N; state space sizes |RS|, |RS_0|; vector sizes for the Kronecker and sparse approaches; matrix sizes for the Kronecker and sparse approaches).

For this example, Gauss–Seidel is slower than Jacobi by a factor of 2–3 per iteration step. However, we also observed a different speed of convergence to achieve a maximum residual of less than 10^{−8} for the computed steady-state distribution. The numbers of iteration steps are given in column 6 for Jacobi and column 8 for Gauss–Seidel. Furthermore, observe that these values vary with the selection of a relaxation factor (columns labeled r in the table); we exercised over a range of relaxation factors, and the table indeed gives the minimal number of steps we could achieve; the corresponding relaxation factor is given in column 5 for Jacobi and in column 7 for Gauss–Seidel. Obviously the increased computation time per iteration step for Gauss–Seidel finds a countereffect in the reduced number of iteration steps required for convergence compared to a Jacobi iteration. For this example and for the best selection of relaxation factors, the total amount of time seems almost balanced; however, for practical applications one does not know the best selection of a relaxation factor in advance, so that it is very difficult to conclude the superiority of one method at this point. In order to perform the second comparison (proposed approach versus sparse matrix implementation) one needs to recognize that conventional steady-state analysis would consider the embedded CTMC over the tangible reachability set, rather than the DTMC over the whole state space as we do here. This results in the solution of a linear homogeneous system of dimension |RS_0| × |RS_0| instead of |RS| × |RS|. We shall refer to this solution in the following by the term sparse.
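The relaxation exercised in these experiments can be sketched as blending the old and new iterates; a minimal Python illustration, not the tool's code (the best r is model dependent, as noted above).

```python
import numpy as np

def relaxed_step(step, pi, r):
    """Relaxed iteration: pi_next = (1 - r) * pi + r * step(pi).
    r = 1 recovers the plain iteration; other values can speed up or slow
    down convergence while preserving the fixed point."""
    return (1.0 - r) * pi + r * step(pi)

# power-method example: both r = 1.0 and an over-relaxed r converge to the
# same stationary vector of P
P = np.array([[0.50, 0.50],
              [0.25, 0.75]])

def run(r, iters=200):
    pi = np.array([1.0, 0.0])
    for _ in range(iters):
        pi = relaxed_step(lambda p: p @ P, pi, r)
    return pi
```

For this P the subdominant eigenvalue of the relaxed iteration is (1 − r) + 0.25r, so r = 1.2 contracts the error faster than r = 1.0, illustrating why the iteration counts in the tables vary with r.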
Table 4 provides cardinalities of state spaces and memory occupation for the proposed approach and for the sparse matrix representation, for increasing values of the number of process replicas N. The second and third columns list the number of states of the DTMC used by the proposed approach (RS) and of the CTMC used by the sparse one (RS_0), respectively; the fourth and fifth columns list the sizes of the probability vectors in the two cases; and the last two columns list the space used for the matrices: the sixth column reports the sum of the memory occupied by the data structures for DDs and for the component matrices, while the seventh gives the size of the sparse matrix representation of the CTMC. Please note that column 6 is the sum of the last column of Table 1 and of the last two columns of Table 2. The numbers indicate two well-known facts: the memory requirements for the generator matrix limit the applicability of the sparse matrix approach, while the sparse approach can profit from the fact that it works on the (smaller) embedded Markov chain, obtained through the elimination of vanishing states, whereas the proposed method is forced to work on the DTMC. In principle, the distance between the DTMC and the CTMC used by the two approaches can be made arbitrarily large by using nets with a very large number of vanishing states for the comparison.

Table 5. Results of computation times for sparse approach using Gauss–Seidel (columns: N; time per iteration step (s); relaxation factor r; number of iterations).

Table 6. Results of third comparison: dimensions of RS_k and DDs for the new partition (columns: N; state space size |RS_0| = |RS|; number of DD nodes for RS; memory for RS).

Table 5 gives the computation times for performing Gauss–Seidel on a sparse matrix representation of the CTMC. Column 2 gives computation times for the sparse solution (as wall clock time in seconds), while the remaining columns give relaxation factors and the corresponding numbers of iteration steps to achieve an accuracy of 10^{−8} for the solution. Comparing these results with those given in Table 3, we can observe that the sparse approach performs better than the proposed one, as expected, by a factor of 6–7 (disregarding the N = 1 case, where rounding of the measures causes inaccuracy), as there is no overhead to compute the matrix entries; the only limitation is the size of the systems that can be solved (the case N = 6 cannot be solved due to the size of Q). To compare the proposed approach with the classical Kronecker one that does not allow synchronization over immediate transitions, we can use the same example, so we have the same embedded Markov chain, but we are forced to change the partition into components to obtain that all synchronizing transitions are timed; the immediate transitions become part of the resource component R, and the synchronized transitions are T4, T6, T9 and T13, all timed as required. The resulting component nets have been enhanced by adding implicit places to obtain a partition into three components with finite RS_i. We have performed a numerical analysis using the conventional Kronecker representation as in [17,20]. Results for state spaces and computation times are given in Tables 6–8, again for varying values of N. Table 6 gives the

Table 7. Results for third comparison: dimensions of state spaces RS_i and Kronecker matrices for the new partition (columns: N; state space sizes |RS_A|, |RS_B|, |RS_R|, |PS|; memory for vector and matrices).

number of tangible states in column 2, which is left untouched by the new partition, the number of nodes a DD needs for its representation (column 3), and the corresponding number of bytes (column 4). A comparison with Table 2 shows the same number of tangible states. Moreover, the implementation of the conventional Kronecker approach that we used works at the level of the embedded Markov chain, thanks to the elimination of vanishing markings in the components, so that RS = RS_0 and all data structures whose size is connected to the size of the Markov chain are smaller in this case.

Table 8. Results of third comparison: computation times for Jacobi and Gauss–Seidel iterations (columns: N; generation time Gen; time per iteration step (s) for J-kron and GS-kron; relaxation factor r and number of iterations for J-kron and GS-kron).

Table 7 provides cardinalities of the state spaces for the modified components A, B, and R in columns 2–4. Note that the modification implies that the resource component R partly covers processes as well, such that the state space of R increases with an increasing number of processes. Column 5 gives the cardinality of the set of potential states. Columns 6 and 7 describe memory requirements in bytes for an iteration vector (column 6) and for the Kronecker representation (column 7). A comparison with Table 1 shows that the new partition has a larger PS than the previous one. Table 8 gives the corresponding computation times for performing numerical analyses of Jacobi and Gauss–Seidel type, denoted by J-kron and GS-kron, respectively. Column 2 gives computation times to generate the Kronecker representation, including RS exploration; for N = 5, 6 we used the approach of [7,10] to speed up generation. Columns 3 and 4 give computation times in wall clock time in seconds for Jacobi (column 3) and Gauss–Seidel (column 4) iteration steps, while the remaining columns give relaxation factors and corresponding numbers of iteration steps to achieve an accuracy of 10^{−8} for the solution.
The comparison of the results of Tables 7 and 8 with the analogous values in Tables 1 and 3 may appear slightly surprising: since the proposed approach works with the DTMC of size |RS| × |RS|, while the conventional one works with the (smaller) CTMC of size |RS_0| × |RS_0| thanks to the change in the partition, we could expect a better performance of the conventional approach; this is not the case, as they basically perform the same. The phenomenon is only apparently surprising: we should not forget that the two approaches have different implementations (e.g., the improvement for Gauss–Seidel in the DTMC solution is not present in the CTMC solution) and, moreover, the partitions are different; indeed the partition over immediate transitions gives rise to a much smaller PS than the one over timed transitions, and a different partition means different component matrices.

7. Conclusions

We presented a Kronecker-based solution for SGSPNs where multiple priority levels are allowed and synchronizing transitions can be immediate. This significantly enlarges the class of real application models that can be treated with the Kronecker approach, and it also represents an additional step in the line of integration of tools developed at different universities, in this case the APNN Toolbox of the Dortmund group and GreatSPN of the Torino one. The numerical solution is based on the DTMC, instead of the CTMC, with the disadvantage of being forced to keep all vanishing states. It is future work to define and implement vanishing marking elimination, at least for subnets of immediate transitions that are not involved in synchronization among components. The structure by slices of the expression may yield very sparse matrices, and it may be worth grouping components to try to reduce the number of matrices, or reducing the number of slices by eliminating certain priority levels, applying the ideas in [8]. Up to now we have studied the feasibility of the approach through a number of examples, but the study of the computational and storage complexity along the lines of [6] is left as future work; for the moment we can only say that the major impact of considering priorities is having to take care of the existence of a matrix entry (concession of the corresponding transition) plus its validity (enabling according to priorities). For Jacobi iterations this has a minor effect, due to the partitioning of states according to priorities; for Gauss–Seidel iterations, however, one needs to check both conditions during the iteration. Another aspect that surely deserves investigation is the impact of the partition on the solution process: indeed the work presented in this paper allows us to choose the partition more freely than before, so that partitions can also be interpreted as system components and no tricks are required to get a valid partition; whether this is a good choice in terms of the efficiency of the solution process is a topic for future work.
Finally, several authors, among others [6,14], have considered Kronecker representations for SGSPNs with marking dependent weight functions of product form, i.e. the weight of a transition t at marking M is W(t, M) = w_t · ∏_{i∈IC(t)} W_i(t, M_i) for a constant w_t ∈ R^+ and component specific functions W_i(t, M_i) for each component i. We claim that for marking dependent weight functions of this kind, an extension of our approach is straightforward: one uses the corresponding Kronecker representation for marking dependent weight functions to achieve an appropriate representation of the numerical values of the generator matrix, plus the selection matrices (or DDs) as given here to select matrix rows appropriately. One needs to take care in the definition of local transitions, since for a transition to be local in this context its weight function must depend only on the marking of its component. We did not present this generalization since the notation gets even heavier and it would move the focus of the paper.

References

[1] M. Ajmone Marsan, G. Balbo, G. Conte, S. Donatelli, G. Franceschinis, Modelling with Generalized Stochastic Petri Nets, Wiley, New York.
[2] G. Balbo, G. Chiola, G. Franceschinis, G. Molinar-Roet, On the efficient construction of the tangible reachability graph of generalized stochastic Petri nets, in: Proceedings of the Second International Workshop on Petri Nets and Performance Models, Madison, WI, IEEE Computer Society Press, Silverspring, MD.
[3] F. Bause, P. Buchholz, P. Kemper, A toolbox for functional and quantitative analysis of DEDS (extended abstract), in: Proceedings of the 10th International Conference on Computer Performance Evaluation, Modelling Techniques and Tools, Lecture Notes in Computer Science, Vol. 1469, Springer, Berlin.
[4] S. Bernardi, S. Donatelli, A. Horváth, Compositionality in the GreatSPN tool and its application to the modelling of industrial applications, in: Proceedings of the Workshop on the Practical Use of High Level Petri Nets, Aarhus, Denmark, June 27.
[5] O. Botti, S. Donatelli, G. Franceschinis, Assessing the performance of multiprocessor architectures through SWN models simulation: a case study in the field of plant automation systems, in: Proceedings of the 29th Annual Simulation Symposium, New Orleans, LA, April 8–11, IEEE Computer Society Press, Silverspring, MD.

[6] P. Buchholz, G. Ciardo, S. Donatelli, P. Kemper, Complexity of memory efficient Kronecker equations with applications to the solution of Markov models, INFORMS J. Comput. 3 (12) (2000).
[7] P. Buchholz, P. Kemper, Modular state level analysis of distributed systems techniques and tool support, in: Proceedings of the Fifth International Conference on Tools and Algorithms for the Construction and Analysis of Systems, Lecture Notes in Computer Science, Vol. 1579, Springer, Berlin.
[8] G. Chiola, S. Donatelli, G. Franceschinis, GSPN versus SPN: what is the actual role of immediate transitions?, in: Proceedings of the Fourth International Workshop on Petri Nets and Performance Models, Melbourne, Australia, December.
[9] G. Chiola, G. Franceschinis, R. Gaeta, M. Ribaudo, GreatSPN 1.7: graphical editor and analyzer for timed and stochastic Petri nets, Perform. Eval. (special issue on Performance Modelling Tools) 24 (1) (1996).
[10] P. Buchholz, P. Kemper, Efficient computation and representation of large reachability sets for composed automata, in: Proceedings of the Fifth Workshop on Discrete Event Systems (WODES 2000), Discrete Event Systems: Analysis and Control, Kluwer Academic Publishers, Dordrecht.
[11] J. Campos, S. Donatelli, M. Silva, Structured solution of asynchronously communicating stochastic modules, IEEE Trans. Softw. Eng. 25 (2).
[12] G. Ciardo, A. Miner, A data structure for the efficient Kronecker solution of GSPNs, in: Proceedings of the Eighth International Workshop on Petri Nets and Performance Models, IEEE Computer Society Press, Silver Spring, MD.
[13] G. Ciardo, A. Miner, Efficient reachability set generation and storage using decision diagrams, in: Proceedings of the 20th International Conference on Application and Theory of Petri Nets, Lecture Notes in Computer Science, Vol. 1639, Springer, Berlin.
[14] G. Ciardo, M. Tilgner, On the use of Kronecker operators for the solution of generalized stochastic Petri nets, ICASE Report 96-35, Institute for Computer Applications in Science and Engineering, Hampton, VA, May.
[15] M. Davio, Kronecker products and shuffle algebra, IEEE Trans. Comput. 30 (2) (1981).
[16] T. Dayar, E. Uysal, Iterative methods based on splittings for stochastic automata networks, Eur. J. Oper. Res. 110 (1) (1998).
[17] S. Donatelli, Superposed generalized stochastic Petri nets: definition and efficient solution, in: Proceedings of the 15th International Conference on Application and Theory of Petri Nets, Lecture Notes in Computer Science, Vol. 815, Springer, Berlin.
[18] S. Donatelli, P. Kemper, Integrating synchronization with priority into a Kronecker representation, in: Proceedings of the 11th International Conference on Computer Performance Evaluation, Modelling Techniques and Tools (TOOLS 2000), Schaumburg, IL, Lecture Notes in Computer Science, Vol. 1786, Springer, Berlin, March.
[19] T. Kam, State minimization of finite state machines using implicit techniques, Ph.D. Thesis, University of California, Berkeley, CA.
[20] P. Kemper, Numerical analysis of superposed GSPNs, IEEE Trans. Softw. Eng. 22 (9) (1996).
[21] P. Kemper, Reachability analysis based on structured representations, in: Proceedings of the 17th International Conference on Application and Theory of Petri Nets, Lecture Notes in Computer Science, Vol. 1091, Springer, Berlin.
[22] P. Kemper, R. Lübeck, Model checking based on Kronecker algebra, Forschungsbericht 669, Fachbereich Informatik, Universität Dortmund, Germany.
[23] B. Plateau, On the stochastic structure of parallelism and synchronization models for distributed algorithms, in: Proceedings of the 1985 ACM SIGMETRICS Conference on Measurement and Modeling of Computer Systems, Austin, TX, May.
[24] M. Tilgner, Y. Takahashi, G. Ciardo, SNS: synchronized network solver, in: M. Silva, R. Valette, K. Takahashi (Eds.), Workshop on Manufacturing and Petri Nets, within the 17th International Conference on Application and Theory of Petri Nets, Osaka, Japan.

Susanna Donatelli received the Laurea degree in Computer Science from the University of Torino, Italy, in 1984, the Master of Science in Electrical and Computer Engineering from the University of Massachusetts at Amherst in 1987, and the Ph.D. in Computer Science from the University of Torino. From 1990 to 1998 she was a researcher at the University of Torino, where she is now an Associate Professor. She has co-authored more than 50 papers in refereed journals and conferences, and a book on GSPNs. Dr. Donatelli's main research interest is in modeling for the evaluation and verification of systems.

Peter Kemper holds a Diploma degree in computer science (Dipl.-Inform., 1992) and a Doctoral degree (Dr.rer.nat., 1996), both from the University of Dortmund. Currently, he is a Lecturer in the Department of Computer Science at the University of Dortmund. His main interests are in system analysis and include numerical analysis techniques for Markov chains and model checking techniques for discrete event dynamic systems with Kronecker representations. He has developed several tools for the quantitative and qualitative analysis of SPNs. Since 1998, Dr. Kemper has contributed to the collaborative research center on modeling and analysis of large networks in logistics (SFB 559), funded by the Deutsche Forschungsgemeinschaft.


More information

The Solution of Linear Systems AX = B

The Solution of Linear Systems AX = B Chapter 2 The Solution of Linear Systems AX = B 21 Upper-triangular Linear Systems We will now develop the back-substitution algorithm, which is useful for solving a linear system of equations that has

More information

5.7 Cramer's Rule 1. Using Determinants to Solve Systems Assumes the system of two equations in two unknowns

5.7 Cramer's Rule 1. Using Determinants to Solve Systems Assumes the system of two equations in two unknowns 5.7 Cramer's Rule 1. Using Determinants to Solve Systems Assumes the system of two equations in two unknowns (1) possesses the solution and provided that.. The numerators and denominators are recognized

More information

EE 550: Notes on Markov chains, Travel Times, and Opportunistic Routing

EE 550: Notes on Markov chains, Travel Times, and Opportunistic Routing EE 550: Notes on Markov chains, Travel Times, and Opportunistic Routing Michael J. Neely University of Southern California http://www-bcf.usc.edu/ mjneely 1 Abstract This collection of notes provides a

More information

Design of Distributed Systems Melinda Tóth, Zoltán Horváth

Design of Distributed Systems Melinda Tóth, Zoltán Horváth Design of Distributed Systems Melinda Tóth, Zoltán Horváth Design of Distributed Systems Melinda Tóth, Zoltán Horváth Publication date 2014 Copyright 2014 Melinda Tóth, Zoltán Horváth Supported by TÁMOP-412A/1-11/1-2011-0052

More information

On the Average Complexity of Brzozowski s Algorithm for Deterministic Automata with a Small Number of Final States

On the Average Complexity of Brzozowski s Algorithm for Deterministic Automata with a Small Number of Final States On the Average Complexity of Brzozowski s Algorithm for Deterministic Automata with a Small Number of Final States Sven De Felice 1 and Cyril Nicaud 2 1 LIAFA, Université Paris Diderot - Paris 7 & CNRS

More information

2.6 Complexity Theory for Map-Reduce. Star Joins 2.6. COMPLEXITY THEORY FOR MAP-REDUCE 51

2.6 Complexity Theory for Map-Reduce. Star Joins 2.6. COMPLEXITY THEORY FOR MAP-REDUCE 51 2.6. COMPLEXITY THEORY FOR MAP-REDUCE 51 Star Joins A common structure for data mining of commercial data is the star join. For example, a chain store like Walmart keeps a fact table whose tuples each

More information

Rectangular Systems and Echelon Forms

Rectangular Systems and Echelon Forms CHAPTER 2 Rectangular Systems and Echelon Forms 2.1 ROW ECHELON FORM AND RANK We are now ready to analyze more general linear systems consisting of m linear equations involving n unknowns a 11 x 1 + a

More information

Example: 2x y + 3z = 1 5y 6z = 0 x + 4z = 7. Definition: Elementary Row Operations. Example: Type I swap rows 1 and 3

Example: 2x y + 3z = 1 5y 6z = 0 x + 4z = 7. Definition: Elementary Row Operations. Example: Type I swap rows 1 and 3 Linear Algebra Row Reduced Echelon Form Techniques for solving systems of linear equations lie at the heart of linear algebra. In high school we learn to solve systems with or variables using elimination

More information

Learning Automata Based Adaptive Petri Net and Its Application to Priority Assignment in Queuing Systems with Unknown Parameters

Learning Automata Based Adaptive Petri Net and Its Application to Priority Assignment in Queuing Systems with Unknown Parameters Learning Automata Based Adaptive Petri Net and Its Application to Priority Assignment in Queuing Systems with Unknown Parameters S. Mehdi Vahidipour, Mohammad Reza Meybodi and Mehdi Esnaashari Abstract

More information