Performing transient and passage time analysis through uniformization and stochastic simulation


Performing transient and passage time analysis through uniformization and stochastic simulation

Simone Fulvio Rollini

Master of Science
School of Informatics
University of Edinburgh
2008

Abstract

This MSc project consists of an extension of the functionality of the PEPA Eclipse Plugin, a state-of-the-art tool for performance analysis and evaluation developed at the University of Edinburgh; the plugin is built on top of the Eclipse technology, and features a PEPA editor and instruments for steady state analysis via ordinary differential equation methods. The present contribution adds transient and passage time analysis for PEPA models, by means of two methods:

Uniformization.

Stochastic simulation.

These approaches, the first analytical and the second algorithmic, are complementary in the sense of a strong memory-time tradeoff: simulation always needs a very low amount of storage, almost independent of the model complexity, but it can take a considerable amount of time to gather enough samples to make a run statistically significant; uniformization is a naturally accurate method, yet it needs to store the state space of the model and converges slowly if the model is stiff or the observation time range too broad. This dissertation presents a mathematical treatment of the two techniques, as well as a description of their implementation; the results obtained for simulation and uniformization in various case studies are then discussed, and a comparison is drawn with other existing uniformization-based tools (IPC and HYDRA, developed at Imperial College London).

Acknowledgements

I would like to thank my supervisor Dr. Stephen Gilmore for his patience and his constant support throughout the development of the project.

Declaration

I declare that this thesis was composed by myself, that the work contained herein is my own except where explicitly stated otherwise in the text, and that this work has not been submitted for any other degree or professional qualification except as specified.

(Simone Fulvio Rollini)

To my parents.

Table of Contents

1 Introduction
    Context
    Personal contribution
    Organization of the dissertation
2 Mathematical background
    Stochastic processes
        Markov processes
        Discrete time Markov chains
        Continuous time Markov chains
        Steady state measures
        Transient measures
        Markov processes: strong and weak points
    Stochastic process algebras
        PEPA
    Stochastic simulation
3 Overview
    Existing PEPA software
        PEPA Eclipse Plugin
        HYDRA and IPC
    Personal contribution
        Extension of the PEPA Eclipse Plugin
        Connection with the PEPA Eclipse Plugin
        Connection with HYDRA
        Other software used
    Organization of the PEPA Eclipse Plugin extension
        ISimulator
        Simulator
        IUniformizer
        Uniformizer
        IChart
        Chart
4 Implementation
    Simulator
        Discrete event based simulation
        Dynamics of a simulation model
        Mechanics of a simulation run
        Data collection and analysis in a run
        Data collection and analysis over all the runs
    Uniformizer
        Mechanics of uniformization
        Data structures: comparison with HYDRA
        Distribution distance
        Newton-Cotes formulae
    Comparison between uniformization and simulation
        Evaluation of self distance in stochastic simulation
5 Testing
    Models examined
    Software used for testing
        JAMon
        Linux time
        JConsole
    Comparison HYDRA-uniformizer
    Uniformization varying degree of stiffness, time range and number of samples
    Simulation execution time
    Simulation data collection
    Simulation performance bottleneck
    Distribution distance and self distance
6 Conclusions
Bibliography

Chapter 1

Introduction

1.1 Context

The present project finds its context in the field of performance evaluation, whose primary aims are the description, analysis and optimization of the behaviour of systems, with particular focus on computer and communication networks. Although the systems taken into account are inherently deterministic, it is often useful to represent them as probabilistic, allowing a certain degree of randomness in their specification; an especially useful kind of stochastic model is the class of continuous time Markov processes, because of its convenient mathematical properties.

At a basic level of abstraction, the behaviour of a system can be explained by identifying all reachable configurations and illustrating the ways in which it moves across them. However, specifying every state and transition in the state space of even a moderately complex Markov process with hundreds of states is an infeasible task. To remedy this difficulty, formalisms with a higher level of abstraction can be employed to describe models of stochastic systems more briefly and clearly; from these the underlying processes can be generated automatically and the performance measures derived in a standard way. As a modelling paradigm we consider here the stochastic process algebras, and we adopt the Performance Evaluation Process Algebra (PEPA) developed at the University of Edinburgh.

Traditional techniques for analytical performance evaluation are predominantly based on the study of systems over a long interval of time, to overcome the bias determined by the starting conditions; after an initial warming up period the systems are likely to reach an equilibrium state (called steady state in Markov processes) and exhibit a regular behaviour. Steady state measures are adequate to predict mean resource-based measures, such as throughput and utilization, but are not sufficient for measures defined over a finite time interval or at a point in time; they cannot in fact give a full understanding of the transient behaviour of a system, which is desirable for example in the presence of critical events, like reconfigurations or failures. Moreover, steady state measures are not sufficient to determine response time densities and quantiles; this is a serious problem since response time is a relevant performance criterion for almost all computer communication and transaction processing systems, and response time quantiles are assuming increasing importance as key quality of service and performance metrics in Service Level Agreements and benchmarks.

Usually, response time targets are described in terms of quantiles, such as "x% of the messages must be received within y seconds"; in the context of the models used by analysts, response times can be specified as passage times, the time taken to reach one of a set of target states from a given set of source states in the formal representation adopted.

When the state space of a Markov process is too large, it becomes infeasible to obtain a closed form solution for the transient measures; numerical techniques have been developed to perform this computation, including ordinary differential equation solution methods and the Laplace transform. Uniformization has often been considered the best method, because of its properties of efficiency, accuracy and numerical stability, and for those reasons it is the numerical approach adopted in the present project. However, performance measures of interest can be obtained only once all the possible configurations of a system (the state space of the associated Markov process) have been fully explored; this approach involves constructing and storing a representation of the space itself throughout the solution process, and can cause memory problems even with moderately sized models.

A possible alternative consists of moving from analytical to simulation methods; a stochastic simulation model can be regarded as an algorithmic abstraction, in the sense that it gives a representation which, when executed, reproduces the behaviour of the system. Such a solution has the remarkable advantage of very low memory requirements, since each run only involves the on the fly generation of a path through the state space; but it comes at the expense of computation time, because very long runs may be needed in order to achieve statistically significant results.

1.2 Personal contribution

This project resumes the work carried out respectively at Imperial College London and at the University of Edinburgh, which has seen the parallel development of two software tools for systems performance evaluation. The first is HYDRA (HYpergraph-based Distributed Response-time Analyser), based on the DNAmaca modelling language and capable of performing steady state, transient and passage time analysis via uniformization; it is extended by IPC (Imperial PEPA Compiler), based on PEPA, which acts as an interface towards HYDRA, translating PEPA specifications into DNAmaca ones. The second is the PEPA Eclipse Plugin, built on top of the Eclipse technology, featuring a PEPA editor and instruments for steady state or transient analysis via ordinary differential equation methods, as well as tools to display results graphically.

My personal contribution is an extension of the PEPA Eclipse Plugin, adding the following functionality:

Transient and passage time analysis via uniformization.

Transient and passage time analysis via simulation.

Integrated charting utilities to show results in the form of:

Line charts for uniformization and simulation.
Candlestick charts for simulation confidence intervals.
Combined charts for uniformization-simulation comparison.

Notice that our uniformization implementation uses the same algorithm as HYDRA, apart from a different choice of data structure; a description and a comparison are provided in the course of the dissertation.

1.3 Organization of the dissertation

The present work is divided into four main chapters:

A mathematical background, which introduces the fundamental concepts related to Markov processes, stochastic process algebras, steady state and transient measures, uniformization and simulation.

A general overview of the software implementation, with a description of the main packages and classes.

A detailed illustration of the algorithms and data structures employed, together with the necessary proofs of soundness for the solutions adopted.

A presentation of the testing carried out over a family of PEPA models: it shows the results obtained individually for uniformization and simulation in different situations, a comparison between uniformization and simulation, and a comparison between the HYDRA version of uniformization and the implemented one.

Chapter 2

Mathematical background

This chapter illustrates the mathematical background of the project, and introduces the fundamental concepts related to Markov processes, stochastic process algebras, steady state and transient measures, uniformization and simulation.

2.1 Stochastic processes

Markov processes

The present work finds its roots in the field of performance evaluation, whose primary aims are the description, analysis and optimization of the behaviour of computer systems. To achieve this it is necessary to identify the aspects and characteristics which are relevant from a performance point of view, and to investigate the data and information generated by the components of the system ([26, 19]).

Dealing with informatics, the focus is on computer and communication networks, for which a natural representation is that of a discrete event system: its state is described by a collection of variables which assume distinct values, changing at distinct times in response to specific events. Although such systems are inherently deterministic, it is often useful to represent them as stochastic, allowing a certain degree of randomness in their behaviour. This is appropriate in various situations: it can be the case that the systems are so complex that it is unlikely that we will be able to obtain a detailed deterministic view of their mechanics; the environments with which computers interact can be unpredictable; moreover, the use of stochastic approaches allows some kinds of analysis that determinism cannot offer, such as the study of typical or average rather than particular behaviour.

At a basic level of description, the system behaviour can be explained by identifying all reachable configurations/states and describing the ways in which the system moves across them. This is termed the state transition level behaviour of the model, and the changes in state as time progresses will be formally represented by a stochastic process, taking into account an unpredictable component.

A stochastic process is in this sense the counterpart to a deterministic system. Instead of there being only one possible way in which the process might evolve over time, in a stochastic process there is a certain indeterminacy in its future evolution, described by a probability distribution. This means that even if the starting state/condition is known, there are many states to which the process might evolve, some more probable than others.

Consider a random variable X which assumes different values at different times t. The sequence of random variables {X(t), t ∈ T} is called a stochastic process; T is said to be the index set and will represent time. The state space S is the set of all possible values which members of the sequence X(t) can take on; each of these values is called a state of the process, corresponding to the intuitive notion of states within the system represented.

A stochastic process can be classified by the nature of its state space and of its time parameter. The stochastic process is said to have a discrete state space if its values are finite or countably infinite; otherwise we talk of a continuous state space. When modelling a discrete event system, a discrete finite state space is the natural choice. In the same way, if the times at which X(t) is observed are countable, the process is said to be a discrete time process and we assume T = ℕ; otherwise, the process is said to be a continuous time process and T = ℝ.

We focus here on a specific kind of stochastic models, the class of Markov processes ([32, 26, 19, 7]), both in discrete and continuous time, characterized by some mathematical properties which make them a suitable tool, particularly in the context of dependability and performance evaluation of computer systems:

X(t) is a Markov process, having the Markov or memoryless property: given the value X(t) of the system at some time t, its future evolution depends only on the current state and not on the knowledge of past history. For t_0 < ... < t_{n+1}:

$$\Pr(X(t_{n+1}) = x_{n+1} \mid X(t_n) = x_n, \ldots, X(t_0) = x_0) = \Pr(X(t_{n+1}) = x_{n+1} \mid X(t_n) = x_n)$$

The process is homogeneous: its behaviour is invariant to shifts in the time axis. For all s:

$$P(X(t_{n+1} + s) = x_{n+1} \mid X(t_n + s) = x_n) = P(X(t_{n+1}) = x_{n+1} \mid X(t_n) = x_n)$$

The process is irreducible: all states in S can be reached from each other by following a path of transitions.

An essential probability distribution, which, as we will show later, is strictly connected to Markov processes, is the exponential distribution, which exhibits some important mathematical properties:

The exponential distribution is the unique memoryless continuous distribution: given a random variable X, for t, s > 0,

$$P(X > t + s \mid X > t) = P(X > s)$$

If in a discrete event system X represents the time until an event, absence of memory means that knowing that t time units have passed without the event occurring does not give any information about when the event will occur.

Superposition property. If X_1 and X_2 are two exponentially distributed random variables, with parameters λ_1 and λ_2, and Y = min{X_1, X_2}, then Y is also exponentially distributed with parameter λ_1 + λ_2.

Decomposition property. Let X be an exponentially distributed random variable, with parameter λ, representing the time until an event. Suppose that the events belong to one of two categories, A with probability p_A and B with probability p_B = 1 - p_A. Then stream A and stream B are Poisson streams with rates λp_A and λp_B, and the waiting times between events of type A and between events of type B are exponentially distributed with parameters λp_A and λp_B.

Discrete time Markov chains

This section considers Markov processes where state changes are observed at discrete time intervals and with discrete state spaces; they are referred to as discrete time Markov chains (DTMC) ([26, 19, 7, 32]). Since we are dealing with countable time steps t ∈ ℕ, we can slightly modify the notation, and write X(t) as X_n for t = n. Thus the Markov property for a stochastic process {X_n, n ∈ ℕ} becomes:

$$P(X_{n+1} = x_{n+1} \mid X_n = x_n, \ldots, X_0 = x_0) = P(X_{n+1} = x_{n+1} \mid X_n = x_n)$$

Time homogeneity can be expressed, for each s, as:

$$P(X_{n+s+1} = x_{n+1} \mid X_{n+s} = x_n) = P(X_{n+1} = x_{n+1} \mid X_n = x_n)$$

The values P(X_{n+1} = x_{n+1} | X_n = x_n) describe the one step transition probabilities of a DTMC, that is the probabilities that the DTMC moves from state x_n to state x_{n+1} in a single transition. These values can be organized in a stochastic transition probability matrix P, the elements p_{ij} of which are defined as:

$$p_{ij} = P(X_{n+1} = j \mid X_n = i)$$

under the conditions that for all i, j ∈ S, 0 ≤ p_{ij} ≤ 1 and Σ_j p_{ij} = 1.

An important property which characterizes a Markov chain state is periodicity. A state i is said to have period k if any return to state i must occur in multiples of k time steps. For example, if it is only possible to return in an even number of steps, then i is periodic with period 2. Formally, the period k of a state is defined as:

$$k = \gcd\{n : P(X_n = i \mid X_0 = i) > 0\}$$

If k = 1, then the state is said to be aperiodic; otherwise the state is said to be periodic with period k. A Markov chain is aperiodic if all its states are; it can be shown that every state in an irreducible Markov chain must have the same period.

We now consider the evolution in time of the behaviour of a Markov chain when it starts in a state chosen by a probability distribution on the set of states, expressed as a probability vector. For DTMCs, we define the probability of being in state j after n time steps, having started in state i at time 0, as:

$$\pi^{(n)}_{ij} = P(X_n = j \mid X_0 = i)$$

Extending the previous expression to vectorial notation, we define π^{(n)}, whose generic entry π_i^{(n)} is the probability that the chain is in state i after n steps, starting from an initial distribution π^{(0)}. A probability distribution vector v is subject to the conditions 0 ≤ v_i ≤ 1 for all i and Σ_i v_i = 1. The n step transition satisfies the Kolmogorov equation, that is:

$$\pi^{(n)} = \pi^{(n-1)} P = \ldots = \pi^{(0)} P^n$$
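To make the Kolmogorov equation concrete, the following minimal Java sketch (illustrative only; the class and method names are hypothetical, not part of the plugin's API) computes the transient distribution of a small DTMC by repeated vector-matrix multiplication:

```java
/**
 * Minimal sketch: computes the transient distribution
 * pi^(n) = pi^(0) P^n of a DTMC by repeatedly applying
 * the Kolmogorov equation pi^(k) = pi^(k-1) P.
 */
public class DtmcTransient {

    /** One step: next[j] = sum over i of pi[i] * P[i][j]. */
    static double[] step(double[] pi, double[][] P) {
        double[] next = new double[pi.length];
        for (int i = 0; i < pi.length; i++)
            for (int j = 0; j < pi.length; j++)
                next[j] += pi[i] * P[i][j];
        return next;
    }

    public static void main(String[] args) {
        // A two-state chain: each row of P must sum to one.
        double[][] P = { { 0.9, 0.1 },
                         { 0.5, 0.5 } };
        double[] pi = { 1.0, 0.0 };   // start in state 0
        for (int n = 1; n <= 10; n++) {
            pi = step(pi, P);
            System.out.printf("n=%d pi=(%.4f, %.4f)%n", n, pi[0], pi[1]);
        }
    }
}
```

Each step costs one vector-matrix product, so computing π^(n) this way is O(n · |S|²); the printed distributions can be seen converging towards the steady state discussed next.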

A primary target in Markovian analysis is to calculate the probability distribution over the state space at an arbitrary point in the future, as the system achieves a regular behaviour. This is termed the limiting or steady state probability distribution {π_j}, and for a DTMC it is defined as:

$$\pi_j = \lim_{n \to \infty} \pi^{(n)}_{ij}$$

or, starting from a given π^{(0)}:

$$\pi = \lim_{n \to \infty} \pi^{(n)}$$

A probability distribution v over S is called stationary if it satisfies the equation:

$$v = vP$$

where P is the transition probability matrix as defined before. In a time homogeneous, irreducible, aperiodic and finite state (also called ergodic) DTMC, the steady state probabilities π always exist and are independent of the initial state probability distribution π^{(0)}. Furthermore, they coincide with the unique stationary distribution; π is uniquely determined by the equation:

$$\pi = \pi P$$

under the condition Σ_i π_i = 1.

Continuous time Markov chains

We now discuss a family of Markov processes over discrete state spaces, but whose transitions can occur at arbitrary points in time; we refer to them as continuous time Markov chains (CTMC) ([26, 19, 7, 32]). Like any other stochastic process, a CTMC is characterized by a random variable X(t), indexed over time t (t ∈ ℝ), and a state space S such that X(t) ∈ S for all t. The dynamic behaviour of the system being modelled is described by the transitions between the states, and the times spent in the states themselves, called sojourn times. In general, these times represent periods in which processing is being carried out in the system, while transitions represent events. If a state i ∈ S is entered at time t and the next state transition takes place at time t + Δt, then Δt is the sojourn time in state i.

Thanks to the Markov property, the distribution of the time until the next change of state is always independent of the time of the previous change. Sojourn times are thus memoryless: the future evolution of the system does not depend on the evolution up to the current state, nor is it dependent on how much time has already been spent in the current state. Since the only probability distribution function which has this property is the exponential distribution, it follows that the sojourn time in any state of a Markov process is an exponentially distributed random variable; the same kind of reasoning yields that the time delay before a transition from i to j is exponentially distributed with a parameter q_{ij}, called the instantaneous transition rate.

The clearest way to represent a simple Markov process is in terms of its state transition diagram. Each state of the process is a node in a graph; the arcs represent possible transitions between states, and are labelled by their respective rates (parameters of the exponential distributions determining the transition durations).
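Putting the last two paragraphs together gives the basic mechanics of advancing a CTMC by one step: the sojourn time in the current state is exponential with the total exit rate (superposition property), and the successor is chosen with probability proportional to the individual rates (decomposition property). The following minimal Java sketch illustrates this; it is illustrative only, with hypothetical names, and is not the plugin's code:

```java
import java.util.Random;

/** Illustrative sketch (not the plugin's code) of one CTMC step. */
public class CtmcStep {
    static final Random RNG = new Random();

    /** Sojourn time in the current state: Exp(q_i) with q_i = sum_j q_ij. */
    static double sampleSojourn(double[] outRates) {
        double exitRate = 0.0;              // superposition property
        for (double r : outRates) exitRate += r;
        return -Math.log(1.0 - RNG.nextDouble()) / exitRate; // inversion
    }

    /** Successor state, chosen with probability q_ij / q_i (decomposition). */
    static int chooseSuccessor(double[] outRates) {
        double exitRate = 0.0;
        for (double r : outRates) exitRate += r;
        double u = RNG.nextDouble() * exitRate;
        for (int j = 0; j < outRates.length; j++) {
            u -= outRates[j];
            if (u <= 0) return j;
        }
        return outRates.length - 1;         // guard against rounding error
    }
}
```

Note that the two samples are independent: which transition wins the race and how long the race takes can be drawn separately, which is exactly what makes the embedded chain construction below legitimate.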

Here is a small example of a 7 state process:

[Figure 2.1: State transition diagram of a 7 state process with transition rates λ_1, λ_2, µ_1, µ_2, µ_3]

A Markov process with n states can be characterized by its n × n infinitesimal generator matrix, Q. The entry in the j-th column of the i-th row of the matrix (j ≠ i) is q_{ij}, given by the sum of the parameters labelling arcs connecting nodes i and j in the state transition diagram. The diagonal elements are chosen to guarantee that the sum of the elements in every row is zero, i.e.

$$q_{ii} = -\sum_{j \in S,\, j \neq i} q_{ij}$$

[Figure 2.2: Infinitesimal generator matrix associated with the previous diagram]

For a CTMC we can further consider the embedded discrete time Markov chain, which describes the behaviour of the process at state transition instants, that is to say the probability that the next state is j given that the current state is i. The embedded chain of a CTMC with n states has a one step n × n transition matrix P with elements p_{ij}. From the q_{ij} we can derive the exit rates, q_i, and the transition probabilities, p_{ij}, as follows.

The exit rate is the rate at which the system leaves state i, i.e. it is the parameter of the exponential distribution governing the sojourn time and thus the reciprocal of the average sojourn time 1/q_i. It is given by the minimum of the delays until any of the possible transitions occurs; because of the superposition property, it is the sum of the individual transition rates, q_i = Σ_{j ∈ S, j ≠ i} q_{ij} (notice that the exit rates with inverted sign appear as the diagonal elements of the infinitesimal generator matrix, q_i = -q_{ii}); vice versa, q_{ij} = q_i p_{ij}, by the decomposition property. The transition probability p_{ij} is the probability, given that a transition from state i occurs, that we move to state j. The definition of conditional probability implies that p_{ij} = q_{ij}/q_i.

Performance analysts are usually interested in considering the behaviour of systems over a long period of time; from a modelling point of view this is important since there might be different initial states for the model, and the choice among them could be arbitrary. The idea behind studying the long term probability distribution is that any bias introduced by the particular starting state will hopefully have been overcome after a sufficiently extended time. We assume that, if and when the effects of the initial bias have disappeared, the system reaches a steady or equilibrium state; this means that the model is assumed to exhibit regularity and predictability in its behaviour over the state space. Mathematically, the probability distribution of the random variable X(t) over the state space S no longer changes.

Let us indicate by π_k(t) the probability that the Markov process at time t is in state x_k, for x_k ∈ S. A condition of equilibrium has been reached when, for all x_k ∈ S, π_k(t + τ) = π_k(t); in other words, the time at which we observe the model does not influence the probability that it is in a particular state. Thus we denote by π_k, independently of time, the steady state probability that the model is in state x_k, and we collect these probabilities in a vector π.

Similarly to the results shown for discrete time Markov chains, a limiting or steady state probability distribution, π_k, x_k ∈ S, exists for every time homogeneous, finite, irreducible Markov process. Moreover, this distribution is the same as the limiting distribution:

$$\pi_k = \lim_{t \to \infty} \pi_k(t) = \lim_{t \to \infty} P(X(t) = x_k \mid X(0) = x_0) \quad (2.1)$$

The result is independent of the specific initial state x_0. In vectorial notation:

$$\pi = \lim_{t \to \infty} \pi(t) \quad (2.2)$$

Notice that, differently from the case of a discrete time Markov process, we do not have to worry about periodicity; the randomness of the time the system spends in each state guarantees the desired convergence.

The limiting probabilities can be directly computed from a set of equations which, in a certain sense, balance the probability flows into and out of each state. In steady state, π_i is the proportion of time that the process spends in state x_i, while the transition rate q_{ij} is the instantaneous probability that the model makes a transition from state x_i to state x_j. Thus, in an instant of time, the probability that a transition will occur from state x_i to state x_j is the probability that the model was in state x_i, π_i, multiplied by the transition rate q_{ij}: the product π_i q_{ij} is called the probability flux from state x_i to state x_j.
When the model is in steady state, in order to maintain the equilibrium, we must assume that the total probability flux out of a state is equal to the total probability flux into the state.

So, for any x_i, this implies:

$$\sum_{j \neq i} \pi_i q_{ij} = \sum_{j \neq i} \pi_j q_{ji}$$

The collection of these equations for all states x_i ∈ S is called the global balance equations. Exploiting the fact that the diagonal elements of the infinitesimal generator matrix Q are the negative sum of the other elements in the row, the equations can be rewritten as:

$$\sum_{x_j \in S} \pi_j q_{ji} = 0$$

Expressing the values π_i as a row vector π, we can state this as a matrix equation:

$$\pi Q = 0$$

If the state space size is n, there are n equations in n unknowns, solvable together with the normalisation condition Σ_{x_i ∈ S} π_i = 1. The steady state probabilities are unique and independent of the initial state distribution.

Steady state measures

As mentioned in the last section, an important objective with respect to a Markovian model is to calculate the steady state probability distribution, that is the probability distribution of the random variable X(t) over the state space S, as the system achieves a regular pattern of behaviour. From such a distribution we can derive a variety of performance measures, each one regarded as a random variable itself, to characterise the behaviour of the system, based on the long term probabilities that the model is in a set of states satisfying some condition. Among the most common are ([26, 28]):

Response time, an estimate of the time which elapses between the arrival of a request to the system and its completion.

Throughput, the number of such requests satisfied per unit time.

Utilization, the proportion of time a system resource is in use.

Transient measures

Traditional techniques for analytical performance modelling are predominantly based on the steady state analysis of Markov chains. Steady state measures are adequate to predict mean resource-based measures (such as the ones mentioned above) and even some mean passage or response time values, but are not sufficient for measures defined over a finite time interval or at a point in time ([19, 11, 12, 14, 24]). They cannot in fact give a full understanding of the transient behaviour of a system (described by transient state probabilities), which is desirable for example in the presence of critical events, such as reconfigurations or failures.

Moreover, steady state measures are not suitable for determining passage time densities and quantiles; this is a serious problem since passage time quantiles are assuming increasing importance as key quality of service and performance metrics in Service Level Agreements and benchmarks.

A rapid response time is in fact a relevant performance criterion for many kinds of computer communication and transaction processing systems, such as stock market trading systems, web and database servers, and communication networks. Usually, response time targets are described in terms of quantiles, e.g. "x% of the messages must be received within y seconds". In the context of the models adopted by performance analysts, response times can be specified as passage times, the time taken to reach one of a set of target states from a given set of source states in the underlying Markov chain.

Transient distribution

The transient state distribution of a CTMC, π_{ij}(t) (where j represents a set of states), is the probability that the process is in a state in j at time t, given that it was in state i at time 0 ([19, 11, 12, 14, 24]):

$$\pi_{ij}(t) = P(X(t) \in j \mid X(0) = i)$$

Once these probabilities are known, we can compute the distribution at time t, given the initial distribution, according to:

$$\pi_j(t) = P(X(t) \in j) = \sum_i \pi_{ij}(t)\, P(X(0) = i)$$

Passage times

The first passage time for a CTMC from a source state i into a non-empty set of target states j is, for t ≥ 0 ([19, 11, 12, 14, 24]):

$$T_{ij}(t) = \inf\{u > 0 : X(t + u) \in j \mid X(t) = i\}$$

For a time homogeneous CTMC, T_{ij}(t) is independent of t, so:

$$T_{ij} = \inf\{u > 0 : X(u) \in j \mid X(0) = i\}$$

T_{ij} is a random variable with an associated probability density function f_{ij}, the main target of passage time analysis; its computation involves convolving state holding times over all possible paths from state i into any of the states in the set j.

Uniformization

When the transition matrix of a Markov process is too large, it becomes infeasible to obtain a closed form solution for the transient state probabilities; numerical approaches have been developed to perform this computation, including ordinary differential equation solution methods and the Laplace transform. Uniformization has often been considered the best method, for its properties of efficiency, accuracy and numerical stability. A detailed mathematical description of this technique will be given in the next chapters.
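As a preview of the treatment in the later chapters (this is the standard textbook formulation, not a quotation from them), uniformization rewrites the CTMC with generator Q as a DTMC with transition matrix P, observed at the jump times of a Poisson process with rate q:

$$P = I + \frac{Q}{q}, \qquad q \ge \max_{i} |q_{ii}|, \qquad \pi(t) = \sum_{n=0}^{\infty} e^{-qt}\,\frac{(qt)^n}{n!}\;\pi(0)\,P^n$$

In practice the infinite sum is truncated once the cumulative Poisson weight is close enough to 1; a stiff model or a broad time range makes qt large, so many terms are needed, which is exactly the slow convergence noted in the abstract.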

Markov processes: strong and weak points

The first assumption we have made in treating stochastic models is that the behaviour of a real system during a given period of time is characterised by the probability distributions of a stochastic process; this cannot be definitively proved, but extensive evidence suggests that the hypothesis is true.

Another fundamental assumption in the context of Markov processes is the memoryless property: the future behaviour of a system is only dependent on its current state, while past history is irrelevant. It is reasonable for essentially deterministic systems like computer and communication networks, but it may influence the level of abstraction during the modelling phase.

We assume that the Markov process is finite, time homogeneous and irreducible (and aperiodic in the discrete time case), the necessary conditions for a steady state distribution to exist and to coincide with the long term probability distribution; this is again reasonable, considering the systems under examination, and unlikely to restrict the expressiveness of the models.

In contrast, the hypothesis that all delays and inter-event times are exponentially distributed is rarely confirmed by the data gathered when observing a real system. Dependence on this assumption is the main weakness from a model building point of view, but it is also the strongest feature when deriving the model solution: only memoryless systems can be solved directly in terms of the global balance equations, and the memoryless property requires the adoption of exponential delays.

Performance measures of interest are obtained once the state space has been fully explored and once the steady state (in the case of resource-based measures) or the transient (in the case of transient measures) probability distribution vector has been computed; this approach involves constructing and storing the infinitesimal generator matrix and the vector throughout the solution process, and can cause memory problems even with moderately sized models. This problem is usually referred to as state space explosion; it is one of the major handicaps in the practical use of Markovian modelling techniques, and extensive research has been dedicated to finding efficient methods to solve these memory issues.

2.2 Stochastic process algebras

Specifying every state and transition in the state space of even a fairly complex Markov process with hundreds of states is an infeasible task. To remedy this difficulty, formalisms with a higher level of abstraction can be employed to describe models of stochastic systems more briefly and clearly; from these the underlying processes can be generated automatically and the performance measures derived in a standard way.

We focus in particular on a class of modelling paradigms, stochastic process algebras (SPA), extensions of untimed process algebras like Milner's Calculus of Communicating Systems and Hoare's Communicating Sequential Processes. These abstract languages can be considered as high level tools for the model specification and design of concurrent systems ([25, 26, 27]). The process algebra formalism, employed to check correctness in the behaviour of systems (deriving qualitative properties such as freedom from deadlock or livelock), is broadened by associating exponential delays with actions, and reachability analysis is then used to construct the corresponding Markov processes.

The advantages of SPA are that they incorporate the fundamental features of process algebras and thus bring to the area of performance modelling several characteristics not always offered by other existing formalisms. Primary among them is compositionality, the capacity to model a system as the interaction of its subsystems: this not only helps model construction, but it can, in some circumstances, be exploited during model solution.
Two other features are formality, the adoption of a precise meaning for all the terms in the language, and abstraction, the ability to build models disregarding unnecessary details when appropriate.

In the process algebra approach systems are designed as collections of entities, called agents, which undertake atomic actions. These actions are the basic building blocks of the language and they are used to describe sequential behaviours which may run concurrently, as well as synchronizations or communications; a small number of combinators define how such components and the interactions between them can be built.

PEPA

As an example of a stochastic process algebra we illustrate PEPA, an acronym which stands for Performance Evaluation Process Algebra ([25, 26, 27]). PEPA extends standard process algebra by associating an exponentially distributed random variable, representing duration, with each action. This leads to a well defined relationship between the algebra model and a Markov process, which can be generated from the model derivation graph and then analyzed for steady state or transient performance measures.

Systems are modelled in PEPA as interactions of components that can perform sets of actions: an action is described as a pair (α, r), where α is the type and r ∈ ℝ⁺ is the parameter of the exponential distribution governing its duration. Whenever an action is enabled for a process, an instance of the related distribution is sampled: the result specifies how long it will take to complete it. A small set of combinators is used to build up complex behaviour from atomic actions: sequential composition (prefix), choice, synchronization (cooperation) and abstraction (hiding).

The syntax of a PEPA component P is given by:

$$P ::= (a, \lambda).P \;\mid\; P + P \;\mid\; P \bowtie_L P \;\mid\; P/L \;\mid\; A$$

where:

(a, λ).P represents the prefix operator. A component may show sequential behaviour, performing an action a and then changing state to P; in the case of active actions the time taken for execution is represented by an exponentially distributed random variable with parameter λ. In some other cases the rate of an action is independent of the component. Such actions are carried out together with a second component, with the first one assuming a passive role; the rate takes the value ⊤ (top).

P_1 + P_2 is the choice operator: a choice between two possible behaviours is represented as their sum. A race condition determines the behaviour of simultaneously enabled actions, and the continuous nature of the exponential distribution ensures that the actions cannot occur at the same instant; the rates reflect their relative probabilities, following the decomposition principle.

P_1 ⋈_L P_2 denotes cooperation over a given set of actions L; each of these requires the simultaneous involvement of both components. The resulting shared action will have the same type as the two participating actions and a rate reflecting the rate of the action in the slowest component; in the case of a passive action, the shared action assumes the rate of the action it cooperates with. The pure parallel combinator can be thought of as cooperation over the empty set; it represents the situation in which two components behave in a completely independent way.

The hiding operator P/L abstracts over a set of actions L, rendering them private to the component involved. The duration of the actions remains the same, but their type appears as the unknown type τ. The use of this operator has an important effect: the actions in L can no longer be used to cooperate with other components, since synchronization on τ is not allowed; this is useful to prevent components added to the model at a later stage from interfering indiscriminately with its behaviour.

A is a constant label, used to associate identifiers with expressions, allowing recursive definitions to construct components with infinite behaviour.

We present here an example specification of a little system made of a client and a server:

Client    =def (req, λ_1).Client_1
Client_1  =def (comp_1, µ_1).Client_2
Client_2  =def (resp, λ_2).Client

Server    =def (req, λ_1).Server_1
Server_1  =def (comp_2, µ_2).Server_2
Server_2  =def (comp_3, µ_3).Server_3
Server_3  =def (resp, λ_2).Server

Client ⋈_{req, resp} Server

The two components synchronize on a request sent by the client; they independently carry out some internal computations (the action comp_1 for the client, comp_2 and comp_3 for the server); finally they synchronize again on the response transmitted by the server to the client.

The inherent formality of the process algebra approach allows us to assign a precise meaning to every PEPA specification, so that the behaviour can be deduced unambiguously once a description of a system is given; a structured operational semantics is used to associate a labelled transition system with every expression in the language. Mathematically speaking, a labelled transition system (S, T, {→_t | t ∈ T}) consists of a set of states S, a set of transition labels T and a transition relation →_t ⊆ S × S: in this context states represent syntactic terms, labels are (type, rate) pairs, and the relation is given by semantic rules. The transition system can also be expressed in the form of a directed transition diagram, called a derivation graph, which shows all the possible evolutions of the model. The derivation graph for the model shown is the following:

[Figure 2.3: Derivation graph for the client server model, showing the states Client_i ⋈_L Server_j reachable through the actions req, comp_1, comp_2, comp_3 and resp]

The Markov process underlying the PEPA model is obtained from the derivation graph in a straightforward way: each state of the process corresponds to a node in the graph and the transitions between states are defined by considering the rates labelling the arcs. Since all action durations are exponentially distributed, the total transition rate between two states will be the sum of the action rates labelling arcs connecting the corresponding nodes in the derivation graph (see figure 2.4; notice that it is the same diagram we presented while introducing the concept of CTMC).

[Figure 2.4: Markov process corresponding to the derivation graph]

Because of such a close connection with Markov processes, PEPA models inherit some of the assumptions and characteristics already discussed in the previous sections. To ensure the existence and uniqueness of a steady state probability distribution, necessary conditions for ergodicity include strong connectivity of the derivation graph and, in terms of syntactic rules, the need for cooperation to be the highest level combinator (it is not allowed to have a choice between two components which are themselves cooperation expressions). Like Markov processes, PEPA models are subject to the problem of state space explosion: even system descriptions of moderate size can generate a huge state space, making analysis infeasible.

2.3 Stochastic simulation

So far the stochastic models considered have been solved with analytical methods: carrying out an analysis of the system, it has been possible to obtain the steady state or transient behaviour of the corresponding stochastic process, expressed as a probability distribution over the state space. Such models can be viewed as a mathematical abstraction; by contrast, a stochastic simulation model can be regarded as an algorithmic abstraction, in the sense that it gives a representation which, when executed, reproduces the behaviour of the system ([28, 34, 26]).

We assume again that a system is described by a sequence of random variables {X(t), t ∈ T}. Any set of instances of {X(t), t ∈ T}, for increasing values of time, represents the progression of the stochastic process along a path in the state space, travelling from state to state, with the position at time t being X(t); a run of the simulation model generates one such sample path.

The behaviour of a model is characterized by some parameters associated with performance measures (like throughput, utilization or response time), which are themselves regarded as random variables. In general, each simulation run provides a single estimate of them; the longer the run, the more accurate the estimate will be, since a longer run is more likely to be representative of all the aspects of the system behaviour. However, a single observation in the sample space is usually not enough; in order to carry out a reliable study of the system, a sufficiently large number of repeated executions is required, and that can make the simulation model quite expensive to evaluate.

There are nonetheless some reasons why simulation might be preferred to analytic modelling (a sketch of a single simulation run is given after this list):

As discussed while presenting Markov processes, an analytical solution involves constructing the complete state space, and then computing and storing the infinitesimal generator matrix and the transient or steady state distribution vector. For a model with n states, the cost in terms of memory is O(n²), practically unsustainable for large n; on the other hand, a simulation only requires the generation of a sample path on the fly during execution.

Markovian (and consequently PEPA) models rely on mathematical assumptions that may not be satisfied or appropriate for the system under examination: for example the hypothesis that inter-event times are exponentially distributed, or that multiple events do not happen at the same time. The construction of a simulation model allows far greater freedom, permitting a system to be represented at an arbitrary level of detail, even if high complexity usually requires longer and more numerous runs to achieve statistically significant results.
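To make the idea of on-the-fly sample path generation concrete, here is a minimal sketch of one simulation run (illustrative only, not the plugin's Simulator class; it reuses the exponential race mechanics shown earlier). Averaging, over many independent runs, the indicator of "the state at time t lies in the target set" yields an estimate of the transient probability, with confidence intervals computable from the run-to-run variance.

```java
import java.util.Random;

/**
 * Sketch of one simulation run over a CTMC given by its rate matrix
 * q[i][j] (diagonal entries ignored): returns the state occupied at
 * the observation time t. Illustrative only, not the plugin's code.
 */
public class SamplePathRun {
    static final Random RNG = new Random();

    static int stateAtTime(double[][] q, int initial, double t) {
        int state = initial;
        double clock = 0.0;
        while (true) {
            double exitRate = 0.0;
            for (int j = 0; j < q.length; j++)
                if (j != state) exitRate += q[state][j];
            if (exitRate == 0.0) return state;      // absorbing state
            // sojourn time ~ Exp(exitRate), by the superposition property
            clock += -Math.log(1.0 - RNG.nextDouble()) / exitRate;
            if (clock > t) return state;            // t falls inside this sojourn
            // choose the successor with probability q[state][j] / exitRate
            int next = state;
            double u = RNG.nextDouble() * exitRate;
            for (int j = 0; j < q.length; j++) {
                if (j == state || q[state][j] == 0.0) continue;
                next = j;                           // last candidate guards rounding
                u -= q[state][j];
                if (u <= 0) break;
            }
            state = next;
        }
    }
}
```

Notice that only the current state and the clock are kept in memory, regardless of the size of the state space: this is the low-storage behaviour claimed in the abstract, paid for by the many repetitions needed for statistical significance.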

As a final remark, it is important to highlight that the adoption of high level paradigms, such as process algebras, does not exclude the application of simulation techniques: in fact, from a PEPA specification, both a Markov process and a simulation model can be independently generated and used to carry out performance analysis.

Chapter 3

Overview

This chapter provides a general overview of the software implementation, with a description of the organization in packages and classes; it also illustrates the connections with existing PEPA based tools, and the Java software used to develop the simulation and the charting facilities.

3.1 Existing PEPA software

PEPA Eclipse Plugin

The PEPA Eclipse Plugin project, developed at the University of Edinburgh, is a tool for the analysis of PEPA models; as the name suggests, it is built on top of the Eclipse technology and consists of a collection of plugins for the Eclipse IDE. The software features a PEPA editor and instruments for performance evaluation, achieved through steady state analysis via ordinary differential equation methods; it also includes means to display results graphically on the Eclipse platform itself ([36]).

HYDRA and IPC

The HYpergraph based Distributed Response Time Analyzer (HYDRA) is a parallel tool that uses a uniformization based technique to compute transient distributions, and response time densities and quantiles, in Markov chains, with particular application to very large systems; it is a further development of DNAmaca, a tool for the steady state analysis of Markov chains with big state spaces. The Imperial PEPA Compiler (IPC) is an extension of HYDRA which allows performance measures to be derived from PEPA models; it compiles PEPA specifications into the input language of the DNAmaca tool, and these are then passed to HYDRA to be analyzed. Both HYDRA and IPC have been created at Imperial College London ([6, 9, 10, 13, 20]).

3.2 Personal contribution

Extension of the PEPA Eclipse Plugin

The present work is aimed at enriching the existing software by allowing new forms of analysis for PEPA models. In particular it offers the user the possibility to:

Perform transient and passage time analysis via uniformization.

Perform transient and passage time analysis via stochastic simulation.

Display the results obtained by the two methods through the BIRT chart engine, either individually or in a combined fashion for comparison purposes.

Connection with the PEPA Eclipse Plugin

The implemented software connects to the PEPA Eclipse Plugin at two main points of the class PepaTools, which represents the main interface of the plugin itself:

The first is a state space explorer, which at each step scans the surroundings of the current state, returning the enabled transitions; it is an instance of the class StateSpaceBuilder, returned by the method getBuilder of the class PepaTools.

The second is a representation of the underlying Markov chain state space, from which the associated infinitesimal generator matrix can be obtained; it is an instance of the class StateSpace, output of the method derive of PepaTools.

Connection with HYDRA

The PEPA Eclipse Plugin extension realizes uniformization by applying the same algorithm used by the HYDRA tool; a modification to the data structures adopted there has been carried out to improve the global memory usage, and will be illustrated in the next chapter. A comparison between the execution times of the two variants is shown in the fifth chapter.

Other software used

In this section we briefly present the software used during the development of the PEPA Eclipse Plugin extension, without entering into details; these will be supplied, where necessary, while describing the mechanics of the algorithms.

SSJ

Stochastic Simulation in Java (SSJ) is a Java library for stochastic simulation, created at the Département d'Informatique et de Recherche Opérationnelle (DIRO) of the Université de Montréal. It consists of a collection of packages whose aim is to help simulation programming in Java. It offers instruments


More information

Tuning Systems: From Composition to Performance

Tuning Systems: From Composition to Performance Tuning Systems: From Composition to Performance Jane Hillston School of Informatics, University of Edinburgh, Scotland UK Email: Jane.Hillston@ed.ac.uk This paper gives a summary of some of the work of

More information

IEOR 6711: Professor Whitt. Introduction to Markov Chains

IEOR 6711: Professor Whitt. Introduction to Markov Chains IEOR 6711: Professor Whitt Introduction to Markov Chains 1. Markov Mouse: The Closed Maze We start by considering how to model a mouse moving around in a maze. The maze is a closed space containing nine

More information

Chapter 5. Continuous-Time Markov Chains. Prof. Shun-Ren Yang Department of Computer Science, National Tsing Hua University, Taiwan

Chapter 5. Continuous-Time Markov Chains. Prof. Shun-Ren Yang Department of Computer Science, National Tsing Hua University, Taiwan Chapter 5. Continuous-Time Markov Chains Prof. Shun-Ren Yang Department of Computer Science, National Tsing Hua University, Taiwan Continuous-Time Markov Chains Consider a continuous-time stochastic process

More information

Methodology for Computer Science Research Lecture 4: Mathematical Modeling

Methodology for Computer Science Research Lecture 4: Mathematical Modeling Methodology for Computer Science Research Andrey Lukyanenko Department of Computer Science and Engineering Aalto University, School of Science and Technology andrey.lukyanenko@tkk.fi Definitions and Goals

More information

1 IEOR 4701: Continuous-Time Markov Chains

1 IEOR 4701: Continuous-Time Markov Chains Copyright c 2006 by Karl Sigman 1 IEOR 4701: Continuous-Time Markov Chains A Markov chain in discrete time, {X n : n 0}, remains in any state for exactly one unit of time before making a transition (change

More information

Stochastic modelling of epidemic spread

Stochastic modelling of epidemic spread Stochastic modelling of epidemic spread Julien Arino Centre for Research on Inner City Health St Michael s Hospital Toronto On leave from Department of Mathematics University of Manitoba Julien Arino@umanitoba.ca

More information

Analysis of Software Artifacts

Analysis of Software Artifacts Analysis of Software Artifacts System Performance I Shu-Ngai Yeung (with edits by Jeannette Wing) Department of Statistics Carnegie Mellon University Pittsburgh, PA 15213 2001 by Carnegie Mellon University

More information

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and education use, including for instruction at the authors institution

More information

Trace Refinement of π-calculus Processes

Trace Refinement of π-calculus Processes Trace Refinement of pi-calculus Processes Trace Refinement of π-calculus Processes Manuel Gieseking manuel.gieseking@informatik.uni-oldenburg.de) Correct System Design, Carl von Ossietzky University of

More information

Markov Chains and Stochastic Sampling

Markov Chains and Stochastic Sampling Part I Markov Chains and Stochastic Sampling 1 Markov Chains and Random Walks on Graphs 1.1 Structure of Finite Markov Chains We shall only consider Markov chains with a finite, but usually very large,

More information

Lecture 20: Reversible Processes and Queues

Lecture 20: Reversible Processes and Queues Lecture 20: Reversible Processes and Queues 1 Examples of reversible processes 11 Birth-death processes We define two non-negative sequences birth and death rates denoted by {λ n : n N 0 } and {µ n : n

More information

Modelling Complex Queuing Situations with Markov Processes

Modelling Complex Queuing Situations with Markov Processes Modelling Complex Queuing Situations with Markov Processes Jason Randal Thorne, School of IT, Charles Sturt Uni, NSW 2795, Australia Abstract This article comments upon some new developments in the field

More information

MULTIPLE CHOICE QUESTIONS DECISION SCIENCE

MULTIPLE CHOICE QUESTIONS DECISION SCIENCE MULTIPLE CHOICE QUESTIONS DECISION SCIENCE 1. Decision Science approach is a. Multi-disciplinary b. Scientific c. Intuitive 2. For analyzing a problem, decision-makers should study a. Its qualitative aspects

More information

Discrete time Markov chains. Discrete Time Markov Chains, Limiting. Limiting Distribution and Classification. Regular Transition Probability Matrices

Discrete time Markov chains. Discrete Time Markov Chains, Limiting. Limiting Distribution and Classification. Regular Transition Probability Matrices Discrete time Markov chains Discrete Time Markov Chains, Limiting Distribution and Classification DTU Informatics 02407 Stochastic Processes 3, September 9 207 Today: Discrete time Markov chains - invariant

More information

Queueing systems. Renato Lo Cigno. Simulation and Performance Evaluation Queueing systems - Renato Lo Cigno 1

Queueing systems. Renato Lo Cigno. Simulation and Performance Evaluation Queueing systems - Renato Lo Cigno 1 Queueing systems Renato Lo Cigno Simulation and Performance Evaluation 2014-15 Queueing systems - Renato Lo Cigno 1 Queues A Birth-Death process is well modeled by a queue Indeed queues can be used to

More information

Chapter 16 focused on decision making in the face of uncertainty about one future

Chapter 16 focused on decision making in the face of uncertainty about one future 9 C H A P T E R Markov Chains Chapter 6 focused on decision making in the face of uncertainty about one future event (learning the true state of nature). However, some decisions need to take into account

More information

Stochastic processes. MAS275 Probability Modelling. Introduction and Markov chains. Continuous time. Markov property

Stochastic processes. MAS275 Probability Modelling. Introduction and Markov chains. Continuous time. Markov property Chapter 1: and Markov chains Stochastic processes We study stochastic processes, which are families of random variables describing the evolution of a quantity with time. In some situations, we can treat

More information

Session-Based Queueing Systems

Session-Based Queueing Systems Session-Based Queueing Systems Modelling, Simulation, and Approximation Jeroen Horters Supervisor VU: Sandjai Bhulai Executive Summary Companies often offer services that require multiple steps on the

More information

Networks of Queues Models with Several. Classes of Customers and Exponential. Service Times

Networks of Queues Models with Several. Classes of Customers and Exponential. Service Times Applied Mathematical Sciences, Vol. 9, 2015, no. 76, 3789-3796 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/10.12988/ams.2015.53287 Networks of Queues Models with Several Classes of Customers and Exponential

More information

Markov chains. 1 Discrete time Markov chains. c A. J. Ganesh, University of Bristol, 2015

Markov chains. 1 Discrete time Markov chains. c A. J. Ganesh, University of Bristol, 2015 Markov chains c A. J. Ganesh, University of Bristol, 2015 1 Discrete time Markov chains Example: A drunkard is walking home from the pub. There are n lampposts between the pub and his home, at each of

More information

ring structure Abstract Optical Grid networks allow many computing sites to share their resources by connecting

ring structure Abstract Optical Grid networks allow many computing sites to share their resources by connecting Markovian approximations for a grid computing network with a ring structure J. F. Pérez and B. Van Houdt Performance Analysis of Telecommunication Systems Research Group, Department of Mathematics and

More information

Agreement algorithms for synchronization of clocks in nodes of stochastic networks

Agreement algorithms for synchronization of clocks in nodes of stochastic networks UDC 519.248: 62 192 Agreement algorithms for synchronization of clocks in nodes of stochastic networks L. Manita, A. Manita National Research University Higher School of Economics, Moscow Institute of

More information

Introduction to Queuing Networks Solutions to Problem Sheet 3

Introduction to Queuing Networks Solutions to Problem Sheet 3 Introduction to Queuing Networks Solutions to Problem Sheet 3 1. (a) The state space is the whole numbers {, 1, 2,...}. The transition rates are q i,i+1 λ for all i and q i, for all i 1 since, when a bus

More information

CS 798: Homework Assignment 3 (Queueing Theory)

CS 798: Homework Assignment 3 (Queueing Theory) 1.0 Little s law Assigned: October 6, 009 Patients arriving to the emergency room at the Grand River Hospital have a mean waiting time of three hours. It has been found that, averaged over the period of

More information

The Markov Decision Process (MDP) model

The Markov Decision Process (MDP) model Decision Making in Robots and Autonomous Agents The Markov Decision Process (MDP) model Subramanian Ramamoorthy School of Informatics 25 January, 2013 In the MAB Model We were in a single casino and the

More information

CASPA - A Tool for Symbolic Performance Evaluation and Stochastic Model Checking

CASPA - A Tool for Symbolic Performance Evaluation and Stochastic Model Checking CASPA - A Tool for Symbolic Performance Evaluation and Stochastic Model Checking Boudewijn R. Haverkort 1, Matthias Kuntz 1, Martin Riedl 2, Johann Schuster 2, Markus Siegle 2 1 : Universiteit Twente 2

More information

Statistics 150: Spring 2007

Statistics 150: Spring 2007 Statistics 150: Spring 2007 April 23, 2008 0-1 1 Limiting Probabilities If the discrete-time Markov chain with transition probabilities p ij is irreducible and positive recurrent; then the limiting probabilities

More information

Stochastic2010 Page 1

Stochastic2010 Page 1 Stochastic2010 Page 1 Extinction Probability for Branching Processes Friday, April 02, 2010 2:03 PM Long-time properties for branching processes Clearly state 0 is an absorbing state, forming its own recurrent

More information

Chapter 1. Introduction. 1.1 Stochastic process

Chapter 1. Introduction. 1.1 Stochastic process Chapter 1 Introduction Process is a phenomenon that takes place in time. In many practical situations, the result of a process at any time may not be certain. Such a process is called a stochastic process.

More information

Stochastic Simulation.

Stochastic Simulation. Stochastic Simulation. (and Gillespie s algorithm) Alberto Policriti Dipartimento di Matematica e Informatica Istituto di Genomica Applicata A. Policriti Stochastic Simulation 1/20 Quote of the day D.T.

More information

stochnotes Page 1

stochnotes Page 1 stochnotes110308 Page 1 Kolmogorov forward and backward equations and Poisson process Monday, November 03, 2008 11:58 AM How can we apply the Kolmogorov equations to calculate various statistics of interest?

More information

Stochastic Petri Nets. Jonatan Lindén. Modelling SPN GSPN. Performance measures. Almost none of the theory. December 8, 2010

Stochastic Petri Nets. Jonatan Lindén. Modelling SPN GSPN. Performance measures. Almost none of the theory. December 8, 2010 Stochastic Almost none of the theory December 8, 2010 Outline 1 2 Introduction A Petri net (PN) is something like a generalized automata. A Stochastic Petri Net () a stochastic extension to Petri nets,

More information

Chapter 2 SOME ANALYTICAL TOOLS USED IN THE THESIS

Chapter 2 SOME ANALYTICAL TOOLS USED IN THE THESIS Chapter 2 SOME ANALYTICAL TOOLS USED IN THE THESIS 63 2.1 Introduction In this chapter we describe the analytical tools used in this thesis. They are Markov Decision Processes(MDP), Markov Renewal process

More information

SPA for quantitative analysis: Lecture 6 Modelling Biological Processes

SPA for quantitative analysis: Lecture 6 Modelling Biological Processes 1/ 223 SPA for quantitative analysis: Lecture 6 Modelling Biological Processes Jane Hillston LFCS, School of Informatics The University of Edinburgh Scotland 7th March 2013 Outline 2/ 223 1 Introduction

More information

Lecture 11: Introduction to Markov Chains. Copyright G. Caire (Sample Lectures) 321

Lecture 11: Introduction to Markov Chains. Copyright G. Caire (Sample Lectures) 321 Lecture 11: Introduction to Markov Chains Copyright G. Caire (Sample Lectures) 321 Discrete-time random processes A sequence of RVs indexed by a variable n 2 {0, 1, 2,...} forms a discretetime random process

More information

Regenerative Processes. Maria Vlasiou. June 25, 2018

Regenerative Processes. Maria Vlasiou. June 25, 2018 Regenerative Processes Maria Vlasiou June 25, 218 arxiv:144.563v1 [math.pr] 22 Apr 214 Abstract We review the theory of regenerative processes, which are processes that can be intuitively seen as comprising

More information

STAT 380 Continuous Time Markov Chains

STAT 380 Continuous Time Markov Chains STAT 380 Continuous Time Markov Chains Richard Lockhart Simon Fraser University Spring 2018 Richard Lockhart (Simon Fraser University)STAT 380 Continuous Time Markov Chains Spring 2018 1 / 35 Continuous

More information

Continuous Time Processes

Continuous Time Processes page 102 Chapter 7 Continuous Time Processes 7.1 Introduction In a continuous time stochastic process (with discrete state space), a change of state can occur at any time instant. The associated point

More information

Real Time Operating Systems

Real Time Operating Systems Real Time Operating ystems Luca Abeni luca.abeni@unitn.it Interacting Tasks Until now, only independent tasks... A job never blocks or suspends A task only blocks on job termination In real world, jobs

More information

Introduction to Markov Chains, Queuing Theory, and Network Performance

Introduction to Markov Chains, Queuing Theory, and Network Performance Introduction to Markov Chains, Queuing Theory, and Network Performance Marceau Coupechoux Telecom ParisTech, departement Informatique et Réseaux marceau.coupechoux@telecom-paristech.fr IT.2403 Modélisation

More information

Stochastic modelling of epidemic spread

Stochastic modelling of epidemic spread Stochastic modelling of epidemic spread Julien Arino Department of Mathematics University of Manitoba Winnipeg Julien Arino@umanitoba.ca 19 May 2012 1 Introduction 2 Stochastic processes 3 The SIS model

More information

Probabilistic Model Checking Michaelmas Term Dr. Dave Parker. Department of Computer Science University of Oxford

Probabilistic Model Checking Michaelmas Term Dr. Dave Parker. Department of Computer Science University of Oxford Probabilistic Model Checking Michaelmas Term 20 Dr. Dave Parker Department of Computer Science University of Oxford Next few lectures Today: Discrete-time Markov chains (continued) Mon 2pm: Probabilistic

More information

Probabilistic Model Checking Michaelmas Term Dr. Dave Parker. Department of Computer Science University of Oxford

Probabilistic Model Checking Michaelmas Term Dr. Dave Parker. Department of Computer Science University of Oxford Probabilistic Model Checking Michaelmas Term 2011 Dr. Dave Parker Department of Computer Science University of Oxford Overview CSL model checking basic algorithm untimed properties time-bounded until the

More information

Chapter 3 Balance equations, birth-death processes, continuous Markov Chains

Chapter 3 Balance equations, birth-death processes, continuous Markov Chains Chapter 3 Balance equations, birth-death processes, continuous Markov Chains Ioannis Glaropoulos November 4, 2012 1 Exercise 3.2 Consider a birth-death process with 3 states, where the transition rate

More information

Spatial and stochastic equivalence relations for PALOMA. Paul Piho

Spatial and stochastic equivalence relations for PALOMA. Paul Piho Spatial and stochastic equivalence relations for PALOMA Paul Piho Master of Science by Research CDT in Pervasive Parallelism School of Informatics University of Edinburgh 2016 Abstract This dissertation

More information

EP2200 Course Project 2017 Project II - Mobile Computation Offloading

EP2200 Course Project 2017 Project II - Mobile Computation Offloading EP2200 Course Project 2017 Project II - Mobile Computation Offloading 1 Introduction Queuing theory provides us a very useful mathematic tool that can be used to analytically evaluate the performance of

More information

Process-Algebraic Modelling of Priority Queueing Networks

Process-Algebraic Modelling of Priority Queueing Networks Process-Algebraic Modelling of Priority Queueing Networks Giuliano Casale Department of Computing Imperial College ondon, U.K. gcasale@doc.ic.ac.uk Mirco Tribastone Institut für Informatik udwig-maximilians-universität

More information

IN THIS paper we investigate the diagnosability of stochastic

IN THIS paper we investigate the diagnosability of stochastic 476 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL 50, NO 4, APRIL 2005 Diagnosability of Stochastic Discrete-Event Systems David Thorsley and Demosthenis Teneketzis, Fellow, IEEE Abstract We investigate

More information

Modelling in Systems Biology

Modelling in Systems Biology Modelling in Systems Biology Maria Grazia Vigliotti thanks to my students Anton Stefanek, Ahmed Guecioueur Imperial College Formal representation of chemical reactions precise qualitative and quantitative

More information

Markov Reliability and Availability Analysis. Markov Processes

Markov Reliability and Availability Analysis. Markov Processes Markov Reliability and Availability Analysis Firma convenzione Politecnico Part II: Continuous di Milano e Time Veneranda Discrete Fabbrica State del Duomo di Milano Markov Processes Aula Magna Rettorato

More information

Lecture 20 : Markov Chains

Lecture 20 : Markov Chains CSCI 3560 Probability and Computing Instructor: Bogdan Chlebus Lecture 0 : Markov Chains We consider stochastic processes. A process represents a system that evolves through incremental changes called

More information

Contents Preface The Exponential Distribution and the Poisson Process Introduction to Renewal Theory

Contents Preface The Exponential Distribution and the Poisson Process Introduction to Renewal Theory Contents Preface... v 1 The Exponential Distribution and the Poisson Process... 1 1.1 Introduction... 1 1.2 The Density, the Distribution, the Tail, and the Hazard Functions... 2 1.2.1 The Hazard Function

More information

Cover Page. The handle holds various files of this Leiden University dissertation

Cover Page. The handle   holds various files of this Leiden University dissertation Cover Page The handle http://hdlhandlenet/1887/39637 holds various files of this Leiden University dissertation Author: Smit, Laurens Title: Steady-state analysis of large scale systems : the successive

More information

14 Random Variables and Simulation

14 Random Variables and Simulation 14 Random Variables and Simulation In this lecture note we consider the relationship between random variables and simulation models. Random variables play two important roles in simulation models. We assume

More information

Queueing Theory I Summary! Little s Law! Queueing System Notation! Stationary Analysis of Elementary Queueing Systems " M/M/1 " M/M/m " M/M/1/K "

Queueing Theory I Summary! Little s Law! Queueing System Notation! Stationary Analysis of Elementary Queueing Systems  M/M/1  M/M/m  M/M/1/K Queueing Theory I Summary Little s Law Queueing System Notation Stationary Analysis of Elementary Queueing Systems " M/M/1 " M/M/m " M/M/1/K " Little s Law a(t): the process that counts the number of arrivals

More information

Flow Equivalence and Stochastic Equivalence in G-Networks

Flow Equivalence and Stochastic Equivalence in G-Networks Flow Equivalence and Stochastic Equivalence in G-Networks Jean-Michel Fourneau Laboratoire PRISM Univ. de Versailles Saint-Quentin 45 Avenue des Etats-Unis 78000 Versailles, France jmf@prism.uvsq.fr Erol

More information

Definition and Examples of DTMCs

Definition and Examples of DTMCs Definition and Examples of DTMCs Natarajan Gautam Department of Industrial and Systems Engineering Texas A&M University 235A Zachry, College Station, TX 77843-3131 Email: gautam@tamuedu Phone: 979-845-5458

More information

Queueing Networks G. Rubino INRIA / IRISA, Rennes, France

Queueing Networks G. Rubino INRIA / IRISA, Rennes, France Queueing Networks G. Rubino INRIA / IRISA, Rennes, France February 2006 Index 1. Open nets: Basic Jackson result 2 2. Open nets: Internet performance evaluation 18 3. Closed nets: Basic Gordon-Newell result

More information

LECTURE #6 BIRTH-DEATH PROCESS

LECTURE #6 BIRTH-DEATH PROCESS LECTURE #6 BIRTH-DEATH PROCESS 204528 Queueing Theory and Applications in Networks Assoc. Prof., Ph.D. (รศ.ดร. อน นต ผลเพ ม) Computer Engineering Department, Kasetsart University Outline 2 Birth-Death

More information

INTRODUCTION TO MARKOV CHAIN MONTE CARLO

INTRODUCTION TO MARKOV CHAIN MONTE CARLO INTRODUCTION TO MARKOV CHAIN MONTE CARLO 1. Introduction: MCMC In its simplest incarnation, the Monte Carlo method is nothing more than a computerbased exploitation of the Law of Large Numbers to estimate

More information

Figure 10.1: Recording when the event E occurs

Figure 10.1: Recording when the event E occurs 10 Poisson Processes Let T R be an interval. A family of random variables {X(t) ; t T} is called a continuous time stochastic process. We often consider T = [0, 1] and T = [0, ). As X(t) is a random variable

More information

LIMITS FOR QUEUES AS THE WAITING ROOM GROWS. Bell Communications Research AT&T Bell Laboratories Red Bank, NJ Murray Hill, NJ 07974

LIMITS FOR QUEUES AS THE WAITING ROOM GROWS. Bell Communications Research AT&T Bell Laboratories Red Bank, NJ Murray Hill, NJ 07974 LIMITS FOR QUEUES AS THE WAITING ROOM GROWS by Daniel P. Heyman Ward Whitt Bell Communications Research AT&T Bell Laboratories Red Bank, NJ 07701 Murray Hill, NJ 07974 May 11, 1988 ABSTRACT We study the

More information

AC&ST AUTOMATIC CONTROL AND SYSTEM THEORY SYSTEMS AND MODELS. Claudio Melchiorri

AC&ST AUTOMATIC CONTROL AND SYSTEM THEORY SYSTEMS AND MODELS. Claudio Melchiorri C. Melchiorri (DEI) Automatic Control & System Theory 1 AUTOMATIC CONTROL AND SYSTEM THEORY SYSTEMS AND MODELS Claudio Melchiorri Dipartimento di Ingegneria dell Energia Elettrica e dell Informazione (DEI)

More information

Name of the Student:

Name of the Student: SUBJECT NAME : Probability & Queueing Theory SUBJECT CODE : MA 6453 MATERIAL NAME : Part A questions REGULATION : R2013 UPDATED ON : November 2017 (Upto N/D 2017 QP) (Scan the above QR code for the direct

More information

Queueing. Chapter Continuous Time Markov Chains 2 CHAPTER 5. QUEUEING

Queueing. Chapter Continuous Time Markov Chains 2 CHAPTER 5. QUEUEING 2 CHAPTER 5. QUEUEING Chapter 5 Queueing Systems are often modeled by automata, and discrete events are transitions from one state to another. In this chapter we want to analyze such discrete events systems.

More information

Stochastic Petri Net

Stochastic Petri Net Stochastic Petri Net Serge Haddad LSV ENS Cachan, Université Paris-Saclay & CNRS & INRIA haddad@lsv.ens-cachan.fr Petri Nets 2016, June 20th 2016 1 Stochastic Petri Net 2 Markov Chain 3 Markovian Stochastic

More information