
Università degli Studi di Roma "Tor Vergata"

Modeling and Control of Discrete Event Dynamic Systems
(Modellazione e Controllo di Sistemi Dinamici a Eventi Discreti)

Francesco Martinelli

Thesis submitted for the degree of Dottore di Ricerca in "Informatica ed Ingegneria dell'automazione", X cycle of the doctoral program ( )

Docente guida (advisor): Prof. Salvatore Nicosia
Coordinatore (coordinator): Prof. Giuseppe Iazeolla

Rome, February 1998

Acknowledgements

I wish to thank in a special way Prof. Salvatore Nicosia and Dr. Paolo Valigi, whose help and advice have been crucial for this work. Special thanks also go to Prof. Michael Caramanis, Prof. Christos Cassandras and Prof. James Perkins from Boston University (Boston, USA), who gave me the opportunity to join an international and very active research group.

Contents

1 Introduction
2 Discrete Event Dynamic Systems: modelling and control
  2.1 Discrete Event Dynamic System models
    2.1.1 State automata model
    2.1.2 Petri Net model
    2.1.3 Dioid Algebra
    2.1.4 Continuous flow model
  2.2 Control problem formulation and related issues
3 The state reconstruction algorithm
  3.1 Review of some Perturbation Analysis Techniques
  3.2 The state reconstruction algorithm
    3.2.1 Notation and queueing network dynamics
    3.2.2 The state reconstruction problem
    3.2.3 A concurrent implementation
    3.2.4 Estimate analysis
4 Control of Discrete Event Systems based on the path reconstruction approach
  4.1 Review of some control approaches
  4.2 Stochastic Comparison Algorithm for Non-Stationary Discrete Event Systems
    4.2.1 Optimization Problem Formulation
    4.2.2 The stochastic comparison algorithm for non-stationary DEDS
    4.2.3 Analytical results for the deterministic case
    4.2.4 The general case
  4.3 A modified version of the Stochastic Comparison Algorithm for Non-Stationary Discrete Event Systems
  4.4 Some simulation results
  4.5 Some implementation problems
5 Conclusions

Symbols and abbreviations

ASA          Augmented System Analysis
C-SRA        Concurrent SRA
CVDS         Continuous-Variable Dynamic Systems
DEDS         Discrete Event Dynamic Systems
FPA          Finite PA
GA           Genetic Algorithm
GSMP         Generalized Semi-Markov Process
IPA          Infinitesimal PA
JIT          Just in Time
LR           Likelihood Ratio
PA           Perturbation Analysis
SA           Simulated Annealing
SC           Standard Clock
SCA          Stochastic Comparison Algorithm
SCANS        SCA for Non-Stationary DEDS
SCANS'       SCANS improved
SPA          Smoothed PA
SRA          State Reconstruction Algorithm
TWA          Time Warping Algorithm
θ            A particular resource allocation or parameter vector value
θ*           An optimal allocation
Θ            The search space, i.e. the space of all possible θ
Θ*           The set of optimal allocations
J(θ, t)      Performance function (or index) at time t for allocation θ
L(θ, t, Δt)  Estimate of J(θ, t) obtained observing the system for Δt
opt          Optimize
min          Minimize

Chapter 1
Introduction

System and control theory deals with the problem of modeling and controlling systems. Natural systems, such as, for example, mechanical or thermodynamical systems, are characterized by continuous variables: the state of such systems is a vector whose entries are real numbers. Their evolution is described by means of differential equations and is time-driven, that is, the current value of the state directly depends on the time, which can be thought of as an independent variable. Once the equations of a model for this kind of systems have been derived, it is possible in many cases, based on these equations, to find an analytical expression for a control law aimed at obtaining some desired results. Very similar to these systems, there is a class of systems which are still characterized by a real state vector, but whose evolution, although time-driven, is described by means of difference equations: in this case the state of the system changes only at discrete time instants. Many results available for continuous-time systems can be adapted to fit this case. All of these systems, in discrete or continuous time, are time-driven and their state can be given as a vector with real entries. They will be denoted by the term Continuous-Variable Dynamic Systems (CVDS) ([1]). Many man-made systems cannot be successfully described by the models used for CVDS. This is the set of artificial systems usually characterized by some entities which can provide a particular desired service (and will be referred to as servers) and some other entities which compete with one another and wait for this service (and will
be called customers). This situation is really general: even if in this thesis the attention will be devoted to manufacturing systems ([2]), the same concepts apply to communication and computer networks, or to many other general queueing systems. If manufacturing systems are considered, a customer is typically a part which waits in a queue for an operation, while a server is a machine which performs a given set of operations on parts. In the communication system case the customer can be a piece of information which requires a particular channel or another device, which must be considered as a server. There is a common feature in all these man-made systems: the state takes values on a discrete set. As an example, in the manufacturing case, the state typically comprises the number of customers in the queues and the state of the machines (which, again, is a discrete if not Boolean variable). Moreover the evolution is event-driven rather than time-driven: the value of the state changes when something, that is an event, happens. The time an event is observed is usually a random variable, and the state of the system as a function of time is a stochastic process. The discrete nature of the state and the event-dependence make this kind of systems very different from CVDS. The term Discrete Event Dynamic Systems (DEDS) captures these major characteristics. Classical models like differential or difference equations are not useful to model DEDS. A mathematical model is a set of equations which can be used to reproduce the behavior of a system. In Chapter 2 a review of some mathematical models introduced to describe DEDS will be presented. The main difference between these models and the ones used for CVDS is that they can rarely be used directly to derive analytical expressions for metrics defined on the DEDS or for the control law which solves a particular control problem, as is often done for CVDS. A different procedure is required for DEDS. A DEDS is usually a man-made system which has been realized for some specific purpose. The purpose of a DEDS and, in particular, of a manufacturing system, is that of providing some service while achieving some objectives ([3, 2]): at a high level these objectives can be those of making the most money, or guaranteeing high quality products, or staying in
business as long as possible. At a lower level, i.e. to achieve the previous results, the objectives will be some performance measures like the number of parts released by the system in the time unit (throughput), the time each customer has to wait for a service, the time a machine is in service, the cost and the gain corresponding to the different operations, the percentage of imperfect products, and so on. Therefore, it appears natural to formulate a control problem for a manufacturing system as the optimization of a performance function defined to capture some interesting feature of the system. This performance function depends on the allocation of some resources and/or on the definition of some policies or protocols: the dynamic choice of an allocation and/or the definition of a policy in order to optimize the considered performance function is then the objective of a control scheme. From a more general point of view, the performance function can be seen as dependent on a vector of parameters, where each possible value for this vector can represent a particular resource allocation, a particular policy or a mix of them. In this perspective the objective of a control scheme is the dynamic choice of the value of the parameter vector to optimize a performance function. This is the problem addressed in this thesis, and it will be formally stated in Chapter 2, together with the main difficulties which arise in solving it. In many cases a closed expression of the performance index as a function of the parameter vector is not directly available through mathematical models of the system, and many approaches, like Perturbation Analysis ([4, 5, 6]), Rapid Learning ([7]), Sample Path Analysis ([8]) and State Reconstruction ([9, 10]), are aimed at computing an estimate of the performance function for a particular parameter setting. The estimate is computed using data recorded during the observation of the system, which can evolve with a value of the parameter vector different from the one for which the estimate is sought. This will be the main subject of Chapter 3, which contains one of the contributions of this thesis, the State Reconstruction Algorithm. This algorithm presents many advantages with respect to other well known techniques: it can be applied to any DEDS considering every possible value for the parameter vector; it provides an exact estimate of the performance function and
does not require any knowledge of future events of the DEDS, unlike many classical Perturbation Analysis techniques. Chapter 4 deals with the control problem: how to use this kind of estimation approach to design control algorithms. The basic idea is quite simple: once an estimate of the performance function for many and possibly all the values of the parameter vector is available, the objective is that of selecting the value of the parameter vector corresponding to the (supposedly) best performance. Such a conceptually simple problem presents many complications. Among them: (i) the set of all possible values of the parameter vector, called the search space, can be very large, and an estimate of the performance function for all possible values of the parameter vector could become computationally unfeasible; (ii) the performance function is known only through estimates; (iii) the system can be time-varying, which means a good choice for the parameter vector can become quite bad in the future. All of these aspects require a careful study for designing an acceptable control scheme. A comparison between different proposals is offered in Chapter 4. Among them, a novel algorithm, the Stochastic Comparison Algorithm for Non-Stationary DEDS (SCANS) ([11, 12, 13]), and a modified version of it are presented in Chapter 4 and constitute the most significant contribution of this thesis. SCANS has been derived by modifying one of the more recent stochastic optimization approaches in this research area, the Stochastic Comparison Algorithm (SCA) proposed by Gong et al. ([14, 15]) and also described in Chapter 4. The change has been performed trying to maintain the interesting characteristics of the original algorithm but with the objective of controlling non-stationary DEDS. Analytical results as well as simulation experiments assess the effectiveness of the proposed scheme, which can be applied to any DEDS and benefits from the advantages of order statistics with respect to cardinal estimates ([16, 17, 18]). The thesis ends with a concluding chapter, Chapter 5, which summarizes the results presented and reports some indications for future work.

Chapter 2
Discrete Event Dynamic Systems: modelling and control

This chapter focuses on two subjects: how to model a Discrete Event Dynamic System and how a control problem can be formulated. In particular, the control problem considered in this thesis will be formally stated. The first section is devoted to the description of the major models which have been derived for studying DEDS [1, 19]. It must be remarked that a mathematical model of a system (a general system, also a Continuous-Variable Dynamic System) is a tool which allows one to analyze and identify the behavior of the system. In this sense a model is not the real system, but can be seen as another system, completely known and simple enough, which can be used to reproduce the behavior of the considered system with sufficient precision in terms of the objectives for which the model has been created. A model is created to analyze and identify a system, but the identification process is usually nothing but the first step in the solution of a control problem. As a matter of fact, for CVDS, the models can often be directly used to analytically compute a control law which solves the considered control problem. As will be more apparent in the following, most of the models derived to describe a Discrete Event Dynamic System are very complex and much more suitable for simulation purposes than for analytical studies. This means that these models constitute a good tool to reproduce the behavior of a DEDS and its evolution, but they are not very useful to derive an analytical expression of the state
as a function of the time. This is not only due to the stochastic nature of the events characterizing a DEDS, but depends also on the particular structure of this kind of systems. The particular structure of a DEDS also reflects on the way a control problem for this kind of systems can be formulated. Section 2.2 deals with this problem. A control problem, as said in the Introduction, can often be transformed into the dynamic optimization of a performance function defined on the system. In a manufacturing system, this objective function can depend on the particular policy adopted to process arriving parts or on the allocation of some resources. These can be considered as two different problems but also as two steps in the design of a manufacturing system: a long term control is applied to allocate resources, then a policy must be defined given the allocation. But resource allocation can also be seen as a short term control problem, especially when the resource allocation can be accomplished very quickly. In the Introduction, a parameter vector which can comprise both the resource allocation and the type of policy implemented has been mentioned, and the problem is the dynamic choice of the value of the parameter vector to optimize a performance function defined on the manufacturing system. The discussion of this problem is reported in Section 2.2.

2.1 Discrete Event Dynamic System models

The Discrete Event Dynamic Systems considered in this thesis are manufacturing systems, where some parts wait in queues to be serviced by some machines. The state of such systems comprises the number of parts waiting in all the queues contained in the system as well as the state of the machines, which can be idle, blocked, down or servicing a part. The state changes when an event happens. For manufacturing systems an event can be a service completion, a part arrival, a machine failure or repair, and so on. A sequence of events drives the state of the system from one value to another. If each event is associated with a symbol of a language, a sequence of events can be seen as a word in this language ([20, 21]).

So, it is natural to try to describe a DEDS by means of a state automaton, because a state automaton is a machine characterized by a set of states in which the transitions between states are driven by the words of the language accepted by the automaton. The State Automata formalism is the basic model considered in this thesis. Other models can be used and will be briefly sketched in this Section. These models are different algebras to describe DEDS ([22]) and a common feature is that they are well defined both in an untimed and in a timed scenario, as will be more apparent in the following.

2.1.1 State automata model

In the DEDS framework, a state automaton can be formally given as a five-tuple (E, X, Γ(x), f, x_0), where E is the set comprising all the events defined on the DEDS, X is the discrete set of all the states the DEDS can assume, Γ(x) ⊆ E, defined for all x ∈ X, is the set of all the events which can happen when the state of the DEDS is x, and f is a transition function which maps a state x ∈ X and an event e ∈ E into a new state x' ∈ X. It means that if the DEDS is in x and e happens, the new state will be x'. Finally x_0 simply denotes the initial state of the DEDS. In this description the time can be omitted: once a sequence of events is available, it is possible to feed the DEDS with this sequence and find, by the iterative application of the transition function f, the state sequence of the DEDS corresponding to this event sequence. It is possible to give a graphic representation of the state automaton modeling the DEDS. In this representation every possible state of the DEDS x ∈ X is associated with a node and, for any pair of nodes associated with states x and x', there is an edge from x to x' labelled by e ∈ E if f(x, e) = x'. The structure just described is an untimed model of the DEDS. To consider time in such a description it is enough to associate each event with the time it happens. Consider an event e ∈ E and associate with this event a clock sequence t_e := {t_{e,1}, t_{e,2}, ...} comprising all the lifetimes of event e. This means that when the DEDS leaves state x and enters state x' where event e is enabled (i.e. e ∈ Γ(x') but e ∉ Γ(x)) for the n-th time, if it is not disabled it will happen after a time given by t_{e,n}. It is disabled
if the state changes (due to another event which happens before) and event e is no longer feasible in the new state. In this case its lifetime t_{e,n} is lost. When the clock sequences are not deterministic but are associated with some stochastic distribution (that is, lifetimes are not deterministic numbers but random variables), and the state transition does not depend on a deterministic function but on some state transition probability p(x'; x, e), the state automaton models a Generalized Semi-Markov Process (GSMP), which is a very general class of stochastic DEDS [23]. The Markovian aspect is the fact that the system behaves like a Markov Chain at state transitions. It is not a Markov process, however, because the times between state transitions may obey general stochastic probability laws and the evolution is characterized by the clock mechanism sketched above. The GSMP formalism will be used extensively in this thesis, being the model used to analyze the manufacturing systems considered in the following. When all the lifetimes are exponentially distributed the DEDS is a Markov Chain. In this case analytical results can be derived to determine closed form expressions for many metrics defined on the system, like the average time a machine is down and so on ([3]). To conclude this description an example will be presented. Consider the system depicted in Figure 2.1, comprising a failure-prone machine which can go down, according to some stochastic process, only while servicing a part. If this happens, the part is lost and the interrupted service resumes with a new part when the machine is repaired and a part is available. This machine provides service to parts arriving, again, according to some random process. The parts wait in a queue comprising only two buffer slots. When the queue is full arriving parts are dropped, while the machine cannot work if the queue is empty.

Figure 2.1: The simple queueing system considered in the example.

The stochastic processes characterizing
failures, repairs, part arrivals and service durations are not of interest here (they characterize the clock sequences of the corresponding events). The untimed state automaton corresponding to this system is depicted in Figure 2.2. In this automaton E = {a, c, f, r}, where a = part arrival, c = service completion, f = machine failure and r = machine repair. The state (n, s) comprises the number of parts in the system (queue and machine) (n = 0, 1, 2, 3) and the state of the machine (s = u, d, where u = up and d = down). The machine can be considered in service if it is not down and n > 0: in other words, no blocking mechanism is possible. So the state s = u comprises both the state "in service" if n > 0 and "idle" if n = 0.

Figure 2.2: The state automaton corresponding to the considered queueing system.
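To make the formalism concrete, the following minimal sketch encodes the untimed automaton of Figure 2.2 in Python; the function names are illustrative, and the capacity bookkeeping (at most three parts with the machine up, at most two with it down, the part in service being lost on a failure) is read off the figure and the description above.

    # Untimed state automaton (E, X, Gamma, f, x0) for the example queue:
    # states (n, s) with n = parts in the system (0..3), s = "u" (up) or "d" (down).
    E = {"a", "c", "f", "r"}        # part arrival, completion, failure, repair

    def gamma(state):
        """Gamma(x): events feasible in state x."""
        n, s = state
        feasible = set()
        if s == "u":
            if n < 3:
                feasible.add("a")   # up: one part in service plus two buffer slots
            if n > 0:
                feasible.update({"c", "f"})
        else:                       # s == "d": machine down
            if n < 2:
                feasible.add("a")   # down: only the two buffer slots are available
            feasible.add("r")
        return feasible

    def f(state, event):
        """Transition function f(x, e) -> x'."""
        n, s = state
        if event == "a":
            return (n + 1, s)
        if event == "c":
            return (n - 1, s)
        if event == "f":
            return (n - 1, "d")     # the part in service is lost
        if event == "r":
            return (n, "u")
        raise ValueError("unknown event")

    # Iterating f over an event sequence yields the corresponding state sequence.
    x = (0, "u")                    # x0
    for e in ["a", "a", "f", "r", "c"]:
        assert e in gamma(x), "event not feasible in current state"
        x = f(x, e)
    print(x)                        # state reached by this event word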

2.1.2 Petri Net model

Petri Nets ([24, 25, 26, 27]) are a very general tool for DEDS modeling. Indeed, it turns out that any DEDS modeled by means of state automata (which are a really general tool to model DEDS) can always be represented using a Petri Net ([1]). But the use of this kind of models is not simply an equivalent way to describe a DEDS. Rather, it can be used as a complementary tool. As a matter of fact, in many cases a Petri Net model turns out to be more natural for a particular DEDS than a State Automaton, and vice versa. It is important to figure out which model fits the particular application. In Petri Nets, events are associated with transitions. A transition is enabled (that is, an event can happen) if some conditions are met: in the Petri Net formalism a transition is enabled if the places before it contain a sufficient number of tokens. So, a Petri Net comprises a given number of places filled by tokens. Any disposition of tokens in the places corresponds to a particular state of the Petri Net and hence to a particular state of the DEDS associated with this Petri Net. Formally, a Petri Net can be given as a four-tuple (P, T, A, w), where P is a set of places, T is a set of transitions, A is a set of arcs between places and transitions and between transitions and places, and w is a weight associated with each arc in the net. Each transition t_i is associated with a set I(t_i) of input places and a set O(t_i) of output places. Places are filled by tokens. A transition t_i is enabled and can fire when every place p_j ∈ I(t_i) contains a number of tokens not smaller than the weight w_{ji} associated with the arc from p_j to t_i. The transition fires immediately in untimed Petri Nets. In timed Petri Nets this happens after some time, which is the time associated with the transition t_i. When the transition fires, w_{ji} tokens disappear from all the places p_j ∈ I(t_i) and w_{ik} tokens are added to all places p_k ∈ O(t_i). So, tokens present in a place mean that some condition is met. To better understand how Petri Nets can be used to model a DEDS, the previous example will be considered: the corresponding untimed Petri Net is reported in Figure 2.3, where the weights w are all equal to 1.

Figure 2.3: The Petri Net corresponding to the considered queueing system.
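As a brief illustration of the enabling and firing rule just described, the sketch below encodes only a small fragment of the net of Figure 2.3 (two of its transitions, with all arc weights equal to 1); the place and transition names mirror the figure but the marking values are hypothetical.

    # Marking: tokens per place. Each transition has weighted input and output arcs.
    marking = {"Queue": 2, "Idle": 1, "Part in service": 0}

    transitions = {
        "Start service": {"in": {"Queue": 1, "Idle": 1},
                          "out": {"Part in service": 1}},
        "Service completed": {"in": {"Part in service": 1},
                              "out": {"Idle": 1}},
    }

    def enabled(m, t):
        """t is enabled iff every input place holds at least w_{ji} tokens."""
        return all(m[p] >= w for p, w in transitions[t]["in"].items())

    def fire(m, t):
        """Remove w_{ji} tokens from the input places, add w_{ik} to the output places."""
        assert enabled(m, t), t + " is not enabled"
        new_m = dict(m)
        for p, w in transitions[t]["in"].items():
            new_m[p] -= w
        for p, w in transitions[t]["out"].items():
            new_m[p] = new_m.get(p, 0) + w
        return new_m

    m = fire(marking, "Start service")
    print(m, enabled(m, "Service completed"))   # machine busy, completion now enabled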

2.1.3 Dioid Algebra

An appealing way to model DEDS is that of dioid or max-plus algebra [28, 29, 30]. The basic idea of this approach is the crucial observation that a sum between lifetimes of events and a minimum operation performed to pick the next triggering event are the only two operations required for an analytical description of DEDS. Max-plus algebra allows one to derive a formalism which can be used to describe DEDS of a particular class by means of a state model equation like that used for linear CVDS:

    x(k + 1) = A x(k) + B u(k)    (2.1)

where x is the state of the system and u the control. When used for CVDS, the product and the sum which appear in (2.1) are the usual operations defined in the standard algebra. It is possible to define these two operations in a different way (in particular, introducing an addition operator between two real numbers as the maximum between them and a multiplication operator as the usual sum of the two numbers) in such a way that a similar equation can be written for some DEDS, with the new meaning assigned to the operations sum and product. This approach allows many concepts which hold for the analysis and the control of CVDS to be applied to DEDS. These concepts include the behavior of the system at steady state, maximum eigenvalue assignment and so on ([31, 32, 33, 34]). The major limits of such an approach are the restriction on the topology of the queueing networks which can be considered by means of max-plus algebra (in particular, no routing nodes can be considered) and the fact that the model can easily be obtained only for deterministic DEDS, i.e. when events happen according to deterministic clock sequences.
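To make the redefined sum and product concrete, the sketch below iterates a max-plus counterpart of (2.1) for a two-machine example, with addition read as max and multiplication read as +; the delay matrix is hypothetical, the input term B u(k) is omitted (autonomous case), and reading x(k) as the vector of k-th completion times is one common interpretation, not the thesis example.

    import numpy as np

    NEG_INF = -np.inf   # the "zero" element of the max-plus semiring

    def maxplus_matvec(A, x):
        """Max-plus product: (A (x) x)_i = max_j (A_ij + x_j)."""
        return np.array([np.max(A[i, :] + x) for i in range(A.shape[0])])

    # x(k) holds the k-th completion times of two machines; A_ij is the
    # processing/transfer delay from machine j to machine i (NEG_INF = no arc).
    A = np.array([[2.0, NEG_INF],
                  [3.0, 1.5]])
    x = np.array([0.0, 0.0])        # initial completion times x(0)

    for k in range(3):              # x(k+1) = A (x) x(k)
        x = maxplus_matvec(A, x)
        print(k + 1, x)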

2.1.4 Continuous flow model

In some cases, to solve particular control problems, the discrete flow of parts in a manufacturing system can be approximated by a continuous flow and the system can be described as a CVDS, making the analysis and the solution of a control problem analytically feasible ([35, 36, 37, 38, 39, 40, 41, 42]). This means that the state of the manufacturing system is characterized by real entries and its changes do not depend on events any more. Rather, parts arrive in the queue with a given (deterministic or stochastic) rate and are serviced by machines with some other rate. In some more complex scenarios, machines can fail according to a given stochastic process ([3]).

2.2 Control problem formulation and related issues

When a DEDS is modeled by means of a Markov Chain, or by the equations of max-plus algebra, or when it can be reduced to a continuous system by treating parts as a continuous flow, it is possible to derive closed-form expressions for many of its characteristics as a function of its parameters. In the Markov Chain framework, for example, it is possible [3] in many cases to derive an expression for many interesting characteristics of the system, like the average length of a queue, the average service time of a machine and so on. When the transition probabilities of the Markov Chain can be modified by an external input, the corresponding model is referred to as a Controlled Markov Chain, and this external input can be used to drive the system so as to optimize some performance function. The modification of the transition probability is then the effect of a control policy defined on the queueing system. When the policy depends only on the current state it is called stationary. In these cases the solution can be obtained by means of Dynamic Programming, solving the Hamilton-Jacobi-Bellman (HJB) optimality equation. This equation can be solved numerically and/or can be used to prove some properties which the optimal solution satisfies [43, 44, 45].
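As a hint of the kind of analysis available in the Markov Chain case, the following sketch computes the stationary distribution and the average queue length of a small birth-death chain; it is purely illustrative (exponential rates with hypothetical values, and the failure/repair events of the earlier example are omitted for brevity).

    import numpy as np

    # Birth-death CTMC: states n = 0..3 parts in the system, arrival rate lam,
    # service rate mu (both exponential and purely illustrative values).
    lam, mu = 1.0, 1.5
    n_states = 4
    Q = np.zeros((n_states, n_states))          # infinitesimal generator
    for n in range(n_states - 1):
        Q[n, n + 1] += lam                      # arrival
        Q[n + 1, n] += mu                       # service completion
    np.fill_diagonal(Q, -Q.sum(axis=1))

    # Stationary distribution: pi Q = 0 with sum(pi) = 1.
    A = np.vstack([Q.T, np.ones(n_states)])
    b = np.zeros(n_states + 1); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)

    print("stationary distribution:", pi)
    print("average number in system:", sum(n * pi[n] for n in range(n_states)))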

The problem of finding a control policy to optimize a performance function is addressed in many works which use a continuous flow model of the DEDS. As an important example, consider a scheduling problem for a reliable (i.e. without failures), flexible (i.e. with negligible set-up times and costs) machine M which can provide service to n different part types i = 1, 2, ..., n. Suppose part type i can be processed at maximum rate μ_i and let d_i be the deterministic and constant demand rate for part type i, supposing an infinite reservoir of raw material is available for machine M. Let x_i denote the quantity of part type i in the system, with x_i > 0 representing an inventory x_i of part type i and x_i < 0 a backlog of x_i. If u_i is the production rate at time t for part type i, it is possible to write:

    dx_i/dt = u_i - d_i,    i = 1, 2, ..., n    (2.2)

which shows how a DEDS can be reduced to a CVDS by introducing a continuous flow approximation, and can be solved given u_i and the initial state x_i(0) for all i. A capacity bound must be considered for machine M:

    Σ_i u_i / μ_i ≤ 1.    (2.3)

Hence, for the system to be stable, it must be

    Σ_i d_i / μ_i < 1,    (2.4)

which guarantees that machine M has enough capacity to meet the demand. Suppose a backlog/inventory cost is associated with each part type i:

    c_i(x_i) = c_{i,+} x_i^+ + c_{i,-} x_i^-,    (2.5)

where x^+ = max{x, 0}, x^- = max{-x, 0} and c_{i,+}, c_{i,-} are two positive constants. Let

    c(x) = Σ_{i=1}^{n} c_i(x_i)    (2.6)

be the complete cost function. Given an initial state x(0) = x_0, the control problem is that of finding an expression for the production rates u_i (control variables) to
minimize

    J(x_0) = ∫_0^∞ c[x(t)] dt.    (2.7)

If the system is stable (eq. (2.4)), the solution is u_i(t) = 0 if x_i(t) > 0, and to work on the part types with x_i(t) < 0 having the maximum c_{i,-} μ_i index. If many part types achieve this maximum, the total capacity of the machine is split among them in an arbitrary way: it is enough that machine M works at full capacity (i.e. eq. (2.3) satisfied with equality) until the backlog is cleared. This is the well known cμ-rule. Observe that u_i(t) = 0 if x_i(t) > 0. This is the "Just in Time" (JIT) policy: never work to accumulate a stock of products. It is possible here because the system is stable and machine M is reliable. If M were failure-prone, with stochastic and exponentially distributed failure times, it would be necessary to introduce a hedging point h_i > 0 for each part type i and produce until x_i = h_i rather than x_i = 0, to cope with the breakdown periods of machine M. The cμ-rule described above can be proved through classical analysis arguments like Pontryagin's maximum principle, but also by means of interchange arguments ([46]). The cμ-rule, however, is just a particular case of the general result about myopic policies, which applies when the cost function has a more general expression than the piecewise linear function given in (2.5) [47]. In the more general case, too, the state-feedback nature of this type of control (the value assigned to u(t) = (u_1(t), u_2(t), ..., u_n(t)) at time t depends on the state x(t) = (x_1(t), x_2(t), ..., x_n(t))) is much more evident.
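To illustrate the cμ-rule in the fluid setting of (2.2)-(2.7), the sketch below integrates the fluid dynamics with a forward Euler step while applying the rule; the rates, costs, initial backlogs and time step are hypothetical, and the behavior of the rule once a part type is cleared is approximated by switching the server at each step rather than splitting capacity exactly.

    import numpy as np

    # Two part types: processing rates mu_i, demand rates d_i, backlog costs c_{i,-}.
    mu = np.array([4.0, 3.0])
    d = np.array([1.0, 1.0])          # stable: sum(d/mu) < 1
    c_minus = np.array([2.0, 5.0])
    x = np.array([-3.0, -2.0])        # initial backlogs
    dt = 0.01

    for _ in range(2000):
        u = np.zeros(2)
        backlogged = np.where(x < 0)[0]
        if backlogged.size > 0:
            # cmu-rule: full capacity to the backlogged type with largest c_{i,-} mu_i
            j = backlogged[np.argmax(c_minus[backlogged] * mu[backlogged])]
            u[j] = mu[j]              # eq. (2.3) holds with equality
        # JIT: u_i = 0 once x_i >= 0, so no stock is accumulated
        x = x + dt * (u - d)          # forward Euler step of dx_i/dt = u_i - d_i

    print(np.round(x, 2))             # backlogs cleared (x close to 0)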

In many situations, however, it is of interest to consider the discrete nature of a DEDS. In this thesis, in particular, the problem of dynamic resource allocation for general DEDS modelled using the GSMP formalism discussed in Section 2.1 is addressed. The control problem is then formulated as follows. A performance function J is defined on the DEDS to capture some interesting feature of it. This function depends on a parameter vector θ which can comprise general characteristics of the DEDS. In the following this vector will usually be assumed to denote a particular resource allocation for the DEDS. To stress such a dependence, the performance function will be denoted by J(θ). Usually, dealing with DEDS, a given number of resources is available and must be allocated in the system. For this reason, a particular value for the parameter vector will often be referred to as an allocation. The set of all possible allocations will be denoted by Θ. As will be remarked in Chapter 4, a DEDS is usually non-stationary: the performance function J(θ) is not a time-invariant function of θ. For this reason another argument will be considered in the performance function: J(θ, t) denotes the value of the performance function when the system is given allocation θ, at time t. The control problem can be formulated as an optimization problem, i.e. find

    θ*(t) = arg min_{θ ∈ Θ} J(θ, t)    (2.8)

Once the control problem has been formulated in this way, it is possible to see the performance function J(θ, t) as an output of the DEDS, an output which must be controlled, and the particular allocation θ(t) chosen for the system at time t as a control input. The system can be represented by the block diagram shown in Figure 2.4.

Figure 2.4: The DEDS considered in the control problem (input θ(t), output J(θ(t), t)).

In this perspective, classical considerations can be applied. For example, a closed loop control scheme with a reference signal r(t) can be considered for this system and it is reported in Figure 2.5. The reference signal r(t) is in this case just a command which asks the control to minimize the output J(θ, t), but it could be a different signal, since a regulation problem can also be addressed at this point. Anyway, the above structure presents many differences with respect to the corresponding structure for CVDS. The measures taken on the DEDS are used to compute an estimate of J(θ, t) for many and possibly all θ ∈ Θ, to derive the allocation θ(t) which is the best with
respect to the criterion assigned by r(t).

Figure 2.5: The block diagram comprising the DEDS and the controller (r(t) → Controller → θ(t) → DEDS → J(θ(t), t), with measures fed back to the controller).

Moreover, all of the signals have a different meaning with respect to the CVDS case: a control action is performed at the end of an interval (control epoch) during which the measurements are taken on the DEDS. So, the input θ(t) and the output J(θ, t) of the DEDS are not defined for all t but only at the control epochs. This will be more apparent in the following. Nevertheless the main idea of a dynamic control scheme still holds: in many situations the DEDS is non-stationary; this means that an allocation which is optimal now may no longer be optimal in the future. The control scheme must be able to change allocation to react to this change. Moreover, the search for the optimal θ through the estimation procedure can be very difficult, especially if the search space is very large. So, the second step in the design of a control scheme is the definition of a strategy which must be applied to explore the set Θ. The estimation problem for the performance function and the design of a search strategy will be addressed in Chapters 3 and 4 respectively.
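The control-epoch mechanism described above can be outlined as the following loop; this is only a schematic sketch, not the SCANS algorithm of Chapter 4: observe_epoch and estimate_J are placeholders for the measurement and reconstruction machinery discussed in Chapter 3, and the cost landscape used here is fictitious.

    import random

    def observe_epoch(theta):
        """Placeholder: observe the DEDS under allocation theta for one epoch
        and return the recorded event data (here just a noisy sample)."""
        return {"theta": theta, "data": random.random()}

    def estimate_J(record, theta):
        """Placeholder for L(theta, t, Delta t): estimate the performance of
        allocation theta from data observed under a possibly different allocation."""
        return abs(theta - 3) + record["data"]      # fictitious cost landscape

    Theta = range(6)                # search space of allocations
    theta = 0                       # current allocation
    for epoch in range(10):
        record = observe_epoch(theta)
        # At each control epoch, estimate J for all candidate allocations
        estimates = {cand: estimate_J(record, cand) for cand in Theta}
        theta = min(estimates, key=estimates.get)   # apply the (estimated) arg min
    print("allocation after 10 epochs:", theta)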

Chapter 3
The state reconstruction algorithm

In the previous chapter an introduction to different models of Discrete Event Dynamic Systems and to the control problem has been outlined. The control of discrete event dynamic systems, as pointed out in the previous chapter, is based on the dynamic choice of an allocation θ ∈ Θ in such a way as to optimize an objective function J(θ, t). The problem is that a closed form expression for J(θ, t) as a function of θ is usually not available and its value must be estimated in some way. To control a DEDS the simplest way would be to start with some allocation θ_0, observe the system for some time, then change the allocation to a different value θ_1 and see if the observed performance has improved with this change. This can be repeated with many different allocations to choose the one which gave the best observed performance. This is not a clever way to proceed. If the system under control is a real system, this would mean operating it for a long time under many and probably bad allocations, with the result that the time required to get a good performance can be very large. If the system is simulated, it is necessary to produce a simulation run for every allocation to test. Also in this case the time required to get a good performance is very large. These approaches are often referred to as brute force techniques. Observe also that simulating a real system could be impossible if the stochastic process characterizing the system is not known and no trick is used (it is
not possible to generate random variables for the simulation). The idea to solve this problem is to estimate the behavior of the system for different allocations without actually changing the allocation, only using data collected during the evolution of the system under allocation θ_0. This saves computation time and makes possible the analysis of unknown systems, whose simulation cannot be carried out directly. The techniques which try to estimate the evolution of the system for an allocation different from the current one, only using data collected during the evolution under allocation θ_0, are the subject of the so called Perturbation Analysis (PA), which is a very important field of research (see [4, 5, 1, 6] for a review). There are many proposals in the Perturbation Analysis area, which address different kinds of problems. The algorithm described in this chapter, the state reconstruction algorithm, can be seen as a part of this research area. As will be more apparent in the following, it is a very general way of evaluating the behavior of a discrete event dynamic system when some of its parameters are modified. Although, historically, the main motivation for the introduction of Perturbation Analysis techniques was the attempt to reduce the computational effort of simulation, perturbation analysis turns out to be very useful to implement on-line control schemes, as will be clarified in the next chapter. As already remarked, there are many proposals in the framework of Perturbation Analysis. These proposals will be sketched in Section 3.1 as an introduction to the State Reconstruction Algorithm, which is one of the contributions of this thesis and will be presented in Section 3.2.

3.1 Review of some Perturbation Analysis Techniques

Perturbation Analysis deals with the problem of estimating the "structure" of the objective function J(θ, t) as a function of the allocation θ, only using data collected during the evolution of the system with a particular allocation, which will often be
referred to as the nominal allocation, and will be denoted by θ_0. The term "structure" of the objective function denotes the dependence of the objective function on the allocation. The question that PA tries to answer is then: "what happens to the objective function if some change is performed on the allocation?". To fix ideas, consider the case in which the allocation denotes the value of a single parameter θ, with θ ∈ Θ. Hence Θ must be regarded as an interval or a set of possible values for θ.¹ Different techniques then apply if the parameter is a continuous parameter or if it can take values only on a discrete set. In the first case, i.e. when a continuous parameter is considered, there are two different kinds of estimate: the estimate of the sensitivity of J(θ, t) to changes in θ, i.e. the derivative of the objective function with respect to the parameter, or the change of the objective function due to a finite variation of the parameter. In the second case, of course, only the second kind of estimate can be considered. An estimate of the sensitivity of J(θ, t) to changes in θ can be obtained using an Infinitesimal Perturbation Analysis (IPA) approach ([48, 49, 50, 51, 52, 53, 54, 55]) or the so called Likelihood Ratio (LR) approach ([56, 57, 58]), which appears much simpler than the IPA approach but gives an estimate with larger variance ([1, 5]). IPA is a well established body of research and there are many works on this subject. The application of an IPA estimator is usually easy, even if the analysis necessary to derive it can be really involved ([1]). Nevertheless, IPA is not feasible in all problems. First of all, even if the objective function depends on a continuous parameter, this dependence could be non-continuous: Smoothed Perturbation Analysis (SPA) ([59, 60, 61]) tries to overcome this problem, as well as Discontinuous Perturbation Analysis ([62]). If the parameter under consideration takes values on a discrete set, it is obvious that IPA cannot be applied, and a different approach is required.

¹ If a resource allocation problem is considered, Θ is a discrete set. But, from a more general point of view, θ can also be a real value, for example when it represents some threshold or a hedging point.
One more point to emphasize is that, even if IPA can be applied to a problem, it always produces local estimates, that is, it gives information only with respect to infinitesimal variations of the nominal parameters, in a neighborhood of the current nominal value. Remember that controlling a DEDS means changing a parameter in such a way as to improve the performance of the system. In this perspective, the IPA estimator, which gives an estimate of the derivative of the objective function with respect to the parameter to adjust, suggests the direction and the magnitude of the change to apply. More formally, the control strategy consists in updating θ in the direction suggested by this sensitivity estimate, according to

    θ' = θ − η [∂J/∂θ]_IPA

where η is a small number, [∂J/∂θ]_IPA is the sensitivity estimate, θ is the current parameter value and θ' is the new parameter value, the one after the update. It is clear that this strategy may lead to a local minimum of J(θ, t), unless some assumptions are made on the performance function. When the objective function depends on a discrete parameter, a different PA approach, as already pointed out, must be used for the study of the structure of the objective function. Different techniques ([5, 4, 48, 49, 63, 64, 65]) have been proposed to evaluate the change of the objective function due to a generally small but finite change of a considered parameter. These approaches are referred to as Finite Perturbation Analysis (FPA) techniques. The classic and modified FPA techniques, i.e. the ones presented in the works just cited, present some drawbacks. First of all, only small changes in the parameter value can be considered, in order to avoid too deep a modification of the original path, where path denotes the ordered sequence of events. In the control problem, as will become clearer in the following, larger perturbations should be considered. To understand whether there is a change in the order of events (due to the parameter change), future events must also be considered: this makes this approach applicable only off line. Moreover, the reconstruction of the behavior of the system with the modified parameter is approximate, since some hypotheses are made in order to simplify the computation.

These problems make the FPA approach sometimes unsuitable for the application of control algorithms. A different approach is the one used to solve the so called constructability problem, i.e. the problem of determining the evolution of the DEDS under a perturbed parameter. Suppose a DEDS is characterized by some parameter θ and suppose the current value of this parameter is θ_0. Under this parameter value the DEDS gives a particular performance. Consider any other value θ̂ for θ. What would the performance of the system have been if the value of the parameter had been θ̂ instead of θ_0? The answer should be possible for any parameter value (unlike the FPA techniques discussed above), using only information about the past of the system, and giving exact reconstructions. It is clear that only an estimate of the performance function can be provided, since only a particular and finite realization is considered. The term exact here refers to the reconstruction realized: the sequence produced by the real system with that realization (and that value of the parameter θ) and the sequence given by the reconstruction algorithm are the same. Observe that all FPA techniques can also be applied to a continuous parameter, and also to provide a sensitivity estimate when the IPA approach fails. Anyway, for the control algorithms presented in the next chapter, this is not the case. The solution of the constructability problem for many and possibly all values of the considered parameter is addressed by the term Rapid Learning ([7]): providing a performance estimate for many different parameter values allows one to rapidly learn the structure of the performance function J(θ, t) as a function of θ, making the choice of a "good" value (or allocation) easy. One possible solution to this problem is the Standard Clock (SC) algorithm ([66]). This algorithm can be applied only to DEDS whose events are characterized by exponentially distributed lifetimes. An artificial sequence is generated to test the DEDS with a given parameter value. The drawbacks of this approach are mainly due to the fact that it only applies to DEDS with exponentially distributed lifetimes. Moreover, a sequence of random variables must be generated. A different approach is the Augmented System Analysis (ASA) proposed in [67,
68]. The idea is to reconstruct the evolution of the system under a perturbed parameter only using data collected by observing the evolution of the real system under the nominal parameter value. So, unlike SC, a sequence of random variables is not generated for this analysis and a real sequence is exploited instead. In a first version this algorithm could be applied to DEDS with events all generated by Poisson processes (i.e. as in the SC approach). A successive extension allowed DEDS with at most one event generated by a non-Poisson process to be considered. Two conditions must be satisfied by the DEDS in order to reconstruct its evolution under a different parameter using data collected on the real path ([69]). These conditions are the so called observability and constructability conditions. The observability condition states that the set of feasible events in the current reconstructed state must be contained in the set of feasible events in the current nominal state. This condition implies that all the feasible events in the reconstructed path can be compared with the corresponding events in the nominal path in order to determine the time they happen. The constructability condition comprises the observability condition but, in addition, requires that the time probability distributions of the feasible events in the reconstructed and nominal paths are the same. Due to the memoryless property of exponential lifetimes, it turns out that for DEDS with Poisson events the constructability condition coincides with the observability condition. In the ASA approach, applied to DEDS with Poisson events, only the observability condition must be assured in order to make the reconstruction possible. When the observability condition is not satisfied, the event matching algorithm is applied to make the reconstruction possible. The idea is to wait for the next state in the nominal path which guarantees the observability condition. If some events have non-exponentially distributed lifetimes, the idea is that of saving these lifetimes when the reconstruction is suspended and resuming the reconstruction process when the observability condition is met and the ages of the non-exponential events (that is, events with non-exponentially distributed lifetimes) match the corresponding saved ages. This is the idea of the Age Matching Algorithm.
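In the automaton notation of Section 2.1.1, the observability condition just stated is simply a set inclusion between feasible-event sets; a minimal sketch follows, where the event sets are taken from the single-machine example of Chapter 2 and are only illustrative.

    def observable(gamma_reconstructed, gamma_nominal):
        """Observability condition of [69]: every event feasible in the current
        reconstructed state must also be feasible in the current nominal state."""
        return set(gamma_reconstructed) <= set(gamma_nominal)

    # Reconstructed state (1, u) has Gamma = {a, c, f}; so does nominal state (2, u),
    # while in nominal state (0, u) only an arrival is feasible.
    print(observable({"a", "c", "f"}, {"a", "c", "f"}))   # True: reconstruction proceeds
    print(observable({"a", "c", "f"}, {"a"}))             # False: apply event matching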

From the analysis of this algorithm it turns out that ASA, through the event matching and age matching algorithms, can be applied to DEDS with Poisson events and possibly one non-Poisson event. The main drawback of this approach is that a long time could be wasted waiting for the real system to enter a state where the observability condition (and the age matching condition, in the non-exponential case) is satisfied. In the meanwhile, many lifetimes could be recorded and used when the reconstruction resumes. This idea is the basis of the Time Warping Algorithm (TWA) for the sample path reconstruction of general DEDS (see [8]). This approach is very close to the one presented in the next section. The main difference is that TWA is applied for an on-line reconstruction of the evolution of the system, while the State Reconstruction Algorithm (SRA) described below reconstructs the evolution of the system periodically, at the end of a given interval of time, where a control action is performed based on the reconstruction. For this reason the time at which the reconstruction is carried out is referred to as a control epoch (see below). This aspect makes it more suitable for the control applications presented in the next chapter. Concurrent estimation is also studied in [70]. Concluding this introductory section, it is possible to say that the SRA algorithm (and TWA as well) adds to the already mentioned advantages typical of all constructability techniques (i.e. analysis of any perturbation using only past history and providing an exact reconstruction) the possibility of being applied to any DEDS with general and unknown lifetime distributions. Some perturbations cannot be considered anyway. A change in the topology of the DEDS cannot be considered, nor can any perturbation which adds new events to the perturbed system ([8]). Some "difficult" perturbations can be allowed using some trick. Consider for example a perturbation which modifies the lifetime distribution of a particular event. Since the SRA and TWA approaches work by recording and using the lifetimes observed in the nominal evolution of the system, such a perturbation would seem unfeasible. Anyway, if the value of the cumulative distribution function corresponding to the considered lifetime is stored in place of the lifetime itself, this value can be used to evaluate the lifetime corresponding to that realization but considering the perturbed distribution. This example is depicted in [8] and in the next section.
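A minimal sketch of this cumulative-distribution trick is given below; it assumes exponential nominal and perturbed lifetime distributions with hypothetical rates (the thesis and [8] allow general distributions): the value u = F_nominal(lifetime) is stored and mapped through the inverse of the perturbed distribution.

    import math, random

    rate_nominal, rate_perturbed = 2.0, 1.0    # hypothetical exponential rates

    def cdf_exp(x, rate):
        return 1.0 - math.exp(-rate * x)

    def inv_cdf_exp(u, rate):
        return -math.log(1.0 - u) / rate

    random.seed(0)
    observed = random.expovariate(rate_nominal)     # lifetime seen on the nominal path
    u = cdf_exp(observed, rate_nominal)             # store F_nominal(lifetime) instead
    reconstructed = inv_cdf_exp(u, rate_perturbed)  # same realization, perturbed law

    # The same underlying random number u drives both lifetimes, so the perturbed
    # path reuses the nominal randomness rather than generating new variates.
    print(observed, reconstructed)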

3.2 The state reconstruction algorithm

This section summarizes some results presented in ([71, 9, 72, 10, 73, 74]) about a novel algorithm designed to reconstruct the evolution of a general DEDS only using data collected in the real evolution of the system itself. In [9, 10, 74] the algorithm is presented in detail. The application to control problems is especially presented in [72], while [71, 73] contain some points about a parallel implementation of the algorithm.

3.2.1 Notation and queueing network dynamics

The class of Discrete Event Dynamic Systems which will be considered, and for which a control scheme will be provided, is depicted in this section. It is the class of systems that can be modeled by means of queueing networks with: (a) general service time at each node, (b) general routing policy at each node, (c) general scheduling policy at each node, (d) finite buffer capacity, (e) multi-class customers, (f) non-preemptive service at each node, (g) infinite arrival rate sources, (h) infinite capacity sinks. Each node comprises a server and several input queues or buffers; in particular, it is assumed that each server has as many input queues as the number of classes it can provide service to, and each class is associated with a single dedicated input queue. The dynamics of such a class of Discrete Event Systems can be modeled by means of Generalized Semi-Markov Processes (see Section 2.1). The event of type "completion of service of a customer of a specific class on a node" is sufficient to fully describe the evolution of the system considered here. Observe that this class of DEDS is really general: any system where some servers provide service to different customers waiting in some queue is in the considered class. Thus the state reconstruction algorithm presented in this section can be applied to many different situations: production lines, computing systems, communication networks and so on. The following three subsections are intended as an introduction to the dynamics of this class of queueing networks.
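As a rough sketch of the bookkeeping implied by assumptions (a)-(h), a node can be represented as a server with one dedicated finite queue per class; this is purely illustrative, the field names are not the thesis notation, and the scheduling policy is a placeholder.

    from collections import deque
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        """A multi-class node: one server, one dedicated finite input queue per class."""
        capacity: dict                      # class -> buffer capacity (finite)
        queues: dict = field(default_factory=dict)
        in_service: object = None           # (class, customer) currently served, or None

        def __post_init__(self):
            self.queues = {c: deque() for c in self.capacity}

        def admit(self, cls, customer):
            """Accept a customer of class cls if its dedicated buffer is not full."""
            if len(self.queues[cls]) < self.capacity[cls]:
                self.queues[cls].append(customer)
                return True
            return False                     # blocked or dropped, depending on the policy

        def start_service(self, scheduling_policy):
            """Non-preemptive: a class is picked only when the server is free."""
            if self.in_service is None:
                cls = scheduling_policy(self.queues)       # placeholder policy
                if cls is not None:
                    self.in_service = (cls, self.queues[cls].popleft())

    # Example: a node serving two classes with a longest-queue-first placeholder policy.
    node = Node(capacity={"A": 2, "B": 3})
    node.admit("A", "part-1"); node.admit("B", "part-2")
    node.start_service(lambda qs: max((c for c in qs if qs[c]),
                                      key=lambda c: len(qs[c]), default=None))
    print(node.in_service, {c: len(q) for c, q in node.queues.items()})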


More information

Carnegie Mellon University Forbes Ave. Pittsburgh, PA 15213, USA. fmunos, leemon, V (x)ln + max. cost functional [3].

Carnegie Mellon University Forbes Ave. Pittsburgh, PA 15213, USA. fmunos, leemon, V (x)ln + max. cost functional [3]. Gradient Descent Approaches to Neural-Net-Based Solutions of the Hamilton-Jacobi-Bellman Equation Remi Munos, Leemon C. Baird and Andrew W. Moore Robotics Institute and Computer Science Department, Carnegie

More information

1. Introduction. Consider a single cell in a mobile phone system. A \call setup" is a request for achannel by an idle customer presently in the cell t

1. Introduction. Consider a single cell in a mobile phone system. A \call setup is a request for achannel by an idle customer presently in the cell t Heavy Trac Limit for a Mobile Phone System Loss Model Philip J. Fleming and Alexander Stolyar Motorola, Inc. Arlington Heights, IL Burton Simon Department of Mathematics University of Colorado at Denver

More information

Proxel-Based Simulation of Stochastic Petri Nets Containing Immediate Transitions

Proxel-Based Simulation of Stochastic Petri Nets Containing Immediate Transitions Electronic Notes in Theoretical Computer Science Vol. 85 No. 4 (2003) URL: http://www.elsevier.nl/locate/entsc/volume85.html Proxel-Based Simulation of Stochastic Petri Nets Containing Immediate Transitions

More information

Fast Evaluation of Ensemble Transients of Large IP Networks. University of Maryland, College Park CS-TR May 11, 1998.

Fast Evaluation of Ensemble Transients of Large IP Networks. University of Maryland, College Park CS-TR May 11, 1998. Fast Evaluation of Ensemble Transients of Large IP Networks Catalin T. Popescu cpopescu@cs.umd.edu A. Udaya Shankar shankar@cs.umd.edu Department of Computer Science University of Maryland, College Park

More information

Modelling data networks stochastic processes and Markov chains

Modelling data networks stochastic processes and Markov chains Modelling data networks stochastic processes and Markov chains a 1, 3 1, 2 2, 2 b 0, 3 2, 3 u 1, 3 α 1, 6 c 0, 3 v 2, 2 β 1, 1 Richard G. Clegg (richard@richardclegg.org) November 2016 Available online

More information

Queues and Queueing Networks

Queues and Queueing Networks Queues and Queueing Networks Sanjay K. Bose Dept. of EEE, IITG Copyright 2015, Sanjay K. Bose 1 Introduction to Queueing Models and Queueing Analysis Copyright 2015, Sanjay K. Bose 2 Model of a Queue Arrivals

More information

Recap. Probability, stochastic processes, Markov chains. ELEC-C7210 Modeling and analysis of communication networks

Recap. Probability, stochastic processes, Markov chains. ELEC-C7210 Modeling and analysis of communication networks Recap Probability, stochastic processes, Markov chains ELEC-C7210 Modeling and analysis of communication networks 1 Recap: Probability theory important distributions Discrete distributions Geometric distribution

More information

Exercises Stochastic Performance Modelling. Hamilton Institute, Summer 2010

Exercises Stochastic Performance Modelling. Hamilton Institute, Summer 2010 Exercises Stochastic Performance Modelling Hamilton Institute, Summer Instruction Exercise Let X be a non-negative random variable with E[X ]

More information

Discrete Event Systems Exam

Discrete Event Systems Exam Computer Engineering and Networks Laboratory TEC, NSG, DISCO HS 2016 Prof. L. Thiele, Prof. L. Vanbever, Prof. R. Wattenhofer Discrete Event Systems Exam Friday, 3 rd February 2017, 14:00 16:00. Do not

More information

Answers to selected exercises

Answers to selected exercises Answers to selected exercises A First Course in Stochastic Models, Henk C. Tijms 1.1 ( ) 1.2 (a) Let waiting time if passengers already arrived,. Then,, (b) { (c) Long-run fraction for is (d) Let waiting

More information

Multiplicative Multifractal Modeling of. Long-Range-Dependent (LRD) Trac in. Computer Communications Networks. Jianbo Gao and Izhak Rubin

Multiplicative Multifractal Modeling of. Long-Range-Dependent (LRD) Trac in. Computer Communications Networks. Jianbo Gao and Izhak Rubin Multiplicative Multifractal Modeling of Long-Range-Dependent (LRD) Trac in Computer Communications Networks Jianbo Gao and Izhak Rubin Electrical Engineering Department, University of California, Los Angeles

More information

The Markov Decision Process (MDP) model

The Markov Decision Process (MDP) model Decision Making in Robots and Autonomous Agents The Markov Decision Process (MDP) model Subramanian Ramamoorthy School of Informatics 25 January, 2013 In the MAB Model We were in a single casino and the

More information

Lecture 15 - NP Completeness 1

Lecture 15 - NP Completeness 1 CME 305: Discrete Mathematics and Algorithms Instructor: Professor Aaron Sidford (sidford@stanford.edu) February 29, 2018 Lecture 15 - NP Completeness 1 In the last lecture we discussed how to provide

More information

1 Introduction The purpose of this paper is to illustrate the typical behavior of learning algorithms using stochastic approximations (SA). In particu

1 Introduction The purpose of this paper is to illustrate the typical behavior of learning algorithms using stochastic approximations (SA). In particu Strong Points of Weak Convergence: A Study Using RPA Gradient Estimation for Automatic Learning Felisa J. Vazquez-Abad * Department of Computer Science and Operations Research University of Montreal, Montreal,

More information

Stochastic models in product form: the (E)RCAT methodology

Stochastic models in product form: the (E)RCAT methodology Stochastic models in product form: the (E)RCAT methodology 1 Maria Grazia Vigliotti 2 1 Dipartimento di Informatica Università Ca Foscari di Venezia 2 Department of Computing Imperial College London Second

More information

Dynamic resource sharing

Dynamic resource sharing J. Virtamo 38.34 Teletraffic Theory / Dynamic resource sharing and balanced fairness Dynamic resource sharing In previous lectures we have studied different notions of fair resource sharing. Our focus

More information

Online Companion for. Decentralized Adaptive Flow Control of High Speed Connectionless Data Networks

Online Companion for. Decentralized Adaptive Flow Control of High Speed Connectionless Data Networks Online Companion for Decentralized Adaptive Flow Control of High Speed Connectionless Data Networks Operations Research Vol 47, No 6 November-December 1999 Felisa J Vásquez-Abad Départment d informatique

More information

VARIANCE REDUCTION IN SIMULATIONS OF LOSS MODELS

VARIANCE REDUCTION IN SIMULATIONS OF LOSS MODELS VARIANCE REDUCTION IN SIMULATIONS OF LOSS MODELS by Rayadurgam Srikant 1 and Ward Whitt 2 October 20, 1995 Revision: September 26, 1996 1 Coordinated Science Laboratory, University of Illinois, 1308 W.

More information

QUALIFYING EXAM IN SYSTEMS ENGINEERING

QUALIFYING EXAM IN SYSTEMS ENGINEERING QUALIFYING EXAM IN SYSTEMS ENGINEERING Written Exam: MAY 23, 2017, 9:00AM to 1:00PM, EMB 105 Oral Exam: May 25 or 26, 2017 Time/Location TBA (~1 hour per student) CLOSED BOOK, NO CHEAT SHEETS BASIC SCIENTIFIC

More information

G-networks with synchronized partial ushing. PRi SM, Universite de Versailles, 45 av. des Etats Unis, Versailles Cedex,France

G-networks with synchronized partial ushing. PRi SM, Universite de Versailles, 45 av. des Etats Unis, Versailles Cedex,France G-networks with synchronized partial ushing Jean-Michel FOURNEAU ;a, Dominique VERCH ERE a;b a PRi SM, Universite de Versailles, 45 av. des Etats Unis, 78 05 Versailles Cedex,France b CERMSEM, Universite

More information

Control of Hybrid Petri Nets using Max-Plus Algebra

Control of Hybrid Petri Nets using Max-Plus Algebra Control of Hybrid Petri Nets using Max-Plus Algebra FABIO BALDUZZI*, ANGELA DI FEBBRARO*, ALESSANDRO GIUA, SIMONA SACONE^ *Dipartimento di Automatica e Informatica Politecnico di Torino Corso Duca degli

More information

Queuing Networks: Burke s Theorem, Kleinrock s Approximation, and Jackson s Theorem. Wade Trappe

Queuing Networks: Burke s Theorem, Kleinrock s Approximation, and Jackson s Theorem. Wade Trappe Queuing Networks: Burke s Theorem, Kleinrock s Approximation, and Jackson s Theorem Wade Trappe Lecture Overview Network of Queues Introduction Queues in Tandem roduct Form Solutions Burke s Theorem What

More information

M 2 M 3. Robot M (O)

M 2 M 3. Robot M (O) R O M A TRE DIA Universita degli Studi di Roma Tre Dipartimento di Informatica e Automazione Via della Vasca Navale, 79 { 00146 Roma, Italy Part Sequencing in Three Machine No-Wait Robotic Cells Alessandro

More information

only nite eigenvalues. This is an extension of earlier results from [2]. Then we concentrate on the Riccati equation appearing in H 2 and linear quadr

only nite eigenvalues. This is an extension of earlier results from [2]. Then we concentrate on the Riccati equation appearing in H 2 and linear quadr The discrete algebraic Riccati equation and linear matrix inequality nton. Stoorvogel y Department of Mathematics and Computing Science Eindhoven Univ. of Technology P.O. ox 53, 56 M Eindhoven The Netherlands

More information

Average Reward Parameters

Average Reward Parameters Simulation-Based Optimization of Markov Reward Processes: Implementation Issues Peter Marbach 2 John N. Tsitsiklis 3 Abstract We consider discrete time, nite state space Markov reward processes which depend

More information

1. Introduction As is well known, the bosonic string can be described by the two-dimensional quantum gravity coupled with D scalar elds, where D denot

1. Introduction As is well known, the bosonic string can be described by the two-dimensional quantum gravity coupled with D scalar elds, where D denot RIMS-1161 Proof of the Gauge Independence of the Conformal Anomaly of Bosonic String in the Sense of Kraemmer and Rebhan Mitsuo Abe a; 1 and Noboru Nakanishi b; 2 a Research Institute for Mathematical

More information

Discrete Probability and State Estimation

Discrete Probability and State Estimation 6.01, Spring Semester, 2008 Week 12 Course Notes 1 MASSACHVSETTS INSTITVTE OF TECHNOLOGY Department of Electrical Engineering and Computer Science 6.01 Introduction to EECS I Spring Semester, 2008 Week

More information

1 Introduction Future high speed digital networks aim to serve integrated trac, such as voice, video, fax, and so forth. To control interaction among

1 Introduction Future high speed digital networks aim to serve integrated trac, such as voice, video, fax, and so forth. To control interaction among On Deterministic Trac Regulation and Service Guarantees: A Systematic Approach by Filtering Cheng-Shang Chang Dept. of Electrical Engineering National Tsing Hua University Hsinchu 30043 Taiwan, R.O.C.

More information

The Transition Probability Function P ij (t)

The Transition Probability Function P ij (t) The Transition Probability Function P ij (t) Consider a continuous time Markov chain {X(t), t 0}. We are interested in the probability that in t time units the process will be in state j, given that it

More information

Single-part-type, multiple stage systems

Single-part-type, multiple stage systems MIT 2.853/2.854 Introduction to Manufacturing Systems Single-part-type, multiple stage systems Stanley B. Gershwin Laboratory for Manufacturing and Productivity Massachusetts Institute of Technology Single-stage,

More information

Analysis and Optimization of Discrete Event Systems using Petri Nets

Analysis and Optimization of Discrete Event Systems using Petri Nets Volume 113 No. 11 2017, 1 10 ISSN: 1311-8080 (printed version); ISSN: 1314-3395 (on-line version) url: http://www.ijpam.eu ijpam.eu Analysis and Optimization of Discrete Event Systems using Petri Nets

More information

Stability, Queue Length and Delay of Deterministic and Stochastic Queueing Networks Cheng-Shang Chang IBM Research Division T.J. Watson Research Cente

Stability, Queue Length and Delay of Deterministic and Stochastic Queueing Networks Cheng-Shang Chang IBM Research Division T.J. Watson Research Cente Stability, Queue Length and Delay of Deterministic and Stochastic Queueing Networks Cheng-Shang Chang IBM Research Division T.J. Watson Research Center P.O. Box 704 Yorktown Heights, NY 10598 cschang@watson.ibm.com

More information

Queueing systems. Renato Lo Cigno. Simulation and Performance Evaluation Queueing systems - Renato Lo Cigno 1

Queueing systems. Renato Lo Cigno. Simulation and Performance Evaluation Queueing systems - Renato Lo Cigno 1 Queueing systems Renato Lo Cigno Simulation and Performance Evaluation 2014-15 Queueing systems - Renato Lo Cigno 1 Queues A Birth-Death process is well modeled by a queue Indeed queues can be used to

More information

Novel determination of dierential-equation solutions: universal approximation method

Novel determination of dierential-equation solutions: universal approximation method Journal of Computational and Applied Mathematics 146 (2002) 443 457 www.elsevier.com/locate/cam Novel determination of dierential-equation solutions: universal approximation method Thananchai Leephakpreeda

More information

MULTIPLE CHOICE QUESTIONS DECISION SCIENCE

MULTIPLE CHOICE QUESTIONS DECISION SCIENCE MULTIPLE CHOICE QUESTIONS DECISION SCIENCE 1. Decision Science approach is a. Multi-disciplinary b. Scientific c. Intuitive 2. For analyzing a problem, decision-makers should study a. Its qualitative aspects

More information

Chapter 5. Continuous-Time Markov Chains. Prof. Shun-Ren Yang Department of Computer Science, National Tsing Hua University, Taiwan

Chapter 5. Continuous-Time Markov Chains. Prof. Shun-Ren Yang Department of Computer Science, National Tsing Hua University, Taiwan Chapter 5. Continuous-Time Markov Chains Prof. Shun-Ren Yang Department of Computer Science, National Tsing Hua University, Taiwan Continuous-Time Markov Chains Consider a continuous-time stochastic process

More information

CPSC 320 Sample Solution, Reductions and Resident Matching: A Residentectomy

CPSC 320 Sample Solution, Reductions and Resident Matching: A Residentectomy CPSC 320 Sample Solution, Reductions and Resident Matching: A Residentectomy August 25, 2017 A group of residents each needs a residency in some hospital. A group of hospitals each need some number (one

More information

Queueing Theory. VK Room: M Last updated: October 17, 2013.

Queueing Theory. VK Room: M Last updated: October 17, 2013. Queueing Theory VK Room: M1.30 knightva@cf.ac.uk www.vincent-knight.com Last updated: October 17, 2013. 1 / 63 Overview Description of Queueing Processes The Single Server Markovian Queue Multi Server

More information

Chapter 16 focused on decision making in the face of uncertainty about one future

Chapter 16 focused on decision making in the face of uncertainty about one future 9 C H A P T E R Markov Chains Chapter 6 focused on decision making in the face of uncertainty about one future event (learning the true state of nature). However, some decisions need to take into account

More information

DES. 4. Petri Nets. Introduction. Different Classes of Petri Net. Petri net properties. Analysis of Petri net models

DES. 4. Petri Nets. Introduction. Different Classes of Petri Net. Petri net properties. Analysis of Petri net models 4. Petri Nets Introduction Different Classes of Petri Net Petri net properties Analysis of Petri net models 1 Petri Nets C.A Petri, TU Darmstadt, 1962 A mathematical and graphical modeling method. Describe

More information

Sub-Optimal Scheduling of a Flexible Batch Manufacturing System using an Integer Programming Solution

Sub-Optimal Scheduling of a Flexible Batch Manufacturing System using an Integer Programming Solution Sub-Optimal Scheduling of a Flexible Batch Manufacturing System using an Integer Programming Solution W. Weyerman, D. West, S. Warnick Information Dynamics and Intelligent Systems Group Department of Computer

More information

N.G.Bean, D.A.Green and P.G.Taylor. University of Adelaide. Adelaide. Abstract. process of an MMPP/M/1 queue is not a MAP unless the queue is a

N.G.Bean, D.A.Green and P.G.Taylor. University of Adelaide. Adelaide. Abstract. process of an MMPP/M/1 queue is not a MAP unless the queue is a WHEN IS A MAP POISSON N.G.Bean, D.A.Green and P.G.Taylor Department of Applied Mathematics University of Adelaide Adelaide 55 Abstract In a recent paper, Olivier and Walrand (994) claimed that the departure

More information

Stochastic Petri Nets. Jonatan Lindén. Modelling SPN GSPN. Performance measures. Almost none of the theory. December 8, 2010

Stochastic Petri Nets. Jonatan Lindén. Modelling SPN GSPN. Performance measures. Almost none of the theory. December 8, 2010 Stochastic Almost none of the theory December 8, 2010 Outline 1 2 Introduction A Petri net (PN) is something like a generalized automata. A Stochastic Petri Net () a stochastic extension to Petri nets,

More information

IEOR 6711: Stochastic Models I, Fall 2003, Professor Whitt. Solutions to Final Exam: Thursday, December 18.

IEOR 6711: Stochastic Models I, Fall 2003, Professor Whitt. Solutions to Final Exam: Thursday, December 18. IEOR 6711: Stochastic Models I, Fall 23, Professor Whitt Solutions to Final Exam: Thursday, December 18. Below are six questions with several parts. Do as much as you can. Show your work. 1. Two-Pump Gas

More information

Performance of Round Robin Policies for Dynamic Multichannel Access

Performance of Round Robin Policies for Dynamic Multichannel Access Performance of Round Robin Policies for Dynamic Multichannel Access Changmian Wang, Bhaskar Krishnamachari, Qing Zhao and Geir E. Øien Norwegian University of Science and Technology, Norway, {changmia,

More information

Modeling and Stability Analysis of a Communication Network System

Modeling and Stability Analysis of a Communication Network System Modeling and Stability Analysis of a Communication Network System Zvi Retchkiman Königsberg Instituto Politecnico Nacional e-mail: mzvi@cic.ipn.mx Abstract In this work, the modeling and stability problem

More information

A tutorial on some new methods for. performance evaluation of queueing networks. P. R. Kumar. Coordinated Science Laboratory. University of Illinois

A tutorial on some new methods for. performance evaluation of queueing networks. P. R. Kumar. Coordinated Science Laboratory. University of Illinois A tutorial on some new methods for performance evaluation of queueing networks P. R. Kumar Dept. of Electrical and Computer Engineering, and Coordinated Science Laboratory University of Illinois 1308 West

More information

The Bias-Variance dilemma of the Monte Carlo. method. Technion - Israel Institute of Technology, Technion City, Haifa 32000, Israel

The Bias-Variance dilemma of the Monte Carlo. method. Technion - Israel Institute of Technology, Technion City, Haifa 32000, Israel The Bias-Variance dilemma of the Monte Carlo method Zlochin Mark 1 and Yoram Baram 1 Technion - Israel Institute of Technology, Technion City, Haifa 32000, Israel fzmark,baramg@cs.technion.ac.il Abstract.

More information

On-line Bin-Stretching. Yossi Azar y Oded Regev z. Abstract. We are given a sequence of items that can be packed into m unit size bins.

On-line Bin-Stretching. Yossi Azar y Oded Regev z. Abstract. We are given a sequence of items that can be packed into m unit size bins. On-line Bin-Stretching Yossi Azar y Oded Regev z Abstract We are given a sequence of items that can be packed into m unit size bins. In the classical bin packing problem we x the size of the bins and try

More information

Preface These notes were prepared on the occasion of giving a guest lecture in David Harel's class on Advanced Topics in Computability. David's reques

Preface These notes were prepared on the occasion of giving a guest lecture in David Harel's class on Advanced Topics in Computability. David's reques Two Lectures on Advanced Topics in Computability Oded Goldreich Department of Computer Science Weizmann Institute of Science Rehovot, Israel. oded@wisdom.weizmann.ac.il Spring 2002 Abstract This text consists

More information

University of California Department of Mechanical Engineering ECE230A/ME243A Linear Systems Fall 1999 (B. Bamieh ) Lecture 3: Simulation/Realization 1

University of California Department of Mechanical Engineering ECE230A/ME243A Linear Systems Fall 1999 (B. Bamieh ) Lecture 3: Simulation/Realization 1 University of alifornia Department of Mechanical Engineering EE/ME Linear Systems Fall 999 ( amieh ) Lecture : Simulation/Realization Given an nthorder statespace description of the form _x(t) f (x(t)

More information

Stochastic Optimization

Stochastic Optimization Chapter 27 Page 1 Stochastic Optimization Operations research has been particularly successful in two areas of decision analysis: (i) optimization of problems involving many variables when the outcome

More information

Introduction to Queuing Networks Solutions to Problem Sheet 3

Introduction to Queuing Networks Solutions to Problem Sheet 3 Introduction to Queuing Networks Solutions to Problem Sheet 3 1. (a) The state space is the whole numbers {, 1, 2,...}. The transition rates are q i,i+1 λ for all i and q i, for all i 1 since, when a bus

More information

Extracted from a working draft of Goldreich s FOUNDATIONS OF CRYPTOGRAPHY. See copyright notice.

Extracted from a working draft of Goldreich s FOUNDATIONS OF CRYPTOGRAPHY. See copyright notice. 106 CHAPTER 3. PSEUDORANDOM GENERATORS Using the ideas presented in the proofs of Propositions 3.5.3 and 3.5.9, one can show that if the n 3 -bit to l(n 3 ) + 1-bit function used in Construction 3.5.2

More information

MATH 56A: STOCHASTIC PROCESSES CHAPTER 6

MATH 56A: STOCHASTIC PROCESSES CHAPTER 6 MATH 56A: STOCHASTIC PROCESSES CHAPTER 6 6. Renewal Mathematically, renewal refers to a continuous time stochastic process with states,, 2,. N t {,, 2, 3, } so that you only have jumps from x to x + and

More information

A Queueing System with Queue Length Dependent Service Times, with Applications to Cell Discarding in ATM Networks

A Queueing System with Queue Length Dependent Service Times, with Applications to Cell Discarding in ATM Networks A Queueing System with Queue Length Dependent Service Times, with Applications to Cell Discarding in ATM Networks by Doo Il Choi, Charles Knessl and Charles Tier University of Illinois at Chicago 85 South

More information

Pattern Recognition Prof. P. S. Sastry Department of Electronics and Communication Engineering Indian Institute of Science, Bangalore

Pattern Recognition Prof. P. S. Sastry Department of Electronics and Communication Engineering Indian Institute of Science, Bangalore Pattern Recognition Prof. P. S. Sastry Department of Electronics and Communication Engineering Indian Institute of Science, Bangalore Lecture - 27 Multilayer Feedforward Neural networks with Sigmoidal

More information

Data analysis and stochastic modeling

Data analysis and stochastic modeling Data analysis and stochastic modeling Lecture 7 An introduction to queueing theory Guillaume Gravier guillaume.gravier@irisa.fr with a lot of help from Paul Jensen s course http://www.me.utexas.edu/ jensen/ormm/instruction/powerpoint/or_models_09/14_queuing.ppt

More information

CHAPTER 4. Networks of queues. 1. Open networks Suppose that we have a network of queues as given in Figure 4.1. Arrivals

CHAPTER 4. Networks of queues. 1. Open networks Suppose that we have a network of queues as given in Figure 4.1. Arrivals CHAPTER 4 Networks of queues. Open networks Suppose that we have a network of queues as given in Figure 4.. Arrivals Figure 4.. An open network can occur from outside of the network to any subset of nodes.

More information

SPN 2003 Preliminary Version. Translating Hybrid Petri Nets into Hybrid. Automata 1. Dipartimento di Informatica. Universita di Torino

SPN 2003 Preliminary Version. Translating Hybrid Petri Nets into Hybrid. Automata 1. Dipartimento di Informatica. Universita di Torino SPN 2003 Preliminary Version Translating Hybrid Petri Nets into Hybrid Automata 1 Marco Gribaudo 2 and Andras Horvath 3 Dipartimento di Informatica Universita di Torino Corso Svizzera 185, 10149 Torino,

More information

Q = (c) Assuming that Ricoh has been working continuously for 7 days, what is the probability that it will remain working at least 8 more days?

Q = (c) Assuming that Ricoh has been working continuously for 7 days, what is the probability that it will remain working at least 8 more days? IEOR 4106: Introduction to Operations Research: Stochastic Models Spring 2005, Professor Whitt, Second Midterm Exam Chapters 5-6 in Ross, Thursday, March 31, 11:00am-1:00pm Open Book: but only the Ross

More information

Completion Time in Dynamic PERT Networks 57 job are nished, as well as that the associated service station has processed the same activity of the prev

Completion Time in Dynamic PERT Networks 57 job are nished, as well as that the associated service station has processed the same activity of the prev Scientia Iranica, Vol. 14, No. 1, pp 56{63 c Sharif University of Technology, February 2007 Project Completion Time in Dynamic PERT Networks with Generating Projects A. Azaron 1 and M. Modarres In this

More information

Contents. 2.1 Vectors in R n. Linear Algebra (part 2) : Vector Spaces (by Evan Dummit, 2017, v. 2.50) 2 Vector Spaces

Contents. 2.1 Vectors in R n. Linear Algebra (part 2) : Vector Spaces (by Evan Dummit, 2017, v. 2.50) 2 Vector Spaces Linear Algebra (part 2) : Vector Spaces (by Evan Dummit, 2017, v 250) Contents 2 Vector Spaces 1 21 Vectors in R n 1 22 The Formal Denition of a Vector Space 4 23 Subspaces 6 24 Linear Combinations and

More information

On the static assignment to parallel servers

On the static assignment to parallel servers On the static assignment to parallel servers Ger Koole Vrije Universiteit Faculty of Mathematics and Computer Science De Boelelaan 1081a, 1081 HV Amsterdam The Netherlands Email: koole@cs.vu.nl, Url: www.cs.vu.nl/

More information

Chapter 1. Introduction. 1.1 Stochastic process

Chapter 1. Introduction. 1.1 Stochastic process Chapter 1 Introduction Process is a phenomenon that takes place in time. In many practical situations, the result of a process at any time may not be certain. Such a process is called a stochastic process.

More information

Discrete-event simulations

Discrete-event simulations Discrete-event simulations Lecturer: Dmitri A. Moltchanov E-mail: moltchan@cs.tut.fi http://www.cs.tut.fi/kurssit/elt-53606/ OUTLINE: Why do we need simulations? Step-by-step simulations; Classifications;

More information

Discrete Event Systems

Discrete Event Systems DI DIPARTIMENTO DI INGEGNERIA DELL INFORMAZIONE E SCIENZE MATEMATICHE Lecture notes of Discrete Event Systems Simone Paoletti Version 0.3 October 27, 2015 Indice Notation 1 Introduction 2 1 Basics of systems

More information

λ λ λ In-class problems

λ λ λ In-class problems In-class problems 1. Customers arrive at a single-service facility at a Poisson rate of 40 per hour. When two or fewer customers are present, a single attendant operates the facility, and the service time

More information

A general algorithm to compute the steady-state solution of product-form cooperating Markov chains

A general algorithm to compute the steady-state solution of product-form cooperating Markov chains A general algorithm to compute the steady-state solution of product-form cooperating Markov chains Università Ca Foscari di Venezia Dipartimento di Informatica Italy 2009 Presentation outline 1 Product-form

More information

CS261: A Second Course in Algorithms Lecture #11: Online Learning and the Multiplicative Weights Algorithm

CS261: A Second Course in Algorithms Lecture #11: Online Learning and the Multiplicative Weights Algorithm CS61: A Second Course in Algorithms Lecture #11: Online Learning and the Multiplicative Weights Algorithm Tim Roughgarden February 9, 016 1 Online Algorithms This lecture begins the third module of the

More information

Information in Aloha Networks

Information in Aloha Networks Achieving Proportional Fairness using Local Information in Aloha Networks Koushik Kar, Saswati Sarkar, Leandros Tassiulas Abstract We address the problem of attaining proportionally fair rates using Aloha

More information

2DI90 Probability & Statistics. 2DI90 Chapter 4 of MR

2DI90 Probability & Statistics. 2DI90 Chapter 4 of MR 2DI90 Probability & Statistics 2DI90 Chapter 4 of MR Recap - Random Variables ( 2.8 MR)! Example: X is the random variable corresponding to the temperature of the room at time t. x is the measured temperature

More information

Specification models and their analysis Petri Nets

Specification models and their analysis Petri Nets Specification models and their analysis Petri Nets Kai Lampka December 10, 2010 1 30 Part I Petri Nets Basics Petri Nets Introduction A Petri Net (PN) is a weighted(?), bipartite(?) digraph(?) invented

More information

Any live cell with less than 2 live neighbours dies. Any live cell with 2 or 3 live neighbours lives on to the next step.

Any live cell with less than 2 live neighbours dies. Any live cell with 2 or 3 live neighbours lives on to the next step. 2. Cellular automata, and the SIRS model In this Section we consider an important set of models used in computer simulations, which are called cellular automata (these are very similar to the so-called

More information

Upper and Lower Bounds on the Number of Faults. a System Can Withstand Without Repairs. Cambridge, MA 02139

Upper and Lower Bounds on the Number of Faults. a System Can Withstand Without Repairs. Cambridge, MA 02139 Upper and Lower Bounds on the Number of Faults a System Can Withstand Without Repairs Michel Goemans y Nancy Lynch z Isaac Saias x Laboratory for Computer Science Massachusetts Institute of Technology

More information

Linear Algebra (part 1) : Matrices and Systems of Linear Equations (by Evan Dummit, 2016, v. 2.02)

Linear Algebra (part 1) : Matrices and Systems of Linear Equations (by Evan Dummit, 2016, v. 2.02) Linear Algebra (part ) : Matrices and Systems of Linear Equations (by Evan Dummit, 206, v 202) Contents 2 Matrices and Systems of Linear Equations 2 Systems of Linear Equations 2 Elimination, Matrix Formulation

More information