Lecturer: Olga Galinina E-mail: olga.galinina@tut.fi

Outline:
- Motivation; modulated models.
- Continuous-time Markov models:
  - Markov modulated models;
  - batch Markovian arrival process;
  - Markovian arrival process;
  - Markov modulated Poisson process;
  - switched Markov modulated Poisson process.
- Discrete-time Markov models:
  - discrete Markov modulated models;
  - discrete-time batch Markovian arrival process;
  - discrete-time Markovian arrival process;
  - Markov modulated Bernoulli process;
  - switched Markov modulated Bernoulli process;
  - discrete-time switched Poisson process;
  - discrete-time switched Bernoulli process.
- Fitting parameters: example.

Why non-renewal models? Classic renewal processes are strictly stationary, completely uncorrelated, and have single arrivals; in general they are inadequate to capture traffic properties. Real traffic may not be strictly stationary, may be correlated, and multiple arrivals are allowed. Note: we have to be close to reality!

Modulated models. Basic concept: one process S(t) determines the state; states are associated with other processes X_i(t). Figure 1: Graphical representation of a general modulated process. What is important about this process: values of {S(t), t ∈ T} are not observable; values of {X_i(t), t ∈ T}, i = 1, 2, ..., M, are observable.

Modulated models. In general: no restrictions on the choice of {S(t), t ∈ T}; no restrictions on the choice of {X_i(t), t ∈ T}, i = 1, 2, ..., M. Why these processes are non-renewal: the value of the process depends on the state; in the general case there is autocorrelation. We are interested in the special case where the modulating process is Markovian and the processes in states are renewal. These are called Markov modulated processes.

Markov modulated processes. We distinguish between: the continuous-time model (interarrival times) and the discrete-time model (number of arrivals in slots). The classification is related to measurements: continuous measurements (interarrival times) are time-consuming; discrete measurements (number of arrivals in slots) are easier to do. Figure 2: Discrete and continuous measurements.

Continuous-time Markov models

Batch Markovian AP (BMAP). To define BMAP we start by considering the Poisson process, assuming: no arrivals occurred prior to t = 0; we consider it in terms of the number of arrivals prior to time t, N(t); the rate of the process is λ. {N(t), t ≥ 0}, N(t) ∈ {0, 1, ...} can be considered as a pure birth process: the states N(t) ∈ {0, 1, ...} denote the number of arrivals; the sojourn time in state i is exponential; after it expires the process jumps to state i + 1, resulting in exponential interarrival times. Figure 3: Interpretation of the Poisson arrival process as a pure birth process.
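As a quick sanity check, the pure-birth view can be simulated: stacking independent exponential sojourn times reproduces a Poisson arrival stream. A minimal sketch; the rate λ = 2 and the sample size are assumed for illustration only:

```python
import numpy as np

rng = np.random.default_rng(42)
lam = 2.0                                    # assumed birth (arrival) rate
n = 100_000
sojourns = rng.exponential(1 / lam, size=n)  # exponential sojourn in each state i
arrival_times = np.cumsum(sojourns)          # jump i -> i+1 at each arrival
interarrivals = np.diff(arrival_times)       # coincide with the sojourn times
assert abs(sojourns.mean() - 1 / lam) < 0.02
```

The interarrival times are exactly the sojourn times, which is the point of the pure-birth interpretation.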

Batch Markovian AP (BMAP). Infinitesimal generator of {N(t), t ≥ 0}:

Q = [ d0 d1 0  0  ... ]
    [ 0  d0 d1 0  ... ]
    [ 0  0  d0 d1 ... ]
    [ ...            ],   d1 = λ, d0 = −λ.

If the process is batch Poisson we have:

Q = [ d0 d1 d2 d3 ... ]
    [ 0  d0 d1 d2 ... ]
    [ 0  0  d0 d1 ... ]
    [ ...            ],   d_i = λ p_i, i = 1, 2, ...; d0 = −Σ_{i=1}^∞ d_i = −λ.

Here p_i is the probability that the size of the batch is i.
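The batch-Poisson generator above can be built numerically. A small sketch with an assumed geometric batch-size distribution p_k = (1 − p)p^(k−1) and a finite truncation of the infinite state space:

```python
import numpy as np

lam, p, N = 3.0, 0.4, 200                    # assumed rate, batch parameter, truncation
Q = np.zeros((N, N))
for i in range(N):
    Q[i, i] = -lam                           # d_0 = -lam on the diagonal
    for k in range(1, N - i):
        Q[i, i + k] = lam * (1 - p) * p ** (k - 1)   # d_k = lam * p_k
# rows far from the truncation boundary sum to ~0, as any generator must
assert abs(Q[0].sum()) < 1e-12
```

The truncation only affects the last rows; for row 0 the missing mass p^(N−1) is negligible.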

Batch Markovian AP (BMAP). The batch Markovian arrival process is an extension of the batch Poisson process: the interarrival times are no longer exponential, but the Markovian structure is still preserved. Consider a process {N(t), J(t), t ≥ 0}, N(t) ∈ {0, 1, ...}, J(t) ∈ {0, 1, ..., M}:

Q = [ D(0) D(1) D(2) D(3) ... ]
    [ 0    D(0) D(1) D(2) ... ]
    [ 0    0    D(0) D(1) ... ]
    [ ...                    ].

D(k), k = 0, 1, ... are M × M matrices; D(0) has negative diagonal and non-negative off-diagonal elements; D(k), k = 1, 2, ... are non-negative; D = Σ_{i=0}^∞ D(i).

Batch Markovian AP (BMAP). Important notes about BMAP: it is the most general analytically tractable continuous-time Markov modulated process; it allows an arbitrary distribution of the interarrival times; its ACF is a sum of exponential terms. Constructive interpretation of BMAP: a continuous-time Markov chain {S(t), t ∈ R}, S(t) ∈ {1, 2, ..., M}: the sojourn time in state i is exponentially distributed with parameter λ_i, i = 1, 2, ..., M; when the MC changes its state from i to j, a batch of arrivals is generated. Denote the probabilities of transitions with k arrivals by p_ij(k), i, j = 1, 2, ..., M; we assume p_ii(0) = 0, i = 1, 2, ..., M.

Batch Markovian AP (BMAP). How to characterize BMAP: the number of states, M; the intensities out of the states, λ_i, i = 1, 2, ..., M; the probabilities p_ij(k), i, j = 1, 2, ..., M, k = 0, 1, ... How to have this information in compact form: matrices D(k), k = 1, 2, ..., where each element is

d_ij(k) = λ_i p_ij(k), i, j = 1, 2, ..., M, k = 1, 2, ...,

and matrix D(0), each element of which is

d_ij = λ_i p_ij(0), i ≠ j;   d_ij = −λ_i, i = j,

where for the infinitesimal generator D the following holds: D = Σ_{i=0}^∞ D(i).

Batch Markovian AP (BMAP). Matrices D(0), D(1), ..., D(k), ... take the form:

D(0) = [ −λ1       p12(0)λ1  p13(0)λ1  ...  p1M(0)λ1 ]
       [ p21(0)λ2  −λ2       p23(0)λ2  ...  p2M(0)λ2 ]
       [ ...       ...       ...       ...  ...      ]
       [ pM1(0)λM  pM2(0)λM  pM3(0)λM  ...  −λM      ],

D(1) = [ p11(1)λ1  p12(1)λ1  p13(1)λ1  ...  p1M(1)λ1 ]
       [ p21(1)λ2  p22(1)λ2  p23(1)λ2  ...  p2M(1)λ2 ]
       [ ...       ...       ...       ...  ...      ]
       [ pM1(1)λM  pM2(1)λM  pM3(1)λM  ...  pMM(1)λM ],

D(k) = [ p11(k)λ1  p12(k)λ1  p13(k)λ1  ...  p1M(k)λ1 ]
       [ p21(k)λ2  p22(k)λ2  p23(k)λ2  ...  p2M(k)λ2 ]
       [ ...       ...       ...       ...  ...      ]
       [ pM1(k)λM  pM2(k)λM  pM3(k)λM  ...  pMM(k)λM ].

Markovian arrival process: MAP. MAP is a special case of BMAP: only single arrivals are allowed. The MAP process {W(t), t ∈ R} is defined by:

d_ij(1) = λ_i p_ij(1), i, j = 1, 2, ..., M;
d_ij(0) = λ_i p_ij(0), i ≠ j;   d_ij(0) = −λ_i, i = j;
D = D(0) + D(1).

The mean is given by:

E[W] = 1/λ = π (−D(0))^(−1) e,

where π is the stationary phase distribution seen at arrival instants and λ is the overall intensity of the process.
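The mean formula can be checked numerically. The sketch below uses a hypothetical 2-state MAP (in fact an MMPP, so D(1) is diagonal), solves πD = 0, πe = 1 for the time-stationary vector, and evaluates the mean interarrival time through the arrival-instant phase vector:

```python
import numpy as np

# hypothetical 2-state MAP (an MMPP here, so D(1) is diagonal)
D0 = np.array([[-2.0, 1.0], [1.0, -4.0]])
D1 = np.array([[1.0, 0.0], [0.0, 3.0]])
D = D0 + D1                                  # generator of the modulating chain
A = np.vstack([D.T, np.ones(2)])             # pi D = 0 together with pi e = 1
pi = np.linalg.lstsq(A, np.array([0.0, 0.0, 1.0]), rcond=None)[0]
lam = pi @ D1 @ np.ones(2)                   # overall intensity
phi = pi @ D1 / lam                          # phase distribution at arrival instants
EW = phi @ np.linalg.inv(-D0) @ np.ones(2)   # mean interarrival time
assert abs(lam - 2.0) < 1e-9
assert abs(EW - 1.0 / lam) < 1e-9
```

For this example π = (0.5, 0.5), λ = 2 and E[W] = 0.5 = 1/λ, as the formula promises.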

Markov modulated Poisson process: MMPP. MMPP is a special case of MAP: arrivals are generated only on transitions from i to i, i = 1, 2, ..., M (the state does not change at arrivals). The MMPP process is defined by:

d_ij(1) = λ_i, i = j;   d_ij(1) = 0, i ≠ j.

In matrix form:

D(0) = D − Λ, D(1) = Λ, D(k) = 0, k ≥ 2,

where Λ = diag(λ1, λ2, ..., λM) = diag(λ); λ_i are the rates of the Poisson process in states i = 1, 2, ..., M. A full description is then given by (λ, D).

Markov modulated Poisson process: MMPP. Characteristics of MMPP: the steady-state vector π = (π1, π2, ..., πM) of the modulating CTMC satisfies:

π D = 0, π e = 1;

the mean arrival rate is given by:

λ = π Σ_{k=1}^∞ k D(k) e = π D(1) e = π Λ e = π λ^T;

if the CTMC is irreducible, the autocovariance function is given by:

C_W(τ) = λ δ(τ) + φ0 + Σ_{i=1}^{N−1} φ_i e^{γ_i τ},   δ(τ) = 1 for τ = 0, δ(τ) = 0 for τ ≠ 0,

where γ_i, i = 0, 1, ..., N − 1 are the N eigenvalues of the CTMC generator, with γ0 = 0; for an irreducible CTMC the eigenvalues here are real.
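The decay rates γ_i in the autocovariance are just the eigenvalues of the modulating generator, the largest always being γ0 = 0. A minimal check with an assumed symmetric 2-state generator:

```python
import numpy as np

Q = np.array([[-1.0, 1.0], [1.0, -1.0]])     # assumed 2-state modulating generator
gam = np.sort(np.linalg.eigvals(Q).real)
assert abs(gam[-1]) < 1e-12                  # gamma_0 = 0: generator rows sum to zero
assert abs(gam[0] + 2.0) < 1e-12             # the non-zero eigenvalue sets the ACF decay
```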

Markov modulated Poisson process: MMPP. Distributions of the MMPP: the PF of the number of arrivals is a weighted sum of Poisson distributions:

p_k = Pr{W(t) = k} = Σ_{i=1}^N π_i λ_i^k e^{−λ_i} / k!, k = 0, 1, ...,

where π_i is the steady-state probability that the CTMC is in state i; the PDF of interarrival times is a weighted sum of exponentials:

p_W(w) = Σ_{i=1}^N π_i λ_i e^{−λ_i w};

this is known as the hyperexponential distribution; recall that for it C > 1, with C = 1 only in a limiting case. What we can model with MMPP: empirical distributions with high variability (C > 1 provides a simple check of MMPP suitability!); ACFs exhibiting (a sum of) exponential decay.
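The C > 1 suitability check can be evaluated directly from the mixture moments. A small sketch with assumed weights and rates:

```python
import numpy as np

pi_ = np.array([0.3, 0.7])                   # assumed state probabilities
rates = np.array([0.5, 4.0])                 # assumed per-state rates
m1 = np.sum(pi_ / rates)                     # E[X] of the hyperexponential mixture
m2 = np.sum(2.0 * pi_ / rates**2)            # E[X^2]
c2 = m2 / m1**2 - 1.0                        # squared coefficient of variation
assert c2 > 1.0                              # hyperexponential: C >= 1 always
```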

Switched Markov modulated Poisson process: SMMPP. SMMPP is a special case of MMPP: the modulating Markov chain has only 2 states. SMMPP is given by:

Λ = [ λ1 0  ]        D = [ −r1  r1 ]
    [ 0  λ2 ],           [ r2  −r2 ],        D(0) = D − Λ.

The state probabilities of the modulating CTMC seen at arrival instants are:

π1 = λ1 r2 / (λ1 r2 + λ2 r1),   π2 = λ2 r1 / (λ1 r2 + λ2 r1);

π is the vector containing these probabilities.

Switched Markov modulated Poisson process: SMMPP. The CDF F(x) = Pr{X ≤ x} of the interarrival times is given by:

F(x) = 1 − π e^{(D−Λ)x} (Λ − D)^{−1} Λ e = 1 − q e^{−u1 x} − (1 − q) e^{−u2 x},   e = (1, 1)^T, 0 < q < 1.

The probability density function (PDF) is:

f(x) = q u1 e^{−u1 x} + (1 − q) u2 e^{−u2 x},   0 < q < 1.

The autocorrelation function is given by:

K_X(k) = E[(X1 − E[X1])(X1+k − E[X1+k])]
       = π (Λ − D)^{−1} ((Λ − D)^{−1} Λ)^k (Λ − D)^{−1} e − (π (Λ − D)^{−1} e)^2 = A σ^k, k = 1, 2, ...,

where A and σ are given by:

A = (λ1 − λ2)^2 r1 r2 / ( (λ1 r2 + λ2 r1)^2 (λ1 λ2 + λ1 r2 + λ2 r1) ),
σ = λ1 λ2 / (λ1 λ2 + λ1 r2 + λ2 r1).

Switched Markov modulated Poisson process: SMMPP. What is important about SMMPP: the distribution is hyperexponential; the ACF decays exponentially. What we may capture by SMMPP: interarrival distributions with monotone decreasing behavior and coefficient of variation C > 1:

C^2 = σ^2[X] / (E[X])^2 = 2 (q/u1^2 + (1 − q)/u2^2) / (q/u1 + (1 − q)/u2)^2 − 1 ≥ 1;

and exponentially decaying ACFs only!

Switched Markov modulated Poisson process: SMMPP. Figure 7: Possible distributions of SMMPP.

Switched Markov modulated Poisson process: SMMPP. Figure 8: Possible normalized ACFs of SMMPP.

Discrete Markov modulated models

Discrete Markov modulated models. What is special about such models: the modulating process is discrete Markov in nature; the transition probability matrix has the form:

Q = [ d11 d12 ... d1M ]
    [ d21 d22 ... d2M ]
    [ ...             ]
    [ dM1 dM2 ... dMM ],   Σ_{i=1}^M d_ji = 1, j = 1, 2, ..., M.

What is interesting: such models are usually easier to deal with; they may have an arbitrary distribution; they may have a complex ACF structure: a sum of geometric terms.

Discrete Markov modulated models. Discrete processes: time is divided into intervals (slotted); the durations of the intervals are the same, Δt; some arrivals may occur in each interval. Figure 11: Illustration of discreteness of the arrival process. Note the following: discrete processes can be interpreted as approximations of real arrivals, and sometimes this is the natural way; we are going to work with the number-of-arrivals representation; the interval representation is also possible.

Discrete-time batch Markovian arrival process. Basic characteristics: the most general analytically tractable discrete Markov modulated process; allows an arbitrary distribution of the number of arrivals in a slot; ACF: a sum of geometric terms. Assume the time axis is slotted and the slot duration is constant, Δt = t_{i+1} − t_i; consider a discrete-time homogeneous aperiodic, irreducible MC {S(n), n = 0, 1, ...} with state space S(n) ∈ {1, 2, ..., M} and transition probability matrix D. {W(n), n = 0, 1, ...} is a D-BMAP with modulating MC {S(n), n = 0, 1, ...} if the value of {W(n), n = 0, 1, ...} is a function of the current state of {S(n), n = 0, 1, ...}.

Discrete-time batch Markovian arrival process. How to completely define a D-BMAP: matrices D(k), k = 0, 1, ...: state change from i to j, i, j = 1, 2, ..., M; arrival of k customers. Example: d_ij(0): transition from state i to state j without any arrivals; d_ij(k): transition from state i to state j with a batch arrival of size k. In general, for d_ij(k) we have:

d_ij(k) = Pr{W(n) = k, S(n) = j | S(n − 1) = i}, k = 0, 1, ...

Note the following: for a pair (i, j), the d_ij(k), k = 0, 1, ... are called conditional probability functions:

Σ_{j=1}^M Σ_{k=0}^∞ d_ij(k) = 1, i = 1, 2, ..., M.

For different pairs (i, j) the d_ij(k) are allowed to be different.

Discrete-time batch Markovian arrival process. Figure 12: Illustration of the D-BMAP.

Discrete-time batch Markovian arrival process. Let π = (π1, π2, ..., πM) be the vector of stationary probabilities of {S(n)}:

π_i = lim_{n→∞} π_i(n), i = 1, 2, ..., M.

We can find π = (π1, π2, ..., πM) using:

π D = π, π e = 1.

The easiest way to compute it: take D^i for a large i (> 1000) and take any row. Example of how to compute π:

D = [ 0.2 0.8 ]        D^1000 = [ 0.384615 0.615385 ]
    [ 0.5 0.5 ],                [ 0.384615 0.615385 ].

Hence, π = (0.384615, 0.615385).
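The D^i recipe is a one-liner with matrix powers; the sketch below reproduces the numbers of the example:

```python
import numpy as np

# transition matrix of the modulating chain from the example above
D = np.array([[0.2, 0.8],
              [0.5, 0.5]])
pi = np.linalg.matrix_power(D, 1000)[0]      # any row of D^1000
assert np.allclose(pi, [0.384615, 0.615385], atol=1e-6)
# cross-check: pi solves pi D = pi with pi e = 1
assert np.allclose(pi @ D, pi) and abs(pi.sum() - 1.0) < 1e-12
```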

Discrete-time batch Markovian arrival process. Using π, the mean arrival rate per slot is:

E[W] = π Σ_{k=1}^∞ k D(k) e,   e = (1, ..., 1)^T.

The variance of the D-BMAP is:

D[W] = R_W(0) = π Σ_{k=1}^∞ k^2 D(k) e − (E[W])^2.

Let R_W(i), i = 0, 1, ... be the ACF of the D-BMAP:

R_W(i) = E[W(n) W(n + i)] − (E[W])^2, i ≥ 0;   R_W(0) = D[W].

The ACF of the D-BMAP is:

R_W(i) = π (Σ_{k=1}^∞ k D(k)) D^{i−1} (Σ_{k=1}^∞ k D(k)) e − (E[W])^2, i > 0.
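The moment formulas are easy to evaluate for a toy D-BMAP; the matrices below are assumed for illustration (note that D = ΣD(k) must be stochastic):

```python
import numpy as np

# hypothetical 2-state D-BMAP with batch sizes k = 0, 1, 2
D0 = np.array([[0.2, 0.1], [0.1, 0.1]])
D1 = np.array([[0.1, 0.3], [0.2, 0.2]])
D2 = np.array([[0.2, 0.1], [0.2, 0.2]])
D = D0 + D1 + D2
assert np.allclose(D.sum(axis=1), 1.0)       # D must be stochastic
pi = np.linalg.matrix_power(D, 1000)[0]      # stationary vector
e = np.ones(2)
M1 = 1 * D1 + 2 * D2                         # sum_k k   D(k)
M2 = 1 * D1 + 4 * D2                         # sum_k k^2 D(k)
EW = pi @ M1 @ e                             # mean arrivals per slot
DW = pi @ M2 @ e - EW ** 2                   # variance per slot
assert abs(EW - 1.1) < 1e-9 and abs(DW - 0.59) < 1e-9
```

Here D happens to have identical rows, so the chain forgets its state in one step and R_W(i) = 0 for i ≥ 1.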

Discrete-time batch Markovian arrival process. The mean process of the D-BMAP is {G(n), n = 0, 1, ...} with G(n) = G_i when S(n) = i:

G_i = Σ_{j=1}^M Σ_{k=1}^∞ k d_ij(k), i = 1, 2, ..., M;

G = (G1, G2, ..., GM) is the mean vector of the D-BMAP. The mean input rate per slot is given by:

E[G] = Σ_{i=1}^M π_i G_i = E[W].

The variance of the mean process of the D-BMAP is given by:

D[G] = R_G(0) = Σ_{i=1}^M π_i G_i^2 − (E[G])^2.

The ACF of the mean process is given by:

R_G(i) = π (Σ_{k=1}^∞ k D(k)) D^{i−1} (Σ_{k=1}^∞ k D(k)) e − (E[G])^2, i > 0.

Discrete-time batch Markovian arrival process. Advantages of using D-BMAP: quite a general process; analytically tractable. Shortcomings of using D-BMAP: really hard to parameterize: we have to estimate the matrices D(k), k = 0, 1, ..., that is, M × M × k_max parameters. We usually use a special case of D-BMAP: D-MAP: discrete-time Markovian arrival process; D-MMBP: discrete-time Markov modulated Bernoulli process; switched D-MMBP; D-SPP: discrete-time switched Poisson process; D-SBP: discrete-time switched Bernoulli process.

Discrete-time Markovian arrival process. D-MAP is a special case of D-BMAP: only a single arrival in a slot is possible, so D(0) and D(1) only! The mean is given by:

E[W] = π D(1) e.

The variance of D-MAP is:

D[W] = R_W(0) = π D(1) e − (E[W])^2.

The ACF of D-MAP is:

R_W(i) = π D(1) D^{i−1} D(1) e − (E[W])^2, i > 0.

Note the following: D-MAP reduces the versatility of D-BMAP in terms of batch arrivals; D-MAP still has different conditional PFs for each pair of states (i, j)!

Markov modulated Bernoulli process. MMBP is a special case of D-BMAP: the conditional PFs depend on the current state only. Recall that for D-BMAP we had:

d_ij(k) = Pr{W(n) = k, S(n) = j | S(n − 1) = i}, k = 0, 1, ..., i, j = 1, 2, ..., M;

in general, d_ij(k) ≠ d_il(k), j ≠ l: too many conditional PFs to determine, and these conditional PFs depend on the pair of states (i, j). For MMBP we have:

d_ij(k) = d_il(k), j ≠ l;

in all, we only have M conditional PFs; now it does not matter to which state the transition occurs; the ACF, mean and variance can be obtained using the same expressions as for D-BMAP.

Markov modulated Bernoulli process. When it does not matter to which state the transition occurs, we may define:

a_i(k) = Σ_{j=1}^M Pr{W(n) = k, S(n) = j | S(n − 1) = i}, k = 0, 1, ..., i = 1, 2, ..., M;

a_i(k), k = 0, 1, ..., i = 1, 2, ..., M are the conditional PFs of arrivals; these conditional PFs depend on the state from which the transition occurs. For a_i(k), k = 0, 1, ..., i = 1, 2, ..., M:

Σ_{k=0}^∞ a_i(k) = 1, i = 1, 2, ..., M.

Switched D-MMBP. A special case of D-MMBP: only two states of the modulating MC. Assume that the transition probability matrix of the MMC has the following form:

D = [ 1 − α  α     ]
    [ β      1 − β ].

The steady-state distribution expressed in terms of α and β:

π1 = β / (α + β),   π2 = α / (α + β).

The ACF of the mean process is:

R_G(i) = D[G] λ^i, i = 0, 1, ...;

D[G] is the variance of the mean process; λ is the single non-unit eigenvalue of the MMC (the MMC has only two states): λ0 = 1, 0 ≤ λ1 < 1.

Switched D-MMBP. We can express the ACF of the mean process in terms of G1 and G2:

R_G(i) = (αβ / (α + β)^2) (G1 − G2)^2 (1 − α − β)^i = D[G] (1 − α − β)^i, i = 0, 1, ...;

λ = (1 − α − β); G1 and G2 are the means in states 1 and 2, respectively. The normalized ACF of the mean process is then:

K_G(i) = R_G(i) / D[G] = λ^i, i = 0, 1, ...

Note! We have no simple relation between R_G(i) and R_W(i), except for:

R_W(i) = R_G(i), i = 1, 2, ...,   R_W(0) = R_G(0) + x

(when the conditional PFs are close to Poisson, x is close to E[W]). The process may produce a fair approximation of a geometrically decaying ACF; the conditional PFs are allowed to be arbitrary.
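The geometric form of R_G(i) can be verified against a direct covariance computation; α, β, G1, G2 below are assumed values:

```python
import numpy as np

alpha, beta = 0.3, 0.1                       # assumed switching probabilities
G1, G2 = 5.0, 1.0                            # assumed means in the two states
P = np.array([[1 - alpha, alpha], [beta, 1 - beta]])
pi = np.array([beta, alpha]) / (alpha + beta)
G = np.array([G1, G2])
mean = pi @ G
lam = 1 - alpha - beta                       # non-unit eigenvalue of P
DG = pi[0] * pi[1] * (G1 - G2) ** 2          # = alpha*beta*(G1-G2)^2/(alpha+beta)^2
for i in range(5):
    # direct covariance of the mean process at lag i
    cov = (pi * G) @ np.linalg.matrix_power(P, i) @ G - mean ** 2
    assert abs(cov - DG * lam ** i) < 1e-12
```

The loop confirms R_G(i) = D[G] λ^i exactly for a two-state modulating chain.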

Discrete-time switched Poisson process. A special case of switched D-MMBP: the conditional PFs are no longer arbitrary; in this particular case they are Poisson. D-SPP:

R_W(i) = R_G(i) + E[W] δ_i,   δ_i = 1 for i = 0; δ_i = 0 for i = 1, 2, ...;

E[W] = E[G] = π1 G1 + π2 G2

is the mean of the SPP. Note! The ACF has geometric decay; the distribution is a mixture of two Poisson distributions (not Poisson).
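The relation R_W(0) = R_G(0) + E[W] is just the law of total variance for conditionally Poisson arrivals (Var[W] = E[Var[W|S]] + Var[E[W|S]] = E[G] + D[G]). A minimal check with assumed parameters:

```python
import numpy as np

# D-SPP: Poisson arrivals with mean G_i in state i (assumed numbers)
alpha, beta = 0.3, 0.1
G = np.array([5.0, 1.0])
pi = np.array([beta, alpha]) / (alpha + beta)
EW = pi @ G                                  # E[W] = E[G]
DG = pi @ G**2 - EW**2                       # variance of the mean process
EW2 = pi @ (G + G**2)                        # E[W^2 | S=i] = G_i + G_i^2 for Poisson
DW = EW2 - EW**2                             # total variance of W
assert abs(DW - (DG + EW)) < 1e-12           # matches R_W(0) = R_G(0) + E[W]
```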

Discrete-time switched Poisson process. Figure 13: Possible behavior of the distribution of D-SPP.

Discrete-time switched Poisson process. What is required to parameterize D-SPP: the transition probability matrix of the MMC (α and β) and the means in states 1 and 2 (G1 and G2); recall that the mean completely determines a Poisson distribution. A special case of D-SPP is the interrupted D-SPP: the mean in state 1 is zero (no arrivals); the mean in state 2 is not zero. Characteristics of the interrupted D-SPP: the ACF still has geometric decay; the distribution is Poisson.

Discrete-time switched Bernoulli process. A special case of switched D-MMBP: the conditional PFs are no longer arbitrary; in this particular case they are Bernoulli: only a single parameter in each state, the probability of arrival:

d_ij(1) = Pr{W(n) = 1, S(n) = j | S(n − 1) = i} = p_i, i, j = 1, 2;

note that the conditional PFs depend only on the current state. D-SBP: setting p1 = 1, p2 = 0 (or vice versa):

R_W(i) = R_G(i), i = 0, 1, ...

(you can check it by inserting the mean and variance into the expression for the ACF of D-BMAP); when both p1 and p2 are non-zero this property does not hold.

Fitting parameters: example

Fitting parameters. Switched Markov modulated Poisson process: SMMPP. The parameters of the PDF can be found from λ1, λ2, r1, r2:

f(x) = q u1 e^{−u1 x} + (1 − q) u2 e^{−u2 x}, 0 < q < 1;

u1 = (λ1 + λ2 + r1 + r2 + δ)/2,   u2 = (λ1 + λ2 + r1 + r2 − δ)/2,

q = u1 (λ1 r2 + λ2 r1 − u2 (r1 + r2)) / ((u1 − u2)(λ1 r2 + λ2 r1)),

where δ can be found as follows:

δ = sqrt( (λ1 − λ2 + r1 − r2)^2 + 4 r1 r2 ).

Recall: the parameters A and σ of the ACF can also be found from λ1, λ2, r1, r2:

K_X(k) = A σ^k, k = 1, 2, ...,

A = (λ1 − λ2)^2 r1 r2 / ( (λ1 r2 + λ2 r1)^2 (λ1 λ2 + λ1 r2 + λ2 r1) ),
σ = λ1 λ2 / (λ1 λ2 + λ1 r2 + λ2 r1).

Fitting parameters. SMMPP is completely defined by λ1, λ2, r1, r2; the histogram and ACF of the data are completely defined by u1, u2, σ and q; there is a unique SMMPP capturing the empirical data. Algorithm: estimate u1, u2, q from the empirical PDF; estimate σ from the empirical ACF; find λ1, λ2, r1, r2 using the following:

λ1 = (1/2) [ q(1 − σ)(u1 − u2) + σ u1 + u2 + sqrt( (q(1 − σ)(u1 − u2) + σ u1 + u2)^2 − 4 σ u1 u2 ) ],

λ2 = σ u1 u2 / λ1,

r1 = (u1 − λ1)(u2 − λ1) / (λ2 − λ1),   r2 = (u1 − λ2)(u2 − λ2) / (λ1 − λ2).
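The inversion formulas can be validated by a round trip: start from assumed SMMPP parameters, compute (u1, u2, q, σ) by the forward formulas, then recover λ1, λ2, r1, r2:

```python
import numpy as np

# forward map: assumed SMMPP parameters -> (u1, u2, q, sigma)
l1, l2, r1, r2 = 1.0, 0.2, 0.05, 0.1
delta = np.sqrt((l1 - l2 + r1 - r2) ** 2 + 4 * r1 * r2)
u1 = 0.5 * (l1 + l2 + r1 + r2 + delta)
u2 = 0.5 * (l1 + l2 + r1 + r2 - delta)
q = u1 * (l1 * r2 + l2 * r1 - u2 * (r1 + r2)) / ((u1 - u2) * (l1 * r2 + l2 * r1))
sigma = l1 * l2 / (l1 * l2 + l1 * r2 + l2 * r1)

# inverse map (the fitting step) must recover the original parameters
a = q * (1 - sigma) * (u1 - u2) + sigma * u1 + u2
l1_hat = 0.5 * (a + np.sqrt(a ** 2 - 4 * sigma * u1 * u2))
l2_hat = sigma * u1 * u2 / l1_hat
r1_hat = (u1 - l1_hat) * (u2 - l1_hat) / (l2_hat - l1_hat)
r2_hat = (u1 - l2_hat) * (u2 - l2_hat) / (l1_hat - l2_hat)
assert np.allclose([l1_hat, l2_hat, r1_hat, r2_hat], [l1, l2, r1, r2])
```

The round trip is exact because λ1 + λ2 = q(1 − σ)(u1 − u2) + σu1 + u2 and λ1 λ2 = σ u1 u2, so λ1, λ2 are the roots of the quadratic under the square root.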

Fitting two first moments and NACF. Determine u1, u2, q as follows: take a hyperexponential distribution with balanced means, q/u1 = (1 − q)/u2; the probability q can be found as:

q = (1/2) [ 1 + sqrt( (C^2 − 1)/(C^2 + 1) ) ];

the rates u1 and u2 are given by:

u1 = 2q / E[Y],   u2 = 2(1 − q) / E[Y].

Determine σ by setting σ = K_Y(1) or by minimizing:

γ = Σ_{m=1}^{m0} ( K_Y(m) − σ^m )^2,

where K_Y(i), i = 1, 2, ... is the normalized ACF of the empirical data and m0 is the largest lag for which K_Y(m0) is still noticeably different from 0.
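A sketch of the balanced-means fit: for assumed empirical moments E[Y] and C, the resulting H2(q, u1, u2) reproduces both moments exactly:

```python
import numpy as np

mean_y, c = 2.0, 1.5                         # assumed empirical E[Y] and C
q = 0.5 * (1 + np.sqrt((c**2 - 1) / (c**2 + 1)))
u1, u2 = 2 * q / mean_y, 2 * (1 - q) / mean_y
# moments of the fitted hyperexponential distribution
m1 = q / u1 + (1 - q) / u2                   # E[Y] of the fit
m2 = 2 * q / u1**2 + 2 * (1 - q) / u2**2     # E[Y^2] of the fit
assert abs(m1 - mean_y) < 1e-12
assert abs(np.sqrt(m2 - m1**2) / m1 - c) < 1e-12
```

The balanced-means constraint q/u1 = (1 − q)/u2 is what removes the extra degree of freedom, so the two moments determine (q, u1, u2) uniquely.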

Example of fitting. Interarrival times and corresponding statistics. Figure 9: Trace, histogram and NACF of interarrival times.

Example of fitting. Moments of the data: E[Y] = 2.227, σ^2[Y] = 4.978, C^2[Y] = 1.004. Conclusions about the data: the data look exponential, since C^2[Y] ≈ 1; we can still use a hyperexponential distribution to approximate the exponential. Using the empirical moments, estimate the parameters of H2(q, u1, u2) as: q = 0.53, u1 = 0.476, u2 = 0.42. Using the empirical NACF, estimate σ as σ = K(1) = 0.38. Determine the parameters of the SMMPP λ1, λ2, r1, r2 as: λ1 = 0.452, λ2 = 0.170, r1 = 2.557E−3, r2 = 0.02.

Example of fitting. Figure 10: Visual comparison of the empirical characteristics and the characteristics of the model.