FAKULTÄT FÜR INFORMATIK
der Technischen Universität München
Lehrstuhl VIII Rechnerstruktur/-architektur
Prof. Dr. E. Jessen

Modeling of Packet Arrivals Using Markov Modulated Poisson Processes with Power-Tail Bursts

Diplomarbeit
Hans-Peter Schwefel

Aufgabensteller: Prof. Dr.-Ing. Eike Jessen
Betreuer: Prof. Lester Lipsky, Ph.D.; Dr. Michael Greiner
Abgabedatum:

Department of Computer Science, University of Connecticut

Contents

1 Introduction
2 Background
  2.1 Definitions
  2.2 Matrix Exponential Distributions
  2.3 LAQT Queueing Models
  2.4 Power-Tail Distributions
  2.5 Order Statistics
  2.6 Self-Similar Behavior and Long-Range Correlation
3 Arrival Processes with Long-Range Correlation
  3.1 Merging of Power-Tail and Poisson Process
  3.2 A Single Source On/Off-Model: 1-Burst
  3.3 N-Burst Process (Multiple Sources)
    3.3.1 Description of the Process
    3.3.2 The LAQT-Matrices
    3.3.3 Calculation of the Mean
    3.3.4 The Steady-State Process at Departure Times
    3.3.5 Reducing the State Space
    3.3.6 Entrance Vector p
  3.4 Autocorrelation Coefficient Lag-k
  3.5 Estimate of the α-parameter for Real Network
  3.6 Parameter Influence for 2-Burst
4 SM/M/1-Queue with Arrivals from the N-Burst Process
  4.1 Mean Queue-Length
  4.2 Buffer Overflow Probabilities
  4.3 Comparison to the Poisson-Zeta Process of [Fan & Georganas, 1996]
5 Simulations
  5.1 Generation of Random Variables
    5.1.1 Pareto Distributed Random Variable
    5.1.2 TPT-T Distributed Random Variable
    5.1.3 Generating Interarrival Times of the SM-Processes
  5.2 Simulated Autocorrelation Curves
    5.2.1 Calculation of the Autocorrelation Coefficient
    5.2.2 MERGE-Process: Varying Results and Natural Truncation
    5.2.3 N-Burst Process
  5.3 Simulation of the Counting Process
    5.3.1 Power-Tail Renewal Process
    5.3.2 Merged Process
    5.3.3 N-Burst Process
  5.4 GI/M/1-Queue Simulations
6 Comparison with Real Data
  6.1 Ethernet Data
  6.2 TCP Data
7 Summary
Bibliography

Chapter 1

Introduction

Within the last decade, the volume of traffic on various computer networks has grown immensely. Especially for areas like network design, development of tariffing schemes, and application design, a statistical model for the network traffic is essential. As more and more research was going on in that field, it turned out that classical (Poisson) models are not able to reproduce the effects of traffic on large scale networks: [Leland et al., 1994] observed large fluctuations in the number of packets on a local Ethernet. In classical models these fluctuations are expected to occur when measuring over small time intervals, but they should smooth out quickly with increasing interval size. However, the measurements showed these fluctuations over a wide range of time scales. That effect is called high variability or self-similar behavior (also, fractal behavior). Another effect, called long-range autocorrelation, is connected with the self-similarity and will be described in detail later on.

The high variability could be taken as an indication of inhomogeneity. However, we will come up with a homogeneous model that captures that effect by using special distributions, so-called Power-Tail distributions. There is increasing evidence that these kinds of distributions are involved in related fields: In 1986, [Leland & Ott, 1986] observed that the CPU times for jobs at BELLCORE had a power-tail distribution over 5 orders of magnitude. Furthermore, [Garg et al., 1992] showed that the distribution of file sizes at the local UNIX system at the University of Connecticut also had a power-tail. Most recently, [Crovella et al., 1996] measured the sizes of files sent to Boston University over the Internet and found power-tail behavior as well.

Motivated by these findings, we have come up with a mathematically tractable model, called N-Burst, that uses power-tail distributions to get the aforementioned characteristics. Similar models were discussed by [Willinger et al., 1995] and [Fan & Georganas, 1996]. However, they do not give exact analytical solutions to the same extent as we can give for our model. The analytical tools are supplied by the Linear Algebraic Queueing Theory (LAQT) brought up by [Neuts, 1981] and [Lipsky, 1992]. These methods were further developed by [Fiorini & Lipsky, 1996], especially for the treatment of Semi-Markov Processes.

The analytical sections of this work use MATLAB for the calculations and either MATLAB or GNUPLOT to produce the graphs. The simulations are done by C-programs running on a Sun Sparc workstation with SUN OS.

Notational conventions: Throughout this work matrices will be symbolized by bold-faced capital letters (e.g. A). Bold-faced, lower-case letters (e.g. p) stand for row vectors. The symbol ' expresses the transpose of a matrix or a vector. I is the unit matrix, and $\varepsilon'$ is the vector with all components being 1. The dimensions of both of these are determined by the context.

Abbreviations:
CDF       cumulative distribution function
CLT       central limit theorem
IA-times  inter-arrival times
iid       independent, identically distributed
LAQT      linear algebraic queueing theory
pdf       probability density function
PT        Power-Tail
SM        Semi-Markov
TPT       truncated power-tail

Chapter 2

Background

2.1 Definitions

The center of our interest will be fixed-size packet traffic (such as it appears in ATM networks). Therefore the traffic on the line is sufficiently described by the inter-arrival times (IA-times) $T_1, \ldots, T_n$ between the single packets. In our models these $T_i$ will be random variables. The process $\{T_i\}$ is called the Inter-arrival Process. If the $T_i$ are iid, then the process is a renewal process. However, as we will see later, real network traffic has characteristics that do not allow independence.

Another way of looking at the traffic is by counting the number of packet arrivals in some interval of length $\Delta$. Let $N_\ell(\Delta)$ be the random variable denoting the number of arrivals in the $\ell$-th interval. Then the sequence $\{N_\ell(\Delta)\}$ is the Counting Process associated with the Inter-arrival Process $\{T_i\}$:

  $N_\ell(\Delta) = \#\{k : \ell\Delta < \sum_{i=1}^{k} T_i \le (\ell+1)\Delta\}$.

The following will give some well-known definitions about random variables, since we will make ample use of them. In all cases the time variables are in the range $0 \le x < \infty$. Let $X$ be a random variable representing the time for a process to complete. Then the Cumulative Probability Distribution Function (CDF) is one-sided, and defined by³:

  $F(x) := Pr(X \le x)$,

with $F(0^-) = 0$ and $\lim_{x \to \infty} F(x) = 1$. The Reliability Function is

  $R(x) := Pr(X > x) = 1 - F(x)$,

and the probability density function (pdf) for the process, if it exists, is:

  $f(x) := \frac{d}{dx}F(x) = -\frac{d}{dx}R(x)$.

The $\ell$-th moment of the distribution (or the expectation of $X^\ell$), if it exists, is defined by:

  $\overline{x^\ell} := E(X^\ell) := \int_0^\infty x^\ell f(x)\,dx$.    (1)

³ '$Pr(U)$' means 'Probability that $U$ is true'.
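The mapping from the inter-arrival process to its counting process can be sketched in a few lines of code (an illustration of ours, not part of the thesis; the Poisson test stream, its rate, and the interval length are arbitrary choices):

```python
import numpy as np

def counting_process(ia_times, delta):
    """Map inter-arrival times T_i onto counts N_l(delta): the number of
    arrivals falling into the l-th interval of length delta."""
    arrival_epochs = np.cumsum(ia_times)              # sum_{i<=k} T_i
    n_intervals = int(arrival_epochs[-1] // delta)
    bins = np.arange(n_intervals + 1) * delta         # interval boundaries
    counts, _ = np.histogram(arrival_epochs, bins=bins)
    return counts

rng = np.random.default_rng(1)
lam = 5.0                                             # arrival rate (arbitrary)
T = rng.exponential(1.0 / lam, size=200_000)          # Poisson arrival process
N = counting_process(T, delta=1.0)
print(N.mean())                                       # close to lam
```

For the Poisson stream the counts are, as stated above, Poisson distributed with mean $\lambda\Delta$; feeding in correlated IA-times instead changes the counting statistics drastically, which is the theme of the later chapters.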

The first moment (mean) can also be calculated by using the reliability function:

  $E(X) = \int_0^\infty R(x)\,dx$.    (2)

The variance of the distribution (if it exists) is defined by

  $\mathrm{Var}(X) := \sigma^2 := E([X - E(X)]^2) = E(X^2) - [E(X)]^2 = \overline{x^2} - \bar{x}^2$,

and the dimensionless quantity, the coefficient of variation, is defined by the ratio of the variance and the square of the mean (first moment):

  $C^2 := \frac{\sigma^2}{\bar{x}^2}$.

Therefore multiplying a random variable by some constant $a$ does not change the coefficient of variation, because both numerator and denominator then yield an additional factor $a^2$.

The definitions above only involve a single random variable. Since we deal with a series of those, we are also interested in the relationships between them. The Covariance of $X$ and $Y$ gives us an idea of that relationship:

  $\mathrm{Cov}(X, Y) := E[(X - E(X))(Y - E(Y))]$.

The Correlation Coefficient of $X$ and $Y$ is defined as

  $\rho(X, Y) := \frac{\mathrm{Cov}(X, Y)}{\sigma_X \sigma_Y} = \frac{E[(X - E(X))(Y - E(Y))]}{\sigma_X \sigma_Y}$.    (3)

From its definition, it can be shown that $-1 \le \rho(X, Y) \le 1$. If $\rho(X, Y) = 0$ then $X$ and $Y$ are called uncorrelated. If $X$ and $Y$ are independent, then it follows that they are uncorrelated. On the other hand, even if two variables are uncorrelated, they still may not be independent.

The autocorrelation coefficient lag-$k$ for an inter-arrival process $\{T_i\}$, or its associated counting process, is defined as

  $r_i(k) := \rho(T_i, T_{i+k})$.

For stationary processes in steady state the value is the same for all indices $i$, so we just use $r(k)$. A positive $r(k)$ means that if the value $t_i$ is bigger than its mean, then $t_{i+k}$, $k$ steps later, is more likely to be bigger than the mean as well. When having a realization $\{x_i\}_{i=1,\ldots,n}$ of a process (i.e. a series of $n$ samples), the autocorrelation coefficients lag-$k$ can be estimated by

  $r(k) \approx \frac{\frac{1}{n-k}\sum_{\ell=1}^{n-k}(x_\ell - \bar{x}_1)(x_{\ell+k} - \bar{x}_2)}{\sqrt{\sigma_1^2\,\sigma_2^2}}$,    (4)

where

  $\bar{x}_1 = \frac{1}{n-k}\sum_{\ell=1}^{n-k} x_\ell$, $\quad \bar{x}_2 = \frac{1}{n-k}\sum_{\ell=k+1}^{n} x_\ell$,

  $\sigma_1^2 = \frac{1}{n-k}\sum_{\ell=1}^{n-k}(x_\ell - \bar{x}_1)^2$, $\quad \sigma_2^2 = \frac{1}{n-k}\sum_{\ell=k+1}^{n}(x_\ell - \bar{x}_2)^2$.

To distinguish between the arrival and the counting processes, we will use the notation $r(k)$ for the inter-arrival process, and $r(k|\Delta)$ for the counting process. Though the inter-arrival times for a renewal process are uncorrelated (since they are independent by definition), the counting process in general has correlation - unless the process is memoryless, i.e. Poisson.

Finally, though already mentioned, the most important distribution for all work about Markovian processes is the negative exponential (with rate $\lambda$). It has as its reliability function and density function:

  $R(x) = e^{-\lambda x}$, $\quad f(x) = \lambda e^{-\lambda x}$.    (5)

It follows for the mean and variance:

  $E(X) = \frac{1}{\lambda}$, $\quad \mathrm{Var}(X) = \frac{1}{\lambda^2}$ $\implies C^2 = 1$.

The associated counting process has a Poisson Distribution:

  $Pr[N_j(\Delta) = i] = \frac{(\lambda\Delta)^i}{i!}\,e^{-\lambda\Delta} \quad \forall j$.    (6)

Strictly speaking, when talking about a Poisson process, it would be the counting process that is referred to. However, this expression is also going to be used for the arrival process; it should be clear from the context which one is meant.

For negative exponentially distributed state times $T$, one neat property is the memorylessness. It does not matter at what time in the past the last event happened; the residual time is always negative exponential with the same rate:

  $Pr(T > x + h \mid T > x) = \frac{Pr(T > x + h)}{Pr(T > x)} = \frac{e^{-\lambda(x+h)}}{e^{-\lambda x}} = e^{-\lambda h}$ for $h \ge 0$.

If there are two negative exponentially distributed variables $X_1$ and $X_2$ with rates $\lambda_1$ and $\lambda_2$, then the probability of server 1 finishing first is

  $Pr(X_1 < X_2) = \int_0^\infty R_{X_2}(x)\, f_{X_1}(x)\,dx = \frac{\lambda_1}{\lambda_1 + \lambda_2}$.    (7)

2.2 Matrix Exponential Distributions

The fundamental approach to all the analytical treatment of our traffic models is a method called Linear Algebraic Queueing Theory (LAQT). By using a network of states (called phases) with exponentially distributed inter-event times, (almost) any distribution can be approximated arbitrarily closely.
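As a small illustration of this phase idea (a sketch with arbitrarily chosen parameters, not an example from the thesis): a customer picks phase $i$ with probability $p_i$ and stays there an exponentially distributed time with rate $\mu_i$, which yields a hyper-exponential distribution with mean $\sum_i p_i/\mu_i$ and $C^2 > 1$.

```python
import numpy as np

rng = np.random.default_rng(7)
p = np.array([0.4, 0.6])       # entrance probabilities (arbitrary example)
mu = np.array([2.0, 0.5])      # phase rates (arbitrary example)

# Simulate the phase network: choose a phase, then draw its exponential time.
phases = rng.choice(len(p), p=p, size=200_000)
samples = rng.exponential(1.0 / mu[phases])

mean_exact = np.sum(p / mu)                    # 0.4/2 + 0.6/0.5 = 1.4
c2 = samples.var() / samples.mean() ** 2       # squared coefficient of variation
print(mean_exact, samples.mean(), c2)          # C^2 > 1 for a hyper-exponential
```

More elaborate networks (phases in series, feedback loops) produce distributions with $C^2 < 1$ as well, which is what makes the phase approach so flexible.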
This approach was first brought up in [Neuts, 1981] and is discussed in detail by [Lipsky, 1992]. The following paragraph gives a brief introduction and mentions the needed formulas in reference to the latter.

Figure 2.1: A typical subsystem with $m$ phases. A customer enters the system and goes with probability $p_k$ to phase $k$. The time in each phase is exponentially distributed with rate $\mu_k$. Then he moves around according to the transition probabilities $P_{kl}$ and eventually leaves the system with probability $q_i$ from phase $i$. The distribution of the time in the subsystem then is an approximation of the desired distribution.

The distribution of the time between a single customer entering a subsystem such as in Figure 2.1 and leaving it again can be neatly determined by using matrix algebra. Let $(P)_{ij} = P_{ij}$ be the matrix of the transition probabilities within the subsystem. The exit vector $q'$ has as its components the probabilities of leaving the subsystem when being at the according phase. Since there are only two choices, going to another phase within the subsystem or leaving the system, the probabilities must add up to 1:

  $P\varepsilon' + q' = \varepsilon'$.

Next we define the completion rate matrix $M$, which is a diagonal matrix of the single state leaving rates $\mu_k$. Further, let $\tau'$ be a vector whose components $\tau_i$ are the mean times to leave the system, given that the customer started at phase $i$. Then the following equation holds:

  $\tau' = M^{-1}\varepsilon' + P\tau'$.

The first summand gives the mean time at the current phase, while the second term stands for the mean time in the system after the next transition. Solving for $\tau'$ gives:

  $\tau' = [M(I - P)]^{-1}\varepsilon' =: V\varepsilon'$.

Therefore, $V := [M(I - P)]^{-1}$ is called the service time matrix. Its elements $V_{ij}$ are the overall mean times spent at phase $j$ until the customer leaves the system, if it started at phase $i$. The inverse of $V$, $B := M(I - P)$, has the name service rate matrix. The balance equations (flow into a state equals flow out of that state) for the system lead to the following differential equation:

  $\frac{dR(t)}{dt} = -R(t)B \implies R(t) = \exp(-tB)$,

where $R_{ij}(t)$ is the probability that the customer is in phase $j$ at time $t$, given that it started in $i$.
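A quick numerical check of these matrix definitions (our own Erlang-2 example, anticipating the moment formula $E(X^n) = n!\,pV^n\varepsilon'$ given below): two phases in series, each with rate $\mu$, yield mean $2/\mu$ and second moment $6/\mu^2$.

```python
import numpy as np

mu = 3.0                                   # per-phase rate (arbitrary choice)
p = np.array([1.0, 0.0])                   # always enter at phase 1
P = np.array([[0.0, 1.0],                  # phase 1 -> phase 2, never back
              [0.0, 0.0]])                 # phase 2 -> exit
M = np.diag([mu, mu])                      # completion rate matrix
eps = np.ones(2)                           # the vector of all 1's

B = M @ (np.eye(2) - P)                    # service rate matrix
V = np.linalg.inv(B)                       # service time matrix

mean = p @ V @ eps                         # E(X)   = p V eps'
m2 = 2 * p @ V @ V @ eps                   # E(X^2) = 2! p V^2 eps'
print(mean, m2)                            # 2/mu and 6/mu^2
```

The same few lines work for any phase network; only $p$, $P$, and $M$ change.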

Whenever a customer leaves the system, the next one comes in immediately. The components $p_i$ of the entrance vector $p$ are the probabilities that a new customer upon entering the system will directly go to phase $i$. That matrix formulation gives rise to an elegant formula for the reliability function of the time that the customer spends in the system:

  $R(x) = p\,\exp(-xB)\,\varepsilon'$.    (8)

Differentiation yields the pdf:

  $f(x) = -\frac{dR(x)}{dx} = pB\,\exp(-xB)\,\varepsilon'$.

Also, the $n$-th moments of this distribution come out neatly using (1):

  $E(X^n) = n!\,pV^n\varepsilon'$.    (9)

This way of handling distributions is only suitable for renewal processes, since the next customer makes its way through the subsystem independently of the previous one. However, investigations of real traffic data showed correlation in the series of inter-arrival times. Thus we need to extend this matrix algebraic treatment to be able to capture so-called Semi-Markov Processes (also known as Markov Renewal Processes). The idea is that the next customer enters the subsystem in a phase that depends on the leaving phase of the previous customer (see [Fiorini et al., 1995]). Therefore another matrix $L$ is introduced. The components $L_{ij}$ give the departure rate from phase $i$, whereafter the next customer starts in phase $j$; i.e. for small time intervals $\delta$, $L_{ij}\,\delta$ is the probability that a customer at phase $i$ leaves within $\delta$ and the next one starts in phase $j$. $L$ and $B$ must be consistent, i.e. $L\varepsilon' = B\varepsilon'$, because both matrices describe the departure rates of the customers. However, $L$ does not capture what is going on before the departure, while $B$ is not affecting the next customer. In the steady state of these Semi-Markov processes the entrance vector $p$ is not relevant any more, since the entering customers are described by the $L$-matrix. However, for transient behavior that vector is still important. For notational convenience the matrix $Y := VL$ is introduced.
It can be shown (see [Fiorini et al., 1995]) that the left eigenvector, $\pi$, of $Y$ with eigenvalue 1 (i.e. $\pi = \pi Y$) is the steady-state equivalent of the entry vector of the renewal processes. The components of $\pi$ give the probabilities that a newly arriving customer goes to the according phase, when the influence of the startup of the process has deceased (i.e. steady state). Obviously, this $\pi$-vector now has to replace $p$ in the formula for the moments of the distribution. From $\pi = \pi VL$ it follows that $\pi V = \pi L^{-1}$. In our Markov modulated models, $L$ is a diagonal matrix, so easily invertible; on the other hand $V$ usually is much harder to write down. So it is easier to use for the mean:

  $E(X) = \pi L^{-1}\varepsilon'$.    (10)

[Fiorini et al., 1995] also derive a formula for the covariance of the process in steady state:

  $\mathrm{Cov}(X_1, X_{1+k}) := \lim_{n\to\infty}\mathrm{Cov}(X_n, X_{n+k}) = \pi V (Y^k - \varepsilon'\pi) V \varepsilon'$ for $k = 1, 2, \ldots$.    (11)
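These steady-state quantities are easy to evaluate numerically. The following sketch uses a toy example of our own (not from the thesis): a two-phase Markov modulated Poisson source with arrival rates $\lambda_1, \lambda_2$ and a symmetric switching rate $q$; the mean IA-time $\pi L^{-1}\varepsilon'$ must then equal the reciprocal of the phase-averaged arrival rate.

```python
import numpy as np

lam = np.array([2.0, 0.5])    # Poisson rates in the two phases (arbitrary)
q = 0.1                       # symmetric phase-switching rate (arbitrary)

M = np.diag(lam + q)                           # total state leaving rates
P = np.array([[0.0, q / (lam[0] + q)],         # phase change without departure
              [q / (lam[1] + q), 0.0]])
B = M @ (np.eye(2) - P)                        # service rate matrix
L = np.diag(lam)                               # departures; the next customer
V = np.linalg.inv(B)                           #   starts in the same phase
Y = V @ L

# pi: left eigenvector of Y for eigenvalue 1, normalized to sum 1
w, vl = np.linalg.eig(Y.T)
pi = np.real(vl[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()

EX = pi @ np.linalg.inv(L) @ np.ones(2)        # mean IA-time
print(pi, EX)   # EX = 1/average rate = 1/1.25 = 0.8 for symmetric switching
```

Note that $\pi$ weights the burst phase more heavily than the time-stationary phase probabilities do, because more departures happen there.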

Putting these formulas together, we get for the autocorrelation coefficient lag-$k$:

  $r(k) = \frac{\pi V (Y^k - \varepsilon'\pi) V \varepsilon'}{2\pi V^2\varepsilon' - (\pi V\varepsilon')^2}$, $\quad k = 1, 2, \ldots$.    (12)

For large $k$, $Y^k$ is calculated by using the method of spectral decomposition:

  $Y^k = \sum_i \lambda_i^k\, v_i' u_i = \varepsilon'\pi + \sum_{\lambda_i \ne 1} \lambda_i^k\, v_i' u_i$.

$\lambda_i$ are the eigenvalues of $Y$; $u_i$, $v_i'$ are the corresponding left and right eigenvectors, such that $u_i v_i' = 1$. Since $Y\varepsilon' = VL\varepsilon' = VB\varepsilon' = \varepsilon'$, $\varepsilon'$ is the right eigenvector with eigenvalue 1. By its definition, $\pi$ is the corresponding left eigenvector with the same eigenvalue. That leads to the following expression for $r(k)$:

  $r(k) = \frac{\pi V \left(\sum_{\lambda_i \ne 1} \lambda_i^k\, v_i' u_i\right) V \varepsilon'}{2\pi V^2\varepsilon' - (\pi V\varepsilon')^2}$, $\quad k = 1, 2, \ldots$.    (13)

2.3 LAQT Queueing Models

At the end of the day, the performance of the models that we described in the last section needs to be evaluated. In our context that means the process is fed into a queue. To keep things not too complicated, we assume single-server queues with exponentially distributed service times, i.e. GI/M/1-queues for the renewal processes or SM/M/1-queues for the correlated IA-processes.

The GI/M/1-queue is analyzed in [Lipsky, 1992]. Let the IA-process be sub-system 1 with mean $\bar{x}_1$, and the server be sub-system 2 with mean $\bar{x}_2$. The utilization $\rho = \bar{x}_2/\bar{x}_1$ has to be smaller than 1 to have stable behavior (i.e. for a steady state to exist). The state space of the GI/M/1-queue is the product of the state space of the arrival process and the set of possible queue-lengths. The balance equations are matrix equations that use the following matrices:

  $A := I + \bar{x}_2 B - \varepsilon' p$, $\quad U := A^{-1}$.    (14)

For the open system (possibly infinite number of customers) the limit of $U^N$ for $N \to \infty$ is necessary. That limit is dominated by the largest eigenvalue of $U$, i.e. the reciprocal of the smallest eigenvalue of $A$; this smallest eigenvalue of $A$ is called the geometric parameter $s$. The balance equations finally yield for the steady-state queue-length probabilities (the customer being served is also assumed to belong to the queue):

  $\pi(0) := Pr(\text{queue length is } 0) = 1 - \rho$, $\quad \pi(k) = \rho(1-s)s^{k-1}$ for $k = 1, 2, 3, \ldots$.    (15)

The mean queue-length then turns out to be:

  $\bar{q} = \sum_{k=1}^{\infty} k\,\pi(k) = \frac{\rho}{1-s}$.    (16)
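For a renewal ME arrival process, the geometric parameter can equivalently be obtained from the classical GI/M/1 fixed-point equation $s = F^*((1-s)/\bar{x}_2)$, where $F^*$ is the Laplace transform of the interarrival-time pdf (a standard result). A sketch with an arbitrarily chosen hyper-exponential arrival stream (our own example, not the thesis's):

```python
import numpy as np

def geometric_parameter(lst, x2, tol=1e-12):
    """Solve s = F*((1-s)/x2) by fixed-point iteration.
    lst(w): Laplace transform of the interarrival-time pdf."""
    s = 0.5
    while True:
        s_new = lst((1.0 - s) / x2)
        if abs(s_new - s) < tol:
            return s_new
        s = s_new

# Hyper-exponential interarrivals: F*(w) = sum_i p_i lam_i/(lam_i + w)
p, lam = np.array([0.4, 0.6]), np.array([2.0, 0.5])
lst = lambda w: np.sum(p * lam / (lam + w))
x1 = np.sum(p / lam)                  # mean interarrival time = 1.4
x2 = 0.7                              # service mean, so rho = 0.5
s = geometric_parameter(lst, x2)
qbar = (x2 / x1) / (1.0 - s)          # mean queue length rho/(1-s)
print(s, qbar)

# Sanity check: for Poisson arrivals the fixed point gives s = rho.
s_mm1 = geometric_parameter(lambda w: 1.0 / (1.0 + w * x1), x2)
```

Since the hyper-exponential stream is burstier than Poisson ($C^2 > 1$), its $s$ exceeds $\rho$, and the queue is correspondingly longer.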

The mean system time (waiting time + service time) follows from Little's Law:

  $\bar{T} = \bar{x}_1\,\bar{q} = \frac{\bar{x}_2}{1-s}$.

Since the arrival process is non-exponential, it makes a difference whether we look at the queue-length probabilities at randomly chosen points in time or at arrival times. The latter is important for determining buffer overflow probabilities for non-loss systems. If there is either a second backup buffer (assumed to be infinitely large, e.g. a harddisk) or a feedback mechanism that signals the full primary buffer to the sender and thus delays the new packet (i.e. the backup buffer is spread out among the transmission sources), the probability that a newly arriving packet goes to the backup buffer is:

  $Pr(\text{overflow}) = \sum_{k=B_S+1}^{\infty} a(k)$,

where $B_S$ is the size of the primary buffer and $a(k)$ is the probability that the queue-length at arrival times is $k$. It can be shown (see [Lipsky, 1992] p. 29) that

  $a(k) = (1-s)s^k \implies \bar{q}_a = \frac{s}{1-s}$,    (17)

  $\implies Pr(\text{overflow}) = (1-s)\sum_{k=B_S+1}^{\infty} s^k = s^{B_S+1}$,

where $\bar{q}_a$ is the mean queue-length at arrival times. For an exponential arrival process (M/M/1-queue), $s = \rho$.

Another problem of importance in studying buffers occurs when there is no secondary buffer. As long as the primary buffer is full, every arriving packet is thrown away. This would be equivalent to a GI/M/1/$B_S$-queue. However, this approach is not going to be investigated here.

The SM/M/1-queue is a little bit more complicated, since there is no closed formula for the queue-length probabilities. It is treated in [Neuts, 1981] as a so-called Quasi-Birth-Death Process. That is a Markovian-like process where the transition rate matrix is block tri-diagonal. The matrix notation of [Neuts, 1981] is slightly different but will be changed here to make it consistent with the previous formulas. The state space of the SM/M/1-queue is also the product of the state space of the arrival process and the set of possible queue-lengths. Using the previously introduced matrices $B$, $M$, $L$, the block-tridiagonal transition rate matrix $Q$ for the SM/M/1-queue is the infinite matrix:

  $Q = \begin{pmatrix} -B & L & & \\ \frac{1}{\bar{x}_2}I & -(B + \frac{1}{\bar{x}_2}I) & L & \\ & \frac{1}{\bar{x}_2}I & -(B + \frac{1}{\bar{x}_2}I) & L \\ & & \ddots & \ddots \end{pmatrix}$

The leftmost column of matrices stands for the states with queue-length 0, the next for queue-length 1, and so on. The matrices on the main diagonal describe state transitions within the arrival process without change of the queue-length. The upper diagonal ($L$) describes the arrivals, while the lower diagonal ($\frac{1}{\bar{x}_2}I$) represents the servicing (without changing the state of the arrival process).

According to [Neuts, 1981], the balance equations have a matrix-geometric solution for the steady-state queue-length probabilities. Therefore the quadratic matrix equation

  $A_0 + R A_1 + R^2 A_2 = 0$,

where $A_0 = L$, $A_1 = -(B + \frac{1}{\bar{x}_2}I)$, $A_2 = \frac{1}{\bar{x}_2}I$, has to be solved for a minimal $R$. An easy iterative procedure is used therefore: It can be shown that $A_1$ has an inverse, so the equation can be rewritten as:

  $R_{n+1} = -(A_0 + R_n^2 A_2)\,A_1^{-1}$.    (18)

Starting with $R_0 = 0$, this formula is repeatedly applied until convergence is reached. [Neuts, 1981] showed that this procedure terminates eventually and delivers a correct solution. The $R$-matrix is the equivalent of the $U$-matrix of the GI/M/1-queue. The steady-state probabilities for the queue-length are then obtained by

  $\pi(k) = \hat{\pi}(I - R)R^k$, $\quad k = 0, 1, \ldots$.    (19)

$\hat{\pi}$ is the residual vector obtained from $\pi$ by multiplication with $V$ and normalization, i.e. its components are the probabilities that the arrival process is in the corresponding state:

  $\hat{\pi} = \frac{\pi V}{\pi V \varepsilon'}$.    (20)

The geometric series in the mean queue-length formula can then be simplified:

  $\bar{q} = \sum_{k=1}^{\infty} k\,\pi(k)\,\varepsilon' = \hat{\pi}R(I - R)^{-1}\varepsilon'$.    (21)

For the GI/M/1-queues, the $s$-parameter is a necessity to get the queue-length probabilities and the mean queue-length. It is inherent in the sense that it is an eigenvalue of the $A$-matrix. This is not the case for the SM/M/1-queue. For comparison purposes we define the equivalent $s$-parameter for the SM/M/1-queue from (16) to be:

  $s := 1 - \frac{\rho}{\bar{q}}$.    (22)

Using the equivalent $s$-parameter has the advantage that it does not grow unboundedly as $\rho \to 1$. $s < 1$ always holds; $s = 0$ is equivalent to $\bar{q} = \rho$ (the smallest possible value for $\bar{q}$), and $s \to 1$ corresponds to $\bar{q} \to \infty$. The vector part of (19), $\hat{\pi}(I - R)R^k$, has as its components the probabilities that the queue-length is $k$ and the arrival process is in the corresponding state.
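Before turning to the queue at arrival times, the matrix-geometric machinery above can be checked numerically. The following sketch is our own minimal example (not the thesis's MATLAB code): a two-phase Markov modulated Poisson source feeds an exponential server; with both phase arrival rates set equal, the model collapses to an M/M/1-queue, so the mean queue-length must come out as $\rho/(1-\rho)$.

```python
import numpy as np

lam = np.array([1.0, 1.0])    # phase arrival rates; equal rates -> M/M/1 check
q01, q10 = 0.1, 0.1           # phase switching rates (arbitrary)
x2 = 0.5                      # service mean, so rho = x2/x1 = 0.5

M = np.diag([lam[0] + q01, lam[1] + q10])
P = np.array([[0.0, q01 / (lam[0] + q01)],
              [q10 / (lam[1] + q10), 0.0]])
B = M @ (np.eye(2) - P)
L = np.diag(lam)
V = np.linalg.inv(B)

# pi: left eigenvector of Y = VL for eigenvalue 1; pi_hat by normalization
w, vl = np.linalg.eig((V @ L).T)
pi = np.real(vl[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()
pi_hat = pi @ V / (pi @ V @ np.ones(2))

# Neuts iteration: R_{n+1} = -(A0 + R_n^2 A2) A1^{-1}, starting from R = 0
A0, A1, A2 = L, -(B + np.eye(2) / x2), np.eye(2) / x2
A1_inv = np.linalg.inv(A1)
R = np.zeros((2, 2))
for _ in range(10_000):
    R_new = -(A0 + R @ R @ A2) @ A1_inv
    if np.max(np.abs(R_new - R)) < 1e-12:
        break
    R = R_new

qbar = pi_hat @ R @ np.linalg.inv(np.eye(2) - R) @ np.ones(2)
print(qbar)   # rho/(1-rho) = 1.0 for this M/M/1 special case
```

Making the two phase rates unequal turns the source into a genuinely correlated SM-process, and $\bar{q}$ immediately exceeds the M/M/1 value at the same utilization.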
If we look at the queue at arrival times, the states of the arrival process with a high departure rate⁴ contribute more observation points. Therefore, the probability vector has to be scaled by the $L$-matrix and then normalized to give the probabilities at arrival times. Adding up all the components gives the probability:

  $a(k) = \frac{\hat{\pi}(I - R)R^k L\varepsilon'}{\hat{\pi}L\varepsilon'}$.    (23)

That leads to a mean queue-length at arrival times of

  $\bar{q}_a = \sum_{k=1}^{\infty} k\,a(k) = \frac{\hat{\pi}R(I - R)^{-1}L\varepsilon'}{\hat{\pi}L\varepsilon'}$.

Finally, the overflow probability for a buffer of size $B_S$ comes out as:

  $Pr(\text{overflow}) = \sum_{k=B_S+1}^{\infty} a(k) = \frac{\hat{\pi}R^{B_S+1}L\varepsilon'}{\hat{\pi}L\varepsilon'}$.    (24)

2.4 Power-Tail Distributions

It turns out that the widely used exponential distributions are not sufficient for modeling recent traffic. Traffic measurements show a high variability over widely ranging time-scales. In this work, this variability is achieved by using Power-Tail (PT) Distributions.

All finite extensions of exponential distributions (hyper-exponential, Erlangian, ...) have a reliability function $R(x)$ which drops off exponentially at some point, i.e. the likelihood of getting very large values for $X$ vanishes very quickly. In contrast to that, the reliability function of a PT-distribution drops off with some power of $x$, much more slowly than an exponential distribution:

  $R(x) \to \frac{c}{x^{\alpha}}$ for large $x$.    (25)

$c > 0$ and $\alpha > 0$ are constants for a given distribution. Straightforward differentiation yields the asymptotic form for the pdf:

  $f(x) \to \frac{c\,\alpha}{x^{\alpha+1}}$.    (26)

Then from (1) and elementary calculus, it follows that all moments for $\ell \ge \alpha$ are infinite! Thus, if $\alpha \le 2$ then $F(\cdot)$ has infinite variance, and if $\alpha \le 1$ then $F(\cdot)$ has infinite mean! We assume $\alpha$ to be between 1 and 2, so it has a finite mean, but infinite variance. Since this parameter will appear in various measured traffic characteristics (such as autocorrelation curves), and some real data ([Leland et al., 1994]) suggested a value of $\alpha = 1.4$, that value will be used from now on.

The finite mean of the distribution also implies that the occasional huge values have to be made up by a larger number of values smaller than the mean. That means that most of the time, the observer is going to see something smaller than average. But occasionally, single huge samples occur.
⁴The process itself is an arrival process when fed into a queue. However, we talk of departures from the SM-process: thereby we look at the phases of the sub-system that builds up the arrival process. A departure from one of the phases (described by the elements of $L$) then corresponds to an arrival at the queue.
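That qualitative behavior - most samples below the mean, with rare huge outliers - is easy to check numerically (our own sketch, using the Pareto form introduced below with $\alpha = 1.4$; the sample size is an arbitrary choice):

```python
import numpy as np

alpha = 1.4
rng = np.random.default_rng(3)

# Pareto distribution R(x) = (x+1)^(-alpha): inverse-CDF sampling,
# since R(X) is uniformly distributed on (0,1).
u = rng.random(1_000_000)
x = u ** (-1.0 / alpha) - 1.0

mean_exact = 1.0 / (alpha - 1.0)            # = 2.5 for alpha = 1.4
below = np.mean(x < mean_exact)             # fraction of samples below the mean
print(x.mean(), below, x.max())
# Roughly 83% of all samples lie below the mean, yet single samples
# several orders of magnitude above it do occur.
```

The slow, erratic convergence of the sample mean (infinite variance!) is exactly the high variability discussed in Section 2.6.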

One type of PT-distribution, which we will mainly use for simulations, is the so-called Pareto Distribution:

  $R(x) = \frac{1}{(x+1)^{\alpha}}$ for $x \ge 0$, with $E(X) = \frac{1}{\alpha-1}$ for $\alpha > 1$.    (27)

Though this function has a simple analytical formula and also is easy to simulate, the disadvantage is that the LAQT-methods of the last section cannot be applied to it: it can be shown that the Laplace transform of every matrix exponential distribution is a rational function. This does not hold for the Pareto function, thus it does not have an exact matrix exponential representation.

[Greiner et al., 1995] have introduced one method to overcome that problem. They introduced a family of distributions, so-called Truncated Power-Tails (TPT), which asymptotically have PT-characteristics. For our purposes, the easiest case is using a series of hyper-exponentials. However, their paper is more general in that respect. It is straightforward to handle these hyper-exponentials by LAQT methods then, since they are a very simple case of phase distributions (with $P = 0$, so $B$ is just diagonal).

Figure 2.2: Phase diagram for the TPT distribution with truncation $T$. The probability $p_i$ of going to phase $i$ is geometrically distributed, i.e. it gets smaller by a factor $\theta < 1$ from state $i$ to $i+1$. Furthermore, the state-time grows geometrically by the factor $\gamma > 1$.

Figure 2.2 shows the state diagram for a hyper-exponential TPT distribution with $T$ states. Let the entrance probabilities $p_i$ and the state leaving rates $r_i$ be:

  $p_i = \theta^{i-1}\,\frac{1-\theta}{1-\theta^T}$, $\quad r_i = \frac{\mu}{\gamma^{i-1}}$, where $0 < \theta < 1$, $\gamma > 1$.    (28)

Unless mentioned differently, $\theta$ is always set to be 0.5. Choosing $\theta$ close to 0 or 1 often resulted in unstable simulations (see [Klinger, 1997]), while the choice of $\theta$ outside that 'critical' area did not have much influence on any analytical results (see [Greiner et al., 1995]). So $\theta = 0.5$ is a reasonable compromise. The parameter $\mu$ influences the mean of the distribution. To make $E(Y_T) = \bar{m}$, $\mu$ has to be

  $\mu = \frac{1-\theta}{1-\theta^T}\cdot\frac{1-(\theta\gamma)^T}{1-\theta\gamma}\cdot\frac{1}{\bar{m}}$.    (29)

Then [Greiner et al., 1995] show that in the limit for $T \to \infty$ the distribution has a PT with $\alpha = \log(1/\theta)/\log(\gamma)$. Since we always fix $\alpha$ (mostly to be 1.4), $\gamma$ will be determined from $\theta$ and $\alpha$. The reliability function is in the limit:

  $R_Y(x) = (1-\theta)\sum_{i=1}^{\infty}\theta^{i-1}e^{-r_i x}$.

[Klinger, 1997] determined the tail-constant of (25) to be $c = 0.2383\,[E(X)]^{\alpha}$ for this function. In practice it is usually not possible to work with infinite series. So the calculations must be done using truncated PTs with increasing truncations $T$ until the result converges, in the sense that increasing $T$ does not change the results more than a negligible amount. The reliability function of the truncated distribution is a finite sum with a different normalization constant:

  $R_{Y_T}(x) = \frac{1-\theta}{1-\theta^T}\sum_{i=1}^{T}\theta^{i-1}e^{-r_i x}$.

In the following, the distribution with reliability function $R_{Y_T}$ will be called a TPT-T distribution. TPT-$\infty$, or equivalently just TPT, is the name for the untruncated limit distribution with reliability $R_Y$. Figure 2.3 shows the reliability functions for varying $T$, keeping all the other parameters ($\theta$, $\alpha$, $E(Y_T)$, but not $\mu$) constant. The graphs are plotted on log-log scale, because the power-tail $c/x^{\alpha}$ then results in a straight line with slope $-\alpha$. Therefore, the difference between an exponential and a PT drop-off can easily be seen. These kinds of graphs will show up very frequently in this work.

Finally, though already mentioned, we give the LAQT-matrices for the TPT-T renewal process. The identifying index used here is kept throughout the rest of this work to distinguish them from other processes:

  $p = [p_1, p_2, \ldots, p_T]$, $\quad P = 0$, $\quad B = M = \mathrm{diag}(r_1, r_2, \ldots, r_T)$.    (30)

2.5 Order Statistics

When doing simulations with PT-distributions later on, some interesting effects will arise. The explanation of those will need an introduction to order statistics for PTs, which tells something about the largest sample in a finite sample space, taken from independent distributions.

Figure 2.3: The reliability function $R_{Y_T}(x)$ for increasing truncation $T$, plotted on log-log scale, keeping the mean $E(Y_T)$ constant. $\alpha = 1.4$ and $\theta = 0.5$ are the TPT parameters. For the largest $T$ shown, linear behavior (i.e. PT-behavior) can be observed over 3 orders of magnitude of $x$. Smaller $T$'s will be used later on in the N-Burst Process, since calculations for $N > 2$ with large $T$ will require extremely large matrices.

Let $\{X_i \mid 1 \le i \le N\}$ be a set of iid random variables with distribution $F(\cdot)$ and mean $E(X)$. Further, let $\{X_{[1]} \le X_{[2]} \le \cdots \le X_{[N]}\}$ be the same set, but ordered by size. Then

  $F_{X_{[N]}}(x) = [F(x)]^N$.

That is, if $N$ samples are taken from a distribution $F(\cdot)$, then the distribution of the largest of them is given by $[F(x)]^N$ (see Order Statistics in [Feller, 1971] or [Trivedi, 1982]). For PT distributions, the expectation value of this largest member was shown by [Lipsky et al., 1996] to have the asymptotic behavior (for large $N$)

  $E(X_{[N]}) \to E(X)\,N^{1/\alpha}$.    (31)

In contrast, the negative exponential distribution yields

  $E(X_{[N]}) = E(X)\sum_{j=1}^{N}\frac{1}{j} \to E(X)\ln N$ (law of diminishing returns).

The faster growth of the largest member of the order statistic of PT distributed variables again clearly shows the tail influence; the probability of huge values occurring is too large to be negligible. Even worse, not only does the expected value $E(X_{[N]})$ grow by the factor $N^{1/\alpha}$ with increasing $N$, but the distribution of $X_{[N]}$ also has a power-tail with the same $\alpha$, since:

  $R_{X_{[N]}}(x) = 1 - \left(1 - \frac{c}{x^{\alpha}}\right)^N \to \frac{Nc}{x^{\alpha}}$

for $x \to \infty$ (easily seen by using the binomial expansion of $(1 - c/x^{\alpha})^N$). As already mentioned, the main characteristic of a PT distribution is the non-negligible probability of large values. So $E(X_{[N]})$ is already large, but in its realizations, even much bigger values occur from time to time.

Figure 2.4: Simulated distribution (pdf) of the maximum of $N = 1000$ i.i.d. PT-samples, each with mean $E(X) = 1$. Since $\alpha = 1.4$, $E(X_{[N]}) \approx 138.95$. The curve on the right is drawn on log-log scale and shows a linear behavior for large $x_{[N]}$, i.e. a power-tail $1/x^{\alpha+1}$. The curve is very asymmetric; its theoretical maximum is around $x_M \approx 31.34$, which is a factor 4.4 smaller than the mean. The empirical cumulative probabilities $F(x_M) = 0.266$ and $F(E(X_{[N]})) = 0.89$ give further evidence of the highly asymmetric distribution: Most of the time the simulated maximum is going to be too small; therefore sometimes it is going to be much bigger than expected.

Figure 2.4 shows the empirical pdf of $X_{[N]}$, obtained by generating random numbers from a TPT-distribution (see Chapter 5.1.2) with mean 1, determining their maximum, and repeating that process. The empirical mean of the simulated distributions of the largest order statistic was 148.4 for $N = 1000$. For the other values of $N$ the mean was within a few percent of the calculated $E(X_{[N]}) \approx E(X)N^{1/\alpha}$. Also, the graph clearly shows the power-tail in the log-log plot. Moreover, the asymmetry of that distribution is obvious. The maximum of the pdf, the most likely value for $X_{[N]}$, can be calculated:

  $F_{X_{[N]}} = (F_X)^N \implies f_{X_{[N]}} = N(F_X)^{N-1}f_X$

  $\implies \frac{df_{X_{[N]}}}{dx} = N(N-1)(F_X)^{N-2}f_X^2 + N(F_X)^{N-1}\frac{df_X}{dx} = N(F_X)^{N-2}\left[(N-1)f_X^2 + F_X\frac{df_X}{dx}\right]$.

To get the maximum, the derivative of $f_{X_{[N]}}$ is set to 0.
Since for large x, F X (x) = c x ; f X(x) = c x ; and f c( + ) X(x) = : + x +2 f X [N] = =) c2 2 x (N ) = c c( + ) 2+2 x x +2

Solving that equation for x gives:

x_M = ( cα(N−1)/(α+1) )^{1/α}.

Using the tail of the distribution for that formula makes sense for large N, since then the largest order statistic is very likely to be large, so it will end up in the tail of the distribution. For large N, the equation can be further simplified:

x_M ≈ ( cαN/(α+1) )^{1/α}.

Redoing that whole derivation for our Pareto function (c = 1) leads exactly to the same approximation for x_M. This really shows that the tail alone determines the largest order statistic for large N, since the latter derivation also includes the 'body', but leads to the same result as the one above, which only uses the tail. Actually, the formula for E(X_[N]) in [Lipsky et al., 1996] was derived for a Pareto function with mean 1. The same argument, that the tail takes over responsibility for the largest order statistic in case of large N, suggests that it is also valid for the TPT-function. Simulations such as in Figure 2.4 confirmed that.

2.6 Self-Similar Behavior and Long-Range Correlation

The infinite variance of our PT distributions with 1 < α < 2 also causes another peculiarity: the Central Limit Theorem (CLT) does not apply in its standard form. Given a set {X_i}, i = 1, 2, …, of i.i.d. random variables with finite variance σ² and mean x̄, the 'average' of n of them,

A_n := (1/n) Σ_{i=1}^{n} X_i,

approaches in the limit n → ∞ a Normal Distribution with mean x̄ and variance σ_n² = σ²/n, which has the pdf

φ(x) = 1/(σ_n √(2π)) · exp[ −(1/2) ((x − x̄)/σ_n)² ].   (32)

The Normal Distribution is symmetric about its mean and has exponential drop-offs on both sides of it. For n → ∞, σ_n → 0. Thus we use a scaling factor √n to get constant variance σ²; also, the mean is moved to 0. Then the random variable

Z_n := √n (A_n − x̄)

approaches the Normal Distribution with mean 0 and variance σ². When using a PT with 1 < α < 2 we do not have finite variance any more. The CLT then has to be modified (see [Feller, 1971] and [Samorodnitsky & Taqqu, 1994]): a different scaling factor comes in,

Z_n := n^δ (A_n − x̄),   where δ = 1 − 1/α.

The square root of n thus is replaced by the factor n^δ, where δ = 0.2857 for α = 1.4, so substantially smaller than 0.5. That implies that the distribution of the A_n gets narrow about its mean much more slowly for the PT-distributions than for finite-variance distributions. Furthermore, in the limit n → ∞ the distribution of Z_n does not approach the Normal Distribution any more, but a so-called α-stable distribution instead. In our case (X_i > 0), these distributions are asymmetric with a power-tail to the right and an exponential drop-off to the left (see [Klinger, 1997]). For deeper insight into the theory of α-stables see [Samorodnitsky & Taqqu, 1994]. It seems that Z_n for our SM-models also converges towards an α-stable distribution; however, this is not investigated here, and further research needs to be done in that field. But we will look at the counting process here, and there we have a similar situation as for the Z_n: it also is an averaging process, though not with a fixed number, n, of random variables, but with an overall time limit Σ_{i=1}^{N} X_i ≤ t < Σ_{i=1}^{N+1} X_i. The effects, however, are similar: while t is increasing, the distribution of the counting process gets narrow about its mean much more slowly than for classical (finite-variance) processes. That is what usually is called Self-Similarity or high variability, one of the desired properties of our models. The other desired property that the PT-distributions in our SM models deliver is the autocorrelation in the IA-times. Not only is there correlation, but it also drops off very slowly with increasing lag k. In classical models without PT-distributions the autocorrelation r(k) for large k drops off exponentially with increasing k (i.e. r(k) → c₁ (c₂)^k). When using PT-distributions the correlation curve instead has a power-tail with power −(α−1):

r(k) → c_r · k^{−(α−1)}.

That behavior is called Long-Range Correlation in the IA-times and is another goal of our modeling.
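The slow narrowing of A_n can be illustrated numerically. A minimal sketch (a Pareto sampler stands in for the PT distribution; sample sizes and seed are our choices) that estimates how fast the spread of A_n shrinks with n:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 1.4
b = (alpha - 1) / alpha                # Pareto scale so that E(X) = 1

def spread_of_mean(n, reps=1000):
    """Interquartile range of A_n = (1/n) * sum of n Pareto(alpha) samples."""
    x = b / rng.random((reps, n)) ** (1.0 / alpha)   # inverse-transform sampling
    a_n = x.mean(axis=1)
    q1, q3 = np.percentile(a_n, [25, 75])
    return q3 - q1

ns = np.array([100, 1000, 10000])
iqrs = np.array([spread_of_mean(n) for n in ns])
# Fitted slope of log(IQR) vs log(n): near -(1 - 1/alpha) = -0.286 for
# alpha = 1.4, i.e. much shallower than the classical -0.5 of the CLT.
slope = np.polyfit(np.log(ns), np.log(iqrs), 1)[0]
print(round(slope, 3))
```

The interquartile range is used as the measure of spread because the variance of the samples is infinite for α < 2.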
The effect of long-range correlation also delivers a feasible way of determining the parameter α for measured IA-times of real traffic: calculate the autocorrelation, draw the curve of r(k) on log-log scale, and then determine the slope of the linear part.
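A sketch of that estimation procedure (the estimator below is the standard sample autocorrelation; the function names are ours):

```python
import numpy as np

def sample_autocorr(x, k):
    """Sample lag-k autocorrelation coefficient r(k) of the series x."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    return np.dot(d[:-k], d[k:]) / np.dot(d, d)

def alpha_from_slope(x, lags):
    """Fit the slope of log r(k) vs log k over the given lags; for
    long-range correlated IA-times r(k) ~ c_r * k^(-(alpha-1)),
    so the estimate of alpha is 1 minus the fitted slope."""
    lags = np.asarray(lags)
    r = np.array([sample_autocorr(x, k) for k in lags])
    keep = r > 0                        # the log is only defined for r(k) > 0
    slope = np.polyfit(np.log(lags[keep]), np.log(r[keep]), 1)[0]
    return 1.0 - slope

# sanity check of the estimator on a perfectly alternating series:
x = np.tile([1.0, -1.0], 500)
print(sample_autocorr(x, 1))            # close to -1
```

In practice the fit should only use the linear part of the log-log curve, since for small k the curve has not yet reached its power-tail behavior.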

Chapter 3 Arrival Processes with Long-Range Correlation

This chapter introduces three semi-Markov processes for modeling network traffic. All of them use PT distributions and can be calculated by our LAQT-methods. They all turn out to have long-range correlation in their IA-times, though the underlying PT-renewal process is used in different ways. The PT-distribution in principle has only one free parameter left, which sets its mean, since we assume α = 1.4 to be fixed based on other research ([Leland et al., 1994]). The parameter θ does not really seem to have much influence on the results (see page 2), so it is assumed to be fixed to 0.5 as well. Generally, the mean of our processes is nothing to worry about, since it can be set by scaling anyway, i.e. multiplying the inter-arrival times by some factor (therefore we look at the coefficient of variation and the autocorrelation coefficient, since they are not affected by multiplication of the random variable).

3.1 Merging of Power-Tail and Poisson Process

The following model was brought up and is further discussed in [Fiorini et al., 1995] and [Fiorini & Lipsky, 1996].

[Figure 3.1: two independent streams of arrival points on the time axis, Poisson + Power-Tail, superimposed to give the Merged stream.]

Figure 3.1: The Merged Process is an overlap of two independent renewal processes. In our case, one process has Poisson, the other one PT-distributed IA-times.

A pure PT-renewal process does not appear to be a good traffic model for two reasons: first, there are occasionally long periods with no traffic at all (large PT IA-times), which does not really happen in real network traffic. Secondly, as a renewal process it does not have any correlation in its IA-times.

Both problems are overcome when combining the PT-renewal process with other processes. That other process is chosen here to be Poisson; with most other processes, the resulting effects (i.e. long-range correlation in IA-times) are similar. Two different ways of combining will be investigated: the first one merges the streams of arrival points of the renewal processes, while the second one uses the PT-process to modulate the rate of the Poisson process (see the next section's 1-Burst process). The merging is done by independently generating a stream of PT-arrivals and Poisson-arrivals, marked as arrival points on the time axis. Then these two streams of arrival points are superimposed to build one single sorted stream (see Fig. 3.1). The IA-times finally are the time differences between successive arrivals, which could have been generated by different processes. So what are the effects? The long gaps will not happen any more, since Poisson arrivals fill them up. And there is going to be correlation in the inter-arrival times, because these filled-up gaps result in periods of pure Poisson traffic, which have a higher mean IA-time than the merged process. So having an IA-time longer than the mean increases the probability that some of the following ones, within the possible PT-gap, are longer as well. That is what positive correlation means. The only new parameter this merging process introduces is the percentage p that the Poisson arrivals have in terms of all arrivals. Since the overall arrival rate is the sum of both rates, as will be proven below, choosing pλ as the rate for the Poisson process and (1−p)λ for the PT process leads to an overall rate of λ, with p being the percentage as claimed above. The means of the individual processes then are:

PT: E(X) = 1/((1−p)λ);   Poisson: E(X) = 1/(pλ).

So for example if p = 0.1 (thus 10% Poisson arrivals) and the overall rate is λ = 1, the IA-times during those long PT-gaps have a mean of 10, that is 10 times as big as the overall mean.
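A direct sketch of this construction (a Pareto sampler stands in for the PT renewal process; the choices p = 0.1, λ = 1 and the horizon are ours):

```python
import numpy as np

rng = np.random.default_rng(7)
alpha, lam, p, horizon = 1.4, 1.0, 0.1, 1.0e5

def arrivals_from_ia(ia):
    """Turn IA-times into arrival points on the time axis, cut at the horizon."""
    t = np.cumsum(ia)
    return t[t < horizon]

n = int(3 * lam * horizon)                        # generous sample budget
# Poisson stream with rate p*lam:
poisson = arrivals_from_ia(rng.exponential(1.0 / (p * lam), n))
# PT (Pareto) renewal stream with rate (1-p)*lam, i.e. mean IA 1/((1-p)*lam):
b = (alpha - 1) / alpha / ((1 - p) * lam)         # Pareto scale for that mean
pt = arrivals_from_ia(b / rng.random(n) ** (1.0 / alpha))
# superimpose the two streams into one single sorted stream:
merged = np.sort(np.concatenate([poisson, pt]))
ia = np.diff(merged)                              # IA-times of the merged process
print(1.0 / ia.mean())                            # ~ lam: the rates add up
```

The empirical rate of the merged stream comes out close to λ, which previews the analytic result E(X) = 1/(λ_Poisson + λ_PT) derived below.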
We next look at the LAQT matrices for this SM-process. The TPT-T renewal process itself can be described as an SM-process. The L₀-matrix then has to take over the role of the entrance vector p₀: the state-leaving rate r_i (see (28)) is spread out over the i-th row of the L₀-matrix according to the probabilities in p₀:

L₀ = [ r₁ p₀ ; r₂ p₀ ; … ; r_T p₀ ]   (row i equals r_i p₀).

B₀ is the same matrix as for the GI-process (3). The additional Poisson departures in the merged process do not extend the state space, but only add the Poisson rate λ to the diagonal of L, because a Poisson departure leaves the PT process in the same state. The M = B matrix also needs modification by adding the Poisson rate to the state-leaving rates. So we get:

L = L₀ + λI;   B = M = M₀ + λI.
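These matrices are small enough to check numerically. A sketch (the TPT-T parameterization below, with entrance probabilities proportional to θ^{i−1}, phase rates 1/γ^{i−1} and θ = 0.5, is our assumption; cf. Chapter 2) verifying that the merged process's arrival rate is the sum of the two individual rates:

```python
import numpy as np

T, theta, alpha, lam = 16, 0.5, 1.4, 0.5
gamma = theta ** (-1.0 / alpha)
p0 = (1 - theta) * theta ** np.arange(T) / (1 - theta ** T)  # entrance vector
r = 1.0 / gamma ** np.arange(T)              # state-leaving rates r_i (mu = 1)
xbar = p0 @ (1.0 / r)                        # mean of the TPT-T renewal process

L0 = np.outer(r, p0)                         # row i of L0 is r_i * p0
L = L0 + lam * np.eye(T)                     # merged process: Poisson rate added
M = np.diag(r) + lam * np.eye(T)
V = np.linalg.inv(M)                         # V = B^{-1}, since B = M here

# steady-state entrance vector: stationary vector of the embedded chain V L
w, vecs = np.linalg.eig((V @ L).T)
pvec = np.real(vecs[:, np.argmax(np.real(w))])
pvec /= pvec.sum()
# one reading of the closed form p = p0 (I + lam*V0)/(1 + lam*xbar), V0 = M0^{-1}:
p_closed = p0 @ (np.eye(T) + lam * np.diag(1.0 / r)) / (1 + lam * xbar)

EX = pvec @ V @ np.ones(T)                   # mean IA-time of the merged process
print(EX, 1.0 / (lam + 1.0 / xbar))          # the two values agree: rates add
```

The embedded-chain eigenvector and the closed form coincide here, and pVε reproduces 1/(λ + 1/x̄) to machine precision.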

B is still a diagonal matrix, so it can easily be inverted by hand to get V = B⁻¹. The steady-state entrance vector p can be shown ([Fiorini et al., 1995], p. 3) to be

p = 1/(1 + λ x̄) · p₀ (I + λ V₀),

where x̄ is the mean of the PT distribution and V₀ = M₀⁻¹. Putting in all the matrices, we get for the first moment:

E(X) = pVε = x̄/(1 + λ x̄) = 1/(λ + 1/x̄).

The last expression clearly shows that the rate of the merged process is the sum of the rates of the individual processes. The second moment, E(X²) = 2pV²ε, evaluates to a finite sum over the T phases in terms of θ, γ and μ, the parameters of the TPT-T distribution (see p. 3). The matrix formula for the correlation was already given in (2). The correlation curve, as shown in

[Figure 3.2: log-log plot of r(k) vs. lag k for the merged process with p = 0.5 and the truncations TPT-64, TPT-27, TPT-23, TPT-2, TPT-7, with a reference line of slope −0.4.]

Figure 3.2: The autocorrelation coefficient r(k) vs. lag k of the merged process with 50% Poisson departures. For high truncations T, the curve looks like a straight line (for bigger k) on a log-log scale. The smaller T, the earlier the correlation drops off, without having much influence on r(k) for small k.

Figure 3.2, shows the expected linear behavior on log-log scale (i.e. long-range dependence) if the used TPT-truncation, T, is high enough. The smaller T, the earlier r(k) drops off exponentially. For T = 64, the curve looks like a straight line with slope −(α−1) = −0.4 over the whole plotted range of k. Figure 3.3 shows how the coefficient of variation, C, and the correlation lag-1, r(1), vary when changing p. The curves are calculated for a TPT-64 distribution; increasing T further does not change the results. The calculation of C and r(1) for the two different sets of 1 million inter-arrival times from Leland's Ethernet measurements ([Trace, 1989]) gave the results: C₁ = 3.22, C₂ = 3.3, r₁(1) = 0.12, r₂(1) = 0.08, i.e. 3.2 ≤ C ≤ 3.3

[Figure 3.3: coefficient of variation C and 4·r(1) for the TPT-64 merged process, plotted over the percentage p of Poisson arrivals.]

Figure 3.3: The coefficient of variation and the autocorrelation lag-1 (multiplied by 4 to put it on the same scale) for the merged process over varying proportions, p, of Poisson arrivals. The mean of the whole process is always kept constant at 1. For p → 0 the process goes to a PT-renewal process, which has infinite variance (since α = 1.4). Though r(1) = 0 for the renewal process, [Fiorini & Lipsky, 1996] show that for a TPT-64 distribution, r(1) does not drop to 0 until p = 10⁻⁶. For smaller p it seemed to be a linear drop-off to 0. So one could bring up the conjecture: lim_{p→0} r(1) > 0 for an untruncated PT.

and 0.08 ≤ r(1) ≤ 0.12. This would imply p = 9% and p = 33%, respectively. For a more detailed discussion of this process and the correlation in particular, see [Fiorini & Lipsky, 1996].

3.2 A Single Source On/Off-Model: 1-Burst

The merged model has some nice characteristics that match those of real traffic data. However, it would be difficult to explain physically how such traffic patterns could come up in a real network. We already mentioned that several researchers found evidence that file sizes are distributed according to a PT-distribution. When using fixed-size packets, this implies that the number of packets during one single file transfer is also PT-distributed. The following model, called 1-Burst, describes a series of non-overlapping file transfers with exponentially distributed gaps (OFF-times with mean B) in between. It belongs to the class of ON/OFF-models; however, there is only one source that is sending packets. This restriction will be abolished in the next section. During the PT-distributed ON-time (i.e. the file transfer), packets are generated with a constant rate λ (see Fig. 3.4). Let the mean length of the ON-times be L.
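A direct simulation of this source model (a Pareto sampler stands in for the PT ON-time distribution; the values of L, B and λ below are our choices):

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, L_on, B_off, lam = 1.4, 10.0, 30.0, 5.0   # hypothetical parameter values
b = (alpha - 1) / alpha * L_on                   # Pareto scale giving mean L_on

t, horizon, arrivals = 0.0, 2.0e5, []
while t < horizon:
    t += rng.exponential(B_off)                  # OFF-period: no packets
    on = b / rng.random() ** (1.0 / alpha)       # PT-distributed ON-period
    k = rng.poisson(lam * on)                    # Poisson packet count during ON
    arrivals.append(t + np.sort(rng.random(k)) * on)
    t += on

arrivals = np.concatenate(arrivals)
arrivals = arrivals[arrivals < horizon]
print(len(arrivals) / horizon)                   # ~ lam * L_on/(L_on + B_off)
```

Inside an ON-period the packets are placed as sorted uniform points, which is equivalent to a Poisson process conditioned on its count; the long-run packet rate approaches λ·L/(L+B), though slowly because of the heavy-tailed ON-times.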
The 1-Burst process with truncated PT distributions belongs to the bigger class of Markov Modulated Poisson Processes: the departures are always generated by a Poisson process, but its rate varies and is determined by the current state of an underlying Markov chain (hence 'Markov modulated'). Our merged process from the last section does not

[Figure 3.4: timeline of the 1-Burst model, alternating ON-periods (PT, mean L; Poisson packet arrivals at rate λ) and OFF-periods (exponential, mean B).]

Figure 3.4: The 1-Burst Process consists of alternating ON- and OFF-periods. During the PT-distributed (with mean L) ON-periods, packets are generated by a Poisson Process with rate λ. The length of the OFF-periods is taken from a negative exponential distribution; no traffic is generated during the OFF-periods (for now; later on there will be some low traffic with rate λ_l).

belong to that class, since there are also the departures from the PT-process.

[Figure 3.5: state diagram with one OFF-state (rate 1/B) and a column of T ON-states with rates μ/γ^{i−1}, entered with the TPT entrance probabilities ζ_i.]

Figure 3.5: The underlying Markov chain of the 1-Burst process. The states on the right-hand side model the TPT-T distributed ON-time, during which departures with rate λ occur. The process alternates between ON- and OFF-periods.

Figure 3.5 shows that underlying Markov chain for our 1-Burst process. It has T + 1 states: the single state on the left-hand side takes care of the OFF-period; since that period is exponentially distributed with rate 1/B, one state is enough for it. The column of states on the right is the familiar representation of the TPT-T distribution (compare with Fig. 2.2), so it stands for the ON-time. From the OFF-state (state 1) the process goes to one of the T ON-states (numbered 2, …, T+1) according to the PT entrance probabilities: P^(m)_{1,i+1} = (p₀)_i. Since ON- and OFF-times strictly alternate and there is only one OFF-state, there is only one choice for going back, so P^(m)_{k,1} = 1, k = 2, …, T+1. Remember that this is only the underlying process, which modulates the rate of the Poisson departures (therefore the superscript m) by describing the length of the ON-periods, so there are no

departures (B^(m) = 0). The rate matrix is

M^(m) = diag( 1/B , μ , μ/γ , … , μ/γ^{T−1} ),

whereby μ is chosen according to (29), so that the TPT-T distribution has mean L. These matrices now can easily be modified to include the modulated Poisson process; the state space is exactly the same. Since the departures do not change the state of the underlying chain, the L-matrix is diagonal with the corresponding Poisson rates (here only 0 or λ):

L = diag( 0 , λ , λ , … , λ ).

Since in this case L also is a diagonal matrix, and the Poisson departures are a second possibility of leaving a phase of the SM-process, the matrix L adds to the state-leaving rates:

M = M^(m) + L.

Also, the state transition probabilities have to be changed: a transition to another state of the underlying Markov chain only occurs if the negative-exponential state time (rate M^(m)_ii for state i) is smaller than the departure time (rate L_ii); compare with (7):

P_ij = P^(m)_ij · M^(m)_ii / ( M^(m)_ii + L_ii ).   (33)

We could write down all the matrices now and do the calculations with them. However, we want to modify the model slightly before that. As it is, nothing at all happens during the OFF-times. So again, there will be gaps with no traffic at all, which is not really the case for real network traffic. In addition, as we will see later, these gaps would lead to very high coefficients of variation, higher than the values we are aiming for. To avoid that, some amount of Poisson traffic with relatively low rate λ_l is superimposed (comparable to the merged process; here it is not a PT-renewal process but the ON-OFF process that gets merged). That additional Poisson traffic could be seen as low background traffic for the bursty file transfers of the ON-OFF model. Adding that background traffic to the model is fairly easy, since the departure process is already Poisson.
The underlying chain does not change at all: the rate λ_l is added to the diagonal of the L-matrix, and the calculations of M and P stay as they are, using the 'new' L-matrix. Finally we get:

L = diag( λ_l , λ + λ_l , … , λ + λ_l );

P = [ first row (0, p̂); first column (0, q₁)ᵀ; all other elements 0 ],

M = diag( 1/B + λ_l , μ₁ + λ + λ_l , … , μ_T + λ + λ_l ),

where μ_i = μ/γ^{i−1} and

p̂ := p₀ / (1 + B λ_l),   (q₁)_i = μ_i / ( μ_i + λ + λ_l ),   i = 1, …, T.

Since only the first row and the first column of the matrix P have non-zero elements, P can be written as a sum of two rank-1 matrices Q₁, Q₂:

P = Q₁ + Q₂,   Q₁ := [ first row (0, p̂), all other rows 0 ],   Q₂ := [ first column (0, q₁)ᵀ, all other columns 0 ].

From these equations it is easy to show: Q₁² = Q₂² = 0. Also, defining

Q₁₂ := Q₁ Q₂,   Q₂₁ := Q₂ Q₁,

we get P² = (Q₁ + Q₂)² = Q₁₂ + Q₂₁. Using these matrices helps a lot in getting the inverse V = B⁻¹ = (I − P)⁻¹ M⁻¹. It turns out that (I − P)⁻¹ is a linear combination of the five matrices I, Q₁, Q₂, Q₁₂, Q₂₁. So assuming

(I − P)⁻¹ = I + a Q₁ + b Q₂ + c Q₁₂ + d Q₂₁,

the unknowns a, b, c, d follow from looking at the components of the matrix equation

I = (I − P)(I − P)⁻¹ = (I − Q₁ − Q₂)(I + a Q₁ + b Q₂ + c Q₁₂ + d Q₂₁).

Using the special structure of the participating matrices, the scalar equations for the single elements of this matrix equation give

a = b = c = d = 1/(1 − p̂ q₁),

where p̂ q₁ denotes the scalar product of p̂ and q₁. Then V comes out as

V = (I − P)⁻¹ M⁻¹ = [ I + 1/(1 − p̂ q₁) (Q₁ + Q₂ + Q₁₂ + Q₂₁) ] M⁻¹.
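The rank-1 decomposition and the closed-form inverse can be verified numerically; a sketch (the parameter values and the TPT entrance vector p₀ below are our assumptions):

```python
import numpy as np

T, theta, alpha = 8, 0.5, 1.4
mu, lam, lam_l, B = 1.0, 2.0, 0.1, 5.0
gamma = theta ** (-1.0 / alpha)
p0 = (1 - theta) * theta ** np.arange(T) / (1 - theta ** T)
mu_i = mu / gamma ** np.arange(T)

phat = p0 / (1 + B * lam_l)                      # first row of P (OFF -> ON)
q1 = mu_i / (mu_i + lam + lam_l)                 # first column of P (ON -> OFF)
P = np.zeros((T + 1, T + 1))
P[0, 1:], P[1:, 0] = phat, q1

Q1, Q2 = np.zeros_like(P), np.zeros_like(P)
Q1[0, 1:], Q2[1:, 0] = phat, q1                  # rank-1 pieces, P = Q1 + Q2
a = 1.0 / (1.0 - phat @ q1)                      # a = b = c = d
closed = np.eye(T + 1) + a * (Q1 + Q2 + Q1 @ Q2 + Q2 @ Q1)
print(np.allclose(closed, np.linalg.inv(np.eye(T + 1) - P)))   # True
```

The closed form agrees with the numerical inverse, which confirms the scalar solution a = b = c = d = 1/(1 − p̂q₁) for this structure of P.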


Multivariate Time Series Multivariate Time Series Notation: I do not use boldface (or anything else) to distinguish vectors from scalars. Tsay (and many other writers) do. I denote a multivariate stochastic process in the form

More information

Reading: Karlin and Taylor Ch. 5 Resnick Ch. 3. A renewal process is a generalization of the Poisson point process.

Reading: Karlin and Taylor Ch. 5 Resnick Ch. 3. A renewal process is a generalization of the Poisson point process. Renewal Processes Wednesday, December 16, 2015 1:02 PM Reading: Karlin and Taylor Ch. 5 Resnick Ch. 3 A renewal process is a generalization of the Poisson point process. The Poisson point process is completely

More information

Infinite series, improper integrals, and Taylor series

Infinite series, improper integrals, and Taylor series Chapter 2 Infinite series, improper integrals, and Taylor series 2. Introduction to series In studying calculus, we have explored a variety of functions. Among the most basic are polynomials, i.e. functions

More information

Dynamic resource sharing

Dynamic resource sharing J. Virtamo 38.34 Teletraffic Theory / Dynamic resource sharing and balanced fairness Dynamic resource sharing In previous lectures we have studied different notions of fair resource sharing. Our focus

More information

4.1 Eigenvalues, Eigenvectors, and The Characteristic Polynomial

4.1 Eigenvalues, Eigenvectors, and The Characteristic Polynomial Linear Algebra (part 4): Eigenvalues, Diagonalization, and the Jordan Form (by Evan Dummit, 27, v ) Contents 4 Eigenvalues, Diagonalization, and the Jordan Canonical Form 4 Eigenvalues, Eigenvectors, and

More information

Outline. Finite source queue M/M/c//K Queues with impatience (balking, reneging, jockeying, retrial) Transient behavior Advanced Queue.

Outline. Finite source queue M/M/c//K Queues with impatience (balking, reneging, jockeying, retrial) Transient behavior Advanced Queue. Outline Finite source queue M/M/c//K Queues with impatience (balking, reneging, jockeying, retrial) Transient behavior Advanced Queue Batch queue Bulk input queue M [X] /M/1 Bulk service queue M/M [Y]

More information

Bulk input queue M [X] /M/1 Bulk service queue M/M [Y] /1 Erlangian queue M/E k /1

Bulk input queue M [X] /M/1 Bulk service queue M/M [Y] /1 Erlangian queue M/E k /1 Advanced Markovian queues Bulk input queue M [X] /M/ Bulk service queue M/M [Y] / Erlangian queue M/E k / Bulk input queue M [X] /M/ Batch arrival, Poisson process, arrival rate λ number of customers in

More information

Queuing Networks: Burke s Theorem, Kleinrock s Approximation, and Jackson s Theorem. Wade Trappe

Queuing Networks: Burke s Theorem, Kleinrock s Approximation, and Jackson s Theorem. Wade Trappe Queuing Networks: Burke s Theorem, Kleinrock s Approximation, and Jackson s Theorem Wade Trappe Lecture Overview Network of Queues Introduction Queues in Tandem roduct Form Solutions Burke s Theorem What

More information

Stochastic modelling of epidemic spread

Stochastic modelling of epidemic spread Stochastic modelling of epidemic spread Julien Arino Centre for Research on Inner City Health St Michael s Hospital Toronto On leave from Department of Mathematics University of Manitoba Julien Arino@umanitoba.ca

More information

5 Eigenvalues and Diagonalization

5 Eigenvalues and Diagonalization Linear Algebra (part 5): Eigenvalues and Diagonalization (by Evan Dummit, 27, v 5) Contents 5 Eigenvalues and Diagonalization 5 Eigenvalues, Eigenvectors, and The Characteristic Polynomial 5 Eigenvalues

More information

Solution Set 7, Fall '12

Solution Set 7, Fall '12 Solution Set 7, 18.06 Fall '12 1. Do Problem 26 from 5.1. (It might take a while but when you see it, it's easy) Solution. Let n 3, and let A be an n n matrix whose i, j entry is i + j. To show that det

More information

G METHOD IN ACTION: FROM EXACT SAMPLING TO APPROXIMATE ONE

G METHOD IN ACTION: FROM EXACT SAMPLING TO APPROXIMATE ONE G METHOD IN ACTION: FROM EXACT SAMPLING TO APPROXIMATE ONE UDREA PÄUN Communicated by Marius Iosifescu The main contribution of this work is the unication, by G method using Markov chains, therefore, a

More information

A matrix over a field F is a rectangular array of elements from F. The symbol

A matrix over a field F is a rectangular array of elements from F. The symbol Chapter MATRICES Matrix arithmetic A matrix over a field F is a rectangular array of elements from F The symbol M m n (F ) denotes the collection of all m n matrices over F Matrices will usually be denoted

More information

Getting Started with Communications Engineering

Getting Started with Communications Engineering 1 Linear algebra is the algebra of linear equations: the term linear being used in the same sense as in linear functions, such as: which is the equation of a straight line. y ax c (0.1) Of course, if we

More information

Packet Size

Packet Size Long Range Dependence in vbns ATM Cell Level Trac Ronn Ritke y and Mario Gerla UCLA { Computer Science Department, 405 Hilgard Ave., Los Angeles, CA 90024 ritke@cs.ucla.edu, gerla@cs.ucla.edu Abstract

More information

6.1 Moment Generating and Characteristic Functions

6.1 Moment Generating and Characteristic Functions Chapter 6 Limit Theorems The power statistics can mostly be seen when there is a large collection of data points and we are interested in understanding the macro state of the system, e.g., the average,

More information

Chapter 11 - Sequences and Series

Chapter 11 - Sequences and Series Calculus and Analytic Geometry II Chapter - Sequences and Series. Sequences Definition. A sequence is a list of numbers written in a definite order, We call a n the general term of the sequence. {a, a

More information

A POPULATION-MIX DRIVEN APPROXIMATION FOR QUEUEING NETWORKS WITH FINITE CAPACITY REGIONS

A POPULATION-MIX DRIVEN APPROXIMATION FOR QUEUEING NETWORKS WITH FINITE CAPACITY REGIONS A POPULATION-MIX DRIVEN APPROXIMATION FOR QUEUEING NETWORKS WITH FINITE CAPACITY REGIONS J. Anselmi 1, G. Casale 2, P. Cremonesi 1 1 Politecnico di Milano, Via Ponzio 34/5, I-20133 Milan, Italy 2 Neptuny

More information

2. Transience and Recurrence

2. Transience and Recurrence Virtual Laboratories > 15. Markov Chains > 1 2 3 4 5 6 7 8 9 10 11 12 2. Transience and Recurrence The study of Markov chains, particularly the limiting behavior, depends critically on the random times

More information

[Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty.]

[Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty.] Math 43 Review Notes [Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty Dot Product If v (v, v, v 3 and w (w, w, w 3, then the

More information

The Growth of Functions. A Practical Introduction with as Little Theory as possible

The Growth of Functions. A Practical Introduction with as Little Theory as possible The Growth of Functions A Practical Introduction with as Little Theory as possible Complexity of Algorithms (1) Before we talk about the growth of functions and the concept of order, let s discuss why

More information

6.041/6.431 Fall 2010 Final Exam Solutions Wednesday, December 15, 9:00AM - 12:00noon.

6.041/6.431 Fall 2010 Final Exam Solutions Wednesday, December 15, 9:00AM - 12:00noon. 604/643 Fall 200 Final Exam Solutions Wednesday, December 5, 9:00AM - 2:00noon Problem (32 points) Consider a Markov chain {X n ; n 0,, }, specified by the following transition diagram 06 05 09 04 03 2

More information

Modelling data networks stochastic processes and Markov chains

Modelling data networks stochastic processes and Markov chains Modelling data networks stochastic processes and Markov chains a 1, 3 1, 2 2, 2 b 0, 3 2, 3 u 1, 3 α 1, 6 c 0, 3 v 2, 2 β 1, 1 Richard G. Clegg (richard@richardclegg.org) December 2011 Available online

More information

Chapter 2 Queueing Theory and Simulation

Chapter 2 Queueing Theory and Simulation Chapter 2 Queueing Theory and Simulation Based on the slides of Dr. Dharma P. Agrawal, University of Cincinnati and Dr. Hiroyuki Ohsaki Graduate School of Information Science & Technology, Osaka University,

More information

a11 a A = : a 21 a 22

a11 a A = : a 21 a 22 Matrices The study of linear systems is facilitated by introducing matrices. Matrix theory provides a convenient language and notation to express many of the ideas concisely, and complicated formulas are

More information

1 Basic concepts from probability theory

1 Basic concepts from probability theory Basic concepts from probability theory This chapter is devoted to some basic concepts from probability theory.. Random variable Random variables are denoted by capitals, X, Y, etc. The expected value or

More information

IP Packet Level vbns Trac. fjbgao, vwani,

IP Packet Level vbns Trac.   fjbgao, vwani, IP Packet Level vbns Trac Analysis and Modeling Jianbo Gao a,vwani P. Roychowdhury a, Ronn Ritke b, and Izhak Rubin a a Electrical Engineering Department, University of California, Los Angeles, Los Angeles,

More information

Notes for CS542G (Iterative Solvers for Linear Systems)

Notes for CS542G (Iterative Solvers for Linear Systems) Notes for CS542G (Iterative Solvers for Linear Systems) Robert Bridson November 20, 2007 1 The Basics We re now looking at efficient ways to solve the linear system of equations Ax = b where in this course,

More information

hapter 8 Simulation/Realization 8 Introduction Given an nth-order state-space description of the form x_ (t) = f (x(t) u(t) t) (state evolution equati

hapter 8 Simulation/Realization 8 Introduction Given an nth-order state-space description of the form x_ (t) = f (x(t) u(t) t) (state evolution equati Lectures on Dynamic Systems and ontrol Mohammed Dahleh Munther Dahleh George Verghese Department of Electrical Engineering and omputer Science Massachuasetts Institute of Technology c hapter 8 Simulation/Realization

More information

A TANDEM QUEUE WITH SERVER SLOW-DOWN AND BLOCKING

A TANDEM QUEUE WITH SERVER SLOW-DOWN AND BLOCKING Stochastic Models, 21:695 724, 2005 Copyright Taylor & Francis, Inc. ISSN: 1532-6349 print/1532-4214 online DOI: 10.1081/STM-200056037 A TANDEM QUEUE WITH SERVER SLOW-DOWN AND BLOCKING N. D. van Foreest

More information

Queueing Theory. VK Room: M Last updated: October 17, 2013.

Queueing Theory. VK Room: M Last updated: October 17, 2013. Queueing Theory VK Room: M1.30 knightva@cf.ac.uk www.vincent-knight.com Last updated: October 17, 2013. 1 / 63 Overview Description of Queueing Processes The Single Server Markovian Queue Multi Server

More information

Queueing systems. Renato Lo Cigno. Simulation and Performance Evaluation Queueing systems - Renato Lo Cigno 1

Queueing systems. Renato Lo Cigno. Simulation and Performance Evaluation Queueing systems - Renato Lo Cigno 1 Queueing systems Renato Lo Cigno Simulation and Performance Evaluation 2014-15 Queueing systems - Renato Lo Cigno 1 Queues A Birth-Death process is well modeled by a queue Indeed queues can be used to

More information

Appendix A. Math Reviews 03Jan2007. A.1 From Simple to Complex. Objectives. 1. Review tools that are needed for studying models for CLDVs.

Appendix A. Math Reviews 03Jan2007. A.1 From Simple to Complex. Objectives. 1. Review tools that are needed for studying models for CLDVs. Appendix A Math Reviews 03Jan007 Objectives. Review tools that are needed for studying models for CLDVs.. Get you used to the notation that will be used. Readings. Read this appendix before class.. Pay

More information

Differential Equations

Differential Equations This document was written and copyrighted by Paul Dawkins. Use of this document and its online version is governed by the Terms and Conditions of Use located at. The online version of this document is

More information

Lecture 6 Positive Definite Matrices

Lecture 6 Positive Definite Matrices Linear Algebra Lecture 6 Positive Definite Matrices Prof. Chun-Hung Liu Dept. of Electrical and Computer Engineering National Chiao Tung University Spring 2017 2017/6/8 Lecture 6: Positive Definite Matrices

More information

Modeling Parallel and Distributed Systems with Finite Workloads

Modeling Parallel and Distributed Systems with Finite Workloads Modeling Parallel and Distributed Systems with Finite Workloads Ahmed M. Mohamed, Lester Lipsky and Reda Ammar {ahmed, lester, reda@engr.uconn.edu} Dept. of Computer Science and Engineering University

More information

THE QUEEN S UNIVERSITY OF BELFAST

THE QUEEN S UNIVERSITY OF BELFAST THE QUEEN S UNIVERSITY OF BELFAST 0SOR20 Level 2 Examination Statistics and Operational Research 20 Probability and Distribution Theory Wednesday 4 August 2002 2.30 pm 5.30 pm Examiners { Professor R M

More information

Uniform random numbers generators

Uniform random numbers generators Uniform random numbers generators Lecturer: Dmitri A. Moltchanov E-mail: moltchan@cs.tut.fi http://www.cs.tut.fi/kurssit/tlt-2707/ OUTLINE: The need for random numbers; Basic steps in generation; Uniformly

More information

TAKEHOME FINAL EXAM e iω e 2iω e iω e 2iω

TAKEHOME FINAL EXAM e iω e 2iω e iω e 2iω ECO 513 Spring 2015 TAKEHOME FINAL EXAM (1) Suppose the univariate stochastic process y is ARMA(2,2) of the following form: y t = 1.6974y t 1.9604y t 2 + ε t 1.6628ε t 1 +.9216ε t 2, (1) where ε is i.i.d.

More information

VARIANCE REDUCTION IN SIMULATIONS OF LOSS MODELS

VARIANCE REDUCTION IN SIMULATIONS OF LOSS MODELS VARIANCE REDUCTION IN SIMULATIONS OF LOSS MODELS by Rayadurgam Srikant 1 and Ward Whitt 2 October 20, 1995 Revision: September 26, 1996 1 Coordinated Science Laboratory, University of Illinois, 1308 W.

More information

CPSC 531: System Modeling and Simulation. Carey Williamson Department of Computer Science University of Calgary Fall 2017

CPSC 531: System Modeling and Simulation. Carey Williamson Department of Computer Science University of Calgary Fall 2017 CPSC 531: System Modeling and Simulation Carey Williamson Department of Computer Science University of Calgary Fall 2017 Motivating Quote for Queueing Models Good things come to those who wait - poet/writer

More information

Queueing Theory I Summary! Little s Law! Queueing System Notation! Stationary Analysis of Elementary Queueing Systems " M/M/1 " M/M/m " M/M/1/K "

Queueing Theory I Summary! Little s Law! Queueing System Notation! Stationary Analysis of Elementary Queueing Systems  M/M/1  M/M/m  M/M/1/K Queueing Theory I Summary Little s Law Queueing System Notation Stationary Analysis of Elementary Queueing Systems " M/M/1 " M/M/m " M/M/1/K " Little s Law a(t): the process that counts the number of arrivals

More information

Multivariate Distributions

Multivariate Distributions IEOR E4602: Quantitative Risk Management Spring 2016 c 2016 by Martin Haugh Multivariate Distributions We will study multivariate distributions in these notes, focusing 1 in particular on multivariate

More information

57:022 Principles of Design II Final Exam Solutions - Spring 1997

57:022 Principles of Design II Final Exam Solutions - Spring 1997 57:022 Principles of Design II Final Exam Solutions - Spring 1997 Part: I II III IV V VI Total Possible Pts: 52 10 12 16 13 12 115 PART ONE Indicate "+" if True and "o" if False: + a. If a component's

More information

2 EBERHARD BECKER ET AL. has a real root. Thus our problem can be reduced to the problem of deciding whether or not a polynomial in one more variable

2 EBERHARD BECKER ET AL. has a real root. Thus our problem can be reduced to the problem of deciding whether or not a polynomial in one more variable Deciding positivity of real polynomials Eberhard Becker, Victoria Powers, and Thorsten Wormann Abstract. We describe an algorithm for deciding whether or not a real polynomial is positive semidenite. The

More information

[POLS 8500] Review of Linear Algebra, Probability and Information Theory

[POLS 8500] Review of Linear Algebra, Probability and Information Theory [POLS 8500] Review of Linear Algebra, Probability and Information Theory Professor Jason Anastasopoulos ljanastas@uga.edu January 12, 2017 For today... Basic linear algebra. Basic probability. Programming

More information

PERFORMANCE-RELEVANT NETWORK TRAFFIC CORRELATION

PERFORMANCE-RELEVANT NETWORK TRAFFIC CORRELATION PERFORMANCE-RELEVANT NETWORK TRAFFIC CORRELATION Hans-Peter Schwefel Center for Teleinfrastruktur Aalborg University email: hps@kom.aau.dk Lester Lipsky Dept. of Comp. Sci. & Eng. University of Connecticut

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY Department of Electrical Engineering and Computer Science

MASSACHUSETTS INSTITUTE OF TECHNOLOGY Department of Electrical Engineering and Computer Science MASSACHUSETTS INSTITUTE OF TECHNOLOGY Department of Electrical Engineering and Computer Science 6.262 Discrete Stochastic Processes Midterm Quiz April 6, 2010 There are 5 questions, each with several parts.

More information

NICTA Short Course. Network Analysis. Vijay Sivaraman. Day 1 Queueing Systems and Markov Chains. Network Analysis, 2008s2 1-1

NICTA Short Course. Network Analysis. Vijay Sivaraman. Day 1 Queueing Systems and Markov Chains. Network Analysis, 2008s2 1-1 NICTA Short Course Network Analysis Vijay Sivaraman Day 1 Queueing Systems and Markov Chains Network Analysis, 2008s2 1-1 Outline Why a short course on mathematical analysis? Limited current course offering

More information

ABC methods for phase-type distributions with applications in insurance risk problems

ABC methods for phase-type distributions with applications in insurance risk problems ABC methods for phase-type with applications problems Concepcion Ausin, Department of Statistics, Universidad Carlos III de Madrid Joint work with: Pedro Galeano, Universidad Carlos III de Madrid Simon

More information

October 7, :8 WSPC/WS-IJWMIP paper. Polynomial functions are renable

October 7, :8 WSPC/WS-IJWMIP paper. Polynomial functions are renable International Journal of Wavelets, Multiresolution and Information Processing c World Scientic Publishing Company Polynomial functions are renable Henning Thielemann Institut für Informatik Martin-Luther-Universität

More information

Lecture 7: Simulation of Markov Processes. Pasi Lassila Department of Communications and Networking

Lecture 7: Simulation of Markov Processes. Pasi Lassila Department of Communications and Networking Lecture 7: Simulation of Markov Processes Pasi Lassila Department of Communications and Networking Contents Markov processes theory recap Elementary queuing models for data networks Simulation of Markov

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY Department of Electrical Engineering and Computer Science

MASSACHUSETTS INSTITUTE OF TECHNOLOGY Department of Electrical Engineering and Computer Science MASSACHUSETTS INSTITUTE OF TECHNOLOGY Department of Electrical Engineering and Computer Science 6.262 Discrete Stochastic Processes Midterm Quiz April 6, 2010 There are 5 questions, each with several parts.

More information

On the Structure of Low Autocorrelation Binary Sequences

On the Structure of Low Autocorrelation Binary Sequences On the Structure of Low Autocorrelation Binary Sequences Svein Bjarte Aasestøl University of Bergen, Bergen, Norway December 1, 2005 1 blank 2 Contents 1 Introduction 5 2 Overview 5 3 Denitions 6 3.1 Shift

More information