Recap. Probability, stochastic processes, Markov chains. ELEC-C7210 Modeling and analysis of communication networks
1 Recap: Probability, stochastic processes, Markov chains
ELEC-C7210 Modeling and analysis of communication networks
2 Recap: Probability theory, important distributions
- Discrete distributions: geometric distribution, Poisson distribution
- Continuous distributions: exponential distribution
- The memoryless property
3 Geometric distribution
X ~ Geom(p), p in (0,1)
- number of successes until the first failure in an independent series of simple random experiments (of Bernoulli type)
- p = probability of success in any single experiment
- Value set: S = {0,1,...}
- Point probabilities: P{X = i} = p^i (1 - p)
- Mean value: E[X] = sum_i i p^i (1 - p) = p/(1 - p)
- Second moment: E[X^2] = sum_i i^2 p^i (1 - p) = 2(p/(1 - p))^2 + p/(1 - p)
- Variance: D^2[X] = E[X^2] - E[X]^2 = p/(1 - p)^2
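The moment formulas for Geom(p) are easy to check numerically; the following Python sketch (with an arbitrary value of p) simply truncates the sums over the point probabilities:

```python
# Numerical check of the Geom(p) formulas: P{X = i} = p^i (1 - p) on S = {0,1,...}.
p = 0.6  # arbitrary success probability for illustration
probs = [(p ** i) * (1 - p) for i in range(400)]  # tail beyond i = 400 is negligible
mean = sum(i * q for i, q in enumerate(probs))
second = sum(i * i * q for i, q in enumerate(probs))
variance = second - mean ** 2
print(round(mean, 6))      # p/(1-p) = 1.5
print(round(variance, 6))  # p/(1-p)^2 = 3.75
```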
4 Memoryless property of geometric distribution
The geometric distribution has the so-called memoryless property: for all i, j in {0,1,...}
P{X >= i + j | X >= i} = P{X >= j}
Prove! Tip: prove first that P{X >= i} = p^i
5 Poisson distribution
X ~ Poisson(a), a > 0
- limit of the binomial distribution as n -> infinity and p -> 0 in such a way that np -> a
- Value set: S = {0,1,...}
- Point probabilities: P{X = i} = (a^i / i!) e^(-a)
- Mean value: E[X] = a
- Second moment: E[X(X - 1)] = a^2, so E[X^2] = a^2 + a
- Variance: D^2[X] = E[X^2] - E[X]^2 = a
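A similar numerical check for Poisson(a), again with an arbitrary parameter value:

```python
# Numerical check of the Poisson(a) moments: P{X = i} = a^i/i! * e^(-a).
import math

a = 2.5  # arbitrary parameter for illustration
pmf = [a ** i / math.factorial(i) * math.exp(-a) for i in range(150)]
mean = sum(i * q for i, q in enumerate(pmf))
second = sum(i * i * q for i, q in enumerate(pmf))
print(round(mean, 6))                # E[X] = a = 2.5
print(round(second - mean ** 2, 6))  # D^2[X] = a = 2.5
```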
6 Exponential distribution
X ~ Exp(λ), λ > 0
- continuous counterpart of the geometric distribution (failure probability ~ λ dt):
  P{X in (t, t+h] | X > t} = λh + o(h), where o(h)/h -> 0 as h -> 0
- Value space: S = (0, infinity)
- Probability density function (pdf): f_X(x) = λ e^(-λx), x > 0
- Cumulative distribution function (cdf): F_X(x) = P{X <= x} = 1 - e^(-λx), x > 0
- Mean value: E[X] = integral_0^infinity x λ e^(-λx) dx = 1/λ
- Second moment: E[X^2] = integral_0^infinity x^2 λ e^(-λx) dx = 2/λ^2
- Variance: D^2[X] = E[X^2] - E[X]^2 = 1/λ^2
7 Memoryless property of exponential distribution
The exponential distribution has the so-called memoryless property: for all x, y in (0, infinity)
P{X > x + y | X > x} = P{X > y}
Application: assume that the call holding time is exponentially distributed with mean h minutes. Consider a call that has already lasted for x minutes. Due to the memoryless property, this gives no information about the length of the remaining holding time: it is distributed as the original holding time and, on average, still lasts h minutes!
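The memoryless property also shows up directly in simulation; in this sketch the rate λ and the time points x and y are arbitrary illustrative values:

```python
# Monte Carlo illustration of P{X > x+y | X > x} = P{X > y} for X ~ Exp(lam).
import math
import random

random.seed(1)
lam, x, y = 0.5, 2.0, 3.0  # arbitrary illustrative values
samples = [random.expovariate(lam) for _ in range(200_000)]
survivors = [s for s in samples if s > x]                     # condition on X > x
cond = sum(1 for s in survivors if s > x + y) / len(survivors)
uncond = math.exp(-lam * y)                                   # P{X > y}
print(abs(cond - uncond) < 0.01)  # the two tail probabilities agree
```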
8 Minimum of exponential random variables
Let X_1 ~ Exp(λ_1) and X_2 ~ Exp(λ_2) be independent. Then
X_min := min{X_1, X_2} ~ Exp(λ_1 + λ_2)
and
P{X_min = X_i} = λ_i / (λ_1 + λ_2), i in {1,2}
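Both claims can be checked with a short Monte Carlo sketch (the rates are arbitrary):

```python
# Monte Carlo check: min{X1, X2} ~ Exp(l1 + l2) and P{X_min = X1} = l1/(l1 + l2).
import random

random.seed(2)
l1, l2, n = 1.0, 3.0, 200_000  # arbitrary rates
pairs = [(random.expovariate(l1), random.expovariate(l2)) for _ in range(n)]
mean_min = sum(min(a, b) for a, b in pairs) / n  # should be ~ 1/(l1+l2) = 0.25
p_first = sum(1 for a, b in pairs if a < b) / n  # should be ~ l1/(l1+l2) = 0.25
print(round(mean_min, 2), round(p_first, 2))
```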
9 Recap: Stochastic processes
- Definition of a stochastic process
- Why are we interested in stochastic processes?
- Markov processes (= continuous-time Markov chains)
  - example: Poisson process; transition rates; birth-death processes (e.g. queues!)
- Markov chains
  - transition probabilities; state classification; possible periodicity; equilibrium/stationary distribution
10 Stochastic processes (1)
- Consider some quantity in a system; it typically evolves in time randomly
  - Example 1: the number of occupied channels in a telephone link at time t or at the arrival time of the nth customer
  - Example 2: the number of packets in the buffer of a statistical multiplexer at time t or at the arrival time of the nth customer
- This kind of evolution is described by a stochastic process
- At any individual time t (or n) the system can be described by a random variable; thus, a stochastic process is a collection of random variables
11 Stochastic processes (2)
- Definition: A (real-valued) stochastic process X = (X_t | t in I) is a collection of random variables X_t taking values in some (real-valued) set S, X_t(ω) in S, and indexed by a real-valued (time) parameter t in I.
- Stochastic processes are also called random processes (or just processes)
- The index set I is called the parameter space of the process
- The value set S is called the state space of the process
- Note: sometimes the notation X_t is used to refer to the whole stochastic process (instead of a single random variable related to time t)
12 Example
- Consider the traffic process X = (X_t | t in [0,T]) in a link between two telephone exchanges during some time interval [0,T]; X_t denotes the number of occupied channels at time t
- A sample point ω tells us the number X_0 of occupied channels at time 0, the remaining holding times of the calls going on at time 0, the times at which new calls arrive, and the holding times of these new calls. From this information, it is possible to construct the realization X_t(ω) of the traffic process
- Note that all the randomness in the process is included in the sample point ω; given the sample point, the realization of the process is just a (deterministic) function of time
13 Categories of stochastic processes
Reminder: parameter space = set I of indices t; state space = set S of values X_t
Categories:
- based on the parameter space: discrete-time processes (parameter space discrete), continuous-time processes (parameter space continuous)
- based on the state space: discrete-state processes (state space discrete), continuous-state processes (state space continuous)
In this course we concentrate on discrete-state processes (with either a discrete or a continuous parameter space (time)). Typical processes describe the number of customers in a queueing system (the state space thus being S = {0,1,2,...})
14 Arrival process
An arrival process can be described as
- a point process (τ_n | n = 1,2,...) where τ_n tells the arrival time of the nth customer (discrete-time, continuous-state)
  - non-decreasing: τ_{n+1} >= τ_n for all n (thus non-stationary!)
  - typically it is assumed that the interarrival times τ_n - τ_{n-1} are independent and identically distributed (IID) => renewal process; then it is sufficient to specify the interarrival time distribution
  - exponential IID interarrival times => Poisson process
- a counter process (A(t) | t >= 0) where A(t) tells the number of arrivals up to time t (continuous-time, discrete-state)
  - non-decreasing: A(t+Δ) >= A(t) for all t, Δ >= 0 (thus non-stationary!)
  - independent and identically distributed (IID) increments A(t+Δ) - A(t) with Poisson distribution => Poisson process
15 State process
- In simple cases the state of the system is described just by an integer, e.g. the number X(t) of calls or packets at time t; this yields a state process that is continuous-time and discrete-state
- In more complicated cases, the state process is e.g. a vector of integers (cf. loss and queueing network models)
- Typically we are interested in whether the state process has a stationary distribution and, if so, what it is
- Even if the state of the system does not follow the stationary distribution at time 0, in many cases the state distribution approaches the stationary distribution as t tends to infinity
16 Poisson process
- Definition 1: A point process (τ_n | n = 1,2,...) is a Poisson process with intensity λ if the probability that there is an event during a short time interval (t, t+h] is λh + o(h), independently of the other time intervals
- Definition 2: A point process (τ_n | n = 1,2,...) is a Poisson process with intensity λ if the interarrival times τ_n - τ_{n-1} are independent and identically distributed (IID) with distribution Exp(λ)
- Definition 3: A counter process (A(t) | t >= 0) is a Poisson process with intensity λ if its increments in disjoint intervals are independent and follow a Poisson distribution as follows:
  A(t+Δ) - A(t) ~ Poisson(λΔ)
17 Three ways to characterize the Poisson process
It is possible to show that all three definitions of a Poisson process are, indeed, equivalent.
[Figure: sample path of the counter process A(t); in each short interval of length h there is no event with probability 1 - λh + o(h) and an event with probability λh + o(h)]
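One direction of the equivalence can be probed numerically: generate the process from Definition 2 (Exp(λ) interarrival times) and check the count property of Definition 3. A sketch with arbitrary parameters:

```python
# Generate arrivals with Exp(lam) interarrival times and count them in [0, T];
# the counts should have mean and variance lam*T, as for a Poisson(lam*T) variable.
import random

random.seed(3)
lam, T, runs = 2.0, 5.0, 100_000  # illustrative parameters
counts = []
for _ in range(runs):
    t, n = random.expovariate(lam), 0
    while t <= T:
        n += 1
        t += random.expovariate(lam)
    counts.append(n)
mean = sum(counts) / runs
var = sum((c - mean) ** 2 for c in counts) / runs
print(round(mean, 1), round(var, 1))  # both close to lam*T = 10
```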
18 Important properties of the Poisson process
- Uniform distribution, sum, random sorting and sampling
- PASTA: arriving customers see the system in the stationary state
  - PASTA = Poisson Arrivals See Time Averages
  - the PASTA property is valid only for Poisson arrivals, i.e., it is not valid for other arrival processes
19 Demos: D2/1 and D2/2
D2/1 Buses leave from a bus stop regularly every 15 minutes. Taxis pass by according to a Poisson process, on average once in 15 minutes. You arrive at the bus stop at a random time instant. (a) What is your expected waiting time until the first bus arrives? (b) What is your expected waiting time until the first taxi arrives? (c) What is the probability that you have to wait longer than 10 minutes until the first bus or taxi arrives?
D2/2 Solve the previous problem by assuming now that you arrive at the bus stop just after a bus has left it.
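As a sanity check for D2/1 (a) and (b), a Monte Carlo sketch (not a substitute for the analytical solution): a random arrival falls uniformly within the 15-minute bus cycle, while the taxi wait is Exp(1/15) by the memoryless property:

```python
# D2/1 sanity check: bus wait is Uniform(0, 15), taxi wait is Exp(1/15) minutes.
import random

random.seed(4)
n = 200_000
bus_wait = [random.uniform(0, 15) for _ in range(n)]        # time to next bus
taxi_wait = [random.expovariate(1 / 15) for _ in range(n)]  # time to next taxi
mean_bus = sum(bus_wait) / n
mean_taxi = sum(taxi_wait) / n
print(round(mean_bus, 1))  # about 7.5 minutes
print(round(mean_taxi))    # about 15 minutes
```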
20 Markov process
Consider a continuous-time and discrete-state stochastic process X(t) with state space S = {0,1,...,N} or S = {0,1,...}
Definition: The process X(t) is a Markov process if
P{X(t_{n+1}) = x_{n+1} | X(t_1) = x_1, ..., X(t_n) = x_n} = P{X(t_{n+1}) = x_{n+1} | X(t_n) = x_n}
for all n, t_1 < ... < t_{n+1} and x_1, ..., x_{n+1}
- This is called the Markov property
- Given the current state, the future of the process does not depend on its past (that is, on how the process has evolved to the current state)
- As regards the future of the process, the current state contains all the required information
21 State transition rates
Consider a time-homogeneous Markov process X(t). The state transition rates q_ij, where i, j in S and i != j, are defined as follows:
q_ij := lim_{h -> 0} (1/h) p_ij(h) = lim_{h -> 0} (1/h) P{X(h) = j | X(0) = i}
- The initial distribution P{X(0) = i}, i in S, and the state transition rates q_ij together determine the state probabilities P{X(t) = i}, i in S, by the Kolmogorov equations
- Note that in this course we consider only time-homogeneous Markov processes
22 Exponential holding times
Assume that a Markov process is in state i
- During a short time interval (t, t+h], the conditional probability that there is a transition from state i to state j is q_ij h + o(h) (independently of the other time intervals)
- Let q_i denote the total transition rate out of state i, that is:
  q_i := sum_{j != i} q_ij
- Then, during a short time interval (t, t+h], the conditional probability that there is a transition from state i to any other state is q_i h + o(h) (independently of the other time intervals)
- This is clearly a memoryless property; thus, the holding time in (any) state i is exponentially distributed with intensity q_i
23 State transition probabilities
Let T_i denote the holding time in state i and T_ij the (potential) holding time in state i that ends in a transition to state j
- T_i ~ Exp(q_i), T_ij ~ Exp(q_ij)
- T_i can be seen as the minimum of the independent and exponentially distributed holding times T_ij (see lecture 1, slide 44):
  T_i = min_{j != i} T_ij
- Let then p_ij denote the conditional probability that, when in state i, there is a transition from state i to state j (the state transition probabilities):
  p_ij = P{T_ij = T_i} = q_ij / q_i
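The formula p_ij = q_ij / q_i can be illustrated with competing exponential clocks; the rates below are made-up values for a single state i:

```python
# Competing Exp(q_ij) clocks out of state i: the transition goes to the state
# whose clock rings first, so the jump probability is p_ij = q_ij / q_i.
import random

random.seed(5)
q = {1: 1.0, 2: 2.0, 3: 3.0}  # hypothetical rates q_i1, q_i2, q_i3; q_i = 6.0
n = 100_000
wins = {j: 0 for j in q}
for _ in range(n):
    winner = min(q, key=lambda j: random.expovariate(q[j]))  # first clock to ring
    wins[winner] += 1
print(round(wins[3] / n, 2))  # q_i3 / q_i = 3/6 = 0.5
```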
24 Markov chain
Consider a discrete-time and discrete-state stochastic process X_n with state space S
Definition: The process X_n is a Markov chain with (one-step) transition probabilities p_ij if for all n and x_0, ..., x_{n-1}, i, j
P{X_{n+1} = j | X_0 = x_0, ..., X_{n-1} = x_{n-1}, X_n = i} = P{X_{n+1} = j | X_n = i} = p_ij
The Markov property is clearly satisfied: given the current state X_n = i, the distribution of X_{n+1} does not depend on the earlier states x_0, ..., x_{n-1}
25 Transition probability matrix
It follows from the definition that the one-step state transitions of a Markov chain X_n are time-homogeneous:
P{X_{n+1} = j | X_n = i} = P{X_1 = j | X_0 = i} = p_ij for all n >= 0 and i, j in S
Denote the corresponding transition probability matrix by P := (p_ij; i, j in S)
- Matrix P is a stochastic matrix, i.e., all elements are non-negative and the sum of each row equals 1
- it can be shown that the product of two stochastic matrices is a stochastic matrix
- any stochastic matrix defines a Markov chain
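A small numerical illustration of these properties, with a made-up 3-state transition matrix:

```python
# Rows of a transition probability matrix sum to 1, and so do the rows of P*P
# (the two-step transition probabilities).
P = [[0.5, 0.5, 0.0],
     [0.25, 0.5, 0.25],
     [0.0, 0.5, 0.5]]  # hypothetical 3-state chain

def matmul(A, B):
    """Plain matrix product of two square matrices given as nested lists."""
    m = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(m)]
            for i in range(m)]

P2 = matmul(P, P)  # two-step transition probabilities
print([round(sum(row), 10) for row in P2])  # [1.0, 1.0, 1.0]
```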
26 State classification
- Definition: There is a path from state i to state j (i -> j) if there is a directed path from state i to state j in the state transition diagram. In this case, starting from state i, the process visits state j with positive probability (at some time in the future)
- Definition: States i and j communicate (i <-> j) if i -> j and j -> i
- Definition: A Markov chain is irreducible if all states i in S communicate with each other
- In general, for all chains, the states can be grouped into classes such that within each class all states communicate, and states from different classes do not communicate with each other
- Thus, communication defines an equivalence relation; the classes are called the irreducible classes of states
27 Transient, recurrent and absorbing states
- Definition: A set of states is closed if it is impossible to move outside the set
- Definition: A single state that forms a closed set is an absorbing state
- Definition: A state is a transient state if the probability of returning back to the state at some time in the future is < 1, that is, there is a positive probability of never returning
- Definition: A state is a recurrent state if the probability of returning is = 1, that is, the system returns with certainty to the state
  - a state is null recurrent if the expectation of the first return time is infinite
  - a state is positive recurrent if the expectation of the first return time is finite
28 Example
[Figure: three state transition diagrams]
- Closed set with a single state = absorbing state
- Transient states (eventually the chain will go either left or right); irreducible set of states
- Closed set, all states recurrent; irreducible set of states
29 Periodic and aperiodic Markov chains
NOTE! Periodicity is possible only for discrete-time processes, i.e., chains!
Definition: An irreducible Markov chain is periodic if there are i in S and an integer d > 1 such that
P{X_n = i | X_0 = i} > 0 only if n = kd for some integer k
Otherwise the chain is aperiodic.
Definition: A Markov chain is ergodic if it is aperiodic and all states are positive recurrent
30 Example
[Figure: two state transition diagrams]
- Periodic set of states, d = 4
- Aperiodic set; also ergodic
31 Equilibrium distribution (1)
Definition: Let π = (π_i | π_i >= 0, i in S) be a distribution defined on the state space S, i.e., one satisfying the normalization condition (N):
sum_{i in S} π_i = 1   (N)
- It is the equilibrium distribution of the Markov process X(t) if the following global balance equations (GBE) are satisfied for each i in S:
  π_i sum_{j != i} q_ij = sum_{j != i} π_j q_ji   (GBE)
- It is the equilibrium distribution of the Markov chain X_n if the following global balance equations (GBE) are satisfied for each i in S:
  π_i sum_{j != i} p_ij = sum_{j != i} π_j p_ji   (GBE)
32 Irreducible Markov processes and chains (1)
Theorem: Assume that an irreducible Markov process X(t), or an irreducible and aperiodic Markov chain X_n, has an equilibrium distribution π.
(i) Then the equilibrium distribution is unique and equals the limiting distribution of the process:
π_i = lim_{t -> infinity} P{X(t) = i},  π_i = lim_{n -> infinity} P{X_n = i}
(ii) If additionally the equilibrium distribution is the initial distribution, then the process is stationary and the equilibrium distribution equals the stationary distribution:
π_i = P{X(t) = i} for all t,  π_i = P{X_n = i} for all n
33 Irreducible Markov processes and chains (2)
Proposition: If an irreducible Markov process X(t) or an irreducible and aperiodic Markov chain X_n does not have an equilibrium distribution, then
lim_{t -> infinity} P{X(t) = i} = 0 for all i,  lim_{n -> infinity} P{X_n = i} = 0 for all i
Proposition: An irreducible Markov process X(t) or an irreducible and aperiodic Markov chain X_n with a finite state space always has a unique equilibrium distribution.
34 Local balance equations
Let π = (π_i | π_i >= 0, i in S) be a distribution defined on S satisfying the normalization condition (N).
Proposition: If the following local balance equations (LBE) are satisfied for each pair i, j in S,
π_i p_ij = π_j p_ji   (LBE)
then π is an equilibrium distribution.
- Proof: (GBE) follows from (LBE) by summing over all j != i
- LBEs can be used for example for a birth-death chain (and, with the rates q_ij in place of the probabilities p_ij, for a birth-death process)
35 Some general guidelines for analysis of Markov processes and chains
1. Find a state description of the system
- list all possible states (the state space); merge identical states, remove non-relevant states
- give an identification for each of the states
- draw the states as circles (including some identification)
2. Determine the transition rates or probabilities between states
- trivial when holding times are exponential and the arrival process is Poisson
- arrange the rates/probabilities as a matrix; draw the transitions as a directed graph
3. Solve the balance equations
- solution of linear equations; remember the normalization condition
- with many states, calculating by hand becomes tedious
- sometimes the special structure of the transition diagram can be exploited
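Step 3 can be sketched in a few lines of Python. The chain below is a hypothetical 4-state birth-death chain (up-probability 0.3, down-probability 0.2), for which the special structure, i.e. the local balance equations, gives the equilibrium distribution directly:

```python
# Exploit the birth-death structure: local balance pi_i * up = pi_{i+1} * down,
# so pi_{i+1} = pi_i * (up/down); normalize at the end. Parameters are made up.
up, down, n_states = 0.3, 0.2, 4
weights = [1.0]  # unnormalized pi_0
for _ in range(n_states - 1):
    weights.append(weights[-1] * up / down)  # LBE ratio up/down = 1.5
total = sum(weights)
pi = [w / total for w in weights]
print([round(x, 4) for x in pi])  # [0.1231, 0.1846, 0.2769, 0.4154]
```

The same distribution can of course be obtained by solving the global balance equations as a linear system together with the normalization condition; the local balance shortcut just avoids the linear algebra.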
36 Demo: D4/2
D4/2 Suppose there are three copying machines in the department and one support person who knows how to repair the machines when they break down. If at least one machine is working, the probability of a new breakdown during a day is 0.1; we assume that during one day only one machine can break. When there is at least one broken machine, the repairman will be able to fix one machine by the next day with probability 0.5.
a) Model the number of working machines as a Markov chain: draw the state transition diagram and give the transition probability matrix.
b) What kind of Markov chain is this? Should it have an equilibrium distribution?
c) Find the equilibrium distribution if it exists.
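A numerical sketch of the method for c), under one possible reading of the statement (per day: at most one breakdown, probability 0.1 when at least one machine works, and at most one repair, probability 0.5 when at least one machine is broken, the two events independent). The matrix below illustrates the technique and is not necessarily the intended model:

```python
# Power iteration pi <- pi P towards the limiting distribution (cf. slide 32);
# states = number of working machines, 0..3. The matrix P is an assumed model.
P = [[0.50, 0.50, 0.00, 0.00],
     [0.05, 0.50, 0.45, 0.00],
     [0.00, 0.05, 0.50, 0.45],
     [0.00, 0.00, 0.10, 0.90]]
pi = [0.25] * 4  # arbitrary initial distribution
for _ in range(5000):
    pi = [sum(pi[i] * P[i][j] for i in range(4)) for j in range(4)]
print([round(x, 4) for x in pi])  # equilibrium distribution of this assumed model
```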
More informationCS145: Probability & Computing Lecture 18: Discrete Markov Chains, Equilibrium Distributions
CS145: Probability & Computing Lecture 18: Discrete Markov Chains, Equilibrium Distributions Instructor: Erik Sudderth Brown University Computer Science April 14, 215 Review: Discrete Markov Chains Some
More informationName of the Student:
SUBJECT NAME : Probability & Queueing Theory SUBJECT CODE : MA 6453 MATERIAL NAME : Part A questions REGULATION : R2013 UPDATED ON : November 2017 (Upto N/D 2017 QP) (Scan the above QR code for the direct
More informationSMSTC (2007/08) Probability.
SMSTC (27/8) Probability www.smstc.ac.uk Contents 12 Markov chains in continuous time 12 1 12.1 Markov property and the Kolmogorov equations.................... 12 2 12.1.1 Finite state space.................................
More informationLecture 20 : Markov Chains
CSCI 3560 Probability and Computing Instructor: Bogdan Chlebus Lecture 0 : Markov Chains We consider stochastic processes. A process represents a system that evolves through incremental changes called
More informationBudapest University of Tecnology and Economics. AndrásVetier Q U E U I N G. January 25, Supported by. Pro Renovanda Cultura Hunariae Alapítvány
Budapest University of Tecnology and Economics AndrásVetier Q U E U I N G January 25, 2000 Supported by Pro Renovanda Cultura Hunariae Alapítvány Klebelsberg Kunó Emlékére Szakalapitvány 2000 Table of
More informationQueuing Networks: Burke s Theorem, Kleinrock s Approximation, and Jackson s Theorem. Wade Trappe
Queuing Networks: Burke s Theorem, Kleinrock s Approximation, and Jackson s Theorem Wade Trappe Lecture Overview Network of Queues Introduction Queues in Tandem roduct Form Solutions Burke s Theorem What
More information18.175: Lecture 30 Markov chains
18.175: Lecture 30 Markov chains Scott Sheffield MIT Outline Review what you know about finite state Markov chains Finite state ergodicity and stationarity More general setup Outline Review what you know
More informationStochastic Models in Computer Science A Tutorial
Stochastic Models in Computer Science A Tutorial Dr. Snehanshu Saha Department of Computer Science PESIT BSC, Bengaluru WCI 2015 - August 10 to August 13 1 Introduction 2 Random Variable 3 Introduction
More informationProbability and Statistics Concepts
University of Central Florida Computer Science Division COT 5611 - Operating Systems. Spring 014 - dcm Probability and Statistics Concepts Random Variable: a rule that assigns a numerical value to each
More informationProbability Distributions
Lecture : Background in Probability Theory Probability Distributions The probability mass function (pmf) or probability density functions (pdf), mean, µ, variance, σ 2, and moment generating function (mgf)
More informationLink Models for Circuit Switching
Link Models for Circuit Switching The basis of traffic engineering for telecommunication networks is the Erlang loss function. It basically allows us to determine the amount of telephone traffic that can
More informationExamples of Countable State Markov Chains Thursday, October 16, :12 PM
stochnotes101608 Page 1 Examples of Countable State Markov Chains Thursday, October 16, 2008 12:12 PM Homework 2 solutions will be posted later today. A couple of quick examples. Queueing model (without
More information1 Continuous-time chains, finite state space
Université Paris Diderot 208 Markov chains Exercises 3 Continuous-time chains, finite state space Exercise Consider a continuous-time taking values in {, 2, 3}, with generator 2 2. 2 2 0. Draw the diagramm
More informationRandom Walk on a Graph
IOR 67: Stochastic Models I Second Midterm xam, hapters 3 & 4, November 2, 200 SOLUTIONS Justify your answers; show your work.. Random Walk on a raph (25 points) Random Walk on a raph 2 5 F B 3 3 2 Figure
More informationMarkov Chains (Part 3)
Markov Chains (Part 3) State Classification Markov Chains - State Classification Accessibility State j is accessible from state i if p ij (n) > for some n>=, meaning that starting at state i, there is
More informationLecturer: Olga Galinina
Renewal models Lecturer: Olga Galinina E-mail: olga.galinina@tut.fi Outline Reminder. Exponential models definition of renewal processes exponential interval distribution Erlang distribution hyperexponential
More informationProbability Models in Electrical and Computer Engineering Mathematical models as tools in analysis and design Deterministic models Probability models
Probability Models in Electrical and Computer Engineering Mathematical models as tools in analysis and design Deterministic models Probability models Statistical regularity Properties of relative frequency
More informationISM206 Lecture, May 12, 2005 Markov Chain
ISM206 Lecture, May 12, 2005 Markov Chain Instructor: Kevin Ross Scribe: Pritam Roy May 26, 2005 1 Outline of topics for the 10 AM lecture The topics are: Discrete Time Markov Chain Examples Chapman-Kolmogorov
More informationQueueing Theory and Simulation. Introduction
Queueing Theory and Simulation Based on the slides of Dr. Dharma P. Agrawal, University of Cincinnati and Dr. Hiroyuki Ohsaki Graduate School of Information Science & Technology, Osaka University, Japan
More informationContinuous-Time Markov Chain
Continuous-Time Markov Chain Consider the process {X(t),t 0} with state space {0, 1, 2,...}. The process {X(t),t 0} is a continuous-time Markov chain if for all s, t 0 and nonnegative integers i, j, x(u),
More informationHomework 4 due on Thursday, December 15 at 5 PM (hard deadline).
Large-Time Behavior for Continuous-Time Markov Chains Friday, December 02, 2011 10:58 AM Homework 4 due on Thursday, December 15 at 5 PM (hard deadline). How are formulas for large-time behavior of discrete-time
More information1 IEOR 4701: Continuous-Time Markov Chains
Copyright c 2006 by Karl Sigman 1 IEOR 4701: Continuous-Time Markov Chains A Markov chain in discrete time, {X n : n 0}, remains in any state for exactly one unit of time before making a transition (change
More informationLECTURE 3. Last time:
LECTURE 3 Last time: Mutual Information. Convexity and concavity Jensen s inequality Information Inequality Data processing theorem Fano s Inequality Lecture outline Stochastic processes, Entropy rate
More informationMarkov chains. Randomness and Computation. Markov chains. Markov processes
Markov chains Randomness and Computation or, Randomized Algorithms Mary Cryan School of Informatics University of Edinburgh Definition (Definition 7) A discrete-time stochastic process on the state space
More informationDISCRETE STOCHASTIC PROCESSES Draft of 2nd Edition
DISCRETE STOCHASTIC PROCESSES Draft of 2nd Edition R. G. Gallager January 31, 2011 i ii Preface These notes are a draft of a major rewrite of a text [9] of the same name. The notes and the text are outgrowths
More informationDiscrete Markov Chain. Theory and use
Discrete Markov Chain. Theory and use Andres Vallone PhD Student andres.vallone@predoc.uam.es 2016 Contents 1 Introduction 2 Concept and definition Examples Transitions Matrix Chains Classification 3 Empirical
More informationRandom variable X is a mapping that maps each outcome s in the sample space to a unique real number x, x. X s. Real Line
Random Variable Random variable is a mapping that maps each outcome s in the sample space to a unique real number,. s s : outcome Sample Space Real Line Eamples Toss a coin. Define the random variable
More informationSystem Simulation Part II: Mathematical and Statistical Models Chapter 5: Statistical Models
System Simulation Part II: Mathematical and Statistical Models Chapter 5: Statistical Models Fatih Cavdur fatihcavdur@uludag.edu.tr March 20, 2012 Introduction Introduction The world of the model-builder
More informationLecturer: Olga Galinina
Lecturer: Olga Galinina E-mail: olga.galinina@tut.fi Outline Motivation Modulated models; Continuous Markov models Markov modulated models; Batch Markovian arrival process; Markovian arrival process; Markov
More informationChapter 6: Random Processes 1
Chapter 6: Random Processes 1 Yunghsiang S. Han Graduate Institute of Communication Engineering, National Taipei University Taiwan E-mail: yshan@mail.ntpu.edu.tw 1 Modified from the lecture notes by Prof.
More informationQueueing Theory I Summary! Little s Law! Queueing System Notation! Stationary Analysis of Elementary Queueing Systems " M/M/1 " M/M/m " M/M/1/K "
Queueing Theory I Summary Little s Law Queueing System Notation Stationary Analysis of Elementary Queueing Systems " M/M/1 " M/M/m " M/M/1/K " Little s Law a(t): the process that counts the number of arrivals
More informationExercises Stochastic Performance Modelling. Hamilton Institute, Summer 2010
Exercises Stochastic Performance Modelling Hamilton Institute, Summer Instruction Exercise Let X be a non-negative random variable with E[X ]
More informationGI/M/1 and GI/M/m queuing systems
GI/M/1 and GI/M/m queuing systems Dmitri A. Moltchanov moltchan@cs.tut.fi http://www.cs.tut.fi/kurssit/tlt-2716/ OUTLINE: GI/M/1 queuing system; Methods of analysis; Imbedded Markov chain approach; Waiting
More informationMarkov Chains Handout for Stat 110
Markov Chains Handout for Stat 0 Prof. Joe Blitzstein (Harvard Statistics Department) Introduction Markov chains were first introduced in 906 by Andrey Markov, with the goal of showing that the Law of
More informationMASSACHUSETTS INSTITUTE OF TECHNOLOGY Department of Electrical Engineering and Computer Science
MASSACHUSETTS INSTITUTE OF TECHNOLOGY Department of Electrical Engineering and Computer Science 6.262 Discrete Stochastic Processes Midterm Quiz April 6, 2010 There are 5 questions, each with several parts.
More informationLTCC. Exercises. (1) Two possible weather conditions on any day: {rainy, sunny} (2) Tomorrow s weather depends only on today s weather
1. Markov chain LTCC. Exercises Let X 0, X 1, X 2,... be a Markov chain with state space {1, 2, 3, 4} and transition matrix 1/2 1/2 0 0 P = 0 1/2 1/3 1/6. 0 0 0 1 (a) What happens if the chain starts in
More informationSTAT Chapter 5 Continuous Distributions
STAT 270 - Chapter 5 Continuous Distributions June 27, 2012 Shirin Golchi () STAT270 June 27, 2012 1 / 59 Continuous rv s Definition: X is a continuous rv if it takes values in an interval, i.e., range
More informationM/G/1 and M/G/1/K systems
M/G/1 and M/G/1/K systems Dmitri A. Moltchanov dmitri.moltchanov@tut.fi http://www.cs.tut.fi/kurssit/elt-53606/ OUTLINE: Description of M/G/1 system; Methods of analysis; Residual life approach; Imbedded
More information