Review: Discrete Event Random Processes. Hongwei Zhang


1 Review: Discrete Event Random Processes Hongwei Zhang

2 Outline Markov chains and some renewal theory: Markov chain; Renewal processes, renewal reward processes, Markov renewal processes; The excess distribution; Phase type distribution; PASTA; Level crossing analysis. Some important queueing models. Reversibility of Markov chains and Jackson Network

3 Outline Markov chains and some renewal theory: Markov chain; Renewal processes, renewal reward processes, Markov renewal processes; The excess distribution; Phase type distribution; PASTA; Level crossing analysis. Some important queueing models. Reversibility of Markov chains and Jackson Network

4 Markov chain Markov Chain: Discrete-Time Markov Chains; Calculating Stationary Distribution; Global Balance Equations; Generalized Markov Chains; Continuous-Time Markov Chains

5 Markov chain Markov Chain: Discrete-Time Markov Chains; Calculating Stationary Distribution; Global Balance Equations; Generalized Markov Chains; Continuous-Time Markov Chains

6 Markov Chain? Stochastic process that takes values in a countable set, e.g., {0,1,2,...,m} or {0,1,2,...}. Elements represent possible states. The chain transits from state to state. Memoryless (Markov) Property: given the present state, future transitions of the chain are independent of past history. Markov Chains: discrete- or continuous-time

7 Markov chain Markov Chain: Discrete-Time Markov Chains; Calculating Stationary Distribution; Global Balance Equations; Generalized Markov Chains; Continuous-Time Markov Chains

8 Discrete-Time Markov Chain (DTMC) Discrete-time stochastic process {X_n : n = 0,1,2,...} taking values in {0,1,2,...}. Memoryless property:
P{X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, ..., X_0 = i_0} = P{X_{n+1} = j | X_n = i}
Transition probabilities (also written as P_{i,j}): P_{ij} = P{X_{n+1} = j | X_n = i}, with P_{ij} >= 0 and Σ_{j=0}^∞ P_{ij} = 1. Transition probability matrix P = [P_{ij}]. Note: future and past are independent given the present, but they are not unconditionally independent.

9 Composition of DTMCs Given two independent DTMCs X_n, n >= 0, on S and Y_n, n >= 0, on T with transition probability matrices P and Q, Z_n = (X_n, Y_n) is a DTMC on S x T with
Pr(Z_{n+1} = (s_2, t_2) | Z_n = (s_1, t_1)) = p_{s_1,s_2} q_{t_1,t_2}
Multiple mutually independent DTMCs can be composed in a similar fashion

10 Chapman-Kolmogorov Equations n-step transition probabilities (also written as P_{i,j}(n)):
P^n_{ij} = P{X_{n+m} = j | X_m = i}, n, m >= 0, i, j >= 0
How to calculate? Chapman-Kolmogorov equations:
P^{n+m}_{ij} = Σ_{k=0}^∞ P^n_{ik} P^m_{kj}, n, m >= 0, i, j >= 0
P^n_{ij} is element (i, j) in matrix P^n, which allows recursive computation of state probabilities. Thus, P^(n) = P^n
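The identity P^(n) = P^n is easy to check in a few lines of code. The sketch below (a made-up 2-state chain; the numbers are illustrative, not from the slides) computes n-step transition probabilities by repeated matrix multiplication:

```python
# Sketch: n-step transition probabilities via Chapman-Kolmogorov,
# i.e., P^(n) = P^n, computed by repeated matrix multiplication.
# The 2-state chain below is a made-up example.

def mat_mul(A, B):
    """Multiply two square matrices given as lists of row lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def n_step(P, n):
    """Return P^n: element (i, j) is P{X_{m+n} = j | X_m = i}."""
    size = len(P)
    result = [[1.0 if i == j else 0.0 for j in range(size)]
              for i in range(size)]  # identity = P^0
    for _ in range(n):
        result = mat_mul(result, P)
    return result

P = [[0.9, 0.1],
     [0.5, 0.5]]

P2 = n_step(P, 2)
# Check one entry against the Chapman-Kolmogorov sum directly:
# P^2_{0,0} = P_{00}*P_{00} + P_{01}*P_{10}
assert abs(P2[0][0] - (0.9 * 0.9 + 0.1 * 0.5)) < 1e-12
```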

11 State Probabilities and Stationary Distribution State probabilities (time-dependent): π^n_j = P{X_n = j}, π^n = (π^n_0, π^n_1, ...). Since
P{X_n = j} = Σ_{i=0}^∞ P{X_{n-1} = i} P{X_n = j | X_{n-1} = i}, i.e., π^n_j = Σ_{i=0}^∞ π^{n-1}_i P_{ij}
in matrix form: π^n = π^{n-1} P = π^{n-2} P^2 = ... = π^0 P^n. If the time-dependent distribution converges to a limit π = lim_{n→∞} π^n, then π = πP; π is called the stationary distribution (or steady-state distribution). Its existence depends on the structure of the Markov chain

12 Irreducibility of DTMC States i and j communicate: there exist n, m such that P^n_{ij} > 0 and P^m_{ji} > 0; denote this as i <-> j. The binary relation <-> is an equivalence (i.e., reflexive, symmetric, transitive); the equivalence classes induced by <-> are called communicating classes. Irreducible Markov chain: all states communicate (and thus form a single communicating class)

13 First hit probabilities f_{i,j}^(n) Probability of first hitting/visiting state j at time n, when starting in state i at time 0:
f_{i,j}^(n) = Pr(X_1 ≠ j, X_2 ≠ j, ..., X_{n-1} ≠ j, X_n = j | X_0 = i)
with f_{i,i}^(0) = 1, and for j ≠ i, f_{i,j}^(0) = 0. T_{i,j}: the first passage time from i to j. Probability of visiting state j in finite time if starting in state i:
f_{i,j} = Σ_{n=1}^∞ f_{i,j}^(n)

14 Aperiodicity of DTMC Period d_i of a state i:
d_i = gcd{n : f_{i,i}^(n) > 0} = gcd{n : p_{i,i}^(n) > 0}
Theorem: all the states in a communicating class of a DTMC have the same period. State i is aperiodic if d_i = 1. Special case: if p_{j,j} > 0, then j is aperiodic (why?). Aperiodic Markov chain: none of the states is periodic

15 Limit Theorems Theorem 0a: Irreducible aperiodic Markov chain. For every state j, the following limit
π_j = lim_{n→∞} P{X_n = j | X_0 = i}, i = 0,1,2,...
exists and is independent of the initial state i. With N_j(k) the number of visits to state j up to time k,
P{lim_{k→∞} N_j(k)/k = π_j | X_0 = i} = 1
=> π_j is the frequency with which the process visits state j

16 Existence of Stationary Distribution (or steady state distribution) Theorem 0b: Irreducible aperiodic Markov chain. There are two possibilities for the scalars
π_j = lim_{n→∞} P{X_n = j | X_0 = i} = lim_{n→∞} P^n_{ij}
1. π_j = 0 for all states j: no stationary distribution
2. π_j > 0 for all states j: π is the unique stationary distribution
Remark: If the number of states is finite, case 2 is the only possibility

17 Positivity A state j is positive recurrent if the process returns to state j infinitely often and the mean return time is finite. Formal definition: A state j is absorbing if p_{j,j} = 1. A state j is transient if f_{j,j} < 1. A state j is recurrent (or persistent) if f_{j,j} = 1. A recurrent state j is positive if Σ_{n=1}^∞ n f_{j,j}^(n) < ∞; otherwise, it is null. Note: positive recurrent => recurrent always holds, but irreducible => positive recurrent is guaranteed to hold only for a finite MC

18 Example 0: a MC with countably infinite state space {0,1,2,...}: from state i the chain moves to i+1 with probability p and to max(i-1, 0) with probability q = 1-p. All states are positive recurrent if p < 1/2, null recurrent if p = 1/2, and transient if p > 1/2
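The positive recurrent case can be sanity-checked by simulation, assuming the chain as read above (up with prob. p, down with prob. q = 1-p, self-loop at 0). Detailed balance π_i p = π_{i+1} q then suggests π_i = (1-r) r^i with r = p/q:

```python
import random

# Monte Carlo sketch of Example 0, assuming the chain moves from i to i+1
# with prob. p and from i to max(i-1, 0) with prob. q = 1 - p.
# For p < 1/2 the chain is positive recurrent; detailed balance
# pi_i * p = pi_{i+1} * q gives pi_i = (1 - r) * r^i with r = p / q.

def simulate_fraction_at_zero(p, steps, seed=0):
    rng = random.Random(seed)
    state, at_zero = 0, 0
    for _ in range(steps):
        if rng.random() < p:
            state += 1
        elif state > 0:
            state -= 1
        if state == 0:
            at_zero += 1
    return at_zero / steps

p = 0.3
r = p / (1 - p)
predicted_pi0 = 1 - r            # = 4/7 for p = 0.3
estimate = simulate_fraction_at_zero(p, steps=200_000)
assert abs(estimate - predicted_pi0) < 0.03
```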

19 Theorem D.2: for each communicating class of a DTMC {X_n}, exactly one of the following holds: all the states in the class are transient; all the states in the class are null recurrent; all the states in the class are positive (recurrent). Thus, an irreducible DTMC is positive recurrent if any one of its states is positive

20 Deciding positivity A communicating class C is closed if, for all i in C and all j not in C, p_{i,j} = 0. Otherwise, the class is said to be open. Theorem D.3: given a DTMC, an open communicating class is transient; a closed finite communicating class is positive recurrent

21 What about infinite closed communicating classes? Theorem D.4: an irreducible DTMC on state space S is positive recurrent iff. there exists a positive prob. distribution π on S s.t. π = πP, where P is the state transition matrix. Note: if such a probability π exists, it is unique and is called an invariant probability vector for the DTMC. If π is invariant, and if Pr(X_0 = i) = π_i, then the DTMC so obtained is a stationary random process

22 Alternative approach: drift analysis of a suitable Lyapunov function f(.) Theorem D.7: an irreducible DTMC X_n, n >= 0, is recurrent if there exist a nonnegative function f(j), j in S (state space), s.t. f(j) → ∞ as j → ∞, and a finite set A ⊂ S s.t.
for all i not in A: E(f(X_{n+1}) | X_n = i) <= f(i)

23 Theorem D.8: an irreducible DTMC X_n, n >= 0, is transient if there exist a nonnegative function f(j), j in S, and a set A ⊂ S s.t.
for all i not in A: E(f(X_{n+1}) | X_n = i) <= f(i)
and there exists j not in A s.t. f(j) < f(k) for all k in A

24 Theorem D.9: an irreducible DTMC X_n, n >= 0, is positive recurrent if there exist a nonnegative function f(j), j in S, and a finite set A ⊂ S s.t.
for all i not in A: E(f(X_{n+1}) | X_n = i) <= f(i) - ε for some ε > 0, and
for all k in A: E(f(X_{n+1}) | X_n = k) <= B for some finite number B

25 Theorem D.10: an irreducible DTMC X_n, n >= 0, on {0,1,2,...} is not positive recurrent if there exist finite values K > 0 and B > 0 s.t.
for all i >= 0: E(X_{n+1} | X_n = i) < ∞, and
for all i >= K: E(X_{n+1} | X_n = i) >= i, and
E((X_n - X_{n+1})^+ | X_n = i) <= B (bounded downward drift)
In the context of Theorems D.7-9, Theorem D.10 is for the Lyapunov function f(j) = j. This theorem is useful in establishing instability results, e.g., for a queue with a finite # of servers where the arrival rate is strictly greater than the overall service rate, with X_n = queue occupancy

26 Exercise Use Theorems D.7-9 to prove the results for Example 0 shown earlier

27 Convergence of positive recurrent DTMC Given an irreducible, positive DTMC with period d and state space S,
for all j in S: lim_{n→∞} p^{nd}_{j,j} = d π_j
If the DTMC is aperiodic (i.e., d = 1),
for all i, j in S: lim_{n→∞} p^n_{i,j} = π_j

28 Ergodicity A state is ergodic if it is aperiodic and positive recurrent. A MC is ergodic if every state is ergodic. Ergodic chains have a unique stationary distribution π_j = 1/E(T_{jj}), j = 0, 1, 2, ..., where T_{ij} is the first passage time from i to j. Note: Ergodicity => Time Averages = Stochastic Averages

29 Markov chain Markov Chain: Discrete-Time Markov Chains; Calculating Stationary Distribution; Global Balance Equations; Generalized Markov Chains; Continuous-Time Markov Chains

30 Calculation of Stationary Distribution A. Finite number of states: solve explicitly the system of equations
π_j = Σ_{i=0}^m π_i P_{ij}, j = 0,1,...,m, with Σ_{j=0}^m π_j = 1
or, numerically, from P^n, which converges to a matrix with rows equal to π. Suitable for a small number of states. B. Infinite number of states: cannot apply the previous methods to a problem of infinite dimension. Guess a solution to the recurrence
π_j = Σ_{i=0}^∞ π_i P_{ij}, j = 0,1,..., with Σ_{i=0}^∞ π_i = 1
(detailed) balance equations can help the guess

31 Example: Finite Markov Chain An absent-minded professor uses two umbrellas when commuting between home and office. If it rains and an umbrella is available at her location, she takes it. If it does not rain, she always forgets to take an umbrella. Let p be the probability of rain each time she commutes. Q: What is the probability that she gets wet on any given day? Markov chain formulation: i is the number of umbrellas available at her current location. Transition matrix:
P = [0 0 1; 0 1-p p; 1-p p 0]

32 Example: Finite Markov Chain Solving π = πP:
π_0 = (1-p)π_2, π_1 = (1-p)π_1 + pπ_2, π_2 = π_0 + pπ_1, with π_0 + π_1 + π_2 = 1
gives π = ((1-p)/(3-p), 1/(3-p), 1/(3-p))
P{gets wet} = π_0 p = p(1-p)/(3-p)

33 Example: Finite Markov Chain Taking p = 0.1:
π = ((1-p)/(3-p), 1/(3-p), 1/(3-p)) = (0.310, 0.345, 0.345)
Numerically determine the limit of P^n: lim_n P^n converges to the matrix with all rows equal to π (n ≈ 150). Effectiveness depends on the structure of P
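The closed form can be verified numerically. The sketch below builds the umbrella chain's transition matrix and finds π by power iteration (π <- πP), one of the numerical approaches mentioned above:

```python
# Numeric check of the umbrella chain (a sketch; state i = number of
# umbrellas at the professor's current location, p = prob. of rain).

def stationary_by_power_iteration(P, iters=2000):
    """Iterate pi <- pi P from a uniform start until (near) convergence."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

p = 0.1
P = [[0.0,     0.0, 1.0],   # 0 umbrellas here -> the other site has 2
     [0.0, 1.0 - p,   p],   # 1 umbrella: she takes it only if it rains
     [1.0 - p,   p, 0.0]]   # 2 umbrellas: she takes one only if it rains

pi = stationary_by_power_iteration(P)
closed_form = [(1 - p) / (3 - p), 1 / (3 - p), 1 / (3 - p)]
assert all(abs(a - b) < 1e-9 for a, b in zip(pi, closed_form))

# She gets wet when it rains and no umbrella is available:
p_wet = pi[0] * p
assert abs(p_wet - p * (1 - p) / (3 - p)) < 1e-9
```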

34 Markov chain Markov Chain: Discrete-Time Markov Chains; Calculating Stationary Distribution; Global Balance Equations; Generalized Markov Chains; Continuous-Time Markov Chains

35 Global Balance Equations Global Balance Equations (GBE):
π_j Σ_{i=0}^∞ P_{ji} = Σ_{i=0}^∞ π_i P_{ij}, j >= 0
π_j P_{ji} is the frequency of transitions from j to i:
frequency of transitions out of j = frequency of transitions into j
Intuition: 1) j is visited infinitely often; 2) for each transition out of j there must be a subsequent transition into j with probability 1

36 Global Balance Equations (contd.) Alternative Form of GBE:
Σ_{j in S} π_j Σ_{i not in S} P_{ji} = Σ_{j in S} Σ_{i not in S} π_i P_{ij}, S ⊂ {0,1,2,...}
If a probability distribution satisfies the GBE, then it is the unique stationary distribution of the Markov chain. Finding the stationary distribution: guess the distribution from properties of the system, then verify that it satisfies the GBE. Special structure of the Markov chain simplifies the task

37 Global Balance Equations Proof First form: π_j = Σ_{i=0}^∞ π_i P_{ij} and Σ_{i=0}^∞ P_{ji} = 1 together give
π_j Σ_{i=0}^∞ P_{ji} = Σ_{i=0}^∞ π_i P_{ij}
Second form: summing the first form over j in S,
Σ_{j in S} π_j (Σ_{i in S} P_{ji} + Σ_{i not in S} P_{ji}) = Σ_{j in S} (Σ_{i in S} π_i P_{ij} + Σ_{i not in S} π_i P_{ij})
and cancelling the common terms over i in S:
Σ_{j in S} π_j Σ_{i not in S} P_{ji} = Σ_{j in S} Σ_{i not in S} π_i P_{ij}

38 Markov chain Markov Chain: Discrete-Time Markov Chains; Calculating Stationary Distribution; Global Balance Equations; Generalized Markov Chains; Continuous-Time Markov Chains

39 Generalized Markov Chains A Markov chain on a set of states {0,1,...}, s.t. whenever it enters state i: the next state that will be entered is j with probability P_{ij}; given that the next state entered will be j, the time the chain spends at state i until the transition occurs is a RV with distribution F_{ij}. {Z(t): t >= 0} describing the state of the chain at time t: Generalized Markov chain, or Semi-Markov process. Does a GMC have the Markov property? The future depends on 1) the present state, and 2) the length of time the process has spent in this state

40 Generalized Markov Chains (contd.) T_i: time the process spends at state i before making a transition (the holding time). Probability distribution function of T_i:
H_i(t) = P{T_i <= t} = Σ_{j=0}^∞ P{T_i <= t | next state j} P_{ij} = Σ_{j=0}^∞ F_{ij}(t) P_{ij}, E[T_i] = ∫_0^∞ t dH_i(t)
T_{ii}: time between successive transitions to i. X_n is the n-th state visited: {X_n : n = 0,1,...} is a Markov chain, the embedded Markov chain, with transition probabilities P_{ij}. A semi-Markov process is irreducible if its embedded Markov chain is irreducible

41 Limit Theorems Given an irreducible semi-Markov process with E[T_{ii}] < ∞: for any state j, the limit
p_j = lim_{t→∞} P{Z(t) = j | Z(0) = i}, i = 0,1,2,...
exists and is independent of the initial state, with p_j = E[T_j]/E[T_{jj}]. With T_j(t) the time spent at state j up to time t,
P{lim_{t→∞} T_j(t)/t = p_j | Z(0) = i} = 1
i.e., p_j is equal to the proportion of time spent at state j

42 Occupancy Distribution Given an irreducible semi-Markov process where E[T_{ii}] < ∞, and the embedded Markov chain is ergodic with stationary distribution π,
π_j = Σ_{i=0}^∞ π_i P_{ij}, j >= 0; Σ_{i=0}^∞ π_i = 1
then, with probability 1, the occupancy distribution of the semi-Markov process is
p_j = π_j E[T_j] / Σ_i π_i E[T_i], j = 0,1,...
π_j: proportion of transitions into state j; E[T_j]: mean time spent at j. The probability of being at j is proportional to π_j E[T_j]
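The occupancy formula is straightforward to evaluate. The sketch below uses made-up values for the embedded-chain distribution and the mean holding times:

```python
# Sketch of the occupancy formula p_j = pi_j E[T_j] / sum_i pi_i E[T_i]
# for a made-up 3-state semi-Markov process (numbers are illustrative).

def occupancy(pi, mean_holding):
    total = sum(pi_i * t_i for pi_i, t_i in zip(pi, mean_holding))
    return [pi_j * t_j / total for pi_j, t_j in zip(pi, mean_holding)]

pi = [0.5, 0.3, 0.2]            # stationary dist. of the embedded DTMC
mean_holding = [1.0, 2.0, 5.0]  # E[T_j] for each state

p = occupancy(pi, mean_holding)
assert abs(sum(p) - 1.0) < 1e-12
# State 2 is entered least often but held longest, so it can dominate:
assert p[2] == max(p)
```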

43 Markov chain Markov Chain: Discrete-Time Markov Chains; Calculating Stationary Distribution; Global Balance Equations; Generalized Markov Chains; Continuous-Time Markov Chains

44 Continuous-Time Markov Chains (def.?) Continuous-time process {X(t): t >= 0} taking values in {0,1,2,...}, s.t. whenever it enters state i: the time it spends at state i is exponentially distributed with parameter α_i; when it leaves state i, it enters state j with probability P_{ij}, where Σ_j P_{ij} = 1. A continuous-time Markov chain is a semi-Markov process with
F_{ij}(t) = 1 - e^{-α_i t}, i, j = 0,1,2,...
Exponential holding times => a continuous-time Markov chain has the Markov property

45 CTMC: alternative definition {X(t)} on state space S is a continuous-time Markov chain if for all t, s >= 0 and j in S:
Pr(X(t+s) = j | X(u), u <= s) = Pr(X(t+s) = j | X(s))
Assuming time homogeneity, we write
p_{i,j}(t) := Pr(X(t+s) = j | X(s) = i)
For an arbitrary time t, the time to the next state transition is
W(t) = inf{s > 0 : X(t+s) ≠ X(t)}

46 Theorem D.11: for a CTMC {X(t)}, for all i in S and t, u >= 0:
Pr(W(t) > u | X(t) = i) = e^{-α_i u}
for some constant α_i >= 0. The sojourn time at a state i is exponentially distributed with a parameter α_i that only depends on i. A state i in S is called absorbing if α_i = 0.

47 Jump chain/embedded process Let T_0 = 0, T_1, T_2, ... be the successive jump instants (i.e., instants when the state changes) of a CTMC, and let X_n = X(T_n). The sequence T_n, n >= 0, is called a sequence of embedded instants, and X_n, n >= 0, is called a jump chain or an embedded process

48 Theorem D.12: given a CTMC {X(t)} with jump instants T_n, n >= 0, and jump chain X_n, n >= 0, for i_0, i_1, ..., i_{n-1}, i, j in S, t_0 <= t_1 <= ... <= t_n, and u >= 0:
Pr(X_{n+1} = j, T_{n+1} - T_n > u | X_0 = i_0, ..., X_{n-1} = i_{n-1}, X_n = i, T_0 = t_0, ..., T_n = t_n) = p_{i,j} e^{-α_i u}
where p_{i,i} = 0 if α_i > 0, and p_{i,i} = 1 if α_i = 0. The sojourn time at a state and the next state entered are independent, and only depend on the state. Thus, the embedded process is a DTMC with transition probabilities p_{i,j}

49 A CTMC is irreducible and regular if its embedded Markov chain is irreducible, and the number of transitions in a finite time interval is finite with probability 1. Theorem D.13: if {X(t)} is a CTMC with embedded DTMC {X_n} and sojourn time parameters α_i, i in S, then: if there exists v s.t. α_i <= v for all i, then {X(t)} is regular; if {X_n} is recurrent, then {X(t)} is regular

50 CTMC: transience & recurrence Let τ_{j,j} = time until the process first returns to j after leaving it. A state j in a CTMC is recurrent if Pr(τ_{j,j} < ∞) = 1; otherwise, j is transient. A recurrent state j is positive if E(τ_{j,j}) < ∞; otherwise, it is null. As in a DTMC, the states of an irreducible CTMC are either all transient, all positive, or all null

51 A state j is recurrent in a CTMC iff. it is recurrent in the embedded DTMC; an irreducible CTMC is recurrent iff. the embedded DTMC is recurrent. A similar result does NOT hold for positivity of CTMC states

52 Transition rate matrix Q For i, j in S, i ≠ j, define q_{i,j} = α_i p_{i,j}; this can be interpreted as, conditional on being at state i, the rate of leaving i to enter j. For i in S, q_{i,i} = -α_i. Thus, the sum of each row of Q is 0. Theorem D.14: an irreducible regular CTMC is positive iff. there exists a positive prob. vector π s.t. πQ = 0 and Σ_{i in S} π_i = 1. When such a π exists, it is unique. Note: the j-th equation of πQ = 0 is Σ_{i ≠ j} π_i q_{i,j} = π_j α_j, meaning the unconditional rate of entering j equals that of leaving j

53 If the positive prob. vector π exists, it is also a stationary prob. vector; that is, if Pr(X(0) = i) = π_i, then Pr(X(t) = i) = π_i for all t, and
lim_{t→∞} p_{i,j}(t) = π_j
There is no notion of periodicity for a CTMC

54 Example M/M/1 queue

55 Basic Queueing Model Arrivals enter a buffer in front of one or more servers and depart after service (customers are either queued or in service). A queue models any service station with: one or multiple servers; a waiting area or buffer. Customers arrive to receive service. A customer that upon arrival does not find a free server waits in the buffer

56 Characteristics of a Queue Number of servers m: one, multiple, infinite. Buffer size b. Service discipline (scheduling): FCFS, LCFS, Processor Sharing (PS), etc. Arrival process. Service statistics

57 Arrival Process τ_n: interarrival time between customers n and n+1; τ_n is a random variable, and {τ_n, n >= 1} is a stochastic process. Interarrival times are identically distributed and have a common mean
E[τ_n] = E[τ] = 1/λ, where λ is called the arrival rate

58 Service-Time Process s_n: service time of customer n at the server; {s_n, n >= 1} is a stochastic process. Service times are identically distributed with common mean
E[s_n] = E[s] = 1/µ, where µ is called the service rate
For packets, are the service times really random?

59 Queue Descriptors Generic descriptor: A/S/m/k. A denotes the arrival process: for Poisson arrivals we use M (for Markovian). S denotes the service-time distribution: M: exponential distribution; D: deterministic service times; G: general distribution. m is the number of servers. k is the max number of customers allowed in the system, either in the buffer or in service; k is omitted when the buffer size is infinite

60 Queue Descriptors: Examples M/M/1: Poisson arrivals, exponentially distributed service times, one server, infinite buffer. M/M/m: same as previous with m servers. M/M/m/m: Poisson arrivals, exponentially distributed service times, m servers, no buffering. M/G/1: Poisson arrivals, identically distributed service times following a general distribution, one server, infinite buffer. */D/∞: a constant-delay system

61 Example: M/M/1 Queue Arrival process: Poisson with rate λ. Service times: i.i.d., exponential with parameter µ. Service times and interarrival times: independent. Single server, infinite waiting room. X(t): number of customers in the system at time t (the state); transitions i → i+1 occur at rate λ and i → i-1 at rate µ

62 Exponential Random Variables X: exponential RV with parameter λ; Y: exponential RV with parameter µ; X, Y independent. Then: 1. min{X, Y} is an exponential RV with parameter λ+µ; 2. P{X < Y} = λ/(λ+µ). Proof:
P{min{X,Y} > t} = P{X > t, Y > t} = P{X > t} P{Y > t} = e^{-λt} e^{-µt} = e^{-(λ+µ)t}
so P{min{X,Y} <= t} = 1 - e^{-(λ+µ)t}
P{X < Y} = ∫_0^∞ ∫_0^y f_{XY}(x,y) dx dy = ∫_0^∞ µ e^{-µy} (∫_0^y λ e^{-λx} dx) dy = ∫_0^∞ µ e^{-µy} (1 - e^{-λy}) dy = ∫_0^∞ µ e^{-µy} dy - ∫_0^∞ µ e^{-(λ+µ)y} dy = 1 - µ/(λ+µ) = λ/(λ+µ)
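Both facts can be checked by simulation. A sketch with arbitrary rates λ = 2 and µ = 3:

```python
import random

# Monte Carlo check of the two facts above: min{X, Y} ~ Exp(lam + mu)
# and P{X < Y} = lam / (lam + mu). The rates below are arbitrary.

rng = random.Random(42)
lam, mu = 2.0, 3.0
n = 100_000

mins, x_wins = [], 0
for _ in range(n):
    x = rng.expovariate(lam)   # Exp(lam) sample, mean 1/lam
    y = rng.expovariate(mu)    # Exp(mu) sample, mean 1/mu
    mins.append(min(x, y))
    if x < y:
        x_wins += 1

# E[min{X,Y}] should be 1/(lam+mu) = 0.2
assert abs(sum(mins) / n - 1 / (lam + mu)) < 0.01
# P{X < Y} should be lam/(lam+mu) = 0.4
assert abs(x_wins / n - lam / (lam + mu)) < 0.01
```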

63 M/M/1 Queue: Markov Chain Formulation Jumps of {X(t): t >= 0} are triggered by arrivals and departures, so {X(t): t >= 0} can jump only between neighboring states. Assume the process at time t is in state i >= 1. X_i: time until the next arrival, exponential with parameter λ. Y_i: time until the next departure, exponential with parameter µ. T_i = min{X_i, Y_i}: time the process spends at state i; T_i is exponential with parameter α_i = λ+µ

64 P_{i,i+1} = P{X_i < Y_i} = λ/(λ+µ), P_{i,i-1} = P{Y_i < X_i} = µ/(λ+µ), P_{01} = 1, and T_0 is exponential with parameter λ. {X(t): t >= 0} is a CTMC with
α_0 = λ and α_i = λ+µ for i >= 1
q_{i,i+1} = α_i p_{i,i+1} = λ for i >= 0, q_{i,i-1} = α_i p_{i,i-1} = µ for i >= 1, and q_{i,j} = 0 for |i-j| > 1

65 πq=0 has a postve, summable (to 1) soluton ff. λ<µ If λ<µ, Prob{queue s non-empty} = 1-ρ, where ρ= λ/µ π = (1- ρ)ρ, = 0, 1, 2,, s the statonary dstrbuton

66 Outline Markov chains and some renewal theory: Markov chain; Renewal processes, renewal reward processes, Markov renewal processes; The excess distribution; Phase type distribution; PASTA; Level crossing analysis. Some important queueing models. Reversibility of Markov chains and Jackson Network

67 Renewal process Given a sequence of mutually independent r.v.'s X_k, k = 1,2,3,..., s.t. X_k, k >= 2, are i.i.d., and X_1 can have a possibly different distribution, we define the renewal instants Z_k, k >= 1, as Z_k = Σ_{i=1}^k X_i. The # of renewals in time (0, t], M(t), is called a renewal process. Example: a CTMC B(t) with B(0) = i, considering visits to state j: X_1: time to first visit to j; X_k, k >= 2: times between subsequent visits to j; M(t): # of visits to j up to time t

68 Renewal reward process To associate a reward with each renewal interval. Formally: given a renewal process with lifetimes X_k, k >= 1, associate with X_k a reward R_k s.t. R_k, k >= 1, are mutually independent; R_k can depend on X_k. Example: in the CTMC B(t), define R_k as the time spent at a specific state during the k-th renewal interval

69 Let C(t) be the total reward accrued until time t; then the reward rate is lim_{t→∞} C(t)/t. [Renewal Reward Theorem]: for E(R_k) < ∞ and E(X_k) < ∞, the following hold: with probability 1,
lim_{t→∞} C(t)/t = E(R_2)/E(X_2), and lim_{t→∞} E(C(t))/t = E(R_2)/E(X_2)
Note: in general, E(R_2)/E(X_2) ≠ E(R_2/X_2)

70 Markov renewal process (MRP) Let X_n, n >= 0, be a random sequence with state space S, and let T_0 <= T_1 <= T_2 <= ... be a nondecreasing sequence of random times. The random sequence (X_n, T_n), n >= 0, is a Markov renewal process (MRP) if for i_0, i_1, ..., i_{n-1}, i, j in S, t_0 <= t_1 <= ... <= t_n, and u >= 0:
Pr(X_{n+1} = j, T_{n+1} - T_n <= u | X_0 = i_0, ..., X_{n-1} = i_{n-1}, X_n = i, T_0 = t_0, ..., T_n = t_n) = Pr(X_{n+1} = j, T_{n+1} - T_n <= u | X_n = i)
An MRP is a generalization of a CTMC: 1) the sojourn time may not be independent of the next state, and 2) the sojourn time may not be exponentially distributed

71 Let
p_{i,j} = lim_{u→∞} Pr{X_{n+1} = j, T_{n+1} - T_n <= u | X_n = i}
assuming the limit does not depend on n. Then X_n, n >= 0, is a DTMC on S with transition prob. p_{i,j}, i, j in S. Distribution of the sojourn time given the current and next states:
H_{i,j}(u) = Pr{T_{n+1} - T_n <= u | X_n = i, X_{n+1} = j}
Theorem D.16:
Pr{T_1 <= u_1, T_2 - T_1 <= u_2, ..., T_n - T_{n-1} <= u_n | X_0, X_1, ..., X_n} = Π_{k=1}^n H_{X_{k-1},X_k}(u_k)
i.e., independent sojourn times given the sequence of states at the end points

72 Distribution of the sojourn time at state i:
H_i(u) = Σ_{j in S} p_{i,j} H_{i,j}(u)
Mean sojourn time at state i:
σ_i = Σ_{j in S} p_{i,j} σ_{i,j}, where σ_{i,j} is the mean of H_{i,j}(u)

73 Associate a reward R_k with the interval (T_{k-1}, T_k), for k >= 1, s.t. R_k is independent of everything else given (X_{k-1}, X_k) and (T_k - T_{k-1}). Let r_j be the expected reward in an interval that begins in state j. Suppose X_k, k >= 0, is a positive recurrent DTMC on S with stationary prob. vector π, and Σ_{j in S} π_j σ_j < ∞. Then
lim_{t→∞} C(t)/t = Σ_{j in S} π_j r_j / Σ_{j in S} π_j σ_j
with probability 1

74 Outline Markov chains and some renewal theory: Markov chain; Renewal processes, renewal reward processes, Markov renewal processes; The excess distribution; Phase type distribution; PASTA; Level crossing analysis. Some important queueing models. Reversibility of Markov chains and Jackson Network

75 Excess distribution, or excess-life/residual-life distribution Given a nonnegative r.v. X with distribution F(.) and finite mean EX = ∫_0^∞ (1 - F(u)) du, the excess distribution is defined as
F_e(y) = (1/EX) ∫_0^y (1 - F(u)) du
It can be interpreted as the distribution function of the residual life seen by a random observer of a renewal process with i.i.d. lifetimes distributed as X
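A numerical sketch of the definition: for an exponential lifetime, memorylessness implies F_e = F, which the trapezoidal integration below confirms:

```python
import math

# Sketch: numerically compute F_e(y) = (1/EX) * integral_0^y (1 - F(u)) du
# for an Exp(lam) lifetime; by memorylessness F_e should equal F itself.

def excess_cdf(F, mean, y, steps=10_000):
    """Trapezoidal approximation of the excess distribution at y."""
    h = y / steps
    total = 0.0
    for k in range(steps):
        u0, u1 = k * h, (k + 1) * h
        total += 0.5 * ((1 - F(u0)) + (1 - F(u1))) * h
    return total / mean

lam = 2.0
F = lambda u: 1 - math.exp(-lam * u)

for y in (0.1, 0.5, 1.0, 3.0):
    assert abs(excess_cdf(F, 1 / lam, y) - F(y)) < 1e-4
```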

76 Interpretation of F_e(y) Consider the renewal process with i.i.d. lifetimes X_k, k >= 1, with distribution F(.); define Y(t) as the residual life or excess life at time t, i.e., the time until the first renewal in (t, ∞). Consider
lim_{t→∞} (1/t) ∫_0^t I{Y(u) <= y} du and lim_{t→∞} (1/t) ∫_0^t Pr(Y(u) <= y) du
Then, by Theorem D.15,
lim_{t→∞} (1/t) ∫_0^t I{Y(u) <= y} du = F_e(y) w.p. 1: the long-run fraction of time that the excess life is <= y
lim_{t→∞} (1/t) ∫_0^t Pr(Y(u) <= y) du = F_e(y): the time-average prob. that the excess life is <= y
How?

77 Proof: define the reward R_k = min{X_k, y}; within a renewal interval of length X_k, the time during which the excess life is <= y is exactly min{X_k, y}, so
C(t) = ∫_0^t I{Y(u) <= y} du
E(R_k) = ∫_0^y u dF(u) + y(1 - F(y)) = ∫_0^y (1 - F(u)) du
and by the renewal reward theorem
lim_{t→∞} C(t)/t = E(R_k)/E(X_k) = (1/EX) ∫_0^y (1 - F(u)) du = F_e(y), and similarly lim_{t→∞} (1/t) ∫_0^t Pr(Y(u) <= y) du = F_e(y)

78 Outline Markov chains and some renewal theory: Markov chain; Renewal processes, renewal reward processes, Markov renewal processes; The excess distribution; Phase type distribution; PASTA; Level crossing analysis. Some important queueing models. Reversibility of Markov chains and Jackson Network

79 Phase type distribution For a CTMC X(t) on state space {1,2,..., M, a} s.t. states {1,2,...,M} are all transient and a is absorbing, the transition rate matrix of X(t) is of the form
[Q q; 0 0]
where Q is an M x M matrix and q is a column vector of size M. Suppose there is a probability vector α of size M (i.e., 0 <= α_j <= 1, Σ_{j=1}^M α_j = 1) s.t. the CTMC starts in state j with prob. α_j, and then evolves until absorption in state a. Then the distribution of the time until absorption is said to be phase type with parameters (α, Q, q). When the process is at state j, it is said to be at phase j

80 Example For
α = (1,0,0,0), Q = [-µ µ 0 0; 0 -µ µ 0; 0 0 -µ µ; 0 0 0 -µ], q = (0,0,0,µ)^T
the phase type distribution is an Erlang distribution of order 4, with each stage being exponentially distributed with mean 1/µ
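A simulation sketch of this example: the time to absorption is the sum of 4 independent Exp(µ) stage times, so its mean should be 4/µ (µ = 2 below is arbitrary):

```python
import random

# Simulation sketch of the phase-type example above: start in phase 1,
# pass through 4 exponential(mu) stages, then absorb. The time to
# absorption is Erlang-4 with mean 4/mu.

def sample_erlang4(mu, rng):
    # Sum of the 4 i.i.d. exponential stage times
    return sum(rng.expovariate(mu) for _ in range(4))

rng = random.Random(7)
mu = 2.0
n = 50_000
samples = [sample_erlang4(mu, rng) for _ in range(n)]

mean = sum(samples) / n
assert abs(mean - 4 / mu) < 0.05   # expected mean 4/mu = 2.0
```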

81 Why phase-type distributions? Phase-type distributions can be used to approximate any distribution arbitrarily closely (in the sense of convergence in distribution). This fact may not always be useful for numerical approximation, due to the large # of phases required for a good approximation. But it is very useful for theoretical purposes: we can often prove results using phase type distributions thanks to their simple structure; then we can prove that the result holds for any distribution by considering a sequence of phase type distributions converging to the general distribution

82 Overflow process of an M/M/c/c system The sequence of times at which customers are denied service forms a renewal process, and the distribution of these times is phase type with
α = (0,0,...,0,1)
Q = the generator of the M/M/c/c occupancy process on {0,1,...,c} (arrival rate λ, departure rate iµ in state i), with the arrival rate λ out of state c directed to absorption, i.e., q = (0,0,...,0,λ)^T

83 Outline Markov chains and some renewal theory: Markov chain; Renewal processes, renewal reward processes, Markov renewal processes; The excess distribution; Phase type distribution; PASTA; Level crossing analysis. Some important queueing models. Reversibility of Markov chains and Jackson Network

84 Poisson arrivals see time averages (PASTA) Observations of a process X(t) at random time points vs. observations of a process X(t) over all time

85 Motivating example Consider a stable D/D/1 queue where customers arrive periodically at intervals of length a and require a service time b < a. Let X(t) be the number of customers in the system at time t. Then the average # of customers over all time is
lim_{t→∞} (1/t) ∫_0^t X(u) du = b/a
Now observe X(t) at t_k = ka, k >= 0 (i.e., what arrivals see on average):
lim_{n→∞} (1/n) Σ_{k=0}^{n-1} X(t_k^-) = 0
Point observations differ from average behaviors
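The gap between the two averages can be reproduced by discretizing the D/D/1 sample path (a = 1 and b = 0.3 below are illustrative):

```python
# Sketch of the D/D/1 example: arrivals every a seconds, service time
# b < a. Discretize time into steps of dt; X = 1 for the first b seconds
# of each interarrival cycle, else 0. The time average tends to b/a,
# while an arriving customer (sampling just before t_k = k*a) sees 0.

a, b, dt = 1.0, 0.3, 0.001
steps_per_cycle = int(a / dt)        # 1000 steps per interarrival cycle
busy_steps = int(b / dt)             # 300 busy steps per cycle
n_cycles = 100

occupancy_sum = 0
arrival_views = []
for step in range(n_cycles * steps_per_cycle):
    phase = step % steps_per_cycle
    x = 1 if phase < busy_steps else 0
    occupancy_sum += x
    if phase == steps_per_cycle - 1:  # last step before the next arrival
        arrival_views.append(x)

time_average = occupancy_sum / (n_cycles * steps_per_cycle)
assert abs(time_average - b / a) < 1e-9   # time average = b/a
assert all(v == 0 for v in arrival_views)  # arrivals always see 0
```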

86 Formal characterization Let X(t), t >= 0, be a random process, and B a subset of the state space of X(t); let A(t) be a Poisson arrival process with rate λ, and t_k, k >= 1, the arrival points. Then
V_B(t) = (1/t) ∫_0^t I{X(u) ∈ B} du
is the fraction of time over (0, t] that the process X(.) is in B, and
A_B(t) = (1/A(t)) Σ_{k=1}^{A(t)} I{X(t_k^-) ∈ B}
is the fraction of arrivals over (0, t] that see the process X(.) in B

87 Lack of anticipation assumption: for all t >= 0, A(t+u) - A(t), u >= 0, is independent of X(s), 0 <= s <= t; i.e., for all t >= 0, future arrivals are independent of the past of X(.). Note: the assumption holds for independent Poisson arrival processes. Theorem D.17: under the lack of anticipation assumption,
V_B(t) → V_B w.p. 1 iff. A_B(t) → V_B w.p. 1
i.e., the time average and the arrival average are the same

88 Bernoulli/Geometric arrivals see time averages (GASTA) For queueing processes that evolve at discrete times t_k = kT, k = 0,1,2,..., let X_k denote the discrete-time queue embedded at instants t_k. Consider a Bernoulli arrival process of rate p, i.e., at times t_k+ an arrival occurs with prob. p; this is also called a Geometric process since inter-arrival times are geometrically distributed. Due to lack of anticipation, a result similar to PASTA holds for Bernoulli/geometric arrivals and can be called GASTA

89 Outline Markov chains and some renewal theory: Markov chain; Renewal processes, renewal reward processes, Markov renewal processes; The excess distribution; Phase type distribution; PASTA; Level crossing analysis. Some important queueing models. Reversibility of Markov chains and Jackson Network

90 Level crossing analysis (LCA) When direct derivation of stationary prob. distributions (via π = πP or other means such as balance equations) is difficult, LCA may help obtain ancillary equations that provide some information about the stationary distribution. Given a r.p. X(t) on [0, ∞) and a level x >= 0: up-crossing count U_x(t): # of times that X(.) crosses the level x from below over (0, t]; down-crossing count D_x(t): # of times that X(.) crosses the level x from above over (0, t]

91 Level crossing analysis is based on the following facts:
|U_x(t) - D_x(t)| <= 1, and lim_{t→∞} (1/t) U_x(t) = lim_{t→∞} (1/t) D_x(t), if either limit exists
These limits can usually be written in terms of the stationary distribution of the r.p.

92 Outline Markov chains and some renewal theory: Markov chain; Renewal processes, renewal reward processes, Markov renewal processes; The excess distribution; Phase type distribution; PASTA; Level crossing analysis. Some important queueing models. Reversibility of Markov chains and Jackson Network

93 Some important queueing models M/G/c/c queue; Processor sharing queue; Symmetric queues

94 M/G/c/c queue Poisson arrivals with finite rate λ. Service requirements are i.i.d. and generally distributed with distribution F(.) and finite mean 1/µ. The service requirement is also called the holding time, since a customer holds a dedicated server for the entire duration of its service. Each arriving customer is assigned to a free server if one exists; otherwise, the arriving customer is denied admission and goes away. Can you give an example of an M/G/c/c queue?

95 X(t): # of customers in the queue at time t. Let ρ = λ/µ. In M/G/c/c, ρ equals the average # of new arrivals during the holding time of a customer (by Little's Theorem). [Exercise D.3]: if F(.) is an exponential distribution function, then X(t) is a positive recurrent CTMC on state space {0,1,...,c}, with stationary distribution
π_n = (ρ^n/n!) / Σ_{j=0}^c (ρ^j/j!)
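For numerical work, the blocking probability π_c (the Erlang B formula) is usually computed with a stable recursion rather than with factorials directly. Both forms are sketched below with arbitrary ρ and c:

```python
from math import factorial

# Erlang B sketch: the blocking probability of M/G/c/c is pi_c. The
# standard stable recursion is B(0) = 1,
# B(k) = rho*B(k-1) / (k + rho*B(k-1)) for k = 1..c.

def erlang_b(rho, c):
    b = 1.0
    for k in range(1, c + 1):
        b = rho * b / (k + rho * b)
    return b

def pi_n(rho, c, n):
    """Direct form pi_n = (rho^n / n!) / sum_{j=0}^c rho^j / j!."""
    denom = sum(rho ** j / factorial(j) for j in range(c + 1))
    return (rho ** n / factorial(n)) / denom

rho, c = 3.0, 5
assert abs(erlang_b(rho, c) - pi_n(rho, c, c)) < 1e-12
```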

96 When F(.) is not an exponential distribution function, X(t) is not Markovian. But (X(t), Y_1(t), ..., Y_{X(t)}(t)) is a Markov process, where Y_i(t) denotes the residual service requirement of the i-th customer in the system; and
Pr(X(t) = n, Y_1 <= y_1, ..., Y_n <= y_n) = π_n Π_{i=1}^n F_e(y_i)
where π_n is as in the case of exponential holding times, and F_e(.) is the excess distribution of the holding time distribution F(.)

97 Processor sharing queue: M/G/1 PS Poisson arrivals with finite rate λ. Service requirements are i.i.d. and generally distributed with distribution F(.) and finite mean 1/µ. Overall service rate: 1 unit per second. (Fair) processor sharing rule: when there are n customers in the system, the unfinished work of the i-th customer decreases at rate 1/n

98 Let ρ = λ/µ, and let X(t) denote the # of customers at time t. If F(.) is an exponential distribution, then X(t) is a CTMC, and it is positive recurrent iff. ρ < 1, in which case the stationary distribution of X(t) is given by
π_n = (1 - ρ)ρ^n

99 When F(.) is not an exponential distribution function, X(t) is not Markovian. But (X(t), Y_1(t), ..., Y_{X(t)}(t)) is a Markov process, where Y_i(t) denotes the residual service requirement of the i-th customer in the system; and if ρ < 1,
Pr(X(t) = n, Y_1 <= y_1, ..., Y_n <= y_n) = (1 - ρ)ρ^n Π_{i=1}^n F_e(y_i)
where F_e(.) is the excess distribution of F(.). Note: the stationary distribution of X(t) in an M/G/1 PS queue is the same as that in an M/M/1 queue, and thus is insensitive to the distribution F(.) (except through its mean)

100 Sojourn times in M/G/1 PS Sojourn time W: the amount of time that a customer stays in the system. Since π_n = (1 - ρ)ρ^n, E(X) = ρ/(1 - ρ); then, by Little's Theorem,
E(W) = E(S)/(1 - ρ), where E(S) = 1/µ is the mean service requirement
Moreover, E(W | S = s) = s/(1 - ρ)
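A small numeric check of these relations (the λ and µ below are arbitrary): computing E(W) from E(X) via Little's Theorem agrees with E(S)/(1 - ρ):

```python
# Sketch of the M/G/1 PS sojourn-time relations: E(X) = rho/(1 - rho),
# and Little's Theorem E(X) = lam * E(W) gives E(W) = E(S)/(1 - rho).

lam, mu = 3.0, 5.0
rho = lam / mu
ES = 1 / mu                     # mean service requirement E(S)

EX = rho / (1 - rho)            # mean number in system (geometric pi_n)
EW_little = EX / lam            # Little's Theorem
EW_formula = ES / (1 - rho)     # slide formula

assert abs(EW_little - EW_formula) < 1e-12
# Conditional form: a job of size s stays s/(1 - rho) on average
s = 0.2
assert abs(s / (1 - rho) - s * EW_formula / ES) < 1e-12
```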

101 Symmetric queue Consider the following queue: customers of class c, c in C, arrive in independent Poisson processes of rate λ_c; customers of class c have a phase type service requirement with parameters (α_c, Q_c) and mean 1/µ_c. An arriving customer finding (n-1) customers in the system joins position l, 1 <= l <= n, with prob. γ(n, l). When there are n customers in the queue, the overall service rate applied is ν(n), and a fraction δ(n, l) of the service effort is applied to the customer at position l

102 A system state records the class and phase of the customer at each position: (c_1, φ_1), (c_2, φ_2), ..., (c_{n(t)}, φ_{n(t)}), served at overall rate ν(n(t)). The aforementioned queueing system is said to be a symmetric queue if the functions δ(.,.) and γ(.,.) are such that δ(n, l) = γ(n, l). Positioning implies priority

103 Examples M/PH/1 queue with the last-come-first-serve preemptive resume (LCFS-PR) discipline: ν(n) = constant ν; γ(n, 1) = 1, and γ(n, j) = 0 for j > 1; δ(n, 1) = 1, and δ(n, j) = 0 for j > 1. M/PH/1 processor sharing queue? ν(n) = constant ν; γ(n, l) = δ(n, l) = 1/n, for 1 <= l <= n

104 M/PH/∞ queue? ν(n) = nν; γ(n, l) = δ(n, l) = 1/n, for 1 <= l <= n

105 Stationary distribution Let ρ = Σ_{c in C} λ_c/µ_c. Theorem D.18: the stationary distribution of a symmetric queue is given by the prob. distribution of the system state x:
π(x) = (1/G) ρ^{|x|} / (ν(1) ν(2) ... ν(|x|))
where |x| = # of customers at state x, and G is the normalization constant. Note: the distribution is insensitive to the service requirement distributions (except through their means 1/µ_c, c in C)

106 Outlne Markov chans and some renewal theory Markov chan Renewal processes, renewal reward processes, Markov renewal processes The excess dstrbuton Phase type dstrbuton PASTA Level crossng analyss Some mportant queueng models Reversblty of Markov chans and Jackson Network

107 Time Reversibility and Burke's Theorem. Time-Reversal of Markov Chains; Reversibility; Truncating a Reversible Markov Chain; Burke's Theorem; Queues in Tandem.

108 Time Reversibility and Burke's Theorem. Time-Reversal of Markov Chains; Reversibility; Truncating a Reversible Markov Chain; Burke's Theorem; Queues in Tandem.

109 Time-Reversed Markov Chains. {X_n: n = 0,1,…} irreducible aperiodic Markov chain with transition probabilities P_ij, Σ_{j=0}^∞ P_ij = 1, i = 0,1,… Unique stationary distribution (π_j > 0) iff the GBE hold, i.e., π_j = Σ_{i=0}^∞ π_i P_ij, j = 0,1,… Process in steady state: Pr{X_n = j} = π_j = lim_{n→∞} Pr{X_n = j | X_0 = i}. Starts at n = −∞, that is {X_n: n = …,−1,0,1,…}, or choose the initial state according to the stationary distribution. How does {X_n} look reversed in time?

110 Time-Reversed Markov Chains. Define Y_n = X_{τ−n}, for arbitrary τ > 0 => {Y_n} is the reversed process. Proposition 1: {Y_n} is a Markov chain with transition probabilities:
P*_ij = π_j P_ji / π_i, i, j = 0,1,2,…
{Y_n} has the same stationary distribution π_j as the forward chain {X_n}. The reversed chain corresponds to the same process, looked at in the reversed-time direction.
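Proposition 1 can be checked numerically: build P*_ij = π_j P_ji / π_i for a small chain (the matrix below is an illustrative, non-reversible example) and verify that P* is a proper transition matrix with the same stationary distribution. A sketch:

```python
import numpy as np

# Illustrative 3-state chain (irreducible, aperiodic, not reversible).
P = np.array([[0.0, 0.7, 0.3],
              [0.2, 0.1, 0.7],
              [0.6, 0.3, 0.1]])

# Stationary distribution by power iteration (pi = pi P).
pi = np.ones(3) / 3
for _ in range(500):
    pi = pi @ P
pi = pi / pi.sum()

# Reversed-chain transition matrix: P*_ij = pi_j P_ji / pi_i.
Pstar = (pi[None, :] * P.T) / pi[:, None]

print(Pstar.sum(axis=1))      # each row sums to 1: a proper transition matrix
print(pi @ Pstar - pi)        # pi is also stationary for the reversed chain
```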

111 Time-Reversed Markov Chains. Proof of Proposition 1:
P*_ij = P{Y_m = j | Y_{m−1} = i, Y_{m−2} = i_2, …, Y_{m−k} = i_k}
= P{X_{τ−m} = j | X_{τ−m+1} = i, X_{τ−m+2} = i_2, …, X_{τ−m+k} = i_k}
= P{X_n = j | X_{n+1} = i, X_{n+2} = i_2, …, X_{n+k} = i_k}    (with n = τ−m)
= P{X_n = j, X_{n+1} = i, X_{n+2} = i_2, …, X_{n+k} = i_k} / P{X_{n+1} = i, X_{n+2} = i_2, …, X_{n+k} = i_k}
= [P{X_{n+2} = i_2, …, X_{n+k} = i_k | X_n = j, X_{n+1} = i} P{X_n = j, X_{n+1} = i}] / [P{X_{n+2} = i_2, …, X_{n+k} = i_k | X_{n+1} = i} P{X_{n+1} = i}]
= P{X_n = j, X_{n+1} = i} / P{X_{n+1} = i}    (by the Markov property)
= P{X_{n+1} = i | X_n = j} P{X_n = j} / P{X_{n+1} = i} = P_ji π_j / π_i
Moreover, Σ_j P*_ij = Σ_j π_j P_ji / π_i = π_i / π_i = 1, and Σ_i π_i P*_ij = Σ_i π_j P_ji = π_j, so {π_j} is also the stationary distribution of the reversed chain, with P*_ij = P{X_n = j | X_{n+1} = i} = P{Y_m = j | Y_{m−1} = i}.

112 Time Reversibility and Burke's Theorem. Time-Reversal of Markov Chains; Reversibility; Truncating a Reversible Markov Chain; Burke's Theorem; Queues in Tandem.

113 Reversibility. Stochastic process {X(t)} is called reversible if (X(t_1), X(t_2), …, X(t_n)) and (X(τ−t_1), X(τ−t_2), …, X(τ−t_n)) have the same probability distribution, for all τ, t_1, …, t_n. Proposition D.1: if {X(t), t ∈ R} is stationary, then a time-reversed process is also stationary. Proposition D.2: a reversible process is stationary (and consequently any time reversal of a reversible process is stationary).

114 Reversibility. Markov chain {X_n} is reversible if and only if the transition probabilities of the forward and reversed chains are equal, i.e., P_ij = P*_ij; equivalently, iff the Detailed Balance Equations hold:
π_i P_ij = π_j P_ji, i, j = 0,1,…

115 Reversibility: Discrete-Time Chains. Theorem 1: If there exists a set of positive numbers {π_j} that sum up to 1 and satisfy π_i P_ij = π_j P_ji, i, j = 0,1,…, then: 1. {π_j} is the unique stationary distribution. 2. The Markov chain is reversible. Example: discrete-time birth-death processes are reversible, since they satisfy the DBE.
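A minimal numeric illustration of Theorem 1 (the transition probabilities below are made up for the example): solve the DBE recursively for a small discrete-time birth-death chain, then confirm that the result also satisfies the global balance equations.

```python
# Transition probabilities of an illustrative 3-state birth-death chain.
P = {(0, 0): 0.5, (0, 1): 0.5,
     (1, 0): 0.3, (1, 1): 0.2, (1, 2): 0.5,
     (2, 1): 0.4, (2, 2): 0.6}

def dbe_stationary(P, states):
    """Solve pi_n P_{n,n+1} = pi_{n+1} P_{n+1,n} recursively (birth-death only)."""
    pi = [1.0]
    for n in states[:-1]:
        pi.append(pi[-1] * P[(n, n + 1)] / P[(n + 1, n)])
    s = sum(pi)
    return [x / s for x in pi]

pi = dbe_stationary(P, [0, 1, 2])
# Verify pi also satisfies the global balance equations pi_j = sum_i pi_i P_ij:
for j in [0, 1, 2]:
    gbe = sum(pi[i] * P.get((i, j), 0.0) for i in [0, 1, 2])
    assert abs(gbe - pi[j]) < 1e-12
print(pi)
```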

116 Example: Birth-Death Process. [Transition diagram: states …, n, n+1, … with probabilities P_00, P_01, P_10, …, P_{n,n+1}, P_{n+1,n}, and a cut between S = {0,1,…,n} and S^c.] One-dimensional Markov chain with transitions only between neighboring states: P_ij = 0 if |i−j| > 1. Detailed Balance Equations (DBE):
π_n P_{n,n+1} = π_{n+1} P_{n+1,n}, n = 0,1,…
Proof: the GBE with S = {0,1,…,n} give
Σ_{j=0}^{n} Σ_{i=n+1}^{∞} π_j P_ji = Σ_{j=0}^{n} Σ_{i=n+1}^{∞} π_i P_ij  =>  π_n P_{n,n+1} = π_{n+1} P_{n+1,n}.

117 Time-Reversed Markov Chains (Revisited). Theorem 2: Irreducible Markov chain with transition probabilities P_ij. If there exist: a set of transition probabilities P*_ij, with Σ_j P*_ij = 1, i ≥ 0, and a set of positive numbers {π_j} that sum up to 1, such that
π_i P*_ij = π_j P_ji, i, j ≥ 0
then: P*_ij are the transition probabilities of the reversed chain, and {π_j} is the stationary distribution of the forward and the reversed chains. Remark: used to find the stationary distribution by guessing the transition probabilities of the reversed chain, even if the process is not reversible.

118 Continuous-Time Markov Chains. {X(t): −∞ < t < ∞} irreducible aperiodic Markov chain with transition rates q_ij, i ≠ j. Unique stationary distribution (p_i > 0) iff the GBE hold:
p_j Σ_{i≠j} q_ji = Σ_{i≠j} p_i q_ij, j = 0,1,…
Process in steady state, e.g., started at t = −∞: Pr{X(t) = j} = p_j = lim_{t→∞} Pr{X(t) = j | X(0) = i}. If {π_j} is the stationary distribution of the embedded discrete-time chain:
p_j = (π_j/ν_j) / Σ_i (π_i/ν_i), where ν_j = Σ_{i≠j} q_ji, j = 0,1,…
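The relation p_j ∝ π_j/ν_j between the CTMC stationary distribution and that of its embedded jump chain can be verified on a small example (the rate matrix below is an illustrative choice):

```python
import numpy as np

# Illustrative transition rates q[i][j] (i != j) of a 3-state CTMC.
q = np.array([[0.0, 2.0, 1.0],
              [3.0, 0.0, 1.0],
              [1.0, 2.0, 0.0]])
nu = q.sum(axis=1)                        # total rate out of each state
Q = q - np.diag(nu)                       # generator matrix

# Solve p Q = 0 together with sum(p) = 1 as a least-squares system.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
p, *_ = np.linalg.lstsq(A, b, rcond=None)

# Embedded discrete-time jump chain P_ij = q_ij / nu_i and its stationary pi.
P = q / nu[:, None]
pi = np.ones(3) / 3
for _ in range(500):
    pi = pi @ P
pi = pi / pi.sum()

p_from_embedded = (pi / nu) / (pi / nu).sum()
print(np.abs(p - p_from_embedded).max())   # ~0: the two routes agree
```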

119 Reversed Continuous-Time Markov Chains. Reversed chain {Y(t)}, with Y(t) = X(τ−t), for arbitrary τ > 0. Proposition 2: 1. {Y(t)} is a continuous-time Markov chain with transition rates:
q*_ij = p_j q_ji / p_i, i, j = 0,1,…, i ≠ j
2. {Y(t)} has the same stationary distribution {p_j} as the forward chain. Remark: the transition rate out of state i in the reversed chain is equal to the transition rate out of state i in the forward chain:
Σ_{j≠i} q*_ij = Σ_{j≠i} p_j q_ji / p_i = Σ_{j≠i} q_ij = ν_i, i = 0,1,…

120 Reversibility: Continuous-Time Chains. Markov chain {X(t)} is reversible iff the transition rates of the forward and reversed chains are equal, q_ij = q*_ij; equivalently, iff the Detailed Balance Equations hold:
p_i q_ij = p_j q_ji, i, j = 0,1,…, i ≠ j
Theorem 3: If there exists a set of positive numbers {p_j} that sum up to 1 and satisfy p_i q_ij = p_j q_ji, i, j = 0,1,…, i ≠ j, then: 1. {p_j} is the unique stationary distribution. 2. The Markov chain is reversible.

121 Example: Birth-Death Process. [Transition diagram: states …, n, n+1, … with birth rates λ_0, λ_1, …, λ_n and death rates µ_1, µ_2, …, µ_{n+1}, and a cut between S = {0,1,…,n} and S^c.] Transitions only between neighboring states: q_{i,i+1} = λ_i, q_{i,i−1} = µ_i, q_ij = 0 for |i−j| > 1. Detailed Balance Equations:
λ_n p_n = µ_{n+1} p_{n+1}, n = 0,1,…
Proof: the GBE with S = {0,1,…,n} give
Σ_{j=0}^{n} Σ_{i=n+1}^{∞} p_j q_ji = Σ_{j=0}^{n} Σ_{i=n+1}^{∞} p_i q_ij  =>  λ_n p_n = µ_{n+1} p_{n+1}.
Examples: M/M/1, M/M/c, M/M/∞.
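The DBE give the stationary distribution of any birth-death CTMC by a simple recursion, p_{n+1} = (λ_n/µ_{n+1}) p_n. A sketch specialized to M/M/1 (illustrative rates), compared with the known geometric form (1−ρ)ρ^n:

```python
# Stationary distribution of a birth-death CTMC from the DBE recursion,
# specialized to M/M/1: lambda_n = lam, mu_n = mu for all n.
lam, mu, N = 0.6, 1.0, 50          # illustrative rates; N = truncation level
p = [1.0]
for n in range(N):
    p.append(p[-1] * lam / mu)      # DBE: lam * p_n = mu * p_{n+1}
s = sum(p)
p = [x / s for x in p]

rho = lam / mu
closed_form = [(1 - rho) * rho**n for n in range(N + 1)]
print(max(abs(a - b) for a, b in zip(p, closed_form)))   # tiny (geometric tail cut off)
```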

122 Reversed Continuous-Time Markov Chains (Revisited). Theorem 4: Irreducible continuous-time Markov chain with transition rates q_ij. If there exist: a set of transition rates q*_ij, with Σ_{j≠i} q*_ij = Σ_{j≠i} q_ij, i ≥ 0, and a set of positive numbers {p_j} that sum up to 1, such that
p_i q*_ij = p_j q_ji, i, j ≥ 0, i ≠ j
then: q*_ij are the transition rates of the reversed chain, and {p_j} is the stationary distribution of the forward and the reversed chains. Remark: used to find the stationary distribution by guessing the transition rates of the reversed chain, even if the process is not reversible.

123 Reversibility: Trees. Theorem 5: Irreducible Markov chain, with transition rates that satisfy q_ij > 0 <=> q_ji > 0. Form a graph for the chain, where states are the nodes, and for each q_ij > 0 there is a directed arc i → j. Then, if the graph is a tree (contains no loops), the Markov chain is reversible. [Example graph: a tree with arc pairs such as q_10/q_01, q_12/q_21, q_16/q_61, q_23/q_32, q_67/q_76.] Remarks: Sufficient condition for reversibility. Generalization of the one-dimensional birth-death process.

124 Kolmogorov's Criterion (Discrete Chain). Detailed balance equations determine whether a Markov chain is reversible or not, based on the stationary distribution and the transition probabilities. Should be able to derive a reversibility criterion based only on the transition probabilities! Theorem D.20: A discrete-time Markov chain is reversible iff
P_{i1,i2} P_{i2,i3} … P_{i(n−1),in} P_{in,i1} = P_{i1,in} P_{in,i(n−1)} … P_{i3,i2} P_{i2,i1}
for every finite sequence of states i_1, i_2, …, i_n, and any n. Intuition: the probability of traversing any loop i_1 → i_2 → … → i_n → i_1 is equal to the probability of traversing the same loop in the reverse direction i_1 → i_n → … → i_2 → i_1.

125 Kolmogorov's Criterion (Continuous Chain). Detailed balance equations determine whether a Markov chain is reversible or not, based on the stationary distribution and the transition rates. Should be able to derive a reversibility criterion based only on the transition rates! Theorem 7: A continuous-time Markov chain is reversible if and only if:
q_{i1,i2} q_{i2,i3} … q_{i(n−1),in} q_{in,i1} = q_{i1,in} q_{in,i(n−1)} … q_{i3,i2} q_{i2,i1}
for any finite sequence of states i_1, i_2, …, i_n, and any n. Intuition: the product of transition rates along any loop i_1 → i_2 → … → i_n → i_1 is equal to the product of transition rates along the same loop traversed in the reverse direction i_1 → i_n → … → i_2 → i_1.

126 Kolmogorov's Criterion (proof). Proof of Theorem D.20. Necessity: if the chain is reversible, the DBE hold along the loop:
π_{i1} P_{i1,i2} = π_{i2} P_{i2,i1}, π_{i2} P_{i2,i3} = π_{i3} P_{i3,i2}, …, π_{in} P_{in,i1} = π_{i1} P_{i1,in}
Multiplying these equalities and canceling the π's gives
P_{i1,i2} P_{i2,i3} … P_{in,i1} = P_{i1,in} P_{in,i(n−1)} … P_{i2,i1}
Sufficiency: fixing two states i_1 = i and i_n = j and summing the criterion over all intermediate states i_2, …, i_{n−1}, we have
P_{ij}^{(n−1)} P_{ji} = P_{ij} P_{ji}^{(n−1)}
where P_{ij}^{(n−1)} is the (n−1)-step transition probability. Taking the limit n → ∞:
lim_{n→∞} P_{ij}^{(n−1)} P_{ji} = P_{ij} lim_{n→∞} P_{ji}^{(n−1)}  =>  π_j P_ji = P_ij π_i.

127 Theorem D.21: A continuous-time Markov chain is reversible iff
q_{i1,i2} q_{i2,i3} … q_{i(n−1),in} q_{in,i1} = q_{i1,in} q_{in,i(n−1)} … q_{i3,i2} q_{i2,i1}
for every minimal, finite sequence of states i_1, i_2, …, i_n.

128 Example: M/M/2 Queue with Heterogeneous Servers. [Transition diagram: states 0, 1A, 1B, 2, 3, … with rates αλ and (1−α)λ out of state 0, rates µ_A, µ_B between 1A/1B and states 0 and 2, and rates λ, µ_A + µ_B between states 2, 3, ….] M/M/2 queue; servers A and B with service rates µ_A and µ_B respectively. When the system is empty, arrivals go to A with probability α and to B with probability 1−α. Otherwise, the head of the queue takes the first free server. Need to keep track of which server is busy when there is 1 customer in the system; denote the two possible states by 1A and 1B. Reversibility: we only need to check the loop 0 → 1A → 2 → 1B → 0:
q_{0,1A} q_{1A,2} q_{2,1B} q_{1B,0} = αλ · λ · µ_A · µ_B
q_{0,1B} q_{1B,2} q_{2,1A} q_{1A,0} = (1−α)λ · λ · µ_B · µ_A
Reversible if and only if α = 1/2.
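The loop check can be written out directly (the sample rates below are chosen for illustration):

```python
# Kolmogorov's criterion on the loop 0 -> 1A -> 2 -> 1B -> 0 of the
# heterogeneous M/M/2 example (illustrative rates).
lam, muA, muB = 1.0, 2.0, 3.0

def loop_products(alpha):
    forward = (alpha * lam) * lam * muA * muB          # 0 -> 1A -> 2 -> 1B -> 0
    backward = ((1 - alpha) * lam) * lam * muB * muA   # 0 -> 1B -> 2 -> 1A -> 0
    return forward, backward

f, b = loop_products(0.5)
print(f == b)          # True: reversible when alpha = 1/2
f, b = loop_products(0.3)
print(f == b)          # False: not reversible otherwise
```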

129 Time Reversibility and Burke's Theorem. Time-Reversal of Markov Chains; Reversibility; Truncating a Reversible Markov Chain; Burke's Theorem; Queues in Tandem.

130 Truncation of a Reversible Markov Chain. Theorem D.22: {X(t)} reversible Markov process with state space S and stationary distribution {p_j: j ∈ S}, truncated to a set E ⊂ S such that the resulting chain {Y(t)} is irreducible. Then {Y(t)} is reversible and has stationary distribution:
p̃_j = p_j / Σ_{k∈E} p_k, j ∈ E
Remark: this is the conditional probability that, in steady state, the original process is at state j, given that it is somewhere in E. Proof: verify the DBE and the normalization:
p̃_i q_ij = p̃_j q_ji <=> (p_i / Σ_{k∈E} p_k) q_ij = (p_j / Σ_{k∈E} p_k) q_ji <=> p_i q_ij = p_j q_ji, i, j ∈ E, which hold by the reversibility of {X(t)};
Σ_{j∈E} p̃_j = Σ_{j∈E} p_j / Σ_{k∈E} p_k = 1.

131 Example. Two independent M/M/1 queues with infinite buffers (a): the joint process of queue lengths (X_1(t), X_2(t)) is a CTMC with stationary distribution
π^(a)_{n1,n2} = (1−ρ_1)ρ_1^{n1} (1−ρ_2)ρ_2^{n2}
(b): same as (a), except that the two arrival streams share a capacity-limited buffer of size K. (b) is a truncated version of (a) in the sense E = {(n_1, n_2) ≥ 0: n_1 + n_2 ≤ K}, thus
π^(b)_{n1,n2} = (1−ρ_1)ρ_1^{n1} (1−ρ_2)ρ_2^{n2} / Σ_{(k1,k2)∈E} (1−ρ_1)ρ_1^{k1} (1−ρ_2)ρ_2^{k2}.
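Theorem D.22 makes π^(b) trivial to compute: restrict the product form π^(a) to E and renormalize. A sketch with illustrative loads:

```python
# Truncated stationary distribution for two M/M/1 queues sharing a
# buffer of size K (case (b) above); rho1, rho2 and K are illustrative.
rho1, rho2, K = 0.5, 0.7, 10

def pa(n1, n2):                       # untruncated product form (case (a))
    return (1 - rho1) * rho1**n1 * (1 - rho2) * rho2**n2

E = [(n1, n2) for n1 in range(K + 1) for n2 in range(K + 1) if n1 + n2 <= K]
Z = sum(pa(n1, n2) for n1, n2 in E)   # = P{(X1, X2) in E} under case (a)
pb = {(n1, n2): pa(n1, n2) / Z for n1, n2 in E}

print(abs(sum(pb.values()) - 1.0) < 1e-12)   # proper distribution on E
# Truncation is just conditioning: pb is pa restricted to E, renormalized,
# so every retained state gains probability (Z < 1).
print(pb[(0, 0)] > pa(0, 0))
```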

132 Time Reversibility and Burke's Theorem. Time-Reversal of Markov Chains; Reversibility; Truncating a Reversible Markov Chain; Burke's Theorem (birth-death processes: Poisson departures); Queues in Tandem.

133 Birth-death process. {X(t)} birth-death process with stationary distribution {p_j}. Arrival epochs: points of increase of {X(t)}. Departure epochs: points of decrease of {X(t)}. {X(t)} completely determines the corresponding arrival and departure processes. [Figure: a sample path of X(t) with arrival and departure epochs marked.]

134 Forward & reversed chains of birth-death processes. Poisson arrival process: λ_j = λ for all j; such a birth-death process is called a (λ, µ_j)-process. Examples: M/M/1, M/M/c, M/M/∞ queues. Poisson arrivals satisfy LAA (lack of anticipation): for any time t, future arrivals are independent of {X(s): s ≤ t}. A (λ, µ_j)-process at steady state is reversible: forward and reversed chains are stochastically identical => the arrival processes of the forward and reversed chains are stochastically identical => the arrival process of the reversed chain is Poisson with rate λ; moreover, the arrival epochs of the reversed chain are the departure epochs of the forward chain => the departure process of the forward chain is Poisson with rate λ.

135 [Figure: forward and reversed sample paths around a fixed time t.] Reversed chain: arrivals after time t are independent of the chain history up to time t (LAA) => Forward chain: departures prior to time t and the future of the chain {X(s): s ≥ t} are independent.

136 Burke's Theorem. Theorem 10: Consider a (λ, µ_j)-process (e.g., those in M/M/1, M/M/c, or M/M/∞ systems). Suppose that the system starts at steady state. Then: 1. The departure process is Poisson with rate λ. 2. At each time t, the number of customers in the system is independent of the departure times prior to t. Fundamental result for the study of networks of M/M/* queues, where the output process from one queue is the input process of another.
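Burke's theorem can be sanity-checked by simulation: in a stable M/M/1 queue the long-run departure rate is λ, so inter-departure times should have mean 1/λ (not 1/µ). A Monte-Carlo sketch with illustrative parameters; the tolerance is statistical, not exact:

```python
import random

# Simulate the M/M/1 CTMC (Gillespie-style): birth rate lam always,
# death rate mu when the system is non-empty; record departure epochs.
random.seed(1)
lam, mu = 1.0, 2.0
t, n = 0.0, 0
departures = []
while len(departures) < 50000:
    rate = lam + (mu if n > 0 else 0.0)
    t += random.expovariate(rate)
    if n > 0 and random.random() < mu / rate:
        n -= 1
        departures.append(t)
    else:
        n += 1

# Discard a warm-up period, then look at inter-departure times.
gaps = [b - a for a, b in zip(departures[1000:], departures[1001:])]
mean_gap = sum(gaps) / len(gaps)
print(mean_gap)        # close to 1/lam = 1.0, not 1/mu = 0.5
```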

137 Time Reversibility and Burke's Theorem. Time-Reversal of Markov Chains; Reversibility; Truncating a Reversible Markov Chain; Burke's Theorem; Queues in Tandem.

138 Single-Server Queues in Tandem. [Figure: Poisson(λ) arrivals → Station 1 (rate µ_1) → Station 2 (rate µ_2).] Customers arrive at queue 1 according to a Poisson process with rate λ. Service times are exponential with mean 1/µ_i; assume the service times of a customer in the two queues are independent, and assume ρ_i = λ/µ_i < 1. What is the joint stationary distribution of N_1 and N_2, the numbers of customers in each queue?
p(n_1, n_2) = (1−ρ_1)ρ_1^{n1} (1−ρ_2)ρ_2^{n2} = p_1(n_1) p_2(n_2)
Result: in steady state the queues are independent.

139 Note: if N_1(t) is not its stationary version, N_1(t) and N_2(t) are NOT independent. The asymptotic result, however, still holds. [Figure: Poisson(λ) arrivals → Station 1 (rate µ_1) → Station 2 (rate µ_2).] Q1 is an M/M/1 queue. At steady state its departure process is Poisson with rate λ; thus Q2 is also M/M/1. Marginal stationary distributions:
p_1(n_1) = (1−ρ_1)ρ_1^{n1}, n_1 = 0,1,…
p_2(n_2) = (1−ρ_2)ρ_2^{n2}, n_2 = 0,1,…
To complete the proof: establish independence at steady state. Q1 at steady state: at time t, N_1(t) is independent of departures prior to t, which are the arrivals at Q2 up to t. Thus N_1(t) and N_2(t) are independent:
P{N_1(t) = n_1, N_2(t) = n_2} = P{N_1(t) = n_1} P{N_2(t) = n_2} = p_1(n_1) P{N_2(t) = n_2}
Letting t → ∞, the joint stationary distribution is p(n_1, n_2) = p_1(n_1) p_2(n_2) = (1−ρ_1)ρ_1^{n1} (1−ρ_2)ρ_2^{n2}.

140 Queues in Tandem. Theorem: Network consisting of K single-server queues in tandem. Service times at queue i are exponential with rate µ_i, independent of service times at any queue j ≠ i. Arrivals at the first queue are Poisson with rate λ. The stationary distribution of the network is:
p(n_1, …, n_K) = ∏_{i=1}^{K} (1−ρ_i)ρ_i^{n_i}, n_i = 0,1,…; i = 1,…,K
At steady state the queues are independent; the distribution of queue i is that of an isolated M/M/1 queue with arrival and service rates λ and µ_i:
p_i(n_i) = (1−ρ_i)ρ_i^{n_i}, n_i = 0,1,…
Are the queues independent if not in steady state? Are the stochastic processes {N_1(t)} and {N_2(t)} independent?
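The product form above can also be verified directly against the global balance equations of the two-queue tandem CTMC; the sketch below checks a grid of states with illustrative rates.

```python
# Check that the product-form guess satisfies the global balance
# equations of a two-queue tandem CTMC (illustrative rates).
lam, mu1, mu2 = 1.0, 2.0, 4.0
rho1, rho2 = lam / mu1, lam / mu2

def p(n1, n2):
    if n1 < 0 or n2 < 0:
        return 0.0
    return (1 - rho1) * rho1**n1 * (1 - rho2) * rho2**n2

def balance_gap(n1, n2):
    out = p(n1, n2) * (lam + (mu1 if n1 > 0 else 0) + (mu2 if n2 > 0 else 0))
    into = (lam * p(n1 - 1, n2)          # external arrival to queue 1
            + mu1 * p(n1 + 1, n2 - 1)    # transfer from queue 1 to queue 2
            + mu2 * p(n1, n2 + 1))       # departure from queue 2
    return abs(out - into)

print(max(balance_gap(a, b) for a in range(5) for b in range(5)))   # ~0
```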

141 Queues in Tandem: State-Dependent Service Rates. Theorem 12: Network consisting of K queues in tandem. Service times at queue i are exponential with rate µ_i(n_i) when there are n_i customers in the queue, independent of service times at any queue j ≠ i. Arrivals at the first queue are Poisson with rate λ. The stationary distribution of the network is:
p(n_1, …, n_K) = ∏_{i=1}^{K} p_i(n_i), n_i = 0,1,…; i = 1,…,K
where {p_i(n_i)} is the stationary distribution of queue i in isolation with Poisson arrivals of rate λ. Examples: ./M/c and ./M/∞ queues. If queue i is ./M/∞, then:
p_i(n_i) = ((λ/µ_i)^{n_i} / n_i!) e^{−λ/µ_i}, n_i = 0,1,…

142 Jackson Networks. Open Jackson Networks; Network Flows; State-Dependent Service Rates; Networks of Transmission Lines & Kleinrock's Assumption; Closed Jackson Networks.

143 Jackson Networks. Open Jackson Networks; Network Flows; State-Dependent Service Rates; Networks of Transmission Lines & Kleinrock's Assumption; Closed Jackson Networks.

144 Networks of./m/1 Queues k γ 1 r k r j j γ r 0 Network of K nodes; Node s./m/1-fcfs queue wth servce rate µ External arrvals ndependent Posson processes γ : rate of external arrvals at node Markovan routng: customer completng servce at node s routed to node j wth probablty r j or exts the network wth probablty r 0 =1- j r j Routng matrx R=[r j ] rreducble external arrvals eventually ext the system

145 Jackson Network. Definition: A Jackson network is the CTMC {N(t)}, with N(t) = (N_1(t), …, N_K(t)), that describes the evolution of the previously defined network, where N_i(t) = # of customers at node i. Possible states: n = (n_1, n_2, …, n_K), n_i = 0,1,2,…, i = 1,2,…,K. For any state n, define the following operators:
A_i n = n + e_i (arrival at i)
D_i n = n − e_i (departure from i)
T_ij n = n − e_i + e_j (transition from i to j)
where e_i = (0,0,…,1,0,…,0) is the unit vector of length K with the i-th position being 1. Transition rates for the Jackson network:
q(n, A_i n) = γ_i
q(n, D_i n) = µ_i r_i0 1{n_i > 0}
q(n, T_ij n) = µ_i r_ij 1{n_i > 0}, i, j = 1,…,K, j ≠ i
while q(n, m) = 0 for all other states m.

146 Jackson's Theorem for Open Networks. λ_i: total arrival rate at node i. Open network: for some node j, γ_j > 0. Traffic equations:
λ_i = γ_i + Σ_{j=1}^{K} λ_j r_ji, i = 1,…,K
Routing matrix irreducible => the linear system has a unique solution λ_1, λ_2, …, λ_K. Theorem 13: Consider a Jackson network where ρ_i = λ_i/µ_i < 1 for every node i. The stationary distribution of the network is
p(n) = ∏_{i=1}^{K} p_i(n_i), n_1, …, n_K ≥ 0
where for every node i = 1,2,…,K: p_i(n_i) = (1−ρ_i)ρ_i^{n_i}, n_i ≥ 0.
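A small worked example (all rates and routing probabilities below are illustrative): solve the traffic equations λ = γ + Rᵀλ, check stability, and evaluate the product form. The final check, Σ_i γ_i = Σ_i λ_i r_i0, is the rate-conservation identity used in the proof of the theorem.

```python
import numpy as np

# Illustrative 3-node open Jackson network.
gamma = np.array([1.0, 0.5, 0.0])        # external arrival rates
R = np.array([[0.0, 0.5, 0.3],           # r_ij: routing probabilities
              [0.2, 0.0, 0.4],
              [0.1, 0.0, 0.0]])          # r_i0 = 1 - sum_j r_ij (exit)
mu = np.array([4.0, 3.0, 3.0])           # service rates

# Traffic equations lambda_i = gamma_i + sum_j lambda_j r_ji:
lam = np.linalg.solve(np.eye(3) - R.T, gamma)
rho = lam / mu
assert (rho < 1).all()                    # stability condition of Theorem 13

def p(n):
    """Product-form stationary probability of state n = (n1, n2, n3)."""
    return np.prod((1 - rho) * rho**np.array(n))

print(lam)                                # total arrival rates at the nodes
print(p((0, 0, 0)))                       # probability of an empty network
```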

147 Jackson's Theorem (proof). Guess the reversed Markov chain and use Theorem 4. Claim: the network reversed in time is a Jackson network with the same service rates, while the arrival rates and routing probabilities are
γ*_i = λ_i r_i0, r*_ij = λ_j r_ji / λ_i, r*_i0 = γ_i / λ_i
Verify that for any states n and m ≠ n: p(m) q*(m, n) = p(n) q(n, m). Need to prove this only for m = A_i n, D_i n, T_ij n. We show the proof for the first two cases; the third is similar.
q*(A_i n, n) = q*(A_i n, D_i A_i n) = µ_i r*_i0 = µ_i (γ_i / λ_i)
p(A_i n) q*(A_i n, n) = p(n) q(n, A_i n) <=> p(A_i n) µ_i (γ_i / λ_i) = p(n) γ_i <=> p(A_i n) = ρ_i p(n)
q*(D_i n, n) = q*(D_i n, A_i D_i n) = γ*_i = λ_i r_i0
p(D_i n) q*(D_i n, n) = p(n) q(n, D_i n) <=> p(D_i n) λ_i r_i0 = p(n) µ_i r_i0 1{n_i > 0} <=> ρ_i p(D_i n) = p(n) 1{n_i > 0}
Both identities hold under the product form.

148 Jackson's Theorem (proof cont.). Finally, verify that for any state n: Σ_{m≠n} q(n, m) = Σ_{m≠n} q*(n, m).
Σ_{m≠n} q(n, m) = Σ_i γ_i + Σ_i µ_i r_i0 1{n_i > 0} + Σ_{i,j} µ_i r_ij 1{n_i > 0} = Σ_i γ_i + Σ_i µ_i [r_i0 + Σ_j r_ij] 1{n_i > 0} = Σ_i γ_i + Σ_i µ_i 1{n_i > 0}
Σ_{m≠n} q*(n, m) = Σ_i γ*_i + Σ_i µ_i 1{n_i > 0} = Σ_i λ_i r_i0 + Σ_i µ_i 1{n_i > 0}
Thus, we need to show that Σ_i γ_i = Σ_i λ_i r_i0:
Σ_i λ_i r_i0 = Σ_i λ_i (1 − Σ_j r_ij) = Σ_i λ_i − Σ_j Σ_i λ_i r_ij = Σ_i λ_i − Σ_j (λ_j − γ_j) = Σ_j γ_j.

149 Output Theorem for Jackson Networks. Theorem 14: The reversed chain of a stationary open Jackson network is also a stationary open Jackson network with the same service rates, while the arrival rates and routing probabilities are
γ*_i = λ_i r_i0, r*_ij = λ_j r_ji / λ_i, r*_i0 = γ_i / λ_i
Theorem 15: In a stationary open Jackson network, the departure process from the system at node i is Poisson with rate λ_i r_i0. The departure processes are independent of each other, and at any time t, their past up to t is independent of the state of the system N(t). Remarks: 1) The total arrival process at a given node is not Poisson, and the departure process from the node is not Poisson either; however, the process of the customers that exit the network at the node is Poisson. 2) In general, an open Jackson network need not be reversible.

150 Arrival Theorem in Open Jackson Networks. The composite arrival process at node i in an open Jackson network has the PASTA property, although it need not be a Poisson process. Theorem 16: In an open Jackson network at steady state, the probability that a composite arrival at node i finds n customers at that node is equal to the (unconditional) probability of n customers at that node:
p_i(n) = (1−ρ_i)ρ_i^n, n ≥ 0, i = 1,…,K
(Proof is omitted)

151 Jackson Networks. Open Jackson Networks; Network Flows; State-Dependent Service Rates; Networks of Transmission Lines & Kleinrock's Assumption; Closed Jackson Networks.


More information

Probability and Random Variable Primer

Probability and Random Variable Primer B. Maddah ENMG 622 Smulaton 2/22/ Probablty and Random Varable Prmer Sample space and Events Suppose that an eperment wth an uncertan outcome s performed (e.g., rollng a de). Whle the outcome of the eperment

More information

Network of Markovian Queues. Lecture

Network of Markovian Queues. Lecture etwork of Markovan Queues etwork of Markovan Queues ETW09 20 etwork queue ed, G E ETW09 20 λ If the frst queue was not empty Then the tme tll the next arrval to the second queue wll be equal to the servce

More information

n α j x j = 0 j=1 has a nontrivial solution. Here A is the n k matrix whose jth column is the vector for all t j=0

n α j x j = 0 j=1 has a nontrivial solution. Here A is the n k matrix whose jth column is the vector for all t j=0 MODULE 2 Topcs: Lnear ndependence, bass and dmenson We have seen that f n a set of vectors one vector s a lnear combnaton of the remanng vectors n the set then the span of the set s unchanged f that vector

More information

Chapter 7 Channel Capacity and Coding

Chapter 7 Channel Capacity and Coding Wreless Informaton Transmsson System Lab. Chapter 7 Channel Capacty and Codng Insttute of Communcatons Engneerng atonal Sun Yat-sen Unversty Contents 7. Channel models and channel capacty 7.. Channel models

More information

CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE

CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE Analytcal soluton s usually not possble when exctaton vares arbtrarly wth tme or f the system s nonlnear. Such problems can be solved by numercal tmesteppng

More information

j) = 1 (note sigma notation) ii. Continuous random variable (e.g. Normal distribution) 1. density function: f ( x) 0 and f ( x) dx = 1

j) = 1 (note sigma notation) ii. Continuous random variable (e.g. Normal distribution) 1. density function: f ( x) 0 and f ( x) dx = 1 Random varables Measure of central tendences and varablty (means and varances) Jont densty functons and ndependence Measures of assocaton (covarance and correlaton) Interestng result Condtonal dstrbutons

More information

TCOM 501: Networking Theory & Fundamentals. Lecture 6 February 19, 2003 Prof. Yannis A. Korilis

TCOM 501: Networking Theory & Fundamentals. Lecture 6 February 19, 2003 Prof. Yannis A. Korilis TCOM 50: Networking Theory & Fundamentals Lecture 6 February 9, 003 Prof. Yannis A. Korilis 6- Topics Time-Reversal of Markov Chains Reversibility Truncating a Reversible Markov Chain Burke s Theorem Queues

More information

Changing Topology and Communication Delays

Changing Topology and Communication Delays Prepared by F.L. Lews Updated: Saturday, February 3, 00 Changng Topology and Communcaton Delays Changng Topology The graph connectvty or topology may change over tme. Let G { G, G,, G M } wth M fnte be

More information

Bounds on the bias terms for the Markov reward approach

Bounds on the bias terms for the Markov reward approach Bounds on the bas terms for the Markov reward approach Xnwe Ba 1 and Jasper Goselng 1 arxv:1901.00677v1 [math.pr] 3 Jan 2019 1 Department of Appled Mathematcs, Unversty of Twente, P.O. Box 217, 7500 AE

More information

COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS

COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS Avalable onlne at http://sck.org J. Math. Comput. Sc. 3 (3), No., 6-3 ISSN: 97-537 COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS

More information

Lecture 21: Numerical methods for pricing American type derivatives

Lecture 21: Numerical methods for pricing American type derivatives Lecture 21: Numercal methods for prcng Amercan type dervatves Xaoguang Wang STAT 598W Aprl 10th, 2014 (STAT 598W) Lecture 21 1 / 26 Outlne 1 Fnte Dfference Method Explct Method Penalty Method (STAT 598W)

More information

Linear, affine, and convex sets and hulls In the sequel, unless otherwise specified, X will denote a real vector space.

Linear, affine, and convex sets and hulls In the sequel, unless otherwise specified, X will denote a real vector space. Lnear, affne, and convex sets and hulls In the sequel, unless otherwse specfed, X wll denote a real vector space. Lnes and segments. Gven two ponts x, y X, we defne xy = {x + t(y x) : t R} = {(1 t)x +

More information

PERFORMANCE OF MULTICLASS MARKOVIAN QUEUEING NETWORKS VIA PIECEWISE LINEAR LYAPUNOV FUNCTIONS 1

PERFORMANCE OF MULTICLASS MARKOVIAN QUEUEING NETWORKS VIA PIECEWISE LINEAR LYAPUNOV FUNCTIONS 1 The Annals of Appled Probablty 2001, Vol. 11, No. 4, 1384 1428 PERFORMANCE OF MULTICLASS MARKOVIAN QUEUEING NETWORKS VIA PIECEWISE LINEAR LYAPUNOV FUNCTIONS 1 By Dmtrs Bertsmas, Davd Gamarnk and John N.

More information

Solutions to exam in SF1811 Optimization, Jan 14, 2015

Solutions to exam in SF1811 Optimization, Jan 14, 2015 Solutons to exam n SF8 Optmzaton, Jan 4, 25 3 3 O------O -4 \ / \ / The network: \/ where all lnks go from left to rght. /\ / \ / \ 6 O------O -5 2 4.(a) Let x = ( x 3, x 4, x 23, x 24 ) T, where the varable

More information

A note on almost sure behavior of randomly weighted sums of φ-mixing random variables with φ-mixing weights

A note on almost sure behavior of randomly weighted sums of φ-mixing random variables with φ-mixing weights ACTA ET COMMENTATIONES UNIVERSITATIS TARTUENSIS DE MATHEMATICA Volume 7, Number 2, December 203 Avalable onlne at http://acutm.math.ut.ee A note on almost sure behavor of randomly weghted sums of φ-mxng

More information

Linear Regression Analysis: Terminology and Notation

Linear Regression Analysis: Terminology and Notation ECON 35* -- Secton : Basc Concepts of Regresson Analyss (Page ) Lnear Regresson Analyss: Termnology and Notaton Consder the generc verson of the smple (two-varable) lnear regresson model. It s represented

More information

MATH 5630: Discrete Time-Space Model Hung Phan, UMass Lowell March 1, 2018

MATH 5630: Discrete Time-Space Model Hung Phan, UMass Lowell March 1, 2018 MATH 5630: Dscrete Tme-Space Model Hung Phan, UMass Lowell March, 08 Newton s Law of Coolng Consder the coolng of a well strred coffee so that the temperature does not depend on space Newton s law of collng

More information

Module 2. Random Processes. Version 2 ECE IIT, Kharagpur

Module 2. Random Processes. Version 2 ECE IIT, Kharagpur Module Random Processes Lesson 6 Functons of Random Varables After readng ths lesson, ou wll learn about cdf of functon of a random varable. Formula for determnng the pdf of a random varable. Let, X be

More information

Composite Hypotheses testing

Composite Hypotheses testing Composte ypotheses testng In many hypothess testng problems there are many possble dstrbutons that can occur under each of the hypotheses. The output of the source s a set of parameters (ponts n a parameter

More information

Queuing system theory

Queuing system theory Elements of queung system: Queung system theory Every queung system conssts of three elements: An arrval process: s characterzed by the dstrbuton of tme between the arrval of successve customers, the mean

More information

PHYS 705: Classical Mechanics. Calculus of Variations II

PHYS 705: Classical Mechanics. Calculus of Variations II 1 PHYS 705: Classcal Mechancs Calculus of Varatons II 2 Calculus of Varatons: Generalzaton (no constrant yet) Suppose now that F depends on several dependent varables : We need to fnd such that has a statonary

More information

Probability Theory (revisited)

Probability Theory (revisited) Probablty Theory (revsted) Summary Probablty v.s. plausblty Random varables Smulaton of Random Experments Challenge The alarm of a shop rang. Soon afterwards, a man was seen runnng n the street, persecuted

More information

AN EXTENDED CLASS OF TIME-CONTINUOUS BRANCHING PROCESSES. Rong-Rong Chen. ( University of Illinois at Urbana-Champaign)

AN EXTENDED CLASS OF TIME-CONTINUOUS BRANCHING PROCESSES. Rong-Rong Chen. ( University of Illinois at Urbana-Champaign) AN EXTENDED CLASS OF TIME-CONTINUOUS BRANCHING PROCESSES Rong-Rong Chen ( Unversty of Illnos at Urbana-Champagn Abstract. Ths paper s devoted to studyng an extended class of tme-contnuous branchng processes,

More information

Inner Product. Euclidean Space. Orthonormal Basis. Orthogonal

Inner Product. Euclidean Space. Orthonormal Basis. Orthogonal Inner Product Defnton 1 () A Eucldean space s a fnte-dmensonal vector space over the reals R, wth an nner product,. Defnton 2 (Inner Product) An nner product, on a real vector space X s a symmetrc, blnear,

More information

Application of Queuing Theory to Waiting Time of Out-Patients in Hospitals.

Application of Queuing Theory to Waiting Time of Out-Patients in Hospitals. Applcaton of Queung Theory to Watng Tme of Out-Patents n Hosptals. R.A. Adeleke *, O.D. Ogunwale, and O.Y. Hald. Department of Mathematcal Scences, Unversty of Ado-Ekt, Ado-Ekt, Ekt State, Ngera. E-mal:

More information

2.3 Nilpotent endomorphisms

2.3 Nilpotent endomorphisms s a block dagonal matrx, wth A Mat dm U (C) In fact, we can assume that B = B 1 B k, wth B an ordered bass of U, and that A = [f U ] B, where f U : U U s the restrcton of f to U 40 23 Nlpotent endomorphsms

More information

Lecture 4: November 17, Part 1 Single Buffer Management

Lecture 4: November 17, Part 1 Single Buffer Management Lecturer: Ad Rosén Algorthms for the anagement of Networs Fall 2003-2004 Lecture 4: November 7, 2003 Scrbe: Guy Grebla Part Sngle Buffer anagement In the prevous lecture we taled about the Combned Input

More information

Digital Signal Processing

Digital Signal Processing Dgtal Sgnal Processng Dscrete-tme System Analyss Manar Mohasen Offce: F8 Emal: manar.subh@ut.ac.r School of IT Engneerng Revew of Precedent Class Contnuous Sgnal The value of the sgnal s avalable over

More information

Week3, Chapter 4. Position and Displacement. Motion in Two Dimensions. Instantaneous Velocity. Average Velocity

Week3, Chapter 4. Position and Displacement. Motion in Two Dimensions. Instantaneous Velocity. Average Velocity Week3, Chapter 4 Moton n Two Dmensons Lecture Quz A partcle confned to moton along the x axs moves wth constant acceleraton from x =.0 m to x = 8.0 m durng a 1-s tme nterval. The velocty of the partcle

More information

THE ROYAL STATISTICAL SOCIETY 2006 EXAMINATIONS SOLUTIONS HIGHER CERTIFICATE

THE ROYAL STATISTICAL SOCIETY 2006 EXAMINATIONS SOLUTIONS HIGHER CERTIFICATE THE ROYAL STATISTICAL SOCIETY 6 EXAMINATIONS SOLUTIONS HIGHER CERTIFICATE PAPER I STATISTICAL THEORY The Socety provdes these solutons to assst canddates preparng for the eamnatons n future years and for

More information

Distributions /06. G.Serazzi 05/06 Dimensionamento degli Impianti Informatici distrib - 1

Distributions /06. G.Serazzi 05/06 Dimensionamento degli Impianti Informatici distrib - 1 Dstrbutons 8/03/06 /06 G.Serazz 05/06 Dmensonamento degl Impant Informatc dstrb - outlne densty, dstrbuton, moments unform dstrbuton Posson process, eponental dstrbuton Pareto functon densty and dstrbuton

More information

Statistical Inference. 2.3 Summary Statistics Measures of Center and Spread. parameters ( population characteristics )

Statistical Inference. 2.3 Summary Statistics Measures of Center and Spread. parameters ( population characteristics ) Ismor Fscher, 8//008 Stat 54 / -8.3 Summary Statstcs Measures of Center and Spread Dstrbuton of dscrete contnuous POPULATION Random Varable, numercal True center =??? True spread =???? parameters ( populaton

More information

Canonical transformations

Canonical transformations Canoncal transformatons November 23, 2014 Recall that we have defned a symplectc transformaton to be any lnear transformaton M A B leavng the symplectc form nvarant, Ω AB M A CM B DΩ CD Coordnate transformatons,

More information

Expected Value and Variance

Expected Value and Variance MATH 38 Expected Value and Varance Dr. Neal, WKU We now shall dscuss how to fnd the average and standard devaton of a random varable X. Expected Value Defnton. The expected value (or average value, or

More information

Engineering Risk Benefit Analysis

Engineering Risk Benefit Analysis Engneerng Rsk Beneft Analyss.55, 2.943, 3.577, 6.938, 0.86, 3.62, 6.862, 22.82, ESD.72, ESD.72 RPRA 2. Elements of Probablty Theory George E. Apostolaks Massachusetts Insttute of Technology Sprng 2007

More information

8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS

8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS SECTION 8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS 493 8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS All the vector spaces you have studed thus far n the text are real vector spaces because the scalars

More information

9 Derivation of Rate Equations from Single-Cell Conductance (Hodgkin-Huxley-like) Equations

9 Derivation of Rate Equations from Single-Cell Conductance (Hodgkin-Huxley-like) Equations Physcs 171/271 - Chapter 9R -Davd Klenfeld - Fall 2005 9 Dervaton of Rate Equatons from Sngle-Cell Conductance (Hodgkn-Huxley-lke) Equatons We consder a network of many neurons, each of whch obeys a set

More information

Dynamic Programming. Preview. Dynamic Programming. Dynamic Programming. Dynamic Programming (Example: Fibonacci Sequence)

Dynamic Programming. Preview. Dynamic Programming. Dynamic Programming. Dynamic Programming (Example: Fibonacci Sequence) /24/27 Prevew Fbonacc Sequence Longest Common Subsequence Dynamc programmng s a method for solvng complex problems by breakng them down nto smpler sub-problems. It s applcable to problems exhbtng the propertes

More information

College of Computer & Information Science Fall 2009 Northeastern University 20 October 2009

College of Computer & Information Science Fall 2009 Northeastern University 20 October 2009 College of Computer & Informaton Scence Fall 2009 Northeastern Unversty 20 October 2009 CS7880: Algorthmc Power Tools Scrbe: Jan Wen and Laura Poplawsk Lecture Outlne: Prmal-dual schema Network Desgn:

More information

Georgia Tech PHYS 6124 Mathematical Methods of Physics I

Georgia Tech PHYS 6124 Mathematical Methods of Physics I Georga Tech PHYS 624 Mathematcal Methods of Physcs I Instructor: Predrag Cvtanovć Fall semester 202 Homework Set #7 due October 30 202 == show all your work for maxmum credt == put labels ttle legends

More information

For now, let us focus on a specific model of neurons. These are simplified from reality but can achieve remarkable results.

For now, let us focus on a specific model of neurons. These are simplified from reality but can achieve remarkable results. Neural Networks : Dervaton compled by Alvn Wan from Professor Jtendra Malk s lecture Ths type of computaton s called deep learnng and s the most popular method for many problems, such as computer vson

More information

Hidden Markov Models & The Multivariate Gaussian (10/26/04)

Hidden Markov Models & The Multivariate Gaussian (10/26/04) CS281A/Stat241A: Statstcal Learnng Theory Hdden Markov Models & The Multvarate Gaussan (10/26/04) Lecturer: Mchael I. Jordan Scrbes: Jonathan W. Hu 1 Hdden Markov Models As a bref revew, hdden Markov models

More information

CHAPTER 17 Amortized Analysis

CHAPTER 17 Amortized Analysis CHAPTER 7 Amortzed Analyss In an amortzed analyss, the tme requred to perform a sequence of data structure operatons s averaged over all the operatons performed. It can be used to show that the average

More information

Econ107 Applied Econometrics Topic 3: Classical Model (Studenmund, Chapter 4)

Econ107 Applied Econometrics Topic 3: Classical Model (Studenmund, Chapter 4) I. Classcal Assumptons Econ7 Appled Econometrcs Topc 3: Classcal Model (Studenmund, Chapter 4) We have defned OLS and studed some algebrac propertes of OLS. In ths topc we wll study statstcal propertes

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 12 10/21/2013. Martingale Concentration Inequalities and Applications

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 12 10/21/2013. Martingale Concentration Inequalities and Applications MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.65/15.070J Fall 013 Lecture 1 10/1/013 Martngale Concentraton Inequaltes and Applcatons Content. 1. Exponental concentraton for martngales wth bounded ncrements.

More information

Transfer Functions. Convenient representation of a linear, dynamic model. A transfer function (TF) relates one input and one output: ( ) system

Transfer Functions. Convenient representation of a linear, dynamic model. A transfer function (TF) relates one input and one output: ( ) system Transfer Functons Convenent representaton of a lnear, dynamc model. A transfer functon (TF) relates one nput and one output: x t X s y t system Y s The followng termnology s used: x y nput output forcng

More information

Module 9. Lecture 6. Duality in Assignment Problems

Module 9. Lecture 6. Duality in Assignment Problems Module 9 1 Lecture 6 Dualty n Assgnment Problems In ths lecture we attempt to answer few other mportant questons posed n earler lecture for (AP) and see how some of them can be explaned through the concept

More information

Randomness and Computation

Randomness and Computation Randomness and Computaton or, Randomzed Algorthms Mary Cryan School of Informatcs Unversty of Ednburgh RC 208/9) Lecture 0 slde Balls n Bns m balls, n bns, and balls thrown unformly at random nto bns usually

More information

Lecture 17 : Stochastic Processes II

Lecture 17 : Stochastic Processes II : Stochastc Processes II 1 Contnuous-tme stochastc process So far we have studed dscrete-tme stochastc processes. We studed the concept of Makov chans and martngales, tme seres analyss, and regresson analyss

More information

2E Pattern Recognition Solutions to Introduction to Pattern Recognition, Chapter 2: Bayesian pattern classification

2E Pattern Recognition Solutions to Introduction to Pattern Recognition, Chapter 2: Bayesian pattern classification E395 - Pattern Recognton Solutons to Introducton to Pattern Recognton, Chapter : Bayesan pattern classfcaton Preface Ths document s a soluton manual for selected exercses from Introducton to Pattern Recognton

More information

princeton univ. F 13 cos 521: Advanced Algorithm Design Lecture 3: Large deviations bounds and applications Lecturer: Sanjeev Arora

princeton univ. F 13 cos 521: Advanced Algorithm Design Lecture 3: Large deviations bounds and applications Lecturer: Sanjeev Arora prnceton unv. F 13 cos 521: Advanced Algorthm Desgn Lecture 3: Large devatons bounds and applcatons Lecturer: Sanjeev Arora Scrbe: Today s topc s devaton bounds: what s the probablty that a random varable

More information

REAL ANALYSIS I HOMEWORK 1

REAL ANALYSIS I HOMEWORK 1 REAL ANALYSIS I HOMEWORK CİHAN BAHRAN The questons are from Tao s text. Exercse 0.0.. If (x α ) α A s a collecton of numbers x α [0, + ] such that x α

More information

Numerical Heat and Mass Transfer

Numerical Heat and Mass Transfer Master degree n Mechancal Engneerng Numercal Heat and Mass Transfer 06-Fnte-Dfference Method (One-dmensonal, steady state heat conducton) Fausto Arpno f.arpno@uncas.t Introducton Why we use models and

More information