COUNTABLE-STATE MARKOV CHAINS


Chapter 5

5.1 Introduction and classification of states

Markov chains with a countably-infinite state space (more briefly, countable-state Markov chains) exhibit some types of behavior not possible for chains with a finite state space. With the exception of the first example to follow and the section on branching processes, we label the states by the nonnegative integers. This is appropriate when modeling things such as the number of customers in a queue, and causes no loss of generality in other cases. The following two examples give some insight into the new issues posed by countable state spaces.

Example 5.1.1. Consider the familiar Bernoulli process $\{S_n = X_1 + \cdots + X_n;\ n \geq 1\}$ where $\{X_n;\ n \geq 1\}$ is an IID binary sequence with $p_X(1) = p$ and $p_X(-1) = (1-p) = q$. The sequence $\{S_n;\ n \geq 1\}$ is a sequence of integer random variables (rv's) where $S_n = S_{n-1} + 1$ with probability $p$ and $S_n = S_{n-1} - 1$ with probability $q$. This sequence can be modeled by the Markov chain in Figure 5.1.

Figure 5.1: A Markov chain with a countable state space modeling a Bernoulli process. The states are $\ldots, -2, -1, 0, 1, 2, \ldots$; each state has a transition to the right with probability $p$ and to the left with probability $q = 1 - p$.

If $p > 1/2$, then as time $n$ increases, the state $X_n$ becomes large with high probability, i.e., $\lim_{n\to\infty} \Pr\{X_n \geq j\} = 1$ for each integer $j$. Similarly, for $p < 1/2$, the state becomes highly negative. Using the notation of Markov chains, $P^n_{0j}$ is the probability of being in state $j$ at the end of the $n$th transition, conditional on starting in state 0. The final state $j$ is the number of positive transitions $k$ less the number of negative transitions $n-k$, i.e., $j = 2k - n$. Thus,

using the binomial formula,

$$P^n_{0j} = \binom{n}{k} p^k q^{n-k} \quad \text{where } k = \frac{j+n}{2};\quad j+n \text{ even.} \qquad (5.1)$$

All states in this Markov chain communicate with all other states, and are thus in the same class. The formula makes it clear that this class, i.e., the entire set of states in the Markov chain, is periodic with period 2. For $n$ even, the state is even and for $n$ odd, the state is odd.

What is more important than the periodicity, however, is what happens to the state probabilities for large $n$. As we saw in (1.88) while proving the central limit theorem for the binomial case,

$$P^n_{0j} \approx \frac{1}{\sqrt{2\pi n p q}}\, \exp\!\left[-\frac{(k - np)^2}{2npq}\right] \quad \text{where } k = \frac{j+n}{2};\quad j+n \text{ even.} \qquad (5.2)$$

In other words, $P^n_{0j}$, as a function of $j$, looks like a quantized form of the Gaussian density for large $n$. The significant terms of that distribution are close to $k = np$, i.e., to $j = n(2p-1)$. For $p > 1/2$, the state increases with increasing $n$. Its distribution is centered at $n(2p-1)$, but the distribution is also spreading out as $\sqrt{n}$. For $p < 1/2$, the state similarly decreases and spreads out. The most interesting case is $p = 1/2$, where the distribution remains centered at 0, but due to the spreading, the PMF approaches 0 as $1/\sqrt{n}$ for all $j$.

For this example, then, the probability of each state approaches zero as $n \to \infty$, and this holds for all choices of $p$, $0 < p < 1$. If we attempt to define a steady-state probability as 0 for each state, then these probabilities do not sum to 1, so they cannot be viewed as a steady-state distribution. Thus, for countable-state Markov chains, the notions of recurrence and steady-state probabilities will have to be modified from those for finite-state Markov chains. The same type of situation occurs whenever $\{S_n;\ n \geq 1\}$ is a sequence of sums of arbitrary IID integer-valued rv's.

Most countable-state Markov chains that are useful in applications are quite different from Example 5.1.1, and instead are quite similar to finite-state Markov chains.
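The exact binomial expression (5.1) and the Gaussian approximation (5.2) are easy to compare numerically. The sketch below (the function names are ours, not the text's) evaluates both for the chain of Figure 5.1:

```python
import math

def p_0j(n, j, p):
    """Exact n-step probability P^n_{0j} from (5.1): the walk ends at j
    after k = (j+n)/2 up-steps; zero unless j+n is even and |j| <= n."""
    if (j + n) % 2 != 0 or abs(j) > n:
        return 0.0
    k = (j + n) // 2
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def gauss_0j(n, j, p):
    """Local Gaussian approximation (5.2) to P^n_{0j}."""
    if (j + n) % 2 != 0:
        return 0.0
    k, q = (j + n) // 2, 1 - p
    return math.exp(-(k - n*p)**2 / (2*n*p*q)) / math.sqrt(2*math.pi*n*p*q)

# for p = 1/2 and n = 1000 the PMF is centered at 0 with width ~ sqrt(n)
for j in (0, 20, 60):
    print(j, p_0j(1000, j, 0.5), gauss_0j(1000, j, 0.5))
```

For $n = 1000$ the two expressions agree to several digits; summing `p_0j` over all $j$ gives 1, while each individual term is $O(1/\sqrt{n})$, which is exactly why no steady-state distribution exists here.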
The following example bears a close resemblance to Example 5.1.1, but at the same time is a countable-state Markov chain that will keep reappearing in a large number of contexts. It is a special case of a birth-death process, which we study in Section 5.2.

Example 5.1.2. Figure 5.2 is similar to Figure 5.1 except that the negative states have been eliminated. A sequence of IID binary rv's $\{X_n;\ n \geq 1\}$, with $p_X(1) = p$ and $p_X(-1) = q = 1-p$, controls the state transitions. Now, however, $S_n = \max(0, S_{n-1} + X_n)$, so that $S_n$ is a nonnegative rv. All states again communicate, and because of the self transition at state 0, the chain is aperiodic. For $p > 1/2$, transitions to the right occur with higher frequency than transitions to the left. Thus, reasoning heuristically, we expect the state $S_n$ at time $n$ to drift to the right with increasing $n$. Given $S_0 = 0$, the probability $P^n_{0j}$ of being in state $j$ at time $n$ should then tend to zero for any fixed $j$ with increasing $n$. As in Example 5.1.1, we see that a

steady state does not exist. In more poetic terms, the state wanders off into the wild blue yonder.

Figure 5.2: A Markov chain with a countable state space: states $0, 1, 2, 3, 4, \ldots$, with transitions to the right with probability $p$, to the left with probability $q = 1-p$, and a self transition at state 0 with probability $q$. If $p > 1/2$, then as time $n$ increases, the state $X_n$ becomes large with high probability, i.e., $\lim_{n\to\infty} \Pr\{X_n \geq j\} = 1$ for each integer $j$.

One way to understand this chain better is to look at what happens if the chain is truncated. The truncation of Figure 5.2 to $k$ states is analyzed in Exercise 3.9. The solution there defines $\rho = p/q$ and shows that if $\rho \neq 1$, then $\pi_i = (1-\rho)\rho^i/(1-\rho^k)$ for each $i$, $0 \leq i < k$. For $\rho = 1$, $\pi_i = 1/k$ for each $i$. For $\rho < 1$, the limiting behavior as $k \to \infty$ is $\pi_i = (1-\rho)\rho^i$. Thus for $\rho < 1$ ($p < 1/2$), the steady-state probabilities for the truncated Markov chain approach a limit which we later interpret as the steady-state probabilities for the untruncated chain. For $\rho > 1$ ($p > 1/2$), on the other hand, the steady-state probabilities for the truncated case are geometrically decreasing from the right, and the states with significant probability keep moving to the right as $k$ increases. Although the probability of each fixed state $j$ approaches 0 as $k$ increases, the truncated chain never resembles the untruncated chain.

Perhaps the most interesting case is that where $p = 1/2$. The $n$th order transition probabilities, $P^n_{0j}$, can be calculated exactly for this case (see Exercise 5.3) and are very similar to those of Example 5.1.1. In particular,

$$P^n_{0j} = \begin{cases} \binom{n}{(j+n)/2}\, 2^{-n} & \text{for } j \geq 0,\ (j+n) \text{ even} \\ \binom{n}{(j+n+1)/2}\, 2^{-n} & \text{for } j \geq 0,\ (j+n) \text{ odd} \end{cases} \qquad (5.3)$$

$$\approx \sqrt{\frac{2}{\pi n}}\, \exp\!\left[-\frac{j^2}{2n}\right] \quad \text{for } j \geq 0. \qquad (5.4)$$

We see that $P^n_{0j}$ for large $n$ is approximated by the positive side of a quantized Gaussian distribution. It looks like the positive side of the PMF of (5.1) except that it is no longer periodic. For large $n$, $P^n_{0j}$ is concentrated in a region of width $\sqrt{n}$ around $j = 0$, and the PMF goes to 0 as $1/\sqrt{n}$ for each $j$ as $n \to \infty$. Fortunately, the strange behavior of Figure 5.2 when $p \geq q$ is not typical of the Markov chains of interest for most applications.
For typical countable-state Markov chains, a steady state does exist, and the steady-state probabilities of all but a finite number of states (the number depending on the chain and the application) can almost be ignored for numerical calculations.
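The truncated-chain limit described above can be checked numerically. The sketch below is our own construction, assuming the $k$-state truncation keeps the blocked transitions as self-loops at the boundary states (one reading of the truncation in Exercise 3.9); it compares the long-run distribution with the quoted formula $\pi_i = (1-\rho)\rho^i/(1-\rho^k)$:

```python
import numpy as np

def truncated_chain(k, p):
    """k-state truncation of the chain of Figure 5.2 (states 0..k-1);
    blocked moves at the two ends become self-transitions."""
    q = 1 - p
    P = np.zeros((k, k))
    for i in range(k):
        if i + 1 < k:
            P[i, i + 1] = p
        else:
            P[i, i] += p          # right wall
        if i >= 1:
            P[i, i - 1] = q
        else:
            P[i, i] += q          # self-loop at state 0
    return P

k, p = 20, 0.4
P = truncated_chain(k, p)
pi = np.linalg.matrix_power(P, 5000)[0]      # row 0 of P^n approximates steady state
rho = p / (1 - p)
formula = (1 - rho) * rho**np.arange(k) / (1 - rho**k)
print(np.max(np.abs(pi - formula)))
```

For $\rho < 1$ the entries are already close to the untruncated limit $(1-\rho)\rho^i$; rerunning with $p = 0.6$ shows the probability mass piling up against the right wall instead, as described in the text.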

Using renewal theory to classify and analyze Markov chains

The matrix approach used to analyze finite-state Markov chains does not generalize easily to the countable-state case. Fortunately, renewal theory is ideally suited for this purpose, especially for analyzing the long-term behavior of countable-state Markov chains. We must first revise the definition of recurrent states. The definition for finite-state Markov chains does not apply here, and we will see that, under the new definition, the Markov chain in Figure 5.2 is recurrent for $p \leq 1/2$ and transient for $p > 1/2$. For $p = 1/2$, the chain is called null-recurrent, as explained later.

In general, we will find that for a recurrent state $j$, the sequence of subsequent entries to state $j$, conditional on starting in $j$, forms a renewal process. The renewal theorems then specify the time-average relative-frequency of state $j$, the limiting probability of $j$ with increasing time, and a number of other relations.

We also want to understand the sequence of epochs at which one state, say $j$, is entered, conditional on starting the chain at some other state, say $i$. We will see that, subject to the classification of states $i$ and $j$, this gives rise to a delayed renewal process. In preparing to study these renewal processes and delayed renewal processes, we need to understand the inter-renewal intervals. The probability mass functions (PMF's) of these intervals are called first-passage-time probabilities in the notation of Markov chains.

Definition 5.1.1. The first-passage-time probability, $f_{ij}(n)$, of a Markov chain is the probability, conditional on $X_0 = i$, that the first subsequent entry to state $j$ occurs at discrete epoch $n$. That is, $f_{ij}(1) = P_{ij}$ and for $n \geq 2$,

$$f_{ij}(n) = \Pr\{X_n = j,\, X_{n-1} \neq j,\, X_{n-2} \neq j, \ldots, X_1 \neq j \mid X_0 = i\}. \qquad (5.5)$$

The distinction between $f_{ij}(n)$ and $P^n_{ij} = \Pr\{X_n = j \mid X_0 = i\}$ is that $f_{ij}(n)$ is the probability that the first entry to $j$ (after time 0) occurs at time $n$, whereas $P^n_{ij}$ is the probability that any entry to $j$ occurs at time $n$, both conditional on starting in state $i$ at time 0.
The definition in (5.5) also applies for $j = i$; $f_{ii}(n)$ is thus the probability, given $X_0 = i$, that the first occurrence of state $i$ after time 0 occurs at time $n$. Since the transition probabilities are independent of time, $f_{kj}(n-1)$ is also the probability, given $X_1 = k$, that the first subsequent occurrence of state $j$ occurs at time $n$. Thus we can calculate $f_{ij}(n)$ from the iterative relations

$$f_{ij}(n) = \sum_{k \neq j} P_{ik}\, f_{kj}(n-1);\quad n > 1;\qquad f_{ij}(1) = P_{ij}. \qquad (5.6)$$

Note that the sum excludes $k = j$, since $P_{ij}\, f_{jj}(n-1)$ is the probability that state $j$ occurs first at epoch 1 and next at epoch $n$. Note also from the Chapman-Kolmogorov equation that $P^n_{ij} = \sum_k P_{ik} P^{n-1}_{kj}$. In other words, the only difference between the iterative expressions to calculate $f_{ij}(n)$ and $P^n_{ij}$ is the exclusion of $k = j$ in the expression for $f_{ij}(n)$. With this iterative approach, the first-passage-time probabilities $f_{ij}(n)$ for a given $n$ must be calculated for all $i$ before proceeding to calculate them for the next larger value of $n$. This also gives us $f_{jj}(n)$, although $f_{jj}(n)$ is not used in the iteration.
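The recursion (5.6) translates directly into a short vectorized computation. The chain below is a hypothetical 3-state example of ours, used only to exercise the iteration:

```python
import numpy as np

def first_passage_pmf(P, j, nmax):
    """f_ij(n) for all i and n = 1..nmax via (5.6): zero out column j of P,
    then f(n) = P' f(n-1), starting from f(1) = column j of P."""
    f = np.zeros((nmax + 1, P.shape[0]))
    f[1] = P[:, j]
    Pk = P.copy()
    Pk[:, j] = 0.0                 # the sum in (5.6) excludes k = j
    for n in range(2, nmax + 1):
        f[n] = Pk @ f[n - 1]
    return f[1:]                   # row n-1, column i holds f_ij(n)

P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.0, 0.4, 0.6]])
f = first_passage_pmf(P, j=0, nmax=500)
print(f[:, 0].sum())               # F_00(500); near 1 since this chain is recurrent
```

Keeping the full $P$ for the $n$-step probabilities and the column-deleted $P$ for the first-passage probabilities mirrors the remark in the text: the two iterations differ only in the exclusion of $k = j$.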

Let $F_{ij}(n)$, for $n \geq 1$, be the probability, given $X_0 = i$, that state $j$ occurs at some time between 1 and $n$ inclusive. Thus,

$$F_{ij}(n) = \sum_{m=1}^{n} f_{ij}(m). \qquad (5.7)$$

For each $i$, $j$, $F_{ij}(n)$ is non-decreasing in $n$ and (since it is a probability) is upper bounded by 1. Thus $F_{ij}(\infty)$ (i.e., $\lim_{n\to\infty} F_{ij}(n)$) must exist, and is the probability, given $X_0 = i$, that state $j$ will ever occur. If $F_{ij}(\infty) = 1$, then, given $X_0 = i$, it is certain (with probability 1) that the chain will eventually enter state $j$. In this case, we can define a random variable (rv) $T_{ij}$, conditional on $X_0 = i$, as the first-passage time from $i$ to $j$. Then $f_{ij}(n)$ is the PMF of $T_{ij}$ and $F_{ij}(n)$ is the distribution function of $T_{ij}$. If $F_{ij}(\infty) < 1$, then $T_{ij}$ is a defective rv, since, with some non-zero probability, there is no first passage to $j$. Defective rv's are not considered to be rv's (in the theorems here or elsewhere), but they do have many of the properties of rv's.

The first-passage time $T_{jj}$ from a state $j$ back to itself is of particular importance. It has the PMF $f_{jj}(n)$ and the distribution function $F_{jj}(n)$. It is a rv (as opposed to a defective rv) if $F_{jj}(\infty) = 1$, i.e., if the state eventually returns to state $j$ with probability 1 given that it starts in state $j$. This leads to the definition of recurrence.

Definition 5.1.2. A state $j$ in a countable-state Markov chain is recurrent if $F_{jj}(\infty) = 1$. It is transient if $F_{jj}(\infty) < 1$.

Thus each state $j$ in a countable-state Markov chain is either recurrent or transient, and is recurrent if and only if an eventual return to $j$ (conditional on $X_0 = j$) occurs with probability 1. Equivalently, $j$ is recurrent if and only if $T_{jj}$, the time of first return to $j$, is a rv. Note that for the special case of finite-state Markov chains, this definition is consistent with the one in Chapter 3. For a countably-infinite state space, however, the earlier definition is not adequate. An example is provided by the case $p > 1/2$ in Figure 5.2.
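As a numerical illustration (ours, not the text's), the recursion (5.6) applied to the chain of Figure 5.2 makes the transience quantitative. For $p > 1/2$, a standard gambler's-ruin argument gives probability $q/p$ of ever stepping from state 1 down to state 0, so $F_{00}(\infty) = q + p(q/p) = 2(1-p) < 1$; Exercise 5.2 treats this in full.

```python
import numpy as np

def F00(p, nmax):
    """F_00(nmax) for the Figure 5.2 chain via (5.6). States 0..nmax+1
    suffice, since the walk cannot move farther than nmax in nmax steps."""
    q = 1 - p
    f = np.zeros(nmax + 3)
    f[0], f[1] = q, q              # f_k0(1) = P_k0
    total = f[0]
    for _ in range(2, nmax + 1):
        new = np.zeros_like(f)
        new[0] = p * f[1]          # from 0: only k = 1 allowed (k = 0 excluded)
        new[1] = p * f[2]          # from 1: the step down to 0 ends the passage
        new[2:-1] = q * f[1:-2] + p * f[3:]
        f = new
        total += f[0]
    return total

print(F00(0.6, 2000))   # transient: approaches 2(1-p) = 0.8
print(F00(0.4, 2000))   # recurrent: approaches 1
```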
Here $i$ and $j$ communicate for all states $i$ and $j$, but it is intuitively obvious (and shown in Exercise 5.2, and further explained in Section 5.2) that each state is transient.

If the initial state $X_0$ of a Markov chain is a recurrent state $j$, then $T_{jj}$ is the integer time of the first recurrence of state $j$. At that recurrence, the Markov chain is in the same state $j$ as it started in, and the discrete interval from $T_{jj}$ to the next occurrence of state $j$, say $T_{jj,2}$, has the same distribution as $T_{jj}$ and is clearly independent of $T_{jj}$. Similarly, the sequence of successive recurrence intervals, $T_{jj}, T_{jj,2}, T_{jj,3}, \ldots$ is a sequence of IID rv's. This sequence of recurrence intervals¹ is then the sequence of inter-renewal intervals of a renewal process, where each renewal interval has the distribution of $T_{jj}$. These inter-renewal intervals have the PMF $f_{jj}(n)$ and the distribution function $F_{jj}(n)$.

Since results about Markov chains depend very heavily on whether states are recurrent or transient, we will look carefully at the probabilities $F_{ij}(n)$. Substituting (5.6) into (5.7), we

¹ Note that in Chapter 4 the inter-renewal intervals were denoted $X_1, X_2, \ldots$, whereas here $X_0, X_1, \ldots$ is the sequence of states in the Markov chain and $T_{jj}, T_{jj,2}, \ldots$ is the sequence of inter-renewal intervals.

obtain

$$F_{ij}(n) = P_{ij} + \sum_{k \neq j} P_{ik}\, F_{kj}(n-1);\quad n > 1;\qquad F_{ij}(1) = P_{ij}. \qquad (5.8)$$

To understand the expression $P_{ij} + \sum_{k \neq j} P_{ik} F_{kj}(n-1)$, note that the first term, $P_{ij}$, is $f_{ij}(1)$ and the second term, $\sum_{k \neq j} P_{ik} F_{kj}(n-1)$, is equal to $\sum_{\ell=2}^{n} f_{ij}(\ell)$.

We have seen that $F_{ij}(n)$ is non-decreasing in $n$ and upper bounded by 1, so the limit $F_{ij}(\infty)$ must exist. Similarly, $\sum_{k \neq j} P_{ik} F_{kj}(n-1)$ is non-decreasing in $n$ and upper bounded by 1, so it also has a limit, equal to $\sum_{k \neq j} P_{ik} F_{kj}(\infty)$. Thus

$$F_{ij}(\infty) = P_{ij} + \sum_{k \neq j} P_{ik}\, F_{kj}(\infty). \qquad (5.9)$$

For any given $j$, (5.9) can be viewed as a set of linear equations in the variables $F_{ij}(\infty)$ for each state $i$. There is not always a unique solution to this set of equations. In fact, the set of equations

$$x_{ij} = P_{ij} + \sum_{k \neq j} P_{ik}\, x_{kj};\quad \text{all states } i \qquad (5.10)$$

always has a solution in which $x_{ij} = 1$ for all $i$. If state $j$ is transient, however, there is another solution in which $x_{ij}$ is the true value of $F_{ij}(\infty)$ and $F_{jj}(\infty) < 1$. Exercise 5.1 shows that if (5.10) is satisfied by a set of nonnegative numbers $\{x_{ij};\ 1 \leq i \leq J\}$, then $F_{ij}(\infty) \leq x_{ij}$ for each $i$.

We have defined a state $j$ to be recurrent if $F_{jj}(\infty) = 1$ and have seen that if $j$ is recurrent, then the returns to state $j$, given $X_0 = j$, form a renewal process. All of the results of renewal theory can then be applied to the random sequence of integer times at which $j$ is entered. The main results from renewal theory that we need are stated in the following lemma.

Lemma 5.1.1. Let $\{N_{jj}(t);\ t \geq 0\}$ be the counting process for occurrences of state $j$ up to time $t$ in a Markov chain with $X_0 = j$. The following conditions are then equivalent.

1. State $j$ is recurrent.
2. $\lim_{t\to\infty} N_{jj}(t) = \infty$ with probability 1.
3. $\lim_{t\to\infty} \mathrm{E}[N_{jj}(t)] = \lim_{t\to\infty} \sum_{1 \leq n \leq t} P^n_{jj} = \infty$.

Proof: First assume that $j$ is recurrent, i.e., that $F_{jj}(\infty) = 1$. This implies that the inter-renewal times between occurrences of $j$ are IID rv's, and consequently $\{N_{jj}(t);\ t \geq 1\}$ is a renewal counting process.
Recall from the corresponding lemma of Chapter 4 that, whether or not the expected inter-renewal time $\mathrm{E}[T_{jj}]$ is finite, $\lim_{t\to\infty} N_{jj}(t) = \infty$ with probability 1 and $\lim_{t\to\infty} \mathrm{E}[N_{jj}(t)] = \infty$.

Next assume that state $j$ is transient. In this case, the inter-renewal time $T_{jj}$ is not a rv, so $\{N_{jj}(t);\ t \geq 0\}$ is not a renewal process. An eventual return to state $j$ occurs only with probability $F_{jj}(\infty) < 1$, and, since subsequent returns are independent, the total number of returns to state $j$ is a geometric rv with mean $F_{jj}(\infty)/[1 - F_{jj}(\infty)]$. Thus the total number of returns is finite with probability 1 and the expected total number of returns is finite. This establishes the first three equivalences.

Finally, note that $P^n_{jj}$, the probability of a transition to state $j$ at integer time $n$, is equal to the expectation of a transition to $j$ at integer time $n$ (i.e., a single transition occurs with probability $P^n_{jj}$ and 0 occurs otherwise). Since $N_{jj}(t)$ is the sum of the number of transitions to $j$ over times 1 to $t$, we have

$$\mathrm{E}[N_{jj}(t)] = \sum_{1 \leq n \leq t} P^n_{jj},$$

which establishes the final equivalence.

Our next objective is to show that all states in the same class as a recurrent state are also recurrent. Recall that two states are in the same class if they communicate, i.e., each has a path to the other. For finite-state Markov chains, the fact that either all states in the same class are recurrent or all are transient was relatively obvious, but for countable-state Markov chains, the definition of recurrence has been changed, and the above fact is no longer obvious.

Lemma 5.1.2. If state $j$ is recurrent and states $i$ and $j$ are in the same class, i.e., $i$ and $j$ communicate, then state $i$ is also recurrent.

Proof: From Lemma 5.1.1, state $j$ satisfies $\lim_{t\to\infty} \sum_{1 \leq n \leq t} P^n_{jj} = \infty$. Since $j$ and $i$ communicate, there are integers $m$ and $k$ such that $P^m_{ij} > 0$ and $P^k_{ji} > 0$. For every walk from state $j$ to $j$ in $n$ steps, there is a corresponding walk from $i$ to $i$ in $m + n + k$ steps, going from $i$ to $j$ in $m$ steps, $j$ to $j$ in $n$ steps, and $j$ back to $i$ in $k$ steps. Thus

$$P^{m+n+k}_{ii} \geq P^m_{ij}\, P^n_{jj}\, P^k_{ji},$$
$$\sum_{n=1}^{\infty} P^{m+n+k}_{ii} \geq P^m_{ij} \left( \sum_{n=1}^{\infty} P^n_{jj} \right) P^k_{ji} = \infty.$$

Thus, from Lemma 5.1.1, $i$ is recurrent, completing the proof.
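Lemma 5.1.1 can be seen numerically on the chain of Figure 5.2 (our sketch, not the text's). For $p = 0.6$ the chain is transient with $F_{00}(\infty) = 2(1-p) = 0.8$, so the expected number of returns $\sum_n P^n_{00}$ converges to $F/(1-F) = 4$; for $p = 0.4$ the partial sums grow roughly like $t\,\pi_0 = t/3$:

```python
import numpy as np

def partial_sum_p00(p, tmax):
    """sum_{n=1..tmax} P^n_00 for the Figure 5.2 chain, by propagating
    the distribution of X_n from X_0 = 0 over states 0..tmax+1."""
    q = 1 - p
    v = np.zeros(tmax + 3)
    v[0] = 1.0
    total = 0.0
    for _ in range(tmax):
        new = np.zeros_like(v)
        new[0] = q * (v[0] + v[1])          # into 0: self-loop or down from 1
        new[1:-1] = p * v[:-2] + q * v[2:]  # into k >= 1: up from k-1, down from k+1
        v = new
        total += v[0]
    return total

print(partial_sum_p00(0.6, 2000))   # converges near 4 (transient)
print(partial_sum_p00(0.4, 2000))   # keeps growing (recurrent)
```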
Since each state in a Markov chain is either recurrent or transient, and since, if one state in a class is recurrent, all states in that class are recurrent, we see that if one state in a class is transient, they all are. Thus we can refer to each class as being recurrent or transient. This result shows that the corresponding theorem for finite-state chains also applies to countable-state Markov chains. We state this theorem separately here to be specific.

Theorem 5.1.1. For a countable-state Markov chain, either all states in a class are transient or all are recurrent.

We next look at the delayed counting process $\{N_{ij}(n);\ n \geq 1\}$. If this is a delayed renewal counting process, then we can use delayed renewal processes to study whether the effect of

the initial state eventually dies out. If state $j$ is recurrent, we know that $\{N_{jj}(n);\ n \geq 1\}$ is a renewal counting process. In addition, in order for $\{N_{ij}(n);\ n \geq 1\}$ to be a delayed renewal counting process, it is necessary for the first-passage time to be a rv, i.e., for $F_{ij}(\infty)$ to be 1.

Lemma 5.1.3. Let states $i$ and $j$ be in the same recurrent class. Then $F_{ij}(\infty) = 1$.

Proof: Since $i$ is recurrent, the number of visits to $i$ by time $t$, given $X_0 = i$, is a renewal counting process $N_{ii}(t)$. There is a path from $i$ to $j$, say of probability $\alpha > 0$. Thus the probability that the first return to $i$ occurs before visiting $j$ is at most $1 - \alpha$. The probability that the second return occurs before visiting $j$ is thus at most $(1-\alpha)^2$, and the probability that the $n$th occurs without visiting $j$ is at most $(1-\alpha)^n$. Since $i$ is visited infinitely often with probability 1 as $n \to \infty$, the probability that $j$ is never visited is 0. Thus $F_{ij}(\infty) = 1$.

Lemma 5.1.4. Let $\{N_{ij}(t);\ t \geq 0\}$ be the counting process for transitions into state $j$ up to time $t$ for a Markov chain given $X_0 = i \neq j$. Then if $i$ and $j$ are in the same recurrent class, $\{N_{ij}(t);\ t \geq 0\}$ is a delayed renewal process.

Proof: From Lemma 5.1.3, $T_{ij}$, the time until the first transition into $j$, is a rv. Also $T_{jj}$ is a rv by definition of recurrence, and subsequent intervals between occurrences of state $j$ are IID, completing the proof.

If $F_{ij}(\infty) = 1$, we have seen that the first-passage time from $i$ to $j$ is a rv, i.e., is finite with probability 1. In this case, the mean time $\overline{T}_{ij}$ to first enter state $j$ starting from state $i$ is of interest. Since $T_{ij}$ is a nonnegative random variable, its expectation is the integral of its complementary distribution function,

$$\overline{T}_{ij} = 1 + \sum_{n=1}^{\infty} (1 - F_{ij}(n)). \qquad (5.11)$$

It is possible to have $F_{ij}(\infty) = 1$ but $\overline{T}_{ij} = \infty$. As will be shown in Section 5.2, the chain in Figure 5.2 satisfies $F_{ij}(\infty) = 1$ and $\overline{T}_{ij} < \infty$ for $p < 1/2$, and $F_{ij}(\infty) = 1$ with $\overline{T}_{ij} = \infty$ for $p = 1/2$. As discussed before, $F_{ij}(\infty) < 1$ for $p > 1/2$. This leads us to the following definition.
Definition 5.1.3. A state $j$ in a countable-state Markov chain is positive-recurrent if $F_{jj}(\infty) = 1$ and $\overline{T}_{jj} < \infty$. It is null-recurrent if $F_{jj}(\infty) = 1$ and $\overline{T}_{jj} = \infty$.

Each state of a Markov chain is thus classified as one of the following three types: positive-recurrent, null-recurrent, or transient. For the example of Figure 5.2, null-recurrence lies on a boundary between positive-recurrence and transience, and this is often a good way to look at null-recurrence. Part f) of Exercise 6.3 illustrates another type of situation in which null-recurrence can occur.

Assume that state $j$ is recurrent and consider the renewal process $\{N_{jj}(t);\ t \geq 0\}$. The limiting theorems for renewal processes can be applied directly. From the strong law for

renewal processes, Theorem 4.3.1,

$$\lim_{t\to\infty} N_{jj}(t)/t = 1/\overline{T}_{jj} \quad \text{with probability 1.} \qquad (5.12)$$

From the elementary renewal theorem, Theorem 4.6.1,

$$\lim_{t\to\infty} \mathrm{E}[N_{jj}(t)/t] = 1/\overline{T}_{jj}. \qquad (5.13)$$

Equations (5.12) and (5.13) are valid whether $j$ is positive-recurrent or null-recurrent.

Next we apply Blackwell's theorem to $\{N_{jj}(t);\ t \geq 0\}$. Recall that the period of a given state $j$ in a Markov chain (whether the chain has a countable or finite number of states) is the greatest common divisor of the set of integers $n > 0$ such that $P^n_{jj} > 0$. If this period is $d$, then $\{N_{jj}(t);\ t \geq 0\}$ is arithmetic with span $d$ (i.e., renewals occur only at times that are multiples of $d$). From Blackwell's theorem in the arithmetic form of (4.61),

$$\lim_{n\to\infty} \Pr\{X_{nd} = j \mid X_0 = j\} = d/\overline{T}_{jj}. \qquad (5.14)$$

If state $j$ is aperiodic (i.e., $d = 1$), this says that $\lim_{n\to\infty} \Pr\{X_n = j \mid X_0 = j\} = 1/\overline{T}_{jj}$.

Equations (5.12) and (5.13) suggest that $1/\overline{T}_{jj}$ has some of the properties associated with a steady-state probability of state $j$, and (5.14) strengthens this if $j$ is aperiodic. For a Markov chain consisting of a single class of states, all positive-recurrent, we will strengthen this association further in Theorem 5.1.4 by showing that there is a unique steady-state distribution, $\{\pi_j,\ j \geq 0\}$, such that $\pi_j = 1/\overline{T}_{jj}$ for all $j$, and such that $\pi_j = \sum_i \pi_i P_{ij}$ for all $j \geq 0$ and $\sum_j \pi_j = 1$. The following theorem starts this development by showing that these limits are independent of the starting state.

Theorem 5.1.2. Let $j$ be a recurrent state in a Markov chain and let $i$ be any state in the same class as $j$. Given $X_0 = i$, let $N_{ij}(t)$ be the number of transitions into state $j$ by time $t$, and let $\overline{T}_{jj}$ be the expected recurrence time of state $j$ (either finite or infinite). Then

$$\lim_{t\to\infty} N_{ij}(t)/t = 1/\overline{T}_{jj} \quad \text{with probability 1,} \qquad (5.15)$$
$$\lim_{t\to\infty} \mathrm{E}[N_{ij}(t)/t] = 1/\overline{T}_{jj}. \qquad (5.16)$$

If $j$ is also aperiodic, then

$$\lim_{n\to\infty} \Pr\{X_n = j \mid X_0 = i\} = 1/\overline{T}_{jj}. \qquad (5.17)$$

Proof: Since $i$ and $j$ are recurrent and in the same class, Lemma 5.1.4 asserts that $\{N_{ij}(t);\ t \geq 0\}$ is a delayed renewal process for $j \neq i$.
Thus (5.15) and (5.16) follow from the corresponding theorems of Chapter 4. If $j$ is aperiodic, then $\{N_{ij}(t);\ t \geq 0\}$ is a delayed renewal process for which the inter-renewal intervals $T_{jj}$ have span 1 and $T_{ij}$ has an integer span. Thus, (5.17) follows from Blackwell's theorem for delayed renewal processes. For $i = j$, (5.15)-(5.17) follow from (5.12)-(5.14), completing the proof.

Theorem 5.1.3. All states in the same class of a Markov chain are of the same type: either all positive-recurrent, all null-recurrent, or all transient.
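Theorem 5.1.2 says the visit rate to a state does not depend on where the chain starts. A quick simulation sketch of ours for the Figure 5.2 chain with $p = 0.4$, where $\rho = p/q = 2/3$ and $1/\overline{T}_{00} = 1 - \rho = 1/3$:

```python
import random

def visit_rate(p, start, steps, seed=0):
    """Fraction of the first `steps` epochs spent in state 0 for the
    Figure 5.2 chain started in state `start`."""
    rng = random.Random(seed)
    s, visits = start, 0
    for _ in range(steps):
        if rng.random() < p:
            s += 1                 # birth: move right with probability p
        else:
            s = max(0, s - 1)      # move left (self-loop at 0)
        visits += (s == 0)
    return visits / steps

for start in (0, 5, 20):
    print(start, visit_rate(0.4, start, 200_000))   # each near 1/3
```

The initial state only shifts the first (delayed) inter-renewal interval, so all three runs converge to the same rate $1/\overline{T}_{00}$.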

Proof: Let $j$ be a recurrent state. From Theorem 5.1.1, all states in a class are recurrent or all are transient. Next suppose that $j$ is positive-recurrent, so that $1/\overline{T}_{jj} > 0$. Let $i$ be in the same class as $j$, and consider the renewal-reward process on $\{N_{jj}(t);\ t \geq 0\}$ for which $R(t) = 1$ whenever the process is in state $i$ (i.e., if $X_n = i$, then $R(t) = 1$ for $n \leq t < n+1$). The reward is 0 whenever the process is in some state other than $i$. Let $\mathrm{E}[R_n]$ be the expected reward in an inter-renewal interval; this must be positive since $i$ is accessible from $j$. From the strong law for renewal-reward processes, Theorem 4.4.1,

$$\lim_{t\to\infty} \frac{1}{t} \int_0^t R(\tau)\, d\tau = \frac{\mathrm{E}[R_n]}{\overline{T}_{jj}} \quad \text{with probability 1.}$$

The term on the left is the time-average number of transitions into state $i$, given $X_0 = j$, and this is $1/\overline{T}_{ii}$ from (5.15). Since $\mathrm{E}[R_n] > 0$ and $\overline{T}_{jj} < \infty$, we have $1/\overline{T}_{ii} > 0$, so $i$ is positive-recurrent. Thus if one state is positive-recurrent, the entire class is, completing the proof.

If all of the states in a Markov chain are in a null-recurrent class, then $1/\overline{T}_{jj} = 0$ for each state, and one might think of $1/\overline{T}_{jj} = 0$ as a steady-state probability for $j$ in the sense that 0 is both the time-average rate of occurrence of $j$ and the limiting probability of $j$. However, these probabilities do not add up to 1, so a steady-state probability distribution does not exist. This appears rather paradoxical at first, but the example of Figure 5.2, with $p = 1/2$, will help to clarify the situation. As time $n$ increases (starting in state $i$, say), the random variable $X_n$ spreads out over more and more states around $i$, and thus is less likely to be in each individual state. For each $j$, $\lim_{n\to\infty} P^n_{ij} = 0$. Thus, $\sum_j \{\lim_{n\to\infty} P^n_{ij}\} = 0$. On the other hand, for every $n$, $\sum_j P^n_{ij} = 1$. This is one of those unusual examples where a limit and a sum cannot be interchanged.

In Chapter 3, we defined the steady-state distribution of a finite-state Markov chain as a probability vector $\boldsymbol{\pi}$ that satisfies $\boldsymbol{\pi} = \boldsymbol{\pi}[P]$.
Here we define $\{\pi_i;\ i \geq 0\}$ in the same way, as a set of numbers that satisfy

$$\pi_j = \sum_i \pi_i P_{ij} \ \text{ for all } j;\qquad \pi_j \geq 0 \ \text{ for all } j;\qquad \sum_j \pi_j = 1. \qquad (5.18)$$

Suppose that a set of numbers $\{\pi_i;\ i \geq 0\}$ satisfying (5.18) is chosen as the initial probability distribution for a Markov chain, i.e., if $\Pr\{X_0 = i\} = \pi_i$ for all $i$. Then $\Pr\{X_1 = j\} = \sum_i \pi_i P_{ij} = \pi_j$ for all $j$, and, by induction, $\Pr\{X_n = j\} = \pi_j$ for all $j$ and all $n \geq 0$. The fact that $\Pr\{X_n = j\} = \pi_j$ for all $j$ motivates the definition of steady-state distribution above. Theorem 5.1.2 showed that $1/\overline{T}_{jj}$ is a steady-state probability for state $j$, both in a time-average and a limiting ensemble-average sense. The following theorem brings these ideas together.

An irreducible Markov chain is a Markov chain in which all pairs of states communicate. For finite-state chains, irreducibility implied a single class of recurrent states, whereas for countably infinite chains, an irreducible chain is a single class that can be transient, null-recurrent, or positive-recurrent.

Theorem 5.1.4. Assume an irreducible Markov chain with transition probabilities $\{P_{ij}\}$. If (5.18) has a solution, then the solution is unique, $\pi_i = 1/\overline{T}_{ii} > 0$ for all $i \geq 0$, and the states are positive-recurrent. Also, if the states are positive-recurrent, then (5.18) has a solution.
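The defining property in (5.18) is easy to verify on a small chain. In this sketch (the transition matrix is a made-up example of ours), $\pi$ is found by solving $\pi P = \pi$, $\sum_j \pi_j = 1$, and then propagated forward to confirm that $\Pr\{X_n = j\}$ stays at $\pi_j$:

```python
import numpy as np

P = np.array([[0.1, 0.9, 0.0],
              [0.4, 0.2, 0.4],
              [0.0, 0.7, 0.3]])

# solve pi P = pi together with sum(pi) = 1 (an overdetermined but
# consistent linear system, handled by least squares)
A = np.vstack([P.T - np.eye(3), np.ones((1, 3))])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi = np.linalg.lstsq(A, b, rcond=None)[0]

dist = pi.copy()
for _ in range(50):
    dist = dist @ P        # Pr{X_{n+1} = j} = sum_i Pr{X_n = i} P_ij
print(np.max(np.abs(dist - pi)))
```

Starting the chain in any other distribution and iterating the same update would instead converge toward $\pi$, since this chain is irreducible and aperiodic.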

Proof*: Let $\{\pi_j;\ j \geq 0\}$ satisfy (5.18) and be the initial distribution of the Markov chain, i.e., $\Pr\{X_0 = j\} = \pi_j$, $j \geq 0$. Then, as shown above, $\Pr\{X_n = j\} = \pi_j$ for all $n \geq 0$, $j \geq 0$. Let $\widetilde{N}_j(t)$ be the number of occurrences of any given state $j$ from time 1 to $t$. Equating $\Pr\{X_n = j\}$ to the expectation of an occurrence of $j$ at time $n$, we have

$$(1/t)\, \mathrm{E}\big[\widetilde{N}_j(t)\big] = (1/t) \sum_{1 \leq n \leq t} \Pr\{X_n = j\} = \pi_j \quad \text{for all integers } t \geq 1.$$

Conditioning this on the possible starting states $i$, and using the counting processes $\{N_{ij}(t);\ t \geq 0\}$ defined earlier,

$$\pi_j = (1/t)\, \mathrm{E}\big[\widetilde{N}_j(t)\big] = \sum_i \pi_i\, \mathrm{E}[N_{ij}(t)/t] \quad \text{for all integers } t \geq 1. \qquad (5.19)$$

For any given state $i$, let $T_{ij}$ be the time of the first occurrence of state $j$ given $X_0 = i$. Then if $T_{ij} < \infty$, we have $N_{ij}(t) \leq N_{ij}(T_{ij} + t)$. Thus, for all $t \geq 1$,

$$\mathrm{E}[N_{ij}(t)] \leq \mathrm{E}[N_{ij}(T_{ij} + t)] = 1 + \mathrm{E}[N_{jj}(t)]. \qquad (5.20)$$

The last step follows since the process is in state $j$ at time $T_{ij}$, and the expected number of occurrences of state $j$ in the next $t$ steps is $\mathrm{E}[N_{jj}(t)]$. Substituting (5.20) in (5.19) for each $i$, $\pi_j \leq 1/t + \mathrm{E}[N_{jj}(t)/t]$. Taking the limit as $t \to \infty$ and using (5.16), $\pi_j \leq \lim_{t\to\infty} \mathrm{E}[N_{jj}(t)/t]$. Since $\sum_i \pi_i = 1$, there is at least one value of $j$ for which $\pi_j > 0$, and for this $j$, $\lim_{t\to\infty} \mathrm{E}[N_{jj}(t)/t] > 0$, and consequently $\lim_{t\to\infty} \mathrm{E}[N_{jj}(t)] = \infty$. Thus, from Lemma 5.1.1, state $j$ is recurrent, and from Theorem 5.1.2, $j$ is positive-recurrent. From Theorem 5.1.3, all states are then positive-recurrent.

For any $j$ and any integer $M$, (5.19) implies that

$$\pi_j \geq \sum_{i \leq M} \pi_i\, \mathrm{E}[N_{ij}(t)/t] \quad \text{for all } t. \qquad (5.21)$$

From Theorem 5.1.2, $\lim_{t\to\infty} \mathrm{E}[N_{ij}(t)/t] = 1/\overline{T}_{jj}$ for all $i$. Substituting this into (5.21), we get $\pi_j \geq (1/\overline{T}_{jj}) \sum_{i \leq M} \pi_i$. Since $M$ is arbitrary, $\pi_j \geq 1/\overline{T}_{jj}$. Since we already showed that $\pi_j \leq \lim_{t\to\infty} \mathrm{E}[N_{jj}(t)/t] = 1/\overline{T}_{jj}$, we have $\pi_j = 1/\overline{T}_{jj}$ for all $j$. This shows both that $\pi_j > 0$ for all $j$ and that the solution to (5.18) is unique. Exercise 5.5 completes the proof by showing that if the states are positive-recurrent, then choosing $\pi_j = 1/\overline{T}_{jj}$ for all $j$ satisfies (5.18).
In practice, it is usually easy to see whether a chain is irreducible. We shall also see by a number of examples that the steady-state distribution can often be calculated from (5.18). Theorem 5.1.4 then says that the calculated distribution is unique and that its existence guarantees that the chain is positive-recurrent.

Example 5.1.3. Age of a renewal process: Consider a renewal process $\{N(t);\ t > 0\}$ in which the inter-renewal random variables $\{W_n;\ n \geq 1\}$ are arithmetic with span 1. We will use a Markov chain to model the age of this process (see Figure 5.3). The probability that a renewal occurs at a particular integer time depends on the past only through the integer time back to the last renewal. The state of the Markov chain during a unit interval

will be taken as the age of the renewal process at the beginning of the interval. Thus, each unit of time, the age either increases by one or a renewal occurs and the age decreases to 0 (i.e., if a renewal occurs at time $t$, the age at time $t$ is 0).

Figure 5.3: A Markov chain model of the age of a renewal process: states $0, 1, 2, 3, 4, \ldots$, with upward transitions $P_{01}, P_{12}, P_{23}, P_{34}, \ldots$ along the chain and transitions $P_{00}, P_{10}, P_{20}, P_{30}, P_{40}, \ldots$ from each state back to state 0.

$\Pr\{W > n\}$ is the probability that an inter-renewal interval lasts for more than $n$ time units. We assume that $\Pr\{W > 0\} = 1$, so that each renewal interval lasts at least one time unit. The probability $P_{n,0}$ in the Markov chain is the probability that a renewal interval has duration $n+1$, given that the interval exceeds $n$. Thus, for example, $P_{00}$ is the probability that the renewal interval is equal to 1. $P_{n,n+1}$ is $1 - P_{n,0}$, which is $\Pr\{W > n+1\}/\Pr\{W > n\}$.

We can then solve for the steady-state probabilities in the chain: for $n > 0$,

$$\pi_n = \pi_{n-1} P_{n-1,n} = \pi_{n-2} P_{n-2,n-1} P_{n-1,n} = \pi_0 P_{0,1} P_{1,2} \cdots P_{n-1,n}.$$

The first equality above results from the fact that state $n$, for $n > 0$, can be entered only from state $n-1$. The subsequent equalities come from substituting in the same expression for $\pi_{n-1}$, then $\pi_{n-2}$, and so forth. Thus

$$\pi_n = \pi_0\, \frac{\Pr\{W > 1\}}{\Pr\{W > 0\}} \cdot \frac{\Pr\{W > 2\}}{\Pr\{W > 1\}} \cdots \frac{\Pr\{W > n\}}{\Pr\{W > n-1\}} = \pi_0 \Pr\{W > n\}. \qquad (5.22)$$

We have cancelled out all the cross terms above and used the fact that $\Pr\{W > 0\} = 1$. Another way to see that $\pi_n = \pi_0 \Pr\{W > n\}$ is to observe that state 0 occurs exactly once in each inter-renewal interval; state $n$ occurs exactly once in those inter-renewal intervals of duration $n$ or more.

Since the steady-state probabilities must sum to 1, (5.22) can be solved for $\pi_0$ as

$$\pi_0 = \frac{1}{\sum_{n=0}^{\infty} \Pr\{W > n\}} = \frac{1}{\mathrm{E}[W]}. \qquad (5.23)$$

The second equality follows by expressing $\mathrm{E}[W]$ as the integral of the complementary distribution function of $W$. Combining this with (5.22), the steady-state probabilities for $n \geq 0$ are

$$\pi_n = \frac{\Pr\{W > n\}}{\mathrm{E}[W]}. \qquad (5.24)$$

In terms of the renewal process, $\pi_n$ is the probability that, at some large integer time, the age of the process will be $n$.
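The steady-state formula (5.24) can be verified directly against (5.18) for any concrete integer inter-renewal distribution; the PMF below is a made-up example of ours:

```python
import numpy as np

w = {1: 0.5, 2: 0.3, 3: 0.2}                 # hypothetical PMF of W
wmax = max(w)
EW = sum(n * pr for n, pr in w.items())
tail = [sum(pr for n, pr in w.items() if n > m) for m in range(wmax + 1)]  # Pr{W > m}

# age chain on states 0..wmax-1 (older ages are unreachable)
P = np.zeros((wmax, wmax))
for m in range(wmax):
    P[m, 0] = w.get(m + 1, 0.0) / tail[m]    # renewal: P_{m,0} = Pr{W=m+1}/Pr{W>m}
    if m + 1 < wmax:
        P[m, m + 1] = tail[m + 1] / tail[m]  # age grows: P_{m,m+1}

pi = np.array(tail[:wmax]) / EW              # candidate from (5.24)
print(np.max(np.abs(pi @ P - pi)), pi.sum())
```

The candidate vector satisfies $\pi P = \pi$ and sums to 1, so by Theorem 5.1.4 it is the unique steady-state distribution of this age chain.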
Note that if the age of the process at an integer time is $n$, then the age increases toward $n+1$ at the next integer time, at which point it either drops

to 0 or continues to rise. Thus $\pi_n$ can be interpreted as the fraction of time that the age of the process is between $n$ and $n+1$. Recall from (4.28) (and the fact that residual life and age are equally distributed) that the distribution function of the time-average age is given by $F_Z(n) = \int_0^n \Pr\{W > w\}\, dw / \mathrm{E}[W]$. Thus, the probability that the age is between $n$ and $n+1$ is $F_Z(n+1) - F_Z(n)$. Since $W$ is an integer random variable, this is $\Pr\{W > n\}/\mathrm{E}[W]$, in agreement with our result here.

The analysis here gives a new, and intuitively satisfying, explanation of why the age of a renewal process is so different from the inter-renewal time. The Markov chain shows the ever increasing loops that give rise to large expected age when the inter-renewal time is heavy tailed (i.e., has a complementary distribution function that goes to 0 slowly with increasing time). These loops can be associated with the isosceles triangles of Figure 4.7. The advantage here is that we can associate the states with steady-state probabilities if the chain is recurrent. Even when the Markov chain is null-recurrent (i.e., the associated renewal process has infinite expected age), it seems easier to visualize the phenomenon of infinite expected age.

5.2 Birth-death Markov chains

A birth-death Markov chain is a Markov chain in which the state space is the set of nonnegative integers; for all $i \geq 0$, the transition probabilities satisfy $P_{i,i+1} > 0$ and $P_{i+1,i} > 0$, and for all $|i - j| > 1$, $P_{ij} = 0$ (see Figure 5.4). A transition from state $i$ to $i+1$ is regarded as a birth and one from $i+1$ to $i$ as a death. Thus the restriction on the transition probabilities means that only one birth or death can occur in one unit of time. Many applications of birth-death processes arise in queueing theory, where the state is the number of customers, births are customer arrivals, and deaths are customer departures.
The restriction to only one arrival or departure at a time seems rather peculiar, but usually such a chain is a finely sampled approximation to a continuous-time process, and the time increments are then small enough that multiple arrivals or departures in a time increment are unlikely and can be ignored in the limit.

Figure 5.4: Birth-death Markov chain. [Diagram: states 0, 1, 2, 3, ... with birth probabilities p_0, p_1, p_2, p_3, ... on the upward transitions and death probabilities q_1, q_2, q_3, q_4, ... on the downward transitions.]

We denote P_{i,i+1} by p_i and P_{i,i−1} by q_i. Thus P_{ii} = 1 − p_i − q_i. There is an easy way to find the steady-state probabilities of these birth-death chains. In any sample function of the process, note that the number of transitions from state i to i + 1 differs by at most 1 from the number of transitions from i + 1 to i. If the process starts to the left of i and ends to the right, then one more i → i+1 transition occurs than i+1 → i, etc. Thus if we visualize a renewal-reward process with renewals on occurrences of state i and unit reward on transitions from state i to i + 1, the limiting time-average number of transitions per unit time is π_i p_i. Similarly, the limiting time-average number of transitions per unit time from

i + 1 to i is π_{i+1} q_{i+1}. Since these two must be equal in the limit,

    π_i p_i = π_{i+1} q_{i+1}   for i ≥ 0.   (5.25)

The intuition in (5.25) is simply that the rate at which downward transitions occur from i + 1 to i must equal the rate of upward transitions. Since this result is very important, both here and in our later study of continuous-time birth-death processes, we show that (5.25) also results from using the steady-state equations in (5.18):

    π_i = p_{i−1} π_{i−1} + (1 − p_i − q_i) π_i + q_{i+1} π_{i+1};   i > 0,   (5.26)
    π_0 = (1 − p_0) π_0 + q_1 π_1.   (5.27)

From (5.27), p_0 π_0 = q_1 π_1. To see that (5.25) is satisfied for i > 0, we use induction on i, with i = 0 as the base. Thus assume, for a given i, that p_{i−1} π_{i−1} = q_i π_i. Substituting this in (5.26), we get p_i π_i = q_{i+1} π_{i+1}, thus completing the inductive proof.

It is convenient to define ρ_i as p_i/q_{i+1}. Then we have π_{i+1} = ρ_i π_i, and iterating this,

    π_i = π_0 ∏_{j=0}^{i−1} ρ_j;   π_0 = 1 / (1 + Σ_{i=1}^∞ ∏_{j=0}^{i−1} ρ_j).   (5.28)

If Σ_{i≥1} ∏_{0≤j<i} ρ_j < ∞, then π_0 is positive and all the states are positive-recurrent. If this sum of products is infinite, then no state is positive-recurrent. If ρ_j is bounded below 1, say ρ_j ≤ 1 − ε for some fixed ε > 0 and all sufficiently large j, then this sum of products will converge and the states will be positive-recurrent.

For the simple birth-death process of Figure 5.2, if we define ρ = q/p, then ρ_j = ρ for all j. For ρ < 1, (5.28) simplifies to π_i = π_0 ρ^i for all i ≥ 0, with π_0 = 1 − ρ, and thus π_i = (1 − ρ)ρ^i for i ≥ 0. Exercise 5.2 shows how to find F_{ij}(∞) for all i, j in the case where ρ ≥ 1. We have seen that the simple birth-death chain of Figure 5.2 is transient if ρ > 1. This is not necessarily so in the case where self-transitions exist, but the chain is still either transient or null-recurrent. An example of this will arise in the exercises.

5.3 Reversible Markov chains

Many important Markov chains have the property that, in steady state, the sequence of states looked at backwards in time, i.e., ..., X_{n+1}, X_n, X_{n−1}, ..., has the same probabilistic structure as the sequence of states running forward in time.
This equivalence between the forward chain and backward chain leads to a number of results that are intuitively quite surprising and that are quite difficult to derive without using this equivalence. We shall study these results here and then extend them in Chapter 6 to Markov processes with a discrete state space. This set of ideas, and its use in queueing and queueing networks, has been an active area of queueing research over many years. It leads to many simple results for systems that initially look very complex. We only scratch the surface here and refer the interested reader to [13] for a more comprehensive treatment. Before going into reversibility, we describe the backward chain for an arbitrary Markov chain.

The defining characteristic of a Markov chain {X_n; n ≥ 0} is that for all n ≥ 0,

    Pr{X_{n+1} | X_n, X_{n−1}, ..., X_0} = Pr{X_{n+1} | X_n}.   (5.29)

For homogeneous chains, which we have been assuming throughout, Pr{X_{n+1} = j | X_n = i} = P_{ij}, independent of n. For any k > 1, we can extend (5.29) to get

    Pr{X_{n+k}, X_{n+k−1}, ..., X_{n+1} | X_n, X_{n−1}, ..., X_0}
        = Pr{X_{n+k} | X_{n+k−1}} Pr{X_{n+k−1} | X_{n+k−2}} ··· Pr{X_{n+1} | X_n}
        = Pr{X_{n+k}, X_{n+k−1}, ..., X_{n+1} | X_n}.   (5.30)

By letting A⁺ be any event defined on the states X_{n+1} to X_{n+k} and letting A⁻ be any event defined on X_0 to X_{n−1}, this can be written more succinctly as

    Pr{A⁺ | X_n, A⁻} = Pr{A⁺ | X_n}.   (5.31)

This says that, given state X_n, any future event A⁺ is statistically independent of any past event A⁻. This result, namely that past and future are independent given the present state, is equivalent to (5.29) for defining a Markov chain, but it has the advantage of showing the symmetry between past and future. This symmetry is best brought out by multiplying both sides of (5.31) by Pr{A⁻ | X_n}, obtaining²

    Pr{A⁺, A⁻ | X_n} = Pr{A⁺ | X_n} Pr{A⁻ | X_n}.   (5.32)

This symmetric form says that, conditional on the current state, the past and future states are statistically independent. Dividing both sides by Pr{A⁺ | X_n} then yields

    Pr{A⁻ | X_n, A⁺} = Pr{A⁻ | X_n}.   (5.33)

By letting A⁻ be X_{n−1} and A⁺ be X_{n+1}, X_{n+2}, ..., X_{n+k}, this becomes Pr{X_{n−1} | X_n, X_{n+1}, ..., X_{n+k}} = Pr{X_{n−1} | X_n}. This is the equivalent form to (5.29) for the backward chain, and says that the backward chain is also a Markov chain. By Bayes' law, Pr{X_{n−1} | X_n} can be evaluated as

    Pr{X_{n−1} | X_n} = Pr{X_n | X_{n−1}} Pr{X_{n−1}} / Pr{X_n}.   (5.34)

Since the distribution of X_n can vary with n, Pr{X_{n−1} | X_n} can also depend on n. Thus the backward Markov chain is not necessarily homogeneous. This should not be surprising, since the forward chain was defined with some arbitrary distribution for the initial state at time 0. This initial distribution was not relevant for equations (5.29) to (5.31), but as soon as Pr{A⁻ | X_n} was introduced, the initial state implicitly became a part of each equation and destroyed the symmetry between past and future.
For a chain in steady state, however, Pr{X_n = j} = Pr{X_{n−1} = j} = π_j for all j, and we have

    Pr{X_{n−1} = j | X_n = i} = P_{ji} π_j / π_i.   (5.35)

² Much more broadly, any 3 events, say A⁻, X_0, A⁺, are said to be Markov if Pr{A⁺ | X_0, A⁻} = Pr{A⁺ | X_0}, and this implies the more symmetric form Pr{A⁻, A⁺ | X_0} = Pr{A⁻ | X_0} Pr{A⁺ | X_0}.

Thus the backward chain is homogeneous if the forward chain is in steady state. For a chain with steady-state probabilities {π_i; i ≥ 0}, we define the backward transition probabilities P*_{ij} as

    π_i P*_{ij} = π_j P_{ji}.   (5.36)

From (5.34), the backward transition probability P*_{ij}, for a Markov chain in steady state, is then equal to Pr{X_{n−1} = j | X_n = i}, the probability that the previous state is j given that the current state is i.

Now consider a new Markov chain with transition probabilities {P*_{ij}}. Over some segment of time for which both this new chain and the old chain are in steady state, the set of states generated by the new chain is statistically indistinguishable from the backward running sequence of states from the original chain. It is somewhat simpler, in talking about forward and backward running chains, however, to visualize Markov chains running in steady state from t = −∞ to t = +∞. If one is uncomfortable with this, one can also visualize starting the Markov chain at some very negative time with the initial distribution equal to the steady-state distribution.

Definition 5.3.1. A Markov chain that has steady-state probabilities {π_i; i ≥ 0} is reversible if P_{ij} = π_j P_{ji}/π_i for all i, j, i.e., if P*_{ij} = P_{ij} for all i, j.

Thus the chain is reversible if, in steady state, the backward running sequence of states is statistically indistinguishable from the forward running sequence. Comparing (5.36) with the steady-state equations (5.25) that we derived for birth-death chains, we have the important theorem:

Theorem 5.3.1. Every birth-death chain with a steady-state probability distribution is reversible.

We saw that for birth-death chains, the equation π_i P_{ij} = π_j P_{ji} (which only had to be considered for |i − j| ≤ 1) provided a very simple way of calculating the steady-state probabilities. Unfortunately, it appears that we must first calculate the steady-state probabilities in order to show that a chain is reversible. The following simple theorem gives us a convenient escape from this dilemma.
Theorem 5.3.2. Assume that an irreducible Markov chain has transition probabilities {P_{ij}}. Suppose {π_i} is a set of positive numbers summing to 1 and satisfying

    π_i P_{ij} = π_j P_{ji};   all i, j.   (5.37)

Then, first, {π_i; i ≥ 0} is the steady-state distribution for the chain, and, second, the chain is reversible.

Proof: Given a solution to (5.37) for all i and j, we can sum this equation over i for each j:

    Σ_i π_i P_{ij} = π_j Σ_i P_{ji} = π_j.   (5.38)

Thus the solution to (5.37), along with the constraints π_i > 0, Σ_i π_i = 1, satisfies the steady-state equations, (5.18), and, from Theorem 5.1.4, this is the unique steady-state distribution. Since (5.37) is satisfied, the chain is also reversible.

It is often possible, sometimes by using an educated guess, to find a solution to (5.37). If this is successful, then we are assured both that the chain is reversible and that the actual steady-state probabilities have been found. Note that the theorem applies to periodic chains as well as to aperiodic chains. If the chain is periodic, then the steady-state probabilities have to be interpreted as average values over the period, but Theorem 5.1.4 shows that (5.38) still has a unique solution (assuming an irreducible chain). On the other hand, for a chain with period d > 1, there are d subclasses of states and the sequence {X_n} must rotate between these classes in a fixed order. For this same order to be followed in the backward chain, the only possibility is d = 2. Thus periodic chains with periods other than 2 cannot be reversible.

There are several simple tests that can be used to show that some given irreducible chain is not reversible. First, the steady-state probabilities must satisfy π_i > 0 for all i, and thus, if P_{ij} > 0 but P_{ji} = 0 for some i, j, then (5.37) cannot be satisfied and the chain is not reversible. Second, consider any set of three states, i, j, k. If P_{ji} P_{ik} P_{kj} is unequal to P_{jk} P_{ki} P_{ij}, then the chain cannot be reversible. To see this, note that (5.37) requires that π_i = π_j P_{ji}/P_{ij} = π_k P_{ki}/P_{ik}. Thus, π_j P_{ji} P_{ik} = π_k P_{ki} P_{ij}. Equation (5.37) also requires that π_j P_{jk} = π_k P_{kj}. Taking the ratio of these equations, we see that P_{ji} P_{ik} P_{kj} = P_{jk} P_{ki} P_{ij}. Thus if this equation is not satisfied, the chain cannot be reversible. In retrospect, this result is not surprising.
What it says is that for any cycle of three states, the probability of three transitions going around the cycle in one direction must be the same as the probability of going around the cycle in the opposite (and therefore backwards) direction. It is also true (see [16] for a proof) that a necessary and sufficient condition for a chain to be reversible is that the product of transition probabilities around any cycle of arbitrary length must be the same as the product of transition probabilities going around the cycle in the opposite direction. This doesn't seem to be a widely useful way to demonstrate reversibility.

There is another result, generalizing Theorem 5.3.2, for finding the steady-state probabilities of an arbitrary Markov chain and simultaneously finding the transition probabilities of the backward chain.

Theorem 5.3.3. Assume that an irreducible Markov chain has transition probabilities {P_{ij}}. Suppose {π_i} is a set of positive numbers summing to 1 and that {P*_{ij}} is a set of transition probabilities satisfying

    π_i P_{ij} = π_j P*_{ji};   all i, j.   (5.39)

Then {π_i} is the steady-state distribution and {P*_{ij}} is the set of transition probabilities for the backward chain.

Proof: Summing (5.39) over i, we get the steady-state equations for the Markov chain, so the fact that the given {π_i} satisfy these equations asserts that they are the steady-state probabilities. Equation (5.39) then asserts that {P*_{ij}} is the set of transition probabilities for the backward chain.

The following two sections illustrate some important applications of reversibility.

5.4 The M/M/1 sample-time Markov chain

The M/M/1 Markov chain is a sampled-time model of the M/M/1 queueing system. Recall that the M/M/1 queue has Poisson arrivals at some rate λ and IID exponentially distributed service times at some rate µ. We assume throughout this section that λ < µ (this is required to make the states positive-recurrent). For some given small increment of time δ, we visualize observing the state of the system at the sample times nδ. As indicated in Figure 5.5, the probability of an arrival in the interval from (n−1)δ to nδ is modeled as λδ, independent of the state of the chain at time (n−1)δ and thus independent of all prior arrivals and departures. Thus the arrival process, viewed as arrivals in subsequent intervals of duration δ, is Bernoulli, thus approximating the Poisson arrivals. This is a sampled-time approximation to the Poisson arrival process of rate λ for a continuous-time M/M/1 queue.

Figure 5.5: Sampled-time approximation to M/M/1 queue for time increment δ. [Diagram: birth-death chain on states 0, 1, 2, 3, 4, ... with probability λδ on each upward transition and µδ on each downward transition.]

When the system is non-empty (i.e., the state of the chain is one or more), the probability of a departure in the interval (n−1)δ to nδ is µδ, thus modelling the exponential service times. When the system is empty, of course, departures cannot occur. Note that in our sampled-time model, there can be at most one arrival or departure in an interval (n−1)δ to nδ. As in the Poisson process, the probability of more than one arrival, more than one departure, or both an arrival and a departure in an increment δ is of order δ² for the actual continuous-time M/M/1 system being modeled.
Thus, for δ very small, we expect the sampled-time model to be relatively good. At any rate, we can now analyze the model with no further approximations.

Since this chain is a birth-death chain, we can use (5.28) to determine the steady-state probabilities; they are

    π_i = π_0 ρ^i;   ρ = λ/µ < 1.

Setting the sum of the π_i to 1, we find that π_0 = 1 − ρ, so

    π_i = (1 − ρ)ρ^i;   all i ≥ 0.   (5.40)
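A minimal numerical check, using hypothetical values of λ, µ, and δ, confirms that the product formula (5.28) applied to the sampled-time chain reproduces the geometric form (5.40) and satisfies detailed balance (5.25):

```python
# Check (5.40) against the general birth-death formula (5.28) for the
# sampled-time M/M/1 chain. The rates and increment below are hypothetical
# values chosen so that lam*delta and mu*delta are valid probabilities.

lam, mu, delta = 1.0, 2.0, 0.1       # lambda < mu, so rho < 1
rho = lam / mu

# In (5.28), rho_j = p_j/q_{j+1} = (lam*delta)/(mu*delta) = rho for all j.
# Truncate the infinite sum at a large state; the geometric tail is negligible.
N = 200
products = [rho ** i for i in range(N)]          # prod_{j<i} rho_j
pi0 = 1.0 / sum(products)
pi = [pi0 * prod for prod in products]

# Compare with the closed form pi_i = (1 - rho) * rho^i of (5.40).
for i in range(20):
    assert abs(pi[i] - (1 - rho) * rho ** i) < 1e-9

# Detailed balance (5.25): pi_i * lam*delta = pi_{i+1} * mu*delta.
for i in range(20):
    assert abs(pi[i] * lam * delta - pi[i + 1] * mu * delta) < 1e-12
```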

Thus the steady-state probabilities exist and the chain is a birth-death chain, so from Theorem 5.3.1, it is reversible. We now exploit the consequences of reversibility to find some rather surprising results about the M/M/1 chain in steady state. Figure 5.6 illustrates a sample path of arrivals and departures for the chain. To avoid the confusion associated with the backward chain evolving backward in time, we refer to the original chain as the chain moving to the right and to the backward chain as the chain moving to the left. There are two types of correspondence between the right-moving and the left-moving chain:

1. The left-moving chain has the same Markov chain description as the right-moving chain, and thus can be viewed as an M/M/1 chain in its own right. We still label the sampled-time intervals from left to right, however, so that the left-moving chain makes transitions from X_{n+1} to X_n to X_{n−1}. Thus, for example, if X_n = i and X_{n−1} = i+1, the left-moving chain has an arrival in the interval from nδ to (n−1)δ.

2. Each sample function ..., x_{n−1}, x_n, x_{n+1}, ... of the right-moving chain corresponds to the same sample function ..., x_{n+1}, x_n, x_{n−1}, ... of the left-moving chain, where X_{n−1} = x_{n−1} is to the left of X_n = x_n for both chains. With this correspondence, an arrival to the right-moving chain in the interval (n−1)δ to nδ is a departure from the left-moving chain in the interval nδ to (n−1)δ, and a departure from the right-moving chain is an arrival to the left-moving chain.

Using this correspondence, each event in the left-moving chain corresponds to some event in the right-moving chain. In each of the properties of the M/M/1 chain to be derived below, a property of the left-moving chain is developed through correspondence 1 above, and then that property is translated into a property of the right-moving chain by correspondence 2.

Property 1: Since the arrival process of the right-moving chain is Bernoulli, the arrival process of the left-moving chain is also Bernoulli (by correspondence 1).
Looking at a sample function x_{n+1}, x_n, x_{n−1} of the left-moving chain (i.e., using correspondence 2), an arrival in the interval nδ to (n−1)δ of the left-moving chain is a departure in the interval (n−1)δ to nδ of the right-moving chain. Since the arrivals in successive increments of the left-moving chain are independent and have probability λδ in each increment, we conclude that departures in the right-moving chain are similarly Bernoulli.

The fact that the departure process is Bernoulli with departure probability λδ in each increment is surprising. Note that the probability of a departure in the interval (nδ − δ, nδ] is µδ conditional on X_{n−1} ≥ 1 and is 0 conditional on X_{n−1} = 0. Since Pr{X_{n−1} ≥ 1} = 1 − Pr{X_{n−1} = 0} = ρ, we see that the unconditional probability of a departure in the interval (nδ − δ, nδ] is ρµδ = λδ as asserted above. The fact that successive departures are independent is much harder to derive without using reversibility (see Exercise 5.13).

Property 2: In the original (right-moving) chain, arrivals in the time increments after nδ are independent of X_n. Thus, for the left-moving chain, arrivals in time increments to the left of nδ are independent of the state of the chain at nδ. From the correspondence between sample paths, however, a left chain arrival is a right chain departure, so that for the right-moving chain, departures in the time increments prior to nδ are independent of X_n, which is equivalent to saying that the state X_n is independent of the prior departures.

Figure 5.6: Sample function of M/M/1 chain over a busy period and corresponding arrivals and departures for right and left-moving chains. Arrivals and departures are viewed as occurring between the sample times, and an arrival in the left-moving chain between time nδ and (n+1)δ corresponds to a departure in the right-moving chain between (n+1)δ and nδ. [Diagram: staircase plot of the state over a busy period, with the corresponding arrival and departure epochs marked for the right-moving and left-moving chains.]

This means that if one observes the departures prior to time nδ, one obtains no information about the state of the chain at nδ. This is again a surprising result. To make it seem more plausible, note that an unusually large number of departures in an interval from (n−m)δ to nδ indicates that a large number of customers were probably in the system at time (n−m)δ, but it doesn't appear to say much (and in fact it says exactly nothing) about the number remaining at nδ.

The following theorem summarizes these results.

Theorem (Burke's theorem for sampled-time). Given an M/M/1 Markov chain in steady state with λ < µ,

a) the departure process is Bernoulli,
b) the state X_n at any time nδ is independent of departures prior to nδ.

The proof of Burke's theorem above did not use the fact that the departure probability is the same for all states except state 0. Thus these results remain valid for any birth-death chain with Bernoulli arrivals that are independent of the current state (i.e., for which P_{i,i+1} = λδ for all i ≥ 0). One important example of such a chain is the sampled-time approximation to an M/M/m queue. Here there are m servers, and the probability of departure from state i in an increment δ is µiδ for i ≤ m and µmδ for i > m. For the states to be recurrent, and thus for a steady state to exist, λ must be less than µm. Subject to this restriction, properties a) and b) above are valid for sampled-time M/M/m queues.
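Part a) of the theorem can be illustrated by simulation. The sketch below, with hypothetical parameter values, runs the sampled-time chain and checks that the long-run fraction of increments containing a departure is close to λδ, matching the unconditional departure probability ρµδ = λδ computed above:

```python
import random

# Monte Carlo illustration of Burke's theorem part a): in steady state the
# departure process of the sampled-time M/M/1 chain has probability lam*delta
# per increment. Parameter values here are hypothetical.

random.seed(1)
lam, mu, delta = 1.0, 2.0, 0.1   # lam*delta = 0.1, mu*delta = 0.2

def step(state):
    """One sampled-time increment; returns (new_state, departed?)."""
    u = random.random()
    if u < lam * delta:
        return state + 1, False          # arrival
    if state > 0 and u < lam * delta + mu * delta:
        return state - 1, True           # departure (impossible when empty)
    return state, False                  # no change

state = 0
for _ in range(10_000):                  # burn-in toward steady state
    state, _ = step(state)

steps = 200_000
departures = 0
for _ in range(steps):
    state, departed = step(state)
    if departed:
        departures += 1

# Long-run departure rate should be close to lam*delta = 0.1.
assert abs(departures / steps - lam * delta) < 0.01
```

Checking part b), the independence of X_n from prior departures, would require comparing conditional state distributions; the rate check above is the simplest observable consequence.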


More information

Lecture 4. We also define the set of possible values for the random walk as the set of all x R d such that P(S n = x) > 0 for some n.

Lecture 4. We also define the set of possible values for the random walk as the set of all x R d such that P(S n = x) > 0 for some n. Radom Walks ad Browia Motio Tel Aviv Uiversity Sprig 20 Lecture date: Mar 2, 20 Lecture 4 Istructor: Ro Peled Scribe: Lira Rotem This lecture deals primarily with recurrece for geeral radom walks. We preset

More information

Limit Theorems. Convergence in Probability. Let X be the number of heads observed in n tosses. Then, E[X] = np and Var[X] = np(1-p).

Limit Theorems. Convergence in Probability. Let X be the number of heads observed in n tosses. Then, E[X] = np and Var[X] = np(1-p). Limit Theorems Covergece i Probability Let X be the umber of heads observed i tosses. The, E[X] = p ad Var[X] = p(-p). L O This P x p NM QP P x p should be close to uity for large if our ituitio is correct.

More information

Axioms of Measure Theory

Axioms of Measure Theory MATH 532 Axioms of Measure Theory Dr. Neal, WKU I. The Space Throughout the course, we shall let X deote a geeric o-empty set. I geeral, we shall ot assume that ay algebraic structure exists o X so that

More information

subcaptionfont+=small,labelformat=parens,labelsep=space,skip=6pt,list=0,hypcap=0 subcaption ALGEBRAIC COMBINATORICS LECTURE 8 TUESDAY, 2/16/2016

subcaptionfont+=small,labelformat=parens,labelsep=space,skip=6pt,list=0,hypcap=0 subcaption ALGEBRAIC COMBINATORICS LECTURE 8 TUESDAY, 2/16/2016 subcaptiofot+=small,labelformat=pares,labelsep=space,skip=6pt,list=0,hypcap=0 subcaptio ALGEBRAIC COMBINATORICS LECTURE 8 TUESDAY, /6/06. Self-cojugate Partitios Recall that, give a partitio λ, we may

More information

1 Introduction to reducing variance in Monte Carlo simulations

1 Introduction to reducing variance in Monte Carlo simulations Copyright c 010 by Karl Sigma 1 Itroductio to reducig variace i Mote Carlo simulatios 11 Review of cofidece itervals for estimatig a mea I statistics, we estimate a ukow mea µ = E(X) of a distributio by

More information

Discrete Mathematics for CS Spring 2008 David Wagner Note 22

Discrete Mathematics for CS Spring 2008 David Wagner Note 22 CS 70 Discrete Mathematics for CS Sprig 2008 David Wager Note 22 I.I.D. Radom Variables Estimatig the bias of a coi Questio: We wat to estimate the proportio p of Democrats i the US populatio, by takig

More information

Lesson 10: Limits and Continuity

Lesson 10: Limits and Continuity www.scimsacademy.com Lesso 10: Limits ad Cotiuity SCIMS Academy 1 Limit of a fuctio The cocept of limit of a fuctio is cetral to all other cocepts i calculus (like cotiuity, derivative, defiite itegrals

More information

CHAPTER 10 INFINITE SEQUENCES AND SERIES

CHAPTER 10 INFINITE SEQUENCES AND SERIES CHAPTER 10 INFINITE SEQUENCES AND SERIES 10.1 Sequeces 10.2 Ifiite Series 10.3 The Itegral Tests 10.4 Compariso Tests 10.5 The Ratio ad Root Tests 10.6 Alteratig Series: Absolute ad Coditioal Covergece

More information

x a x a Lecture 2 Series (See Chapter 1 in Boas)

x a x a Lecture 2 Series (See Chapter 1 in Boas) Lecture Series (See Chapter i Boas) A basic ad very powerful (if pedestria, recall we are lazy AD smart) way to solve ay differetial (or itegral) equatio is via a series expasio of the correspodig solutio

More information

Statistics 511 Additional Materials

Statistics 511 Additional Materials Cofidece Itervals o mu Statistics 511 Additioal Materials This topic officially moves us from probability to statistics. We begi to discuss makig ifereces about the populatio. Oe way to differetiate probability

More information

MA131 - Analysis 1. Workbook 2 Sequences I

MA131 - Analysis 1. Workbook 2 Sequences I MA3 - Aalysis Workbook 2 Sequeces I Autum 203 Cotets 2 Sequeces I 2. Itroductio.............................. 2.2 Icreasig ad Decreasig Sequeces................ 2 2.3 Bouded Sequeces..........................

More information

The z-transform. 7.1 Introduction. 7.2 The z-transform Derivation of the z-transform: x[n] = z n LTI system, h[n] z = re j

The z-transform. 7.1 Introduction. 7.2 The z-transform Derivation of the z-transform: x[n] = z n LTI system, h[n] z = re j The -Trasform 7. Itroductio Geeralie the complex siusoidal represetatio offered by DTFT to a represetatio of complex expoetial sigals. Obtai more geeral characteristics for discrete-time LTI systems. 7.

More information

Stochastic Simulation

Stochastic Simulation Stochastic Simulatio 1 Itroductio Readig Assigmet: Read Chapter 1 of text. We shall itroduce may of the key issues to be discussed i this course via a couple of model problems. Model Problem 1 (Jackso

More information

7 Sequences of real numbers

7 Sequences of real numbers 40 7 Sequeces of real umbers 7. Defiitios ad examples Defiitio 7... A sequece of real umbers is a real fuctio whose domai is the set N of atural umbers. Let s : N R be a sequece. The the values of s are

More information

Problem Set 2 Solutions

Problem Set 2 Solutions CS271 Radomess & Computatio, Sprig 2018 Problem Set 2 Solutios Poit totals are i the margi; the maximum total umber of poits was 52. 1. Probabilistic method for domiatig sets 6pts Pick a radom subset S

More information

Math 451: Euclidean and Non-Euclidean Geometry MWF 3pm, Gasson 204 Homework 3 Solutions

Math 451: Euclidean and Non-Euclidean Geometry MWF 3pm, Gasson 204 Homework 3 Solutions Math 451: Euclidea ad No-Euclidea Geometry MWF 3pm, Gasso 204 Homework 3 Solutios Exercises from 1.4 ad 1.5 of the otes: 4.3, 4.10, 4.12, 4.14, 4.15, 5.3, 5.4, 5.5 Exercise 4.3. Explai why Hp, q) = {x

More information

Lecture 2. The Lovász Local Lemma

Lecture 2. The Lovász Local Lemma Staford Uiversity Sprig 208 Math 233A: No-costructive methods i combiatorics Istructor: Ja Vodrák Lecture date: Jauary 0, 208 Origial scribe: Apoorva Khare Lecture 2. The Lovász Local Lemma 2. Itroductio

More information

Sequences I. Chapter Introduction

Sequences I. Chapter Introduction Chapter 2 Sequeces I 2. Itroductio A sequece is a list of umbers i a defiite order so that we kow which umber is i the first place, which umber is i the secod place ad, for ay atural umber, we kow which

More information

Once we have a sequence of numbers, the next thing to do is to sum them up. Given a sequence (a n ) n=1

Once we have a sequence of numbers, the next thing to do is to sum them up. Given a sequence (a n ) n=1 . Ifiite Series Oce we have a sequece of umbers, the ext thig to do is to sum them up. Give a sequece a be a sequece: ca we give a sesible meaig to the followig expressio? a = a a a a While summig ifiitely

More information

Lecture 6: Integration and the Mean Value Theorem. slope =

Lecture 6: Integration and the Mean Value Theorem. slope = Math 8 Istructor: Padraic Bartlett Lecture 6: Itegratio ad the Mea Value Theorem Week 6 Caltech 202 The Mea Value Theorem The Mea Value Theorem abbreviated MVT is the followig result: Theorem. Suppose

More information

Integrable Functions. { f n } is called a determining sequence for f. If f is integrable with respect to, then f d does exist as a finite real number

Integrable Functions. { f n } is called a determining sequence for f. If f is integrable with respect to, then f d does exist as a finite real number MATH 532 Itegrable Fuctios Dr. Neal, WKU We ow shall defie what it meas for a measurable fuctio to be itegrable, show that all itegral properties of simple fuctios still hold, ad the give some coditios

More information

NICK DUFRESNE. 1 1 p(x). To determine some formulas for the generating function of the Schröder numbers, r(x) = a(x) =

NICK DUFRESNE. 1 1 p(x). To determine some formulas for the generating function of the Schröder numbers, r(x) = a(x) = AN INTRODUCTION TO SCHRÖDER AND UNKNOWN NUMBERS NICK DUFRESNE Abstract. I this article we will itroduce two types of lattice paths, Schröder paths ad Ukow paths. We will examie differet properties of each,

More information

Discrete probability distributions

Discrete probability distributions Discrete probability distributios I the chapter o probability we used the classical method to calculate the probability of various values of a radom variable. I some cases, however, we may be able to develop

More information

Output Analysis and Run-Length Control

Output Analysis and Run-Length Control IEOR E4703: Mote Carlo Simulatio Columbia Uiversity c 2017 by Marti Haugh Output Aalysis ad Ru-Legth Cotrol I these otes we describe how the Cetral Limit Theorem ca be used to costruct approximate (1 α%

More information

WHAT IS THE PROBABILITY FUNCTION FOR LARGE TSUNAMI WAVES? ABSTRACT

WHAT IS THE PROBABILITY FUNCTION FOR LARGE TSUNAMI WAVES? ABSTRACT WHAT IS THE PROBABILITY FUNCTION FOR LARGE TSUNAMI WAVES? Harold G. Loomis Hoolulu, HI ABSTRACT Most coastal locatios have few if ay records of tsuami wave heights obtaied over various time periods. Still

More information

Randomized Algorithms I, Spring 2018, Department of Computer Science, University of Helsinki Homework 1: Solutions (Discussed January 25, 2018)

Randomized Algorithms I, Spring 2018, Department of Computer Science, University of Helsinki Homework 1: Solutions (Discussed January 25, 2018) Radomized Algorithms I, Sprig 08, Departmet of Computer Sciece, Uiversity of Helsiki Homework : Solutios Discussed Jauary 5, 08). Exercise.: Cosider the followig balls-ad-bi game. We start with oe black

More information

( ) = p and P( i = b) = q.

( ) = p and P( i = b) = q. MATH 540 Radom Walks Part 1 A radom walk X is special stochastic process that measures the height (or value) of a particle that radomly moves upward or dowward certai fixed amouts o each uit icremet of

More information

Probability, Expectation Value and Uncertainty

Probability, Expectation Value and Uncertainty Chapter 1 Probability, Expectatio Value ad Ucertaity We have see that the physically observable properties of a quatum system are represeted by Hermitea operators (also referred to as observables ) such

More information

6. Sufficient, Complete, and Ancillary Statistics

6. Sufficient, Complete, and Ancillary Statistics Sufficiet, Complete ad Acillary Statistics http://www.math.uah.edu/stat/poit/sufficiet.xhtml 1 of 7 7/16/2009 6:13 AM Virtual Laboratories > 7. Poit Estimatio > 1 2 3 4 5 6 6. Sufficiet, Complete, ad Acillary

More information

Math 113 Exam 3 Practice

Math 113 Exam 3 Practice Math Exam Practice Exam will cover.-.9. This sheet has three sectios. The first sectio will remid you about techiques ad formulas that you should kow. The secod gives a umber of practice questios for you

More information

Definition 4.2. (a) A sequence {x n } in a Banach space X is a basis for X if. unique scalars a n (x) such that x = n. a n (x) x n. (4.

Definition 4.2. (a) A sequence {x n } in a Banach space X is a basis for X if. unique scalars a n (x) such that x = n. a n (x) x n. (4. 4. BASES I BAACH SPACES 39 4. BASES I BAACH SPACES Sice a Baach space X is a vector space, it must possess a Hamel, or vector space, basis, i.e., a subset {x γ } γ Γ whose fiite liear spa is all of X ad

More information

Entropy Rates and Asymptotic Equipartition

Entropy Rates and Asymptotic Equipartition Chapter 29 Etropy Rates ad Asymptotic Equipartitio Sectio 29. itroduces the etropy rate the asymptotic etropy per time-step of a stochastic process ad shows that it is well-defied; ad similarly for iformatio,

More information

The Random Walk For Dummies

The Random Walk For Dummies The Radom Walk For Dummies Richard A Mote Abstract We look at the priciples goverig the oe-dimesioal discrete radom walk First we review five basic cocepts of probability theory The we cosider the Beroulli

More information

Resampling Methods. X (1/2), i.e., Pr (X i m) = 1/2. We order the data: X (1) X (2) X (n). Define the sample median: ( n.

Resampling Methods. X (1/2), i.e., Pr (X i m) = 1/2. We order the data: X (1) X (2) X (n). Define the sample median: ( n. Jauary 1, 2019 Resamplig Methods Motivatio We have so may estimators with the property θ θ d N 0, σ 2 We ca also write θ a N θ, σ 2 /, where a meas approximately distributed as Oce we have a cosistet estimator

More information

Problem Set 4 Due Oct, 12

Problem Set 4 Due Oct, 12 EE226: Radom Processes i Systems Lecturer: Jea C. Walrad Problem Set 4 Due Oct, 12 Fall 06 GSI: Assae Gueye This problem set essetially reviews detectio theory ad hypothesis testig ad some basic otios

More information

On forward improvement iteration for stopping problems

On forward improvement iteration for stopping problems O forward improvemet iteratio for stoppig problems Mathematical Istitute, Uiversity of Kiel, Ludewig-Mey-Str. 4, D-24098 Kiel, Germay irle@math.ui-iel.de Albrecht Irle Abstract. We cosider the optimal

More information

Lecture 5: April 17, 2013

Lecture 5: April 17, 2013 TTIC/CMSC 350 Mathematical Toolkit Sprig 203 Madhur Tulsiai Lecture 5: April 7, 203 Scribe: Somaye Hashemifar Cheroff bouds recap We recall the Cheroff/Hoeffdig bouds we derived i the last lecture idepedet

More information

If a subset E of R contains no open interval, is it of zero measure? For instance, is the set of irrationals in [0, 1] is of measure zero?

If a subset E of R contains no open interval, is it of zero measure? For instance, is the set of irrationals in [0, 1] is of measure zero? 2 Lebesgue Measure I Chapter 1 we defied the cocept of a set of measure zero, ad we have observed that every coutable set is of measure zero. Here are some atural questios: If a subset E of R cotais a

More information

MA131 - Analysis 1. Workbook 3 Sequences II

MA131 - Analysis 1. Workbook 3 Sequences II MA3 - Aalysis Workbook 3 Sequeces II Autum 2004 Cotets 2.8 Coverget Sequeces........................ 2.9 Algebra of Limits......................... 2 2.0 Further Useful Results........................

More information

NUMERICAL METHODS FOR SOLVING EQUATIONS

NUMERICAL METHODS FOR SOLVING EQUATIONS Mathematics Revisio Guides Numerical Methods for Solvig Equatios Page 1 of 11 M.K. HOME TUITION Mathematics Revisio Guides Level: GCSE Higher Tier NUMERICAL METHODS FOR SOLVING EQUATIONS Versio:. Date:

More information

Polynomial Functions and Their Graphs

Polynomial Functions and Their Graphs Polyomial Fuctios ad Their Graphs I this sectio we begi the study of fuctios defied by polyomial expressios. Polyomial ad ratioal fuctios are the most commo fuctios used to model data, ad are used extesively

More information

w (1) ˆx w (1) x (1) /ρ and w (2) ˆx w (2) x (2) /ρ.

w (1) ˆx w (1) x (1) /ρ and w (2) ˆx w (2) x (2) /ρ. 2 5. Weighted umber of late jobs 5.1. Release dates ad due dates: maximimizig the weight of o-time jobs Oce we add release dates, miimizig the umber of late jobs becomes a sigificatly harder problem. For

More information

It is always the case that unions, intersections, complements, and set differences are preserved by the inverse image of a function.

It is always the case that unions, intersections, complements, and set differences are preserved by the inverse image of a function. MATH 532 Measurable Fuctios Dr. Neal, WKU Throughout, let ( X, F, µ) be a measure space ad let (!, F, P ) deote the special case of a probability space. We shall ow begi to study real-valued fuctios defied

More information

Let us give one more example of MLE. Example 3. The uniform distribution U[0, θ] on the interval [0, θ] has p.d.f.

Let us give one more example of MLE. Example 3. The uniform distribution U[0, θ] on the interval [0, θ] has p.d.f. Lecture 5 Let us give oe more example of MLE. Example 3. The uiform distributio U[0, ] o the iterval [0, ] has p.d.f. { 1 f(x =, 0 x, 0, otherwise The likelihood fuctio ϕ( = f(x i = 1 I(X 1,..., X [0,

More information

Math 116 Second Midterm November 13, 2017

Math 116 Second Midterm November 13, 2017 Math 6 Secod Midterm November 3, 7 EXAM SOLUTIONS. Do ot ope this exam util you are told to do so.. Do ot write your ame aywhere o this exam. 3. This exam has pages icludig this cover. There are problems.

More information

MATH 320: Probability and Statistics 9. Estimation and Testing of Parameters. Readings: Pruim, Chapter 4

MATH 320: Probability and Statistics 9. Estimation and Testing of Parameters. Readings: Pruim, Chapter 4 MATH 30: Probability ad Statistics 9. Estimatio ad Testig of Parameters Estimatio ad Testig of Parameters We have bee dealig situatios i which we have full kowledge of the distributio of a radom variable.

More information

Series III. Chapter Alternating Series

Series III. Chapter Alternating Series Chapter 9 Series III With the exceptio of the Null Sequece Test, all the tests for series covergece ad divergece that we have cosidered so far have dealt oly with series of oegative terms. Series with

More information

Assignment 5: Solutions

Assignment 5: Solutions McGill Uiversity Departmet of Mathematics ad Statistics MATH 54 Aalysis, Fall 05 Assigmet 5: Solutios. Let y be a ubouded sequece of positive umbers satisfyig y + > y for all N. Let x be aother sequece

More information

Product measures, Tonelli s and Fubini s theorems For use in MAT3400/4400, autumn 2014 Nadia S. Larsen. Version of 13 October 2014.

Product measures, Tonelli s and Fubini s theorems For use in MAT3400/4400, autumn 2014 Nadia S. Larsen. Version of 13 October 2014. Product measures, Toelli s ad Fubii s theorems For use i MAT3400/4400, autum 2014 Nadia S. Larse Versio of 13 October 2014. 1. Costructio of the product measure The purpose of these otes is to preset the

More information

Introduction to Machine Learning DIS10

Introduction to Machine Learning DIS10 CS 189 Fall 017 Itroductio to Machie Learig DIS10 1 Fu with Lagrage Multipliers (a) Miimize the fuctio such that f (x,y) = x + y x + y = 3. Solutio: The Lagragia is: L(x,y,λ) = x + y + λ(x + y 3) Takig

More information

1 Approximating Integrals using Taylor Polynomials

1 Approximating Integrals using Taylor Polynomials Seughee Ye Ma 8: Week 7 Nov Week 7 Summary This week, we will lear how we ca approximate itegrals usig Taylor series ad umerical methods. Topics Page Approximatig Itegrals usig Taylor Polyomials. Defiitios................................................

More information

MATH 21 SECTION NOTES

MATH 21 SECTION NOTES MATH SECTION NOTES EVAN WARNER. March 9.. Admiistrative miscellay. These weekly sectios will be for some review ad may example problems, i geeral. Attedace will be take as per class policy. We will be

More information

CS/ECE 715 Spring 2004 Homework 5 (Due date: March 16)

CS/ECE 715 Spring 2004 Homework 5 (Due date: March 16) CS/ECE 75 Sprig 004 Homework 5 (Due date: March 6) Problem 0 (For fu). M/G/ Queue with Radom-Sized Batch Arrivals. Cosider the M/G/ system with the differece that customers are arrivig i batches accordig

More information

Lecture 6: Integration and the Mean Value Theorem

Lecture 6: Integration and the Mean Value Theorem Math 8 Istructor: Padraic Bartlett Lecture 6: Itegratio ad the Mea Value Theorem Week 6 Caltech - Fall, 2011 1 Radom Questios Questio 1.1. Show that ay positive ratioal umber ca be writte as the sum of

More information

ENGI Series Page 6-01

ENGI Series Page 6-01 ENGI 3425 6 Series Page 6-01 6. Series Cotets: 6.01 Sequeces; geeral term, limits, covergece 6.02 Series; summatio otatio, covergece, divergece test 6.03 Stadard Series; telescopig series, geometric series,

More information