WAITING-TIME DISTRIBUTION FOR THE r th OCCURRENCE OF A COMPOUND PATTERN IN HIGHER-ORDER MARKOVIAN SEQUENCES


Donald E. K. Martin¹ and John A. D. Aston²

¹ Mathematics Department, Howard University, Washington, DC, USA, and Statistical Research Division, US Bureau of the Census, Washington, DC, USA
² Institute of Statistical Science, Academia Sinica, Taipei, Taiwan, R.O.C.

Abstract. We use finite Markov chain imbedding to compute the waiting-time distribution for the r-th occurrence of a compound pattern in multi-state trials that are Markovian of a general order. The distribution of the number of occurrences of a pattern in n trials is obtained as a by-product. Results are given for both the overlapping and non-overlapping ways of counting patterns. The method is shown to be applicable to very general cases, including high orders of Markovian dependence and long patterns.

Key words and phrases: Multi-state trials, higher-order Markovian sequences, compound patterns, distribution of the number of runs and patterns, waiting-time distributions, finite Markov chain imbedding technique.

1. Introduction

In recent years, exact distribution theory for runs and patterns has been studied extensively and applied in the areas of statistics and applied probability. This research has been stimulated by new techniques such as conditional probability generating functions (Ebneshahrashoob and Sobel (1990), Uchida (1998), and Han and Hirano (2003a, 2003b)) and finite Markov chain imbedding (Fu (1986, 1996), Fu and Koutras (1994), Antzoulakos (2001), and Lou (2003)). Typical in the study of distributions associated with runs and patterns is the assumption that the trials are independent or form a Markov chain, a first-order Markovian sequence (Fu and Chang, 2002).

In this paper we consider waiting-time distributions for the r-th occurrence of a compound pattern in multi-state trials, assuming Markovian dependence of a general order. The distribution of the number of occurrences of a pattern in n trials is obtained as a by-product. Using the finite Markov chain imbedding technique, results are given for both the overlapping and non-overlapping methods of counting patterns.

The generality of the derived results is far-reaching. Included as special cases of the waiting-time distributions are the geometric and negative binomial distributions, more generally the geometric and negative binomial distributions of order k, and sooner waiting-time distributions. The distribution of the number of occurrences of a compound pattern includes as special cases the binomial distribution and the binomial distribution of order k. The theory has been applied in diverse areas, such as DNA sequence analysis, reliability, quality control, and start-up demonstration testing. We refer the reader to the books of Balakrishnan and Koutras (2002) and Fu and Lou (2003) for further discussion of the development of the theory and these and other applications.

Let X_{-m+1}, X_{-m+2}, ..., X_0, X_1, X_2, ... be a multi-state m-th-order Markovian sequence (meaning that the outcome of each trial depends on the outcomes of the m directly preceding trials, and only on these), with state space S = {b_1, ..., b_s} (s ≥ 2). For m ≥ 1, the associated initial probabilities and time-invariant transition probabilities of the sequence are denoted respectively by

  π(x_{-m+1}, ..., x_0) = P(X_{-m+1} = x_{-m+1}, ..., X_0 = x_0),
  p(x_t | x_{t-m}, ..., x_{t-1}) = P(X_t = x_t | X_{t-m} = x_{t-m}, ..., X_{t-1} = x_{t-1}).

We call Λ_i a simple pattern if it is composed of a specified sequence of k_i symbols b_{i1}, ..., b_{ik_i} ∈ S, where the symbols in the pattern are allowed to be repeated. A compound pattern is the union of η simple patterns, i.e. Λ = ∪_{i=1}^{η} Λ_i, where η is fixed, the distinct simple patterns Λ_i are of lengths k_i, i = 1, 2, ..., η, and Λ_a ∪ Λ_b denotes the occurrence of either pattern Λ_a or pattern Λ_b. Finally, denote the waiting time until the r-th occurrence of Λ in X_1, X_2, ... by W(r), and the number of occurrences of Λ in X_1, X_2, ..., X_n by G(n). Notice that P(W(r) ≤ n) = P(G(n) ≥ r), and thus the distribution of G(n) can be obtained if we have that of W(r) for various values of r.

Unlike the overlapping case, with non-overlapping counting we re-start once a pattern has occurred. As an example, with Λ = {123} ∪ {231}, the sequence 23123123123 has three non-overlapping occurrences of Λ, since we start over after each completed pattern, but there are six overlapping occurrences of Λ.

In the next section, we give the main steps of our adaptation of finite Markov chain imbedding to compute distributions associated with W(r) under both non-overlapping and overlapping counting. Numerical examples are given in Section 3. The examples show that our approach may be used in cases of long patterns and high orders of Markovian dependence. The final section is a summary.

To compute the waiting-time distribution for the r-th occurrence of Λ, we associate with the sequence X_{-m+1}, ..., X_0, X_1, X_2, ... a Markov chain {Y_t, t = 0, 1, ...} such that

  P(W(r) ≤ n) = P(Y_n ∈ A | ψ),

where the set A is a set of absorbing states for {Y_t} that corresponds to the r-th occurrence of the compound pattern Λ, and ψ denotes the initial distribution of Y_0.
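The two counting conventions can be checked with a short, self-contained scanning routine. This sketch is our own illustration (it is not the algorithm developed below, which avoids scanning by imbedding the counting into a Markov chain):

```python
def count_occurrences(seq, patterns, overlapping):
    """Count occurrences of a compound pattern (a set of simple patterns)
    in seq, under overlapping or non-overlapping counting."""
    count = 0
    start = 0  # earliest index a counted pattern may begin
    for end in range(1, len(seq) + 1):
        for p in patterns:
            if end - len(p) >= start and seq[end - len(p):end] == p:
                count += 1
                if not overlapping:
                    start = end  # re-start: counted symbols cannot be reused
                break
    return count

# Three non-overlapping but six overlapping occurrences of {123} U {231}:
print(count_occurrences("23123123123", ["123", "231"], overlapping=False))  # 3
print(count_occurrences("23123123123", ["123", "231"], overlapping=True))   # 6
```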
For a given value of m, the main steps of the computation are: (1) determining an appropriate state space Γ for the Markov chain {Y_t}; (2) determining the associated transition probabilities; (3) obtaining the row vector ψ that contains the initial probabilities for {Y_t}, and multiplying it by the transition probability matrices T_{Y,t}, t = 1, 2, ..., n, which give one-step transition probabilities between states in Γ from time t−1 to time t. The probability P(W(r) ≤ n) may then be obtained by multiplying by an appropriate vector that sums the probabilities of being absorbed in the absorbing states. A corollary of this is that the probability of being absorbed by each of the individual simple patterns accounting for the r-th occurrence of Λ may also be found.
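Step (3) is simply a sequence of vector-matrix products. A minimal numerical sketch, with a hypothetical time-homogeneous three-state chain standing in for the imbedded chain constructed in Section 2:

```python
import numpy as np

# Hypothetical illustration: states 0 and 1 are transient, state 2 is absorbing.
T = np.array([[0.6, 0.3, 0.1],
              [0.4, 0.3, 0.3],
              [0.0, 0.0, 1.0]])
psi = np.array([0.5, 0.5, 0.0])  # initial distribution of Y_0
U = np.array([0.0, 0.0, 1.0])    # ones in the positions of the absorbing states

def waiting_time_cdf(psi, T, U, n):
    """P(W <= n) = psi T^n U for a time-homogeneous imbedded chain."""
    v = psi.copy()
    for _ in range(n):
        v = v @ T  # propagate the state probabilities one step
    return float(v @ U)
```

As expected of a waiting-time distribution function, the returned probabilities are non-decreasing in n and bounded by one.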

To aid with the exposition, we use the compound pattern Λ = {11111} ∪ {1011} ∪ {00} as a running example in what follows. Here, there are η = 3 simple patterns with s = 2 possible symbols (0 and 1) for each observed X_t, and k_1 = 5, k_2 = 4, and k_3 = 2 are the pattern lengths.

2. Computation of the waiting-time distribution

In this section we give details of the steps used in the computations. Initially, non-overlapping counting is used; we then go back and convey the necessary changes when patterns can overlap.

2.1 Determination of the state space Γ

In the state space Γ we need information on the last m observations, since the sequence is m-th-order Markovian, and also on current progress into one of the simple patterns. Ending blocks will help in the latter regard. Ending blocks (Fu, 1996) of a simple pattern Λ_i = {b_{i1} b_{i2} ... b_{ik_i}} are sub-patterns of the form {b_{i1} ... b_{iq}}, where q can be any of the integers 1, 2, ..., k_i − 1. The ending blocks of a compound pattern are the union of the sets of ending blocks for the simple patterns of which it is comprised, along with the symbol ∅ to indicate that none of the other ending blocks are currently active, if necessary. As examples, for the compound pattern Λ given above, the set of ending blocks of {11111} is {1, 11, 111, 1111}, for {1011} the set of ending blocks is {1, 10, 101}, and {0} is the set of ending blocks for {00}. The ending blocks of Λ are given by {∅, 0, 1, 10, 11, 101, 111, 1111}.

We also introduce what we will call finishing blocks. Finishing blocks of Λ_i = {b_{i1} b_{i2} ... b_{ik_i}} are sub-patterns of the form {b_{iς} ... b_{ik_i}}, where ς can be any of the integers 1, 2, ..., k_i (and thus Λ_i is a finishing block of itself). Note that whereas ending blocks always start at the beginning of the simple pattern but end at some point before the last symbol, finishing blocks may start at any point but always end with the last symbol. As an example, the finishing blocks of {1011} are {1, 11, 011, 1011}.

To represent the relevant information, we allow the non-absorbing states of Γ to have three components: (i) an m-tuple of the s symbols, which gives the values of the last m
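Ending and finishing blocks are simple prefix and suffix computations. A sketch, using the example compound pattern above (the empty string stands for the symbol ∅):

```python
def ending_blocks(pattern):
    # proper prefixes: progress into the pattern, short of completion
    return {pattern[:q] for q in range(1, len(pattern))}

def finishing_blocks(pattern):
    # suffixes, including the pattern itself: places the pattern could end
    return {pattern[s:] for s in range(len(pattern))}

simple_patterns = ["11111", "1011", "00"]
# Ending blocks of the compound pattern: the union over its simple patterns,
# plus the empty block "" for "no ending block currently active".
compound_ending = {""}.union(*(ending_blocks(p) for p in simple_patterns))
```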

observations; (ii) an ending block to denote present progress into a simple pattern; (iii) the number of simple-pattern occurrences up to that point. This will be represented as a vector triplet. We also include in the state space Γ absorbing states that correspond to the r-th occurrence of Λ taking place with the occurrence of specific simple patterns. Alternatively, we could choose to have only one absorbing state denoting that Λ occurred r times.

To automate the determination of the state space Γ, we carry out the following steps.

(a) We begin with all of the s^m ordered m-tuples of the s symbols as the first component of the vector triplets.

(b) Next we determine the possible ending blocks that can be paired with our first elements. With non-overlapping counting, we need to be cognizant of the fact that simple patterns may be completed in the middle or at the end of an m-tuple. If so, counting for patterns re-starts, and thus the active ending block could change. To account for this, for each m-tuple, after associating with the m-tuple the ending block obtained by examining it in its entirety, we also check after each symbol (and thus a total of m times) to determine if the string up to and including the symbol is a finishing block of a pattern, i.e. if a pattern could end there. We also check to see if a pattern of length less than m actually does end there. If a string is a finishing block, we determine a corresponding ending block by examining the remainder of the m-tuple.

As examples, consider first the 3-tuple 000 and the compound pattern Λ with m = 3, as before. We check the location after its first zero. A simple pattern (00) could end there, and there are two more zeroes after this spot, which is a simple pattern, and thus the effective ending block is ∅ (since the last two zeroes are counted as a pattern, there will be no active ending block with non-overlapping counting). We then check after the second zero of 000. A pattern obviously can end there, as 00 is a simple pattern. After 00, there is one more symbol, 0, which is taken as the active ending block. Finally, the whole 3-tuple has a simple pattern ending at its end, but the 3-tuple, ending-block pair (000, ∅) has already been included.

Consider now as an example another 3-tuple, 111. A pattern could end after its first, second, or third 1. If a pattern ends after the first 1, the effective ending block is 11; if one ends after the second 1, the effective ending block is 1; and if a pattern ends after the third 1, the effective ending block is ∅. Each of these three choices is entered as the second component of a state with first component 111. In addition, we must add ending blocks of lengths greater than or equal to m with their corresponding m-tuple. Thus, for Λ,

∅, 1, 11, 111, and 1111 will all be second components paired with first component 111 in states of Γ.

For each m-tuple we check whether ∅ is included as the associated second component when the procedure above is carried out. If not, it is added to enable initialization of the chain. For example, in the above algorithm, the 3-tuple 010 generates only the pair (010, 10), and in order to carry out the initialization we add the pair (010, ∅). The states with ending-block components added in this manner are dropped after the initialization (see Sub-section 2.3).

Further states may need to be added to facilitate the initialization of the Markov chain {Y_t}. These states are part of the initialization only and can be reached only from states involved in the initialization of the chain. They are transient states for times t < m, and are deleted after time m. An example of such a state is (010, 0), which can be reached from the states (001, ∅) and (101, ∅) in the initialization, but cannot be reached at any time after t = 1.

(c) To the pairs defined above we add a third value ρ (taking values 0, 1, ..., r − 1) that indicates the number of observed simple patterns up to time t. Thus if steps (a) and (b) above generate h pairs, then we will now have r·h states represented by vector triplets.

(d) Add η absorbing states corresponding to the r-th pattern occurrence taking place with the occurrence of each of the simple patterns, or alternatively one state to simply indicate that absorption has taken place.

For Λ with m = 3 and r = 2, and non-overlapping counting, the state space Γ consists of 47 states. Of these, only 19 will remain in the long term, meaning that they have positive probability of occurring for all t < ∞ (albeit possibly very close to zero, as opposed to actually identically zero for all t > τ for some τ).
2.2 Computing the transition probabilities

The transition probability matrices T_{Y,t}, t = 1, ..., n associated with the states of Γ are derived by determining where the various states go at the next time period, and then noting the associated conditional probability of the transition, which is based on the last m observations. Each of the three components of the state, and also the counting technique, must be considered. As examples, for Λ with m = 3, r = 2, and non-overlapping counting, if the

present state is (101, 1, 1), a one on the next observation leaves the Markov chain {Y_t} in state (011, 11, 1) with transition probability p(1 | 101), whereas with a zero the next state is (010, 10, 1), with transition probability p(0 | 101). If the present state is (110, 10, 1), a one on the next observation gives the state (101, 101, 1) with transition probability p(1 | 110), whereas a zero leaves the chain in the absorbing state (00, 2), with transition probability p(0 | 110).

2.3 Initial probabilities for {Y_t} and the computation of P(W(r) ≤ n)

The computation of the initial probability vector ψ is straightforward from the definitions above. The initial probabilities π(x_{-m+1}, ..., x_0) for {X_t} are merely associated with the s^m states of Γ corresponding to zero occurrences with ending block ∅. All other states are assigned probability zero.

The Markov chain as defined is not computationally efficient, as some of the initial states soon become unnecessary (by which we mean that there exists a τ such that the probability of entering the state is zero for all time points greater than τ). Once a state becomes unnecessary, it is deleted, making the computation more efficient. For example, for Λ with m = 3, for times t > 3 we only need the following 24 states: {(010, 10, 0), (011, 11, 0), (111, 111, 0), (111, 1111, 0), (101, 101, 0), (110, 10, 0), (000, 0, 1), (001, 1, 1), (010, 10, 1), (011, ∅, 1), (011, 11, 1), (100, ∅, 1), (101, 1, 1), (101, 101, 1), (110, 10, 1), (110, 0, 1), (111, 1, 1), (111, 11, 1), (111, 111, 1), (111, 1111, 1), (111, ∅, 1), (11111, 2), (1011, 2), (00, 2)}. For t > 7 the states (011, 11, 0), (111, 111, 0), (111, 1111, 0), (110, 10, 0), and (111, ∅, 1) are also unnecessary, due to the nature of the simple patterns of Λ.
Note that we only delete states that have zero probability of occurring, and thus at all time points we retain the correspondence between ρ occurrences of Λ, ρ = 0, 1, ..., r − 1, and states of Γ with ρ as the third component, and also between r or more occurrences of Λ and the absorbing states A of Γ. The waiting-time probability is then computed as

(2.1)  P(W(r) ≤ n) = ψ (∏_{t=1}^{n} T_{Y,t}) U(A),

where U(A) is a column vector with ones in the positions corresponding to the absorbing states of Γ, and zeroes elsewhere (see Sub-section 2.6).

2.4 Computation of distributions under overlapping counting

The difference between the overlapping and non-overlapping methods of counting lies in what happens when a simple pattern occurs. Whereas with non-overlapping counting we start over, when patterns can overlap, we do not.

For ρ = 0, the states of Γ will be the same regardless of whether overlapping or non-overlapping counting is used. However, for overlapping counting, when ρ > 0, because we don't start over when a pattern occurs, we no longer need to consider the possibility that a pattern could end in the middle or at the end of an m-tuple, eliminating that step from the process of determining states. Thus, for Λ = {11111} ∪ {1011} ∪ {00} and m = 3, for example, we don't need the states (011, ∅, 1), (101, 1, 1), (110, 0, 1), (111, ∅, 1), (111, 1, 1), and (111, 11, 1) for times t > m, since they were obtained by assuming that a pattern ended in the middle of an m-tuple. For the m-tuples of states that are retained after time m, we also must remember to do overlapping counting within the m-tuple; thus for Λ, (000, 0, 1) is not needed in the state space, because 000 contains two overlapping patterns, and not one. Also, instead of the state (100, ∅, 1) of non-overlapping counting, we use (100, 0, 1) in the overlapping case (after the pattern 00, the second 0 serves as the active ending block). The resulting state space after time m for Λ with m = 3 and overlapping counting is Γ = {(010, 10, 0), (011, 11, 0), (111, 111, 0), (111, 1111, 0), (101, 101, 0), (110, 10, 0), (001, 1, 1), (010, 10, 1), (011, 11, 1), (100, 0, 1), (101, 101, 1), (110, 10, 1), (111, 111, 1), (111, 1111, 1), (11111, 2), (1011, 2), (00, 2)}. Notice that Γ contains 17 states, as opposed to 24 with non-overlapping counting.
The changes in the state space when using overlapping instead of non-overlapping counting lead to corresponding changes in the transition probability matrix.

To further compare the two counting methods, we consider a compound pattern for which one simple pattern lies totally within the other, Λ₁ = {1010} ∪ {10}. With non-overlapping counting, we would never count the pattern 1010, because each occurrence of 10 causes the counting process to start from scratch. However, with overlapping counting this is not the case, and in fact, whenever 1010 occurs, so does 10.

Consider the determination of the states of Γ (assuming that m = 3) for Λ₁ when the two methods of counting are used. The states used for the initialization of the Markov chains are the same in either case. After time m, the ending blocks may vary, depending on the counting procedure used. With non-overlapping counting, the procedure of checking after each symbol to determine whether the string up to that point is a finishing block yields no states in the state space that aren't entered simply by examining complete 3-tuples. The resulting state space in the non-overlapping case after time m is: Γ = {(000, ∅, 0), (001, 1, 0), (011, 1, 0), (111, 1, 0), (000, ∅, 1), (001, 1, 1), (011, 1, 1), (111, 1, 1), (010, ∅, 1), (100, ∅, 1), (101, 1, 1), (110, ∅, 1), (000, ∅, 2), (001, 1, 2), (011, 1, 2), (111, 1, 2), (010, ∅, 2), (100, ∅, 2), (101, 1, 2), (110, ∅, 2), (10, 3)}. Notice that only four states are needed for ρ = 0, because the pattern 10 has occurred within the other four 3-tuples. With overlapping counting, because we don't restart with the occurrence of a simple pattern, for ρ = 1, 2, ..., r − 1, (010, ∅, ρ) is replaced by (010, 10, ρ), (101, 1, ρ) is replaced by (101, 101, ρ), and (110, ∅, ρ) is replaced by (110, 10, ρ).
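The nested-pattern behaviour can be checked directly with per-position counting (an illustrative routine of our own): under overlapping counting, every completion of every simple pattern is recorded, so 1010 and 10 can complete at the same position, and the occurrence count can jump by two.

```python
def overlapping_counts(seq, patterns):
    """Record every completion of every simple pattern (overlapping counting).
    Simultaneous completions at the same position all count."""
    counts = {p: 0 for p in patterns}
    for end in range(1, len(seq) + 1):
        for p in patterns:
            if end >= len(p) and seq[end - len(p):end] == p:
                counts[p] += 1
    return counts

# In "1010", the pattern 10 completes at positions 2 and 4, and 1010 also
# completes at position 4, so the occurrence count jumps by two there.
print(overlapping_counts("1010", ["1010", "10"]))  # {'1010': 1, '10': 2}
```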
Note that whereas with non-overlapping counting the r-th simple-pattern occurrence may be achieved in only one manner, with the r-th occurrence of Λ₁ under overlapping counting there are three possibilities: the r-th occurrence takes place through the occurrence of 10 alone; the (r−1)-st and r-th occurrences happen with 1010 and 10 occurring simultaneously; or the r-th and (r+1)-st simple-pattern occurrences occur with 1010 and 10 occurring simultaneously. Thus we use three absorbing states in this case, for a total of 23 states. The possibility of ρ increasing by two also must be reflected in the transition probabilities. The waiting-time probability is again computed using (2.1).

2.5 Computing Limiting Absorption Probabilities

The probability that a particular simple pattern accounts for the r-th occurrence of Λ is immediate from this setup: instead of summing over the absorbing states, we just select the absorbing state of the simple pattern of interest. The limiting probability as n → ∞ of which simple pattern causes absorption can also be calculated very simply from the transition matrix T_{Y,1} and the initial distribution ψ of Y_0. First, partition the transition matrix T_{Y,1} as

  T_{Y,1} = [ Q  R ]
            [ 0  I_η ],

where Q is the (λ−η) × (λ−η) matrix for transitions among transient states (λ is the number of states in Γ), R is the (λ−η) × η matrix holding transition probabilities for transitions from the transient states into the absorbing states, 0 is an η × (λ−η) matrix of zeroes, and I_η is an η × η identity matrix. The limiting absorption probabilities a are obtained as (Resnick, 1994)

  a = ψ [(I_{λ−η} − Q)^{-1}] R.

2.6 Theoretical justification

The theoretical justification for carrying out the computations in the manner described above is contained in the following theorem.

THEOREM 2.1 For an m-th-order Markov chain {X_t}, the waiting-time distribution for the r-th occurrence of a compound pattern Λ may be computed as

  P(W(r) ≤ n) = ψ (∏_{t=1}^{n} T_{Y,t}) U(A),

where ψ is a row vector holding the initial distribution for Y_0, and U(A) = (0, ..., 0, 1, ..., 1)′ is the column vector with ones at its end in locations corresponding to the absorbing states A of Γ.

PROOF The number of occurrences G(n) has been shown to be finite Markov chain imbeddable in the sense of Fu and Koutras (1994) (see, for example, Fu (1996)). Our
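Under this partition, the limiting absorption probabilities reduce to a single linear solve. A sketch with a hypothetical four-state chain (two transient and two absorbing states, so λ = 4 and η = 2):

```python
import numpy as np

# Hypothetical transition matrix in the partitioned form [[Q, R], [0, I]]:
# states 0,1 transient; states 2,3 absorbing.
T = np.array([[0.5, 0.2, 0.2, 0.1],
              [0.3, 0.4, 0.1, 0.2],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
psi = np.array([1.0, 0.0])  # initial distribution over the transient states

Q = T[:2, :2]  # transient -> transient
R = T[:2, 2:]  # transient -> absorbing

# a = psi (I - Q)^{-1} R, computed with a linear solve rather than an explicit inverse
a = psi @ np.linalg.solve(np.eye(2) - Q, R)
```

Since absorption is certain in this example, the entries of a sum to one, one entry per absorbing state (i.e. per simple pattern).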

construction is similar to that of Fu (1996), the only exception being the inclusion of m-tuples, because the sequence is assumed to be m-th-order Markovian. Thus, as constructed, we can define the partition on the state space Γ such that C_ρ = {set of triplets with ρ as the third component}, ρ = 0, 1, ..., r − 1, and C_r = A. By the Fu-Koutras Theorem (1994), P(G(n) = ρ) = P(Y_n ∈ C_ρ | ψ), ρ = 0, 1, ..., r − 1, and P(G(n) ≥ r) = P(Y_n ∈ A | ψ). But since P(W(r) ≤ n) = P(G(n) ≥ r), we have

  P(W(r) ≤ n) = P(G(n) ≥ r) = P(Y_n ∈ A | ψ) = ψ (∏_{t=1}^{n} T_{Y,t}) U(A).

3. Numerical Examples

We wrote a MATLAB program¹ to compute probabilities as described in Section 2 for general m, s, r, and n. Here we present computed probabilities for several examples. The first is the compound pattern Λ = {11111} ∪ {1011} ∪ {00} used in the last section to help illustrate the computational method. Table 1 lists probabilities P(W(2) ≤ n) for n = 2, ..., 25 and orders of dependence m = 1, 2, and 3 for non-overlapping counting, and Table 2 gives the corresponding probabilities when overlapping counting is used. The probabilities converge to one very quickly, and thus there is no need to show them for larger values of n. The initial distributions and transition probabilities that were used in Tables 1 and 2, and also for the third-order models of Figures 1-3 discussed below, are listed in Table 3. With transition probabilities and initial distributions so defined, the Markovian sequences of the various orders are stationary sequences, where the initial distributions correspond to limiting distributions. Actually, we computed the transition probabilities given through a set of orthogonal parameters that was used in Martin (2000) to model binary stationary Markovian sequences. Figures 1 and 2 depict the probabilities P(W(r) ≤ n) for r = 1, 2, ..., 10 using non-overlapping and overlapping counting, respectively.
Figure 3 depicts probabilities P(W(3) ≤ n) for the compound pattern Λ₁ = {1010} ∪ {10}, model order m = 3, and both methods of counting patterns. By the nature of the counting procedures, P(W(r) ≤ n) will always be greater when overlapping counting is used. However, for the absorption probabilities of the individual simple patterns this is not necessarily the case, as some patterns become more likely to occur when overlapping

counting is used, and, depending on the value of r, a finishing state will become more or less likely when the counting is changed.

The next example is the compound pattern Λ₂ = {ATAT} ∪ {TATA} ∪ {CGCG}, with model order m = 6 and state space S = {A, C, G, T} for the individual observations, with A (adenine), C (cytosine), G (guanine), and T (thymine) representing the polymerized nucleotides (bases) of which DNA molecules are formed. In Figure 4, probabilities P(W(3) ≤ n) are represented for values n ≤ 1,000. The initial distribution used for the example is a discrete uniform distribution over the 4^6 = 4096 possible initial values. The transition probabilities p(x_t | x_{t-6}, ..., x_{t-1}) were generated using a random number generator, with the limitation that the sum over x_t of p(x_t | x_{t-6}, ..., x_{t-1}) is one. Because of the large number of states in Γ (in excess of 24,000 for both types of counting), it is not feasible to list the transition probabilities that were used in this and the next example (which has approximately 2,500 states). This example shows that our method is able to handle compound patterns in Markov sequences of higher orders.

Probabilities P(W(5) ≤ n) for the final example are depicted in Figure 5. In this case Λ₃ = Λ_a ∪ Λ_b ∪ Λ_c, the order of Markovian dependence is m = 5, and the state space for the individual observations is S = {1, 2, 3}. The initial distribution used for the example is again a discrete uniform distribution, in this case over the 3^5 = 243 possible initial values. The transition probabilities p(x_t | x_{t-5}, ..., x_{t-1}) were again generated using a random number generator. This example illustrates the capability of the method to deal with long patterns, which require large values of n for probabilities close to one to be observed. We note that the computer run time for the examples was minimal. Examples 1 and 2 ran almost instantly.
The third example took about 4 minutes (most spent on the setup of the states), and the fourth example about 2 minutes (most spent on the iterative multiplications), on a Pentium IV with 1 Gb of RAM.

¹ Available from the authors upon request.

4. Summary

In this paper we have given a method of computing waiting-time probabilities for the r-th occurrence of a compound pattern in Markovian sequences of a general order. In

previous work, explicit solutions were displayed for independent trials and first-order Markovian sequences. Through examples we show that the method presented may be used in cases of both high orders of Markovian dependence and long patterns. Run time for a computer program implementing our algorithm is reasonable.

References

Antzoulakos, D. L. (2001). Waiting time problems in a sequence of multi-state trials. Journal of Applied Probability, 38.

Balakrishnan, N. and Koutras, M. V. (2002). Runs and Scans with Applications. John Wiley & Sons Inc.: New York.

Fu, J. C. (1986). Reliability of consecutive-k-out-of-n: F systems with (k−1)-step Markov dependence. IEEE Transactions on Reliability, R-35.

Fu, J. C. (1996). Distribution theory of runs and patterns associated with a sequence of multi-state trials. Statistica Sinica, 6.

Fu, J. C. and Chang, Y. M. (2002). On probability generating functions for waiting time distributions of compound patterns in a sequence of multi-state trials. Journal of Applied Probability, 39, 70-80.

Fu, J. C. and Koutras, M. V. (1994). Distribution theory of runs: A Markov chain approach. Journal of the American Statistical Association, 89.

Fu, J. C. and Lou, W. Y. W. (2003). Distributional Theory of Runs and Scans and Its Applications: A Finite Markov Chain Approach. World Scientific: Singapore.

Han, Q. and Hirano, K. (2003a). Sooner and later waiting time problems for patterns in Markov dependent trials. Journal of Applied Probability, 40(1).

Han, Q. and Hirano, K. (2003b). Waiting time problem for an almost perfect match. Statistics & Probability Letters, 65(1).

Lou, W. Y. W. (2003). The exact distribution of the k-tuple statistic for sequence homology. Statistics & Probability Letters, 61.

Martin, D. E. K. (2000). An algorithm to compute the probability of a run in binary fourth-order Markovian trials. Computers & Operations Research.

Resnick, S. I. (1994). Adventures in Stochastic Processes. Birkhauser: Boston.

Uchida, M. (1998). On number of success runs of specified length in a higher-order two-state Markov chain. Annals of the Institute of Statistical Mathematics, 50(3).

Addresses for Correspondence:

Donald E. K. Martin
852 Quill Point Drive, Bowie, Maryland, USA
donald.e.martin@census.gov

John Aston
Institute of Statistical Science, Academia Sinica, Academia Road, Sec. 2, Taipei 115, Taiwan, ROC
jaston@stat.sinica.edu.tw

Table 1. Computed probabilities P(W(2) ≤ n) using non-overlapping counting for orders of dependence m = 1, 2, and 3. The transition probabilities and initial distributions that were used are given in Table 3.

Table 2. Computed probabilities P(W(2) ≤ n) using overlapping counting for orders of dependence m = 1, 2, and 3. The transition probabilities and initial distributions that were used are given in Table 3.

Table 3. Transition probabilities and initial distributions that were used for Tables 1 and 2. The third-order transition and initial probabilities were also used in Figures 1-3.

Marginal: p(1) = 1/2.

First order:
  p(1|1) = 3/5,  p(1|0) = 2/5;
  π(1) = 1/2,  π(0) = 1/2.

Second order:
  p(1|1,1) = 7/10,  p(1|0,1) = 9/20,  p(1|1,0) = 13/40,  p(1|0,0) = 9/20;
  π(1,1) = 3/10,  π(0,1) = 1/5,  π(1,0) = 1/5,  π(0,0) = 3/10.

Third order:
  p(1|1,1,1) = 3/4,   p(1|0,1,1) = 7/12,   p(1|1,0,1) = 17/52,   p(1|0,0,1) = 55/108,
  p(1|1,1,0) = 19/60, p(1|0,1,0) = 73/220, p(1|1,0,0) = 331/540, p(1|0,0,0) = 209/660;
  π(1,1,1) = 21/100,  π(0,1,1) = 9/100,  π(1,0,1) = 13/200,  π(0,0,1) = 27/200,
  π(1,1,0) = 9/100,   π(0,1,0) = 11/100, π(1,0,0) = 27/200,  π(0,0,0) = 33/200.

Figure 1. Probabilities P(W(3) ≤ n) for the compound pattern Λ = {11111} ∪ {1011} ∪ {00}, with model order m = 3 and state space S = {0, 1} for the individual observations. Probabilities are computed under non-overlapping counting. Transition probabilities and initial distributions for the third-order model are given in Table 3.

Figure 2. Probabilities P(W(3) ≤ n) for the compound pattern Λ = {11111} ∪ {1011} ∪ {00}, with model order m = 3 and state space S = {0, 1} for the individual observations. Probabilities are computed under overlapping counting. Transition probabilities and initial distributions for the third-order model are given in Table 3.

Figure 3. Probabilities P(W(3) ≤ n) for the compound pattern Λ₁ = {1010} ∪ {10}, with model order m = 3 and state space S = {0, 1} for the individual observations. Probabilities are computed under both non-overlapping and overlapping counting. Transition probabilities and initial distributions for the third-order model are given in Table 3.

Figure 4. Probabilities P(W(3) ≤ n) for the compound pattern Λ₂ = {ATAT} ∪ {TATA} ∪ {CGCG}, with model order m = 6 and state space S = {A, C, G, T} for the individual observations. Probabilities are computed under both overlapping and non-overlapping counting. The transition probabilities were randomly generated, and a discrete uniform initial distribution was used. Limiting probabilities under non-overlapping and overlapping counting for the patterns {ATAT}, {TATA}, and {CGCG} are (0.316, 0.287, 0.397) and (0.325, 0.323, 0.352), respectively.

Figure 5. Probabilities P(W(5) ≤ n) for the compound pattern Λ₃ = Λ_a ∪ Λ_b ∪ Λ_c, with model order m = 5 and state space S = {1, 2, 3} for the individual observations. Probabilities are computed under both overlapping and non-overlapping counting. The transition probabilities were randomly generated, and a discrete uniform initial distribution was used. Limiting probabilities are (.841, .159) for non-overlapping counting (patterns Λ_a and Λ_b) and (.53, .95, .25, .199) for overlapping counting (patterns Λ_a, Λ_b, and Λ_c, with the fourth value representing that of Λ_a and Λ_c occurring at the same time and giving rise to the fifth and sixth occurrences; this can only happen when overlapping counting is used).


More information

Waiting Time Distributions for Pattern Occurrence in a Constrained Sequence

Waiting Time Distributions for Pattern Occurrence in a Constrained Sequence Waiting Time Distributions for Pattern Occurrence in a Constrained Sequence November 1, 2007 Valeri T. Stefanov Wojciech Szpankowski School of Mathematics and Statistics Department of Computer Science

More information

EXPLICIT DISTRIBUTIONAL RESULTS IN PATTERN FORMATION. By V. T. Stefanov 1 and A. G. Pakes University of Western Australia

EXPLICIT DISTRIBUTIONAL RESULTS IN PATTERN FORMATION. By V. T. Stefanov 1 and A. G. Pakes University of Western Australia The Annals of Applied Probability 1997, Vol. 7, No. 3, 666 678 EXPLICIT DISTRIBUTIONAL RESULTS IN PATTERN FORMATION By V. T. Stefanov 1 and A. G. Pakes University of Western Australia A new and unified

More information

BINOMIAL DISTRIBUTION

BINOMIAL DISTRIBUTION BINOMIAL DISTRIBUTION The binomial distribution is a particular type of discrete pmf. It describes random variables which satisfy the following conditions: 1 You perform n identical experiments (called

More information

Name of the Student: Problems on Discrete & Continuous R.Vs

Name of the Student: Problems on Discrete & Continuous R.Vs Engineering Mathematics 08 SUBJECT NAME : Probability & Random Processes SUBJECT CODE : MA645 MATERIAL NAME : University Questions REGULATION : R03 UPDATED ON : November 07 (Upto N/D 07 Q.P) (Scan the

More information

Central Limit Theorem Approximations for the Number of Runs in Markov-Dependent Binary Sequences

Central Limit Theorem Approximations for the Number of Runs in Markov-Dependent Binary Sequences Central Limit Theorem Approximations for the Number of Runs in Markov-Dependent Binary Sequences George C. Mytalas, Michael A. Zazanis Department of Statistics, Athens University of Economics and Business,

More information

TOPOLOGICAL COMPLEXITY OF 2-TORSION LENS SPACES AND ku-(co)homology

TOPOLOGICAL COMPLEXITY OF 2-TORSION LENS SPACES AND ku-(co)homology TOPOLOGICAL COMPLEXITY OF 2-TORSION LENS SPACES AND ku-(co)homology DONALD M. DAVIS Abstract. We use ku-cohomology to determine lower bounds for the topological complexity of mod-2 e lens spaces. In the

More information

Sample Spaces, Random Variables

Sample Spaces, Random Variables Sample Spaces, Random Variables Moulinath Banerjee University of Michigan August 3, 22 Probabilities In talking about probabilities, the fundamental object is Ω, the sample space. (elements) in Ω are denoted

More information

The Distribution of Mixing Times in Markov Chains

The Distribution of Mixing Times in Markov Chains The Distribution of Mixing Times in Markov Chains Jeffrey J. Hunter School of Computing & Mathematical Sciences, Auckland University of Technology, Auckland, New Zealand December 2010 Abstract The distribution

More information

21 Markov Decision Processes

21 Markov Decision Processes 2 Markov Decision Processes Chapter 6 introduced Markov chains and their analysis. Most of the chapter was devoted to discrete time Markov chains, i.e., Markov chains that are observed only at discrete

More information

STOCHASTIC PROCESSES Basic notions

STOCHASTIC PROCESSES Basic notions J. Virtamo 38.3143 Queueing Theory / Stochastic processes 1 STOCHASTIC PROCESSES Basic notions Often the systems we consider evolve in time and we are interested in their dynamic behaviour, usually involving

More information

Selecting Efficient Correlated Equilibria Through Distributed Learning. Jason R. Marden

Selecting Efficient Correlated Equilibria Through Distributed Learning. Jason R. Marden 1 Selecting Efficient Correlated Equilibria Through Distributed Learning Jason R. Marden Abstract A learning rule is completely uncoupled if each player s behavior is conditioned only on his own realized

More information

Lecture 9 Classification of States

Lecture 9 Classification of States Lecture 9: Classification of States of 27 Course: M32K Intro to Stochastic Processes Term: Fall 204 Instructor: Gordan Zitkovic Lecture 9 Classification of States There will be a lot of definitions and

More information

Stochastic Processes

Stochastic Processes Stochastic Processes 8.445 MIT, fall 20 Mid Term Exam Solutions October 27, 20 Your Name: Alberto De Sole Exercise Max Grade Grade 5 5 2 5 5 3 5 5 4 5 5 5 5 5 6 5 5 Total 30 30 Problem :. True / False

More information

CUMULATIVE DISTRIBUTION FUNCTION OF MARKOV ORDER 2 GEOMETRIC DISTRIBUTION OF ORDER K

CUMULATIVE DISTRIBUTION FUNCTION OF MARKOV ORDER 2 GEOMETRIC DISTRIBUTION OF ORDER K FUNCTIONAL DIFFERENTIAL EQUATIONS VOLUME 20 2013, NO 1 2 PP. 129 137 CUMULATIVE DISTRIBUTION FUNCTION OF MARKOV ORDER 2 GEOMETRIC DISTRIBUTION OF ORDER K E. SHMERLING Abstract. Simple formulas for calculating

More information

17.1 Binary Codes Normal numbers we use are in base 10, which are called decimal numbers. Each digit can be 10 possible numbers: 0, 1, 2, 9.

17.1 Binary Codes Normal numbers we use are in base 10, which are called decimal numbers. Each digit can be 10 possible numbers: 0, 1, 2, 9. ( c ) E p s t e i n, C a r t e r, B o l l i n g e r, A u r i s p a C h a p t e r 17: I n f o r m a t i o n S c i e n c e P a g e 1 CHAPTER 17: Information Science 17.1 Binary Codes Normal numbers we use

More information

The Markov Chain Imbedding Technique

The Markov Chain Imbedding Technique The Markov Chain Imbedding Technique Review by Amărioarei Alexandru In this paper we will describe a method for computing exact distribution of runs and patterns in a sequence of discrete trial outcomes

More information

Markov Chains, Stochastic Processes, and Matrix Decompositions

Markov Chains, Stochastic Processes, and Matrix Decompositions Markov Chains, Stochastic Processes, and Matrix Decompositions 5 May 2014 Outline 1 Markov Chains Outline 1 Markov Chains 2 Introduction Perron-Frobenius Matrix Decompositions and Markov Chains Spectral

More information

Chapter 16 focused on decision making in the face of uncertainty about one future

Chapter 16 focused on decision making in the face of uncertainty about one future 9 C H A P T E R Markov Chains Chapter 6 focused on decision making in the face of uncertainty about one future event (learning the true state of nature). However, some decisions need to take into account

More information

Complex Numbers: Definition: A complex number is a number of the form: z = a + bi where a, b are real numbers and i is a symbol with the property: i

Complex Numbers: Definition: A complex number is a number of the form: z = a + bi where a, b are real numbers and i is a symbol with the property: i Complex Numbers: Definition: A complex number is a number of the form: z = a + bi where a, b are real numbers and i is a symbol with the property: i 2 = 1 Sometimes we like to think of i = 1 We can treat

More information

IEOR 6711: Professor Whitt. Introduction to Markov Chains

IEOR 6711: Professor Whitt. Introduction to Markov Chains IEOR 6711: Professor Whitt Introduction to Markov Chains 1. Markov Mouse: The Closed Maze We start by considering how to model a mouse moving around in a maze. The maze is a closed space containing nine

More information

Iterative Methods for Solving A x = b

Iterative Methods for Solving A x = b Iterative Methods for Solving A x = b A good (free) online source for iterative methods for solving A x = b is given in the description of a set of iterative solvers called templates found at netlib: http

More information

The Design Procedure. Output Equation Determination - Derive output equations from the state table

The Design Procedure. Output Equation Determination - Derive output equations from the state table The Design Procedure Specification Formulation - Obtain a state diagram or state table State Assignment - Assign binary codes to the states Flip-Flop Input Equation Determination - Select flipflop types

More information

Unsupervised Learning with Permuted Data

Unsupervised Learning with Permuted Data Unsupervised Learning with Permuted Data Sergey Kirshner skirshne@ics.uci.edu Sridevi Parise sparise@ics.uci.edu Padhraic Smyth smyth@ics.uci.edu School of Information and Computer Science, University

More information

Visually Identifying Potential Domains for Change Points in Generalized Bernoulli Processes: an Application to DNA Segmental Analysis

Visually Identifying Potential Domains for Change Points in Generalized Bernoulli Processes: an Application to DNA Segmental Analysis University of Wollongong Research Online Centre for Statistical & Survey Methodology Working Paper Series Faculty of Engineering and Information Sciences 2009 Visually Identifying Potential Domains for

More information

Lecture 20 : Markov Chains

Lecture 20 : Markov Chains CSCI 3560 Probability and Computing Instructor: Bogdan Chlebus Lecture 0 : Markov Chains We consider stochastic processes. A process represents a system that evolves through incremental changes called

More information

Lecture Notes: Markov chains

Lecture Notes: Markov chains Computational Genomics and Molecular Biology, Fall 5 Lecture Notes: Markov chains Dannie Durand At the beginning of the semester, we introduced two simple scoring functions for pairwise alignments: a similarity

More information

IMPLIED DISTRIBUTIONS IN MULTIPLE CHANGE POINT PROBLEMS

IMPLIED DISTRIBUTIONS IN MULTIPLE CHANGE POINT PROBLEMS IMPLIED DISTRIBUTIONS IN MULTIPLE CHANGE POINT PROBLEMS J. A. D. ASTON 1,2, J. Y. PENG 3 AND D. E. K. MARTIN 4 1 CENTRE FOR RESEARCH IN STATISTICAL METHODOLOGY, WARWICK UNIVERSITY 2 INSTITUTE OF STATISTICAL

More information

On monotonicity of expected values of some run-related distributions

On monotonicity of expected values of some run-related distributions Ann Inst Stat Math (2016) 68:1055 1072 DOI 10.1007/s10463-015-0525-x On monotonicity of expected values of some run-related distributions Sigeo Aki 1 Katuomi Hirano 2 Received: 9 May 2014 / Revised: 11

More information

Finding the Value of Information About a State Variable in a Markov Decision Process 1

Finding the Value of Information About a State Variable in a Markov Decision Process 1 05/25/04 1 Finding the Value of Information About a State Variable in a Markov Decision Process 1 Gilvan C. Souza The Robert H. Smith School of usiness, The University of Maryland, College Park, MD, 20742

More information

Performance of Round Robin Policies for Dynamic Multichannel Access

Performance of Round Robin Policies for Dynamic Multichannel Access Performance of Round Robin Policies for Dynamic Multichannel Access Changmian Wang, Bhaskar Krishnamachari, Qing Zhao and Geir E. Øien Norwegian University of Science and Technology, Norway, {changmia,

More information

Birth-death chain models (countable state)

Birth-death chain models (countable state) Countable State Birth-Death Chains and Branching Processes Tuesday, March 25, 2014 1:59 PM Homework 3 posted, due Friday, April 18. Birth-death chain models (countable state) S = We'll characterize the

More information

All About Numbers Definitions and Properties

All About Numbers Definitions and Properties All About Numbers Definitions and Properties Number is a numeral or group of numerals. In other words it is a word or symbol, or a combination of words or symbols, used in counting several things. Types

More information

6 Solving Queueing Models

6 Solving Queueing Models 6 Solving Queueing Models 6.1 Introduction In this note we look at the solution of systems of queues, starting with simple isolated queues. The benefits of using predefined, easily classified queues will

More information

Markov Processes Hamid R. Rabiee

Markov Processes Hamid R. Rabiee Markov Processes Hamid R. Rabiee Overview Markov Property Markov Chains Definition Stationary Property Paths in Markov Chains Classification of States Steady States in MCs. 2 Markov Property A discrete

More information

Binary consecutive covering arrays

Binary consecutive covering arrays Ann Inst Stat Math (2011) 63:559 584 DOI 10.1007/s10463-009-0240-6 Binary consecutive covering arrays A. P. Godbole M. V. Koutras F. S. Milienos Received: 25 June 2008 / Revised: 19 January 2009 / Published

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY Department of Electrical Engineering and Computer Science

MASSACHUSETTS INSTITUTE OF TECHNOLOGY Department of Electrical Engineering and Computer Science MASSACHUSETTS INSTITUTE OF TECHNOLOGY Department of Electrical Engineering and Computer Science 6.262 Discrete Stochastic Processes Midterm Quiz April 6, 2010 There are 5 questions, each with several parts.

More information

COUNCIL ROCK HIGH SCHOOL MATHEMATICS. A Note Guideline of Algebraic Concepts. Designed to assist students in A Summer Review of Algebra

COUNCIL ROCK HIGH SCHOOL MATHEMATICS. A Note Guideline of Algebraic Concepts. Designed to assist students in A Summer Review of Algebra COUNCIL ROCK HIGH SCHOOL MATHEMATICS A Note Guideline of Algebraic Concepts Designed to assist students in A Summer Review of Algebra [A teacher prepared compilation of the 7 Algebraic concepts deemed

More information

Counting Runs of Ones with Overlapping Parts in Binary Strings Ordered Linearly and Circularly

Counting Runs of Ones with Overlapping Parts in Binary Strings Ordered Linearly and Circularly International Journal of Statistics and Probability; Vol. 2, No. 3; 2013 ISSN 1927-7032 E-ISSN 1927-7040 Published by Canadian Center of Science and Education Counting Runs of Ones with Overlapping Parts

More information

Markov chains and the number of occurrences of a word in a sequence ( , 11.1,2,4,6)

Markov chains and the number of occurrences of a word in a sequence ( , 11.1,2,4,6) Markov chains and the number of occurrences of a word in a sequence (4.5 4.9,.,2,4,6) Prof. Tesler Math 283 Fall 208 Prof. Tesler Markov Chains Math 283 / Fall 208 / 44 Locating overlapping occurrences

More information

Session-Based Queueing Systems

Session-Based Queueing Systems Session-Based Queueing Systems Modelling, Simulation, and Approximation Jeroen Horters Supervisor VU: Sandjai Bhulai Executive Summary Companies often offer services that require multiple steps on the

More information

P 1.5 X 4.5 / X 2 and (iii) The smallest value of n for

P 1.5 X 4.5 / X 2 and (iii) The smallest value of n for DHANALAKSHMI COLLEGE OF ENEINEERING, CHENNAI DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING MA645 PROBABILITY AND RANDOM PROCESS UNIT I : RANDOM VARIABLES PART B (6 MARKS). A random variable X

More information

Scheduling Markovian PERT networks to maximize the net present value: new results

Scheduling Markovian PERT networks to maximize the net present value: new results Scheduling Markovian PERT networks to maximize the net present value: new results Hermans B, Leus R. KBI_1709 Scheduling Markovian PERT networks to maximize the net present value: New results Ben Hermans,a

More information

Cover Page. The handle holds various files of this Leiden University dissertation

Cover Page. The handle  holds various files of this Leiden University dissertation Cover Page The handle http://hdl.handle.net/1887/39637 holds various files of this Leiden University dissertation Author: Smit, Laurens Title: Steady-state analysis of large scale systems : the successive

More information

Randomized Algorithms

Randomized Algorithms Randomized Algorithms Prof. Tapio Elomaa tapio.elomaa@tut.fi Course Basics A new 4 credit unit course Part of Theoretical Computer Science courses at the Department of Mathematics There will be 4 hours

More information

INTRODUCTION TO MARKOV CHAINS AND MARKOV CHAIN MIXING

INTRODUCTION TO MARKOV CHAINS AND MARKOV CHAIN MIXING INTRODUCTION TO MARKOV CHAINS AND MARKOV CHAIN MIXING ERIC SHANG Abstract. This paper provides an introduction to Markov chains and their basic classifications and interesting properties. After establishing

More information

MATH 118 FINAL EXAM STUDY GUIDE

MATH 118 FINAL EXAM STUDY GUIDE MATH 118 FINAL EXAM STUDY GUIDE Recommendations: 1. Take the Final Practice Exam and take note of questions 2. Use this study guide as you take the tests and cross off what you know well 3. Take the Practice

More information

The probability of going from one state to another state on the next trial depends only on the present experiment and not on past history.

The probability of going from one state to another state on the next trial depends only on the present experiment and not on past history. c Dr Oksana Shatalov, Fall 2010 1 9.1: Markov Chains DEFINITION 1. Markov process, or Markov Chain, is an experiment consisting of a finite number of stages in which the outcomes and associated probabilities

More information

(b) What is the variance of the time until the second customer arrives, starting empty, assuming that we measure time in minutes?

(b) What is the variance of the time until the second customer arrives, starting empty, assuming that we measure time in minutes? IEOR 3106: Introduction to Operations Research: Stochastic Models Fall 2006, Professor Whitt SOLUTIONS to Final Exam Chapters 4-7 and 10 in Ross, Tuesday, December 19, 4:10pm-7:00pm Open Book: but only

More information

Markov Chains (Part 3)

Markov Chains (Part 3) Markov Chains (Part 3) State Classification Markov Chains - State Classification Accessibility State j is accessible from state i if p ij (n) > for some n>=, meaning that starting at state i, there is

More information

Markov chains. Randomness and Computation. Markov chains. Markov processes

Markov chains. Randomness and Computation. Markov chains. Markov processes Markov chains Randomness and Computation or, Randomized Algorithms Mary Cryan School of Informatics University of Edinburgh Definition (Definition 7) A discrete-time stochastic process on the state space

More information

Lecture 3: Markov chains.

Lecture 3: Markov chains. 1 BIOINFORMATIK II PROBABILITY & STATISTICS Summer semester 2008 The University of Zürich and ETH Zürich Lecture 3: Markov chains. Prof. Andrew Barbour Dr. Nicolas Pétrélis Adapted from a course by Dr.

More information

Supplementary Technical Details and Results

Supplementary Technical Details and Results Supplementary Technical Details and Results April 6, 2016 1 Introduction This document provides additional details to augment the paper Efficient Calibration Techniques for Large-scale Traffic Simulators.

More information

Karaliopoulou Margarita 1. Introduction

Karaliopoulou Margarita 1. Introduction ESAIM: Probability and Statistics URL: http://www.emath.fr/ps/ Will be set by the publisher ON THE NUMBER OF WORD OCCURRENCES IN A SEMI-MARKOV SEQUENCE OF LETTERS Karaliopoulou Margarita 1 Abstract. Let

More information

Appendix: Simple Methods for Shift Scheduling in Multi-Skill Call Centers

Appendix: Simple Methods for Shift Scheduling in Multi-Skill Call Centers Appendix: Simple Methods for Shift Scheduling in Multi-Skill Call Centers Sandjai Bhulai, Ger Koole & Auke Pot Vrije Universiteit, De Boelelaan 1081a, 1081 HV Amsterdam, The Netherlands Supplementary Material

More information

57:022 Principles of Design II Final Exam Solutions - Spring 1997

57:022 Principles of Design II Final Exam Solutions - Spring 1997 57:022 Principles of Design II Final Exam Solutions - Spring 1997 Part: I II III IV V VI Total Possible Pts: 52 10 12 16 13 12 115 PART ONE Indicate "+" if True and "o" if False: + a. If a component's

More information

MATH 56A: STOCHASTIC PROCESSES CHAPTER 1

MATH 56A: STOCHASTIC PROCESSES CHAPTER 1 MATH 56A: STOCHASTIC PROCESSES CHAPTER. Finite Markov chains For the sake of completeness of these notes I decided to write a summary of the basic concepts of finite Markov chains. The topics in this chapter

More information

REVIEW FOR EXAM II. The exam covers sections , the part of 3.7 on Markov chains, and

REVIEW FOR EXAM II. The exam covers sections , the part of 3.7 on Markov chains, and REVIEW FOR EXAM II The exam covers sections 3.4 3.6, the part of 3.7 on Markov chains, and 4.1 4.3. 1. The LU factorization: An n n matrix A has an LU factorization if A = LU, where L is lower triangular

More information

) ( ) Thus, (, 4.5] [ 7, 6) Thus, (, 3) ( 5, ) = (, 6). = ( 5, 3).

) ( ) Thus, (, 4.5] [ 7, 6) Thus, (, 3) ( 5, ) = (, 6). = ( 5, 3). 152 Sect 9.1 - Compound Inequalities Concept #1 Union and Intersection To understand the Union and Intersection of two sets, let s begin with an example. Let A = {1, 2,,, 5} and B = {2,, 6, 8}. Union of

More information

Counting. 1 Sum Rule. Example 1. Lecture Notes #1 Sept 24, Chris Piech CS 109

Counting. 1 Sum Rule. Example 1. Lecture Notes #1 Sept 24, Chris Piech CS 109 1 Chris Piech CS 109 Counting Lecture Notes #1 Sept 24, 2018 Based on a handout by Mehran Sahami with examples by Peter Norvig Although you may have thought you had a pretty good grasp on the notion of

More information

Discrete Probability

Discrete Probability Discrete Probability Counting Permutations Combinations r- Combinations r- Combinations with repetition Allowed Pascal s Formula Binomial Theorem Conditional Probability Baye s Formula Independent Events

More information

Discrete time Markov chains. Discrete Time Markov Chains, Definition and classification. Probability axioms and first results

Discrete time Markov chains. Discrete Time Markov Chains, Definition and classification. Probability axioms and first results Discrete time Markov chains Discrete Time Markov Chains, Definition and classification 1 1 Applied Mathematics and Computer Science 02407 Stochastic Processes 1, September 5 2017 Today: Short recap of

More information

Bisection Ideas in End-Point Conditioned Markov Process Simulation

Bisection Ideas in End-Point Conditioned Markov Process Simulation Bisection Ideas in End-Point Conditioned Markov Process Simulation Søren Asmussen and Asger Hobolth Department of Mathematical Sciences, Aarhus University Ny Munkegade, 8000 Aarhus C, Denmark {asmus,asger}@imf.au.dk

More information

Markov Chains. As part of Interdisciplinary Mathematical Modeling, By Warren Weckesser Copyright c 2006.

Markov Chains. As part of Interdisciplinary Mathematical Modeling, By Warren Weckesser Copyright c 2006. Markov Chains As part of Interdisciplinary Mathematical Modeling, By Warren Weckesser Copyright c 2006 1 Introduction A (finite) Markov chain is a process with a finite number of states (or outcomes, or

More information

Stochastic Realization of Binary Exchangeable Processes

Stochastic Realization of Binary Exchangeable Processes Stochastic Realization of Binary Exchangeable Processes Lorenzo Finesso and Cecilia Prosdocimi Abstract A discrete time stochastic process is called exchangeable if its n-dimensional distributions are,

More information

Since D has an exponential distribution, E[D] = 0.09 years. Since {A(t) : t 0} is a Poisson process with rate λ = 10, 000, A(0.

Since D has an exponential distribution, E[D] = 0.09 years. Since {A(t) : t 0} is a Poisson process with rate λ = 10, 000, A(0. IEOR 46: Introduction to Operations Research: Stochastic Models Chapters 5-6 in Ross, Thursday, April, 4:5-5:35pm SOLUTIONS to Second Midterm Exam, Spring 9, Open Book: but only the Ross textbook, the

More information

Chapter 5. Continuous-Time Markov Chains. Prof. Shun-Ren Yang Department of Computer Science, National Tsing Hua University, Taiwan

Chapter 5. Continuous-Time Markov Chains. Prof. Shun-Ren Yang Department of Computer Science, National Tsing Hua University, Taiwan Chapter 5. Continuous-Time Markov Chains Prof. Shun-Ren Yang Department of Computer Science, National Tsing Hua University, Taiwan Continuous-Time Markov Chains Consider a continuous-time stochastic process

More information

A strongly polynomial algorithm for linear systems having a binary solution

A strongly polynomial algorithm for linear systems having a binary solution A strongly polynomial algorithm for linear systems having a binary solution Sergei Chubanov Institute of Information Systems at the University of Siegen, Germany e-mail: sergei.chubanov@uni-siegen.de 7th

More information

Solutions to Problem Set 5

Solutions to Problem Set 5 UC Berkeley, CS 74: Combinatorics and Discrete Probability (Fall 00 Solutions to Problem Set (MU 60 A family of subsets F of {,,, n} is called an antichain if there is no pair of sets A and B in F satisfying

More information

ARC 102. A: Pair. B: Ruined Square. DEGwer 2018/09/01. For International Readers: English editorial starts on page 6.

ARC 102. A: Pair. B: Ruined Square. DEGwer 2018/09/01. For International Readers: English editorial starts on page 6. ARC 102 DEGwer 2018/09/01 For International Readers: English editorial starts on page 6. A: Pair K K/2 K/2 i n t k ; s c a n f ( %d, &k ) ; p r i n t f ( %d\n, ( k / 2 ) ( ( k + 1) / 2 ) ) ; B: Ruined

More information

Homework 3 posted, due Tuesday, November 29.

Homework 3 posted, due Tuesday, November 29. Classification of Birth-Death Chains Tuesday, November 08, 2011 2:02 PM Homework 3 posted, due Tuesday, November 29. Continuing with our classification of birth-death chains on nonnegative integers. Last

More information

Name of the Student: Problems on Discrete & Continuous R.Vs

Name of the Student: Problems on Discrete & Continuous R.Vs Engineering Mathematics 05 SUBJECT NAME : Probability & Random Process SUBJECT CODE : MA6 MATERIAL NAME : University Questions MATERIAL CODE : JM08AM004 REGULATION : R008 UPDATED ON : Nov-Dec 04 (Scan

More information

Chapter 1A -- Real Numbers. iff. Math Symbols: Sets of Numbers

Chapter 1A -- Real Numbers. iff. Math Symbols: Sets of Numbers Fry Texas A&M University! Fall 2016! Math 150 Notes! Section 1A! Page 1 Chapter 1A -- Real Numbers Math Symbols: iff or Example: Let A = {2, 4, 6, 8, 10, 12, 14, 16,...} and let B = {3, 6, 9, 12, 15, 18,

More information

RISKy Business: An In-Depth Look at the Game RISK

RISKy Business: An In-Depth Look at the Game RISK Rose-Hulman Undergraduate Mathematics Journal Volume 3 Issue Article 3 RISKy Business: An In-Depth Look at the Game RISK Sharon Blatt Elon University, slblatt@hotmail.com Follow this and additional works

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information

LIMITS FOR QUEUES AS THE WAITING ROOM GROWS. Bell Communications Research AT&T Bell Laboratories Red Bank, NJ Murray Hill, NJ 07974

LIMITS FOR QUEUES AS THE WAITING ROOM GROWS. Bell Communications Research AT&T Bell Laboratories Red Bank, NJ Murray Hill, NJ 07974 LIMITS FOR QUEUES AS THE WAITING ROOM GROWS by Daniel P. Heyman Ward Whitt Bell Communications Research AT&T Bell Laboratories Red Bank, NJ 07701 Murray Hill, NJ 07974 May 11, 1988 ABSTRACT We study the

More information

Summary of Results on Markov Chains. Abstract

Summary of Results on Markov Chains. Abstract Summary of Results on Markov Chains Enrico Scalas 1, 1 Laboratory on Complex Systems. Dipartimento di Scienze e Tecnologie Avanzate, Università del Piemonte Orientale Amedeo Avogadro, Via Bellini 25 G,

More information

3.4. ZEROS OF POLYNOMIAL FUNCTIONS

3.4. ZEROS OF POLYNOMIAL FUNCTIONS 3.4. ZEROS OF POLYNOMIAL FUNCTIONS What You Should Learn Use the Fundamental Theorem of Algebra to determine the number of zeros of polynomial functions. Find rational zeros of polynomial functions. Find

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY Department of Electrical Engineering and Computer Science
