
1 Stochastic processes and Markov chains (part I) Wessel van Wieringen, w.n.van.wieringen@vu.nl, Department of Epidemiology and Biostatistics, VUmc & Department of Mathematics, VU University Amsterdam, The Netherlands

2 Stochastic processes

3 Stochastic processes Example 1 The intensity of the sun. Measured every day by the KNMI. Stochastic variable X_t represents the sun's intensity at day t, 0 ≤ t ≤ T. Hence, X_t assumes values in R_+ (positive values only).

4 Stochastic processes Example 2 A DNA sequence 11 bases long. At each base position there is an A, C, G or T. Stochastic variable X_i is the base at position i, i = 1, …, 11. In case the sequence has been observed, say: (x_1, x_2, …, x_11) = ACCCGATAGCT, then A is the realization of X_1, C that of X_2, et cetera.

5 Stochastic processes [Tree diagram: at each of the positions t, t+1, t+2, t+3 the base is one of A, C, G, T.]

6 Stochastic processes Example 3 A patient's heart pulse during surgery. Measured continuously during the interval [0, T]. Stochastic variable X_t represents the occurrence of a heartbeat at time t, 0 ≤ t ≤ T. Hence, X_t assumes only the values 0 (no heartbeat) and 1 (heartbeat).

7 Stochastic processes Example 4 Brain activity of a human under experimental conditions. Measured continuously during the interval [0, T]. Stochastic variable X_t represents the magnetic field at time t, 0 ≤ t ≤ T. Hence, X_t assumes values in R.

8 Stochastic processes Differences between the examples:

                   X_t discrete    X_t continuous
  Time discrete    Example 2       Example 1
  Time continuous  Example 3       Example 4

9 Stochastic processes The state space S is the collection of all possible values that the random variables of the stochastic process may assume. If S = {E_1, E_2, …, E_s} is discrete, then X_t is a discrete stochastic variable: examples 2 and 3. If S = [0, ∞) is continuous, then X_t is a continuous stochastic variable: examples 1 and 4.

10 Stochastic processes Stochastic process: Discrete time: t = 0, 1, 2, 3, …. S = {-3, -2, -1, 0, 1, 2, …}. P(one step up) = ½ = P(one step down). Absorbing state at -3.

11 Stochastic processes The first passage time of a certain state E_i in S is the time t at which X_t = E_i for the first time since the start of the process. What is the first passage time of 2? And of -1?
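The first passage time is easy to explore by simulation. A minimal Python sketch of the random walk above (the slides themselves use R; the function name and the step cap are made up for illustration):

```python
import random

def first_passage_time(target, start=0, absorbing=-3, max_steps=10_000, seed=1):
    """Simulate the random walk (one step up or down with probability 0.5
    each, state -3 absorbing) and return the first t with X_t == target."""
    rng = random.Random(seed)
    state = start
    for t in range(1, max_steps + 1):
        if state != absorbing:
            state += 1 if rng.random() < 0.5 else -1
        if state == target:
            return t
    return None  # target not reached, e.g. because the walk was absorbed at -3

print(first_passage_time(2))   # first passage time of state 2
print(first_passage_time(-1))  # first passage time of state -1
```

Because the walk starts in 0 and moves one step at a time, a first passage of state 2 can only happen at an even t, and of state -1 at an odd t.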

12 Stochastic processes An absorbing state is a state E_i for which the following holds: if X_t = E_i, then X_s = E_i for all s ≥ t. The process will never leave state E_i. In the example above, the absorbing state is -3.

13 Stochastic processes The time of absorption of an absorbing state is the first passage time of that state.

14 Stochastic processes The recurrence time is the first time t at which the process has returned to its initial state.

15 Stochastic processes A stochastic process is described by a collection of time points, the state space, and the simultaneous distribution of the variables X_t, i.e., the distributions of all X_t and their dependency. There are two important types of processes: Poisson process: all variables are identically and independently distributed. Examples: queues at counters, call centers, servers, et cetera. Markov process: the variables are dependent in a simple manner.

16 Markov processes

17 Markov processes A 1st order Markov process in discrete time is a stochastic process {X_t}_{t=1,2,…} for which the following holds: P(X_{t+1} = x_{t+1} | X_t = x_t, …, X_1 = x_1) = P(X_{t+1} = x_{t+1} | X_t = x_t). In other words: only the present determines the future; the past is irrelevant. (Random walk example: from every state, with probability 0.5 one step up or down, except for state -3.)

18 Markov processes The 1st order Markov property, i.e., P(X_{t+1} = x_{t+1} | X_t = x_t, …, X_1 = x_1) = P(X_{t+1} = x_{t+1} | X_t = x_t), does not imply independence between X_{t-1} and X_{t+1}. In fact, often P(X_{t+1} = x_{t+1} | X_{t-1} = x_{t-1}) is unequal to zero. For instance, P(X_{t+1} = 2 | X_{t-1} = 0) = P(X_{t+1} = 2 | X_t = 1) · P(X_t = 1 | X_{t-1} = 0) = ½ · ½ = ¼.

19 Markov processes An m-th order Markov process in discrete time is a stochastic process {X_t}_{t=1,2,…} for which the following holds: P(X_{t+1} = x_{t+1} | X_t = x_t, …, X_1 = x_1) = P(X_{t+1} = x_{t+1} | X_t = x_t, …, X_{t-m+1} = x_{t-m+1}). Loosely, the future depends on the most recent past. Suppose m = 9: CTTCCTCAAGATGCGTCCAATCCCTCAAAC. The distribution of the (t+1)-th base depends on the 9 preceding ones.

20 Markov processes A Markov process is called a Markov chain if the state space is discrete, i.e., finite or countable. In this lecture series we consider Markov chains in discrete time. Recall the DNA example.

21 Markov processes The transition probabilities are the P(X_{t+1} = x_{t+1} | X_t = x_t), but also the P(X_{t+1} = x_{t+1} | X_s = x_s) for s < t, where x_t, x_{t+1} ∈ {E_1, …, E_s} = S. For the random walk example: P(X_{t+1} = 3 | X_t = 2) = 0.5, P(X_{t+1} = 0 | X_t = 1) = 0.5, P(X_{t+1} = 5 | X_t = 7) = 0 (the walk moves only one step at a time).

22 Markov processes A Markov process is called time homogeneous if the transition probabilities are independent of t: P(X_{t+1} = x_1 | X_t = x_2) = P(X_{s+1} = x_1 | X_s = x_2). For example: P(X_2 = -1 | X_1 = 2) = P(X_6 = -1 | X_5 = 2), P(X_584 = 5 | X_583 = 4) = P(X_213 = 5 | X_212 = 4).

23 Markov processes Example of a time-inhomogeneous process: a hazard rate h(t) that follows the bathtub curve: first infant mortality, then random failures, finally wearout failures.

24 Markov processes Consider a DNA sequence of 11 bases. Then S = {A, C, G, T}, X_i is the base at position i, and {X_i}_{i=1,…,11} is a Markov chain if the base at position i only depends on the base at position i-1, and not on those before i-1. If this is plausible, a Markov chain is an acceptable model for base ordering in DNA sequences. [State diagram with states A, C, G, T.]

25 Markov processes In the remainder, only time homogeneous Markov processes. We then denote the transition probabilities of a finite time homogeneous Markov chain in discrete time {X_t}_{t=1,2,…} with S = {E_1, …, E_s} as: P(X_{t+1} = E_j | X_t = E_i) = p_ij (does not depend on t). Putting the p_ij in a matrix yields the s × s transition matrix P = (p_ij), with p_ij in row i and column j. The rows of this matrix sum to one.

26 Markov processes In the DNA example the transition matrix has entries p_AA, p_AC, …, p_TT, where p_AA = P(X_{t+1} = A | X_t = A), and: p_AA + p_AC + p_AG + p_AT = 1, p_CA + p_CC + p_CG + p_CT = 1, et cetera. [State diagram with states A, C, G, T and arrows labelled by the transition probabilities.]

27 Markov processes The initial distribution π = (π_1, …, π_s)^T gives the probabilities of the initial state: π_i = P(X_1 = E_i) for i = 1, …, s, and π_1 + … + π_s = 1. Together the initial distribution π and the transition matrix P determine the probability distribution of the process, the Markov chain {X_t}_{t=1,2,…}.

28 Markov processes We now show how the couple (π, P) determines the probability distribution (transition probabilities) of time steps larger than one. Hereto define for n = 2: p_ij^(2) = P(X_{t+2} = E_j | X_t = E_i), and for general n: p_ij^(n) = P(X_{t+n} = E_j | X_t = E_i).

29 Markov processes For n = 2: p_ij^(2) = P(X_{t+2} = E_j | X_t = E_i) (just the definition).

30 Markov processes For n = 2: p_ij^(2) = Σ_k P(X_{t+2} = E_j, X_{t+1} = E_k | X_t = E_i) (use the fact that P(A, B) + P(A, B^c) = P(A)).

31 Markov processes For n = 2: p_ij^(2) = Σ_k P(X_{t+2} = E_j | X_{t+1} = E_k, X_t = E_i) · P(X_{t+1} = E_k | X_t = E_i) (use the definition of conditional probability: P(A, B) = P(A | B) P(B)).

32 Markov processes For n = 2: p_ij^(2) = Σ_k P(X_{t+2} = E_j | X_{t+1} = E_k) · P(X_{t+1} = E_k | X_t = E_i) (use the Markov property).

33 Markov processes For n = 2: p_ij^(2) = Σ_k p_ik p_kj = (P²)_ij.

34 Markov processes In summary, we have shown that: P^(2) = P · P = P². In a similar fashion we can show that: P^(n) = P^n. In words: the transition matrix for n steps is the one-step transition matrix raised to the power n.
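The identity P^(n) = P^n is easy to check numerically. A small Python sketch with NumPy (the two-state transition matrix below is made up for illustration; it is not the numerical example used later in the slides):

```python
import numpy as np

# One-step transition matrix of a hypothetical two-state chain (states I, II).
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# Two-step transition probabilities: P^(2) = P @ P.
P2 = np.linalg.matrix_power(P, 2)

# Entry (I, I) is the sum over the two routes I -> I -> I and I -> II -> I.
assert np.isclose(P2[0, 0], 0.7 * 0.7 + 0.3 * 0.4)

# The rows of P^n still sum to one, for any n.
assert np.allclose(np.linalg.matrix_power(P, 5).sum(axis=1), 1.0)
print(P2)
```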

35 Markov processes The general case, P^(n) = P^n, is proven by induction on n. One needs the Kolmogorov-Chapman equations: p_ij^(n+m) = Σ_k p_ik^(n) p_kj^(m) for all n, m ≥ 0 and i, j = 1, …, s.

36 Markov processes Proof of the Kolmogorov-Chapman equations: p_ij^(n+m) = P(X_{t+n+m} = E_j | X_t = E_i) = Σ_k P(X_{t+n+m} = E_j, X_{t+n} = E_k | X_t = E_i) = Σ_k P(X_{t+n+m} = E_j | X_{t+n} = E_k) · P(X_{t+n} = E_k | X_t = E_i) = Σ_k p_ik^(n) p_kj^(m).

37 Markov processes Induction proof of P^(n) = P^n. Assume P^(n) = P^n holds for some n. Then, by the Kolmogorov-Chapman equations with m = 1: p_ij^(n+1) = Σ_k p_ik^(n) p_kj, i.e., P^(n+1) = P^(n) · P = P^n · P = P^(n+1).

38 Markov processes A numerical example with two states I and II: [2 × 2 transition matrix P]. Then: P^(2) = P · P is obtained by matrix multiplication ("rows times columns").

39 Markov processes Thus: p_II^(2) = P(X_{t+2} = I | X_t = I) is composed of two probabilities: the probability of the route I → I → I plus the probability of the route I → II → I (positions t, t+1, t+2).

40 Markov processes In similar fashion we may obtain higher-step transition probabilities: sum over the probabilities of all possible paths between the two states (positions t, t+1, …, t+5 in the diagram).

41 Markov process How do we sample from a Markov process? Reconsider the DNA example. [State diagram with states A, C, G, T and transition probabilities p_AA, p_AC, …, p_TT, together with the corresponding transition matrix P, rows indexed by the "from" base and columns by the "to" base.]

42 Markov process 1. Sample the first base from the initial distribution π: P(X_1 = A) = 0.45, P(X_1 = C) = 0.10, et cetera. Suppose we have drawn an A. Our DNA sequence now consists of one base, namely A. After the first step, our sequence looks like: A 2. Sample the next base using P, given the previous base. Here X_1 = A, thus we sample from: P(X_2 = A | X_1 = A) = 0.1, P(X_2 = C | X_1 = A) = 0.1, et cetera. In case X_1 = C, we would have sampled from: P(X_2 = A | X_1 = C) = 0.2, P(X_2 = C | X_1 = C) = 0.3, et cetera.

43 Markov process 2. After the second step: AT 3. The last step is iterated until the last base. After the third step: ATC … 9. After nine steps: ATCCGATGC
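The sampling steps above can be sketched in a few lines. A Python sketch (the slides use R); only P(X_1=A) = 0.45, P(X_1=C) = 0.10 and p_AA = p_AC = 0.1 come from the slides, all remaining probabilities are made up so that every distribution sums to one:

```python
import random

bases = ["A", "C", "G", "T"]
# Initial distribution pi: only P(X1=A)=0.45 and P(X1=C)=0.10 are from the
# slides; the G and T entries are made up so the probabilities sum to one.
pi = [0.45, 0.10, 0.25, 0.20]
# Transition matrix P (row: current base, columns: next base, order A,C,G,T).
# Only p_AA = p_AC = 0.1 are from the slides; the rest is illustrative.
P = {"A": [0.10, 0.10, 0.40, 0.40],
     "C": [0.20, 0.30, 0.30, 0.20],
     "G": [0.25, 0.25, 0.25, 0.25],
     "T": [0.40, 0.10, 0.30, 0.20]}

def sample_chain(length, seed=42):
    rng = random.Random(seed)
    seq = rng.choices(bases, weights=pi)            # step 1: draw X1 from pi
    while len(seq) < length:                        # steps 2, 3: iterate
        seq += rng.choices(bases, weights=P[seq[-1]])  # draw X_t given X_{t-1}
    return "".join(seq)

print(sample_chain(9))  # a 9-base realization of the chain
```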

44 Andrey Markov

45 Andrey Markov

46 Andrey Markov In the famous first application of Markov chains, Andrey Markov studied the sequence of 20,000 letters in Pushkin's poem Eugene Onegin, discovering that - the stationary vowel probability is 0.432, - the probability of a vowel following a vowel is 0.128, and - the probability of a vowel following a consonant is 0.663. Markov also studied the sequence of 100,000 letters in Aksakov's novel The Childhood of Bagrov, the Grandson. This was in 1913, long before the computer age! Basharin et al. (2004)
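These probabilities are mutually consistent: for a two-state (vowel/consonant) chain the stationary vowel probability must satisfy the stationarity equation, and it does (a quick arithmetic check, using the values reported by Basharin et al., 2004):

```latex
\pi_v = \pi_v \, P(v \mid v) + (1 - \pi_v) \, P(v \mid c),
\qquad
0.432 \times 0.128 + 0.568 \times 0.663 \approx 0.055 + 0.377 = 0.432.
```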

47 Parameter estimation

48 Parameter estimation [Slide background: a long DNA sequence.] How do we estimate the parameters of a Markov chain from the data? Maximum likelihood!

49 Parameter estimation Using the definition of conditional probability, the likelihood can be decomposed as follows: P(X_1 = x_1, …, X_T = x_T) = P(X_1 = x_1) · ∏_{t=2}^{T} P(X_t = x_t | X_{t-1} = x_{t-1}, …, X_1 = x_1), where x_1, …, x_T ∈ S = {E_1, …, E_s}.

50 Parameter estimation Using the Markov property, we obtain: P(X_1 = x_1, …, X_T = x_T) = P(X_1 = x_1) · ∏_{t=2}^{T} P(X_t = x_t | X_{t-1} = x_{t-1}).

51 Parameter estimation Furthermore, e.g. for the DNA alphabet: P(X_t = x_t | X_{t-1} = x_{t-1}) = ∏_{i,j ∈ {A,C,G,T}} p_ij^{1{x_{t-1} = i, x_t = j}}, where only one transition probability at a time enters the likelihood, due to the indicator function.

52 Parameter estimation The likelihood then becomes: P(X_1 = x_1, …, X_T = x_T) = P(X_1 = x_1) · ∏_{i,j} p_ij^{n_ij}, where, e.g., n_AC denotes the number of times that an A is directly followed by a C in the sequence x_1, …, x_T.

53 Parameter estimation Now note that, e.g., p_AA + p_AC + p_AG + p_AT = 1, or: p_AT = 1 - p_AA - p_AC - p_AG. Substitute this in the likelihood, and take the logarithm to arrive at the log-likelihood. Note: it is irrelevant which transition probability is substituted.

54 Parameter estimation The log-likelihood: log L = log P(X_1 = x_1) + n_AA log p_AA + n_AC log p_AC + n_AG log p_AG + n_AT log(1 - p_AA - p_AC - p_AG) + (analogous terms for the rows C, G and T).

55 Parameter estimation Differentiating the log-likelihood yields, e.g.: ∂ log L / ∂ p_AA = n_AA / p_AA - n_AT / (1 - p_AA - p_AC - p_AG). Equate the derivatives to zero. This yields four systems of equations to solve, e.g.: n_AA / p_AA = n_AC / p_AC = n_AG / p_AG = n_AT / p_AT.

56 Parameter estimation The transition probabilities are then estimated by: p̂_ij = n_ij / Σ_k n_ik. Verify that the 2nd order partial derivatives of the log-likelihood are negative!

57 Parameter estimation If one may assume the observed data is a realization of a stationary Markov chain, the initial distribution is estimated by the stationary distribution. If only one realization of the Markov chain is available and stationarity cannot be assumed, the initial distribution is estimated by the observed first state: π̂_i = 1{x_1 = E_i}.

58 Parameter estimation Thus, for the following sequence: ATCGATCGCA, tabulation of the transitions yields:

      to: A  C  G  T
  from A  0  0  0  2
  from C  1  0  2  0
  from G  1  1  0  0
  from T  0  2  0  0

The maximum likelihood estimates thus become: p̂_AT = 1, p̂_CA = 1/3, p̂_CG = 2/3, p̂_GA = 1/2, p̂_GC = 1/2, p̂_TC = 1, and all other transition probabilities are estimated as zero.
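The tabulation and the resulting maximum likelihood estimates can be reproduced in a few lines. A Python sketch of the estimator p̂_ij = n_ij / Σ_k n_ik (the slides perform the tabulation itself in R):

```python
from collections import Counter

seq = "ATCGATCGCA"

# Count the transitions n_ij: how often base i is directly followed by base j.
n = Counter(zip(seq, seq[1:]))

# Row totals sum_k n_ik: how often base i occurs with a successor.
row_totals = Counter(seq[:-1])

# Maximum likelihood estimates p_ij = n_ij / sum_k n_ik.
p_hat = {(i, j): n[(i, j)] / row_totals[i] for (i, j) in n}
print(p_hat)
```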

59 Parameter estimation In R:
> DNAseq <- c("A", "T", "C", "G", "A", "T", "C", "G", "C", "A")
> table(DNAseq)
DNAseq
A C G T
3 3 2 2
> table(DNAseq[1:9], DNAseq[2:10])
    A C G T
  A 0 0 0 2
  C 1 0 2 0
  G 1 1 0 0
  T 0 2 0 0

60 Testing the order of the Markov chain

61 Testing the order of a Markov chain Often the order of the Markov chain is unknown and needs to be determined from the data. This is done using a chi-square test for independence. Consider a DNA sequence. To assess the order, one first tests whether the sequence consists of independent letters. Hereto count the nucleotides, di-nucleotides and tri-nucleotides in the sequence.

62 Testing the order of a Markov chain The null hypothesis of independence between the letters of the DNA sequence is evaluated by the χ² statistic: χ² = Σ_{i,j} (n_ij - e_ij)² / e_ij, with n_ij the observed and e_ij = n_i n_j / n the expected di-nucleotide count under independence, which has (4-1) × (4-1) = 9 degrees of freedom. If the null hypothesis cannot be rejected, one would fit a Markov model of order m = 0. However, if H_0 can be rejected, one would test for higher order dependence.
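The χ² computation can be sketched as follows in Python (the slides give R code for the same test later on). The short example sequence is used here only to illustrate the computation; with just ten bases the expected counts are far too small for the test to be valid:

```python
from collections import Counter

seq = "ATCGATCGCA"
bases = "ACGT"

# Observed nucleotide and di-nucleotide counts.
nucl = Counter(seq)
dimer = Counter(zip(seq, seq[1:]))
n_pairs = len(seq) - 1

# Chi-square statistic: sum over all 16 cells of (observed - expected)^2 / expected,
# with expected count e_ij = n_i * n_j / n under independence.
chi2 = 0.0
for i in bases:
    for j in bases:
        e = nucl[i] * nucl[j] / n_pairs
        chi2 += (dimer[(i, j)] - e) ** 2 / e

print(chi2)  # compare against a chi-square distribution with 9 d.o.f.
```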

63 Testing the order of a Markov chain To test for 1st order dependence, consider how often a di-nucleotide, e.g., CT, is followed by, e.g., a T. The 1st order dependence hypothesis is evaluated by: χ² = Σ_{i,j,k} (n_ijk - e_ijk)² / e_ijk, where e_ijk = n_ij n_jk / n_j is the expected tri-nucleotide count under the 1st order model. This is χ² distributed with (16-1) × (4-1) = 45 d.o.f.

64 Testing the order of a Markov chain A 1st order Markov chain provides a reasonable description of the sequence of the hlyE gene of the E. coli bacterium. Independence test: Chi-sq stat: …, p-value: … 1st order dependence test: Chi-sq stat: …, p-value: … The sequence of the prrA gene of the E. coli bacterium requires a higher order Markov chain model. Independence test: Chi-sq stat: …, p-value: …e-04 1st order dependence test: Chi-sq stat: …, p-value: …e-08

65 Testing the order of a Markov chain In R (the independence case, assuming the DNAseq-object is a character-object containing the sequence):
> # calculate nucleotide and dimer frequencies
> nuclfreq <- matrix(table(DNAseq), ncol=1)
> dimerfreq <- table(DNAseq[1:(length(DNAseq)-1)], DNAseq[2:length(DNAseq)])
> # calculate expected dimer frequencies
> dimerexp <- nuclfreq %*% t(nuclfreq) / (length(DNAseq)-1)
> # calculate test statistic and p-value
> teststat <- sum((dimerfreq - dimerexp)^2 / dimerexp)
> pval <- exp(pchisq(teststat, 9, lower.tail=FALSE, log.p=TRUE))
Exercise: modify the code above for the 1st order test.

66 Application: sequence discrimination

67 Application: sequence discrimination We have fitted 1st order Markov chain models to a representative coding and a representative noncoding sequence of the E. coli bacterium. [Two 4 × 4 transition matrices, P_coding and P_noncoding, with rows and columns indexed by A, C, G, T.] These models can be used to discriminate between coding and noncoding sequences.

68 Application: sequence discrimination For a new sequence X with unknown function, calculate the likelihood under the coding and under the noncoding model: P(X | coding) and P(X | noncoding). These likelihoods are compared by means of their log ratio, the log-likelihood ratio: LR(X) = log[ P(X | coding) / P(X | noncoding) ].

69 Application: sequence discrimination If the log-likelihood ratio LR(X) of a new sequence exceeds a certain threshold, the sequence is classified as coding, and as noncoding otherwise. Back to E. coli: to illustrate the potential of the discrimination approach, we calculate the log-likelihood ratio for a large set of known coding and noncoding sequences. The distributions of the two sets of LR(X)'s are compared.
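The classification rule can be sketched in code. A Python sketch; the two transition matrices below are made up for illustration, since the fitted E. coli matrices appear only as figures in the slides:

```python
from math import log

def loglik(seq, P):
    """Log-likelihood of seq under a 1st order Markov chain with transition
    matrix P; the initial base is taken uniform (probability 1/4)."""
    ll = log(0.25)
    for a, b in zip(seq, seq[1:]):
        ll += log(P[a][b])
    return ll

# Hypothetical transition matrices (rows sum to one); NOT the fitted
# E. coli values, which are not reproduced in the slides' text.
P_coding = {a: dict(zip("ACGT", row)) for a, row in zip("ACGT", [
    [0.30, 0.25, 0.25, 0.20],
    [0.20, 0.30, 0.30, 0.20],
    [0.25, 0.25, 0.30, 0.20],
    [0.20, 0.25, 0.25, 0.30]])}
P_noncoding = {a: dict(zip("ACGT", row)) for a, row in zip("ACGT", [
    [0.25, 0.25, 0.25, 0.25],
    [0.25, 0.25, 0.25, 0.25],
    [0.25, 0.25, 0.25, 0.25],
    [0.25, 0.25, 0.25, 0.25]])}

def LR(seq):
    """Log-likelihood ratio; positive values favour 'coding'."""
    return loglik(seq, P_coding) - loglik(seq, P_noncoding)

label = "coding" if LR("ATCGATCGCA") > 0 else "noncoding"
print(label)
```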

70 Application: sequence discrimination [Histogram: frequency of the log-likelihood ratio LR(X) for the noncoding sequences and for the coding sequences.]

71 Application: sequence discrimination [From histograms to a violin plot: flip and mirror the two histograms; violin plots of LR(X) for the noncoding and coding sequences.]

72 Application: sequence discrimination Conclusion of the E. coli example Comparison of the distributions indicates that one can discriminate reasonably well between coding and noncoding sequences on the basis of a simple 1st order Markov model. Improvements: - Higher order Markov model. - Incorporate more structural information of the DNA.

73 References & further reading

74 References and further reading
Basharin, G.P., Langville, A.N., Naumov, V.A. (2004), The life and work of A.A. Markov, Linear Algebra and its Applications, 386.
Ewens, W.J., Grant, G. (2006), Statistical Methods for Bioinformatics, Springer, New York.
Reinert, G., Schbath, S., Waterman, M.S. (2000), Probabilistic and statistical properties of words: an overview, Journal of Computational Biology, 7, 1-46.

75 This material is provided under the Creative Commons Attribution/Share-Alike/Non-Commercial License. See for details.


More information

http://www.math.uah.edu/stat/markov/.xhtml 1 of 9 7/16/2009 7:20 AM Virtual Laboratories > 16. Markov Chains > 1 2 3 4 5 6 7 8 9 10 11 12 1. A Markov process is a random process in which the future is

More information

Markov Chains on Countable State Space

Markov Chains on Countable State Space Markov Chains on Countable State Space 1 Markov Chains Introduction 1. Consider a discrete time Markov chain {X i, i = 1, 2,...} that takes values on a countable (finite or infinite) set S = {x 1, x 2,...},

More information

Statistics 150: Spring 2007

Statistics 150: Spring 2007 Statistics 150: Spring 2007 April 23, 2008 0-1 1 Limiting Probabilities If the discrete-time Markov chain with transition probabilities p ij is irreducible and positive recurrent; then the limiting probabilities

More information

Plan for today. ! Part 1: (Hidden) Markov models. ! Part 2: String matching and read mapping

Plan for today. ! Part 1: (Hidden) Markov models. ! Part 2: String matching and read mapping Plan for today! Part 1: (Hidden) Markov models! Part 2: String matching and read mapping! 2.1 Exact algorithms! 2.2 Heuristic methods for approximate search (Hidden) Markov models Why consider probabilistics

More information

Readings: Finish Section 5.2

Readings: Finish Section 5.2 LECTURE 19 Readings: Finish Section 5.2 Lecture outline Markov Processes I Checkout counter example. Markov process: definition. -step transition probabilities. Classification of states. Example: Checkout

More information

Statistics 992 Continuous-time Markov Chains Spring 2004

Statistics 992 Continuous-time Markov Chains Spring 2004 Summary Continuous-time finite-state-space Markov chains are stochastic processes that are widely used to model the process of nucleotide substitution. This chapter aims to present much of the mathematics

More information

Statistics 433 Practice Final Exam: Cover Sheet and Marking Sheet

Statistics 433 Practice Final Exam: Cover Sheet and Marking Sheet Statistics 433 Practice Final Exam: Cover Sheet and Marking Sheet YOUR NAME INSTRUCTIONS: No notes, no calculators, and no communications devices are permitted. Please keep all materials away from your

More information

Random Walk on a Graph

Random Walk on a Graph IOR 67: Stochastic Models I Second Midterm xam, hapters 3 & 4, November 2, 200 SOLUTIONS Justify your answers; show your work.. Random Walk on a raph (25 points) Random Walk on a raph 2 5 F B 3 3 2 Figure

More information

Examples of Countable State Markov Chains Thursday, October 16, :12 PM

Examples of Countable State Markov Chains Thursday, October 16, :12 PM stochnotes101608 Page 1 Examples of Countable State Markov Chains Thursday, October 16, 2008 12:12 PM Homework 2 solutions will be posted later today. A couple of quick examples. Queueing model (without

More information

Markov Chains. Chapter 16. Markov Chains - 1

Markov Chains. Chapter 16. Markov Chains - 1 Markov Chains Chapter 16 Markov Chains - 1 Why Study Markov Chains? Decision Analysis focuses on decision making in the face of uncertainty about one future event. However, many decisions need to consider

More information

Note that in the example in Lecture 1, the state Home is recurrent (and even absorbing), but all other states are transient. f ii (n) f ii = n=1 < +

Note that in the example in Lecture 1, the state Home is recurrent (and even absorbing), but all other states are transient. f ii (n) f ii = n=1 < + Random Walks: WEEK 2 Recurrence and transience Consider the event {X n = i for some n > 0} by which we mean {X = i}or{x 2 = i,x i}or{x 3 = i,x 2 i,x i},. Definition.. A state i S is recurrent if P(X n

More information

11-2 Multinomial Experiment

11-2 Multinomial Experiment Chapter 11 Multinomial Experiments and Contingency Tables 1 Chapter 11 Multinomial Experiments and Contingency Tables 11-11 Overview 11-2 Multinomial Experiments: Goodness-of-fitfit 11-3 Contingency Tables:

More information

Pattern Matching (Exact Matching) Overview

Pattern Matching (Exact Matching) Overview CSI/BINF 5330 Pattern Matching (Exact Matching) Young-Rae Cho Associate Professor Department of Computer Science Baylor University Overview Pattern Matching Exhaustive Search DFA Algorithm KMP Algorithm

More information

Lecture 7: Hypothesis Testing and ANOVA

Lecture 7: Hypothesis Testing and ANOVA Lecture 7: Hypothesis Testing and ANOVA Goals Overview of key elements of hypothesis testing Review of common one and two sample tests Introduction to ANOVA Hypothesis Testing The intent of hypothesis

More information

Markov Model. Model representing the different resident states of a system, and the transitions between the different states

Markov Model. Model representing the different resident states of a system, and the transitions between the different states Markov Model Model representing the different resident states of a system, and the transitions between the different states (applicable to repairable, as well as non-repairable systems) System behavior

More information

Stochastic Processes

Stochastic Processes Stochastic Processes 8.445 MIT, fall 20 Mid Term Exam Solutions October 27, 20 Your Name: Alberto De Sole Exercise Max Grade Grade 5 5 2 5 5 3 5 5 4 5 5 5 5 5 6 5 5 Total 30 30 Problem :. True / False

More information

Lecture 4: Evolutionary models and substitution matrices (PAM and BLOSUM).

Lecture 4: Evolutionary models and substitution matrices (PAM and BLOSUM). 1 Bioinformatics: In-depth PROBABILITY & STATISTICS Spring Semester 2011 University of Zürich and ETH Zürich Lecture 4: Evolutionary models and substitution matrices (PAM and BLOSUM). Dr. Stefanie Muff

More information

STAT STOCHASTIC PROCESSES. Contents

STAT STOCHASTIC PROCESSES. Contents STAT 3911 - STOCHASTIC PROCESSES ANDREW TULLOCH Contents 1. Stochastic Processes 2 2. Classification of states 2 3. Limit theorems for Markov chains 4 4. First step analysis 5 5. Branching processes 5

More information

Phylogenetic Assumptions

Phylogenetic Assumptions Substitution Models and the Phylogenetic Assumptions Vivek Jayaswal Lars S. Jermiin COMMONWEALTH OF AUSTRALIA Copyright htregulation WARNING This material has been reproduced and communicated to you by

More information

Adventures in Stochastic Processes

Adventures in Stochastic Processes Sidney Resnick Adventures in Stochastic Processes with Illustrations Birkhäuser Boston Basel Berlin Table of Contents Preface ix CHAPTER 1. PRELIMINARIES: DISCRETE INDEX SETS AND/OR DISCRETE STATE SPACES

More information

2 DISCRETE-TIME MARKOV CHAINS

2 DISCRETE-TIME MARKOV CHAINS 1 2 DISCRETE-TIME MARKOV CHAINS 21 FUNDAMENTAL DEFINITIONS AND PROPERTIES From now on we will consider processes with a countable or finite state space S {0, 1, 2, } Definition 1 A discrete-time discrete-state

More information

Numerical methods for lattice field theory

Numerical methods for lattice field theory Numerical methods for lattice field theory Mike Peardon Trinity College Dublin August 9, 2007 Mike Peardon (Trinity College Dublin) Numerical methods for lattice field theory August 9, 2007 1 / 37 Numerical

More information

Introduction to Hidden Markov Models for Gene Prediction ECE-S690

Introduction to Hidden Markov Models for Gene Prediction ECE-S690 Introduction to Hidden Markov Models for Gene Prediction ECE-S690 Outline Markov Models The Hidden Part How can we use this for gene prediction? Learning Models Want to recognize patterns (e.g. sequence

More information

Markov Chains Handout for Stat 110

Markov Chains Handout for Stat 110 Markov Chains Handout for Stat 0 Prof. Joe Blitzstein (Harvard Statistics Department) Introduction Markov chains were first introduced in 906 by Andrey Markov, with the goal of showing that the Law of

More information

Hidden Markov Models

Hidden Markov Models Hidden Markov Models Slides revised and adapted to Bioinformática 55 Engª Biomédica/IST 2005 Ana Teresa Freitas CG-Islands Given 4 nucleotides: probability of occurrence is ~ 1/4. Thus, probability of

More information

Probability, Random Processes and Inference

Probability, Random Processes and Inference INSTITUTO POLITÉCNICO NACIONAL CENTRO DE INVESTIGACION EN COMPUTACION Laboratorio de Ciberseguridad Probability, Random Processes and Inference Dr. Ponciano Jorge Escamilla Ambrosio pescamilla@cic.ipn.mx

More information

The Chi-Square Distributions

The Chi-Square Distributions MATH 03 The Chi-Square Distributions Dr. Neal, Spring 009 The chi-square distributions can be used in statistics to analyze the standard deviation of a normally distributed measurement and to test the

More information

Mining Infrequent Patterns of Two Frequent Substrings from a Single Set of Biological Sequences

Mining Infrequent Patterns of Two Frequent Substrings from a Single Set of Biological Sequences Mining Infrequent Patterns of Two Frequent Substrings from a Single Set of Biological Sequences Daisuke Ikeda Department of Informatics, Kyushu University 744 Moto-oka, Fukuoka 819-0395, Japan. daisuke@inf.kyushu-u.ac.jp

More information

Review. DS GA 1002 Statistical and Mathematical Models. Carlos Fernandez-Granda

Review. DS GA 1002 Statistical and Mathematical Models.   Carlos Fernandez-Granda Review DS GA 1002 Statistical and Mathematical Models http://www.cims.nyu.edu/~cfgranda/pages/dsga1002_fall16 Carlos Fernandez-Granda Probability and statistics Probability: Framework for dealing with

More information

An introduction to biostatistics: part 1

An introduction to biostatistics: part 1 An introduction to biostatistics: part 1 Cavan Reilly September 6, 2017 Table of contents Introduction to data analysis Uncertainty Probability Conditional probability Random variables Discrete random

More information

Chapter 10 Nonlinear Models

Chapter 10 Nonlinear Models Chapter 10 Nonlinear Models Nonlinear models can be classified into two categories. In the first category are models that are nonlinear in the variables, but still linear in terms of the unknown parameters.

More information

Stochastic process. X, a series of random variables indexed by t

Stochastic process. X, a series of random variables indexed by t Stochastic process X, a series of random variables indexed by t X={X(t), t 0} is a continuous time stochastic process X={X(t), t=0,1, } is a discrete time stochastic process X(t) is the state at time t,

More information

Math 416 Lecture 11. Math 416 Lecture 16 Exam 2 next time

Math 416 Lecture 11. Math 416 Lecture 16 Exam 2 next time Math 416 Lecture 11 Math 416 Lecture 16 Exam 2 next time Birth and death processes, queueing theory In arrival processes, the state only jumps up. In a birth-death process, it can either jump up or down

More information

A review of Continuous Time MC STA 624, Spring 2015

A review of Continuous Time MC STA 624, Spring 2015 A review of Continuous Time MC STA 624, Spring 2015 Ruriko Yoshida Dept. of Statistics University of Kentucky polytopes.net STA 624 1 Continuous Time Markov chains Definition A continuous time stochastic

More information

Discrete Structures for Computer Science

Discrete Structures for Computer Science Discrete Structures for Computer Science William Garrison bill@cs.pitt.edu 6311 Sennott Square Lecture #24: Probability Theory Based on materials developed by Dr. Adam Lee Not all events are equally likely

More information

Hidden Markov Models

Hidden Markov Models Hidden Markov Models Outline 1. CG-Islands 2. The Fair Bet Casino 3. Hidden Markov Model 4. Decoding Algorithm 5. Forward-Backward Algorithm 6. Profile HMMs 7. HMM Parameter Estimation 8. Viterbi Training

More information

Continuous Time Markov Chains

Continuous Time Markov Chains Continuous Time Markov Chains Stochastic Processes - Lecture Notes Fatih Cavdur to accompany Introduction to Probability Models by Sheldon M. Ross Fall 2015 Outline Introduction Continuous-Time Markov

More information

We know from STAT.1030 that the relevant test statistic for equality of proportions is:

We know from STAT.1030 that the relevant test statistic for equality of proportions is: 2. Chi 2 -tests for equality of proportions Introduction: Two Samples Consider comparing the sample proportions p 1 and p 2 in independent random samples of size n 1 and n 2 out of two populations which

More information

2.1.3 The Testing Problem and Neave s Step Method

2.1.3 The Testing Problem and Neave s Step Method we can guarantee (1) that the (unknown) true parameter vector θ t Θ is an interior point of Θ, and (2) that ρ θt (R) > 0 for any R 2 Q. These are two of Birch s regularity conditions that were critical

More information

SMSTC (2007/08) Probability.

SMSTC (2007/08) Probability. SMSTC (27/8) Probability www.smstc.ac.uk Contents 12 Markov chains in continuous time 12 1 12.1 Markov property and the Kolmogorov equations.................... 12 2 12.1.1 Finite state space.................................

More information

Chapte The McGraw-Hill Companies, Inc. All rights reserved.

Chapte The McGraw-Hill Companies, Inc. All rights reserved. er15 Chapte Chi-Square Tests d Chi-Square Tests for -Fit Uniform Goodness- Poisson Goodness- Goodness- ECDF Tests (Optional) Contingency Tables A contingency table is a cross-tabulation of n paired observations

More information

Data analysis and stochastic modeling

Data analysis and stochastic modeling Data analysis and stochastic modeling Lecture 7 An introduction to queueing theory Guillaume Gravier guillaume.gravier@irisa.fr with a lot of help from Paul Jensen s course http://www.me.utexas.edu/ jensen/ormm/instruction/powerpoint/or_models_09/14_queuing.ppt

More information

Markov Chains. Arnoldo Frigessi Bernd Heidergott November 4, 2015

Markov Chains. Arnoldo Frigessi Bernd Heidergott November 4, 2015 Markov Chains Arnoldo Frigessi Bernd Heidergott November 4, 2015 1 Introduction Markov chains are stochastic models which play an important role in many applications in areas as diverse as biology, finance,

More information

THE QUEEN S UNIVERSITY OF BELFAST

THE QUEEN S UNIVERSITY OF BELFAST THE QUEEN S UNIVERSITY OF BELFAST 0SOR20 Level 2 Examination Statistics and Operational Research 20 Probability and Distribution Theory Wednesday 4 August 2002 2.30 pm 5.30 pm Examiners { Professor R M

More information

Finite-Horizon Statistics for Markov chains

Finite-Horizon Statistics for Markov chains Analyzing FSDT Markov chains Friday, September 30, 2011 2:03 PM Simulating FSDT Markov chains, as we have said is very straightforward, either by using probability transition matrix or stochastic update

More information

MATH 446/546 Test 2 Fall 2014

MATH 446/546 Test 2 Fall 2014 MATH 446/546 Test 2 Fall 204 Note the problems are separated into two sections a set for all students and an additional set for those taking the course at the 546 level. Please read and follow all of these

More information

Hidden Markov Models. music recognition. deal with variations in - pitch - timing - timbre 2

Hidden Markov Models. music recognition. deal with variations in - pitch - timing - timbre 2 Hidden Markov Models based on chapters from the book Durbin, Eddy, Krogh and Mitchison Biological Sequence Analysis Shamir s lecture notes and Rabiner s tutorial on HMM 1 music recognition deal with variations

More information

Markov chains of nonlinear Markov processes and an application to a winner-takes-all model for social conformity

Markov chains of nonlinear Markov processes and an application to a winner-takes-all model for social conformity LETTER TO THE EDITOR Markov chains of nonlinear Markov processes and an application to a winner-takes-all model for social conformity T.D. Frank Center for the Ecological Study of Perception and Action,

More information

Lecture 5. If we interpret the index n 0 as time, then a Markov chain simply requires that the future depends only on the present and not on the past.

Lecture 5. If we interpret the index n 0 as time, then a Markov chain simply requires that the future depends only on the present and not on the past. 1 Markov chain: definition Lecture 5 Definition 1.1 Markov chain] A sequence of random variables (X n ) n 0 taking values in a measurable state space (S, S) is called a (discrete time) Markov chain, if

More information

Markov decision processes and interval Markov chains: exploiting the connection

Markov decision processes and interval Markov chains: exploiting the connection Markov decision processes and interval Markov chains: exploiting the connection Mingmei Teo Supervisors: Prof. Nigel Bean, Dr Joshua Ross University of Adelaide July 10, 2013 Intervals and interval arithmetic

More information

M3/4/5 S4 Applied Probability

M3/4/5 S4 Applied Probability M3/4/5 S4 Applied Probability Autumn 215 Badr Missaoui Room 545 Huxley Building Imperial College London E-Mail: badr.missaoui8@imperial.ac.uk Department of Mathematics Imperial College London 18 Queens

More information

(x t. x t +1. TIME SERIES (Chapter 8 of Wilks)

(x t. x t +1. TIME SERIES (Chapter 8 of Wilks) 45 TIME SERIES (Chapter 8 of Wilks) In meteorology, the order of a time series matters! We will assume stationarity of the statistics of the time series. If there is non-stationarity (e.g., there is a

More information

N.G.Bean, D.A.Green and P.G.Taylor. University of Adelaide. Adelaide. Abstract. process of an MMPP/M/1 queue is not a MAP unless the queue is a

N.G.Bean, D.A.Green and P.G.Taylor. University of Adelaide. Adelaide. Abstract. process of an MMPP/M/1 queue is not a MAP unless the queue is a WHEN IS A MAP POISSON N.G.Bean, D.A.Green and P.G.Taylor Department of Applied Mathematics University of Adelaide Adelaide 55 Abstract In a recent paper, Olivier and Walrand (994) claimed that the departure

More information

ISE/OR 760 Applied Stochastic Modeling

ISE/OR 760 Applied Stochastic Modeling ISE/OR 760 Applied Stochastic Modeling Topic 2: Discrete Time Markov Chain Yunan Liu Department of Industrial and Systems Engineering NC State University Yunan Liu (NC State University) ISE/OR 760 1 /

More information

Goodness of Fit Tests: Homogeneity

Goodness of Fit Tests: Homogeneity Goodness of Fit Tests: Homogeneity Mathematics 47: Lecture 35 Dan Sloughter Furman University May 11, 2006 Dan Sloughter (Furman University) Goodness of Fit Tests: Homogeneity May 11, 2006 1 / 13 Testing

More information

Non-homogeneous random walks on a semi-infinite strip

Non-homogeneous random walks on a semi-infinite strip Non-homogeneous random walks on a semi-infinite strip Chak Hei Lo Joint work with Andrew R. Wade World Congress in Probability and Statistics 11th July, 2016 Outline Motivation: Lamperti s problem Our

More information

Topic 21 Goodness of Fit

Topic 21 Goodness of Fit Topic 21 Goodness of Fit Contingency Tables 1 / 11 Introduction Two-way Table Smoking Habits The Hypothesis The Test Statistic Degrees of Freedom Outline 2 / 11 Introduction Contingency tables, also known

More information

Stochastic Models: Markov Chains and their Generalizations

Stochastic Models: Markov Chains and their Generalizations Scuola di Dottorato in Scienza ed Alta Tecnologia Dottorato in Informatica Universita di Torino Stochastic Models: Markov Chains and their Generalizations Gianfranco Balbo e Andras Horvath Outline Introduction

More information

Sociology 6Z03 Topic 10: Probability (Part I)

Sociology 6Z03 Topic 10: Probability (Part I) Sociology 6Z03 Topic 10: Probability (Part I) John Fox McMaster University Fall 2014 John Fox (McMaster University) Soc 6Z03: Probability I Fall 2014 1 / 29 Outline: Probability (Part I) Introduction Probability

More information

Stochastic modelling of epidemic spread

Stochastic modelling of epidemic spread Stochastic modelling of epidemic spread Julien Arino Department of Mathematics University of Manitoba Winnipeg Julien Arino@umanitoba.ca 19 May 2012 1 Introduction 2 Stochastic processes 3 The SIS model

More information

A CLASSROOM NOTE: ENTROPY, INFORMATION, AND MARKOV PROPERTY. Zoran R. Pop-Stojanović. 1. Introduction

A CLASSROOM NOTE: ENTROPY, INFORMATION, AND MARKOV PROPERTY. Zoran R. Pop-Stojanović. 1. Introduction THE TEACHING OF MATHEMATICS 2006, Vol IX,, pp 2 A CLASSROOM NOTE: ENTROPY, INFORMATION, AND MARKOV PROPERTY Zoran R Pop-Stojanović Abstract How to introduce the concept of the Markov Property in an elementary

More information