Meeting times of Taboo random walks on bipartite graphs


Meeting times of Taboo random walks on bipartite graphs

By Xianwu Zhang

MS candidate: Applied and Computational Mathematics
Advisor: Barry James
Co-advisor: Kang Ling James

Department of Mathematics and Statistics
University of Minnesota Duluth

ACKNOWLEDGEMENTS

First I want to acknowledge all of the help from my advisor, Professor Barry James, and my co-advisor, Professor Kang James. Without their help I wouldn't be where I am today. I also would like to thank Professor Qi, my committee member, for his help with the writing of this paper and with my course work. My special thanks go to Yun Peng for his help with the programming. Finally, I thank the statistics seminar class, which helped me prepare the talk better and better.

DEDICATION

This dissertation is dedicated to my parents, for your everlasting love.

Abstract

In this paper we try to determine the distribution of the first time T at which two independent taboo random walks on a bipartite graph meet. First, we ran simulations for complete and incomplete bipartite graphs to support our conjecture that T has a geometric distribution given that T is even or odd. Second, for complete graphs we derived, with proof, a formula for the theoretical conditional probabilities, which is consistent with the simulation results; we then give a pdf for T. Next, for one example of an incomplete graph, we calculated the theoretical probabilities for T from the valid paths, and they agree closely with the frequencies from the simulation. Last, we tried a new definition of meeting that avoids the even/odd analysis and obtained some conjectures.

Contents

1. Introduction
2. Review of some related literature and some preliminaries
   2.1 Related literature
   2.2 Some terminologies and properties
       2.2.1 Quick review of Markov chains
       2.2.2 A Markov chain associated with our taboo random walk
3. Results for a complete bipartite graph
   3.1 Simulation results
   3.2 Formula for complete graphs
4. Conjecture for incomplete graphs
   4.1 Simulation based on Bernoulli's law of large numbers
   4.2 Theoretical conditional probabilities
5. An idea for using Markov chains to get E(T)
6. Further research
7. References
Appendices (Programs #1 through #11)

1. Introduction

In graph theory, a bipartite graph is a graph whose vertices fall into two disjoint groups, and each vertex in one group is connected only to vertices in the other group. The degree of a vertex is the number of its neighbors in the other group. A complete graph here means that each left vertex is connected to every right vertex, as in Fig. 1, and an incomplete graph means that each left vertex is connected to some, but not all, of the right vertices, as in Fig. 2.

Our graph in this paper is as follows. Let G denote a bipartite graph with a left vertex set of size L and a right vertex set of size R. In general both L and R could be very large. Assume that all the left vertices have the same degree k, all the right vertices have the same degree q, and the transitions from any vertex to its neighbors are equiprobable. A taboo random walk in this paper means a walk that cannot go back to the vertex it just came from.

Now consider the following problem: we select two left vertices uniformly at random and start two independent taboo random walks, one from each vertex. Our question is how to determine the distribution of the first time T at which the walks meet.

The following two graphs are examples of a complete graph and an incomplete graph, with k = R = 2 and q = L = 3 in the first graph and k = 2, q = 4 in the second.

Fig. 1 (complete)    Fig. 2 (incomplete)
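For concreteness, here is a minimal MATLAB sketch (not from the thesis; the function name and the neighbor-list representation nbrs{v} are illustrative) of a single taboo step:

% one taboo step: move from cur, never back to prev
function nxt = taboo_step(cur, prev, nbrs)
choices = setdiff(nbrs{cur}, prev);    % taboo: exclude the vertex we came from
nxt = choices(randi(numel(choices)));  % equiprobable among the rest
end

For the first step one can pass prev = [], so that all neighbors are allowed; two independent walks take this step in parallel until they occupy the same vertex at the same time.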

D. J. Aldous and others have done research on the meeting time of Markov chains and have given upper bounds for the expected meeting time. Their results inspired us to look at the distribution of meeting times of random walks and Markov chains. There is still future work to do, but we do get some interesting results in this paper.

In Section 2 we review some related papers and explain some preliminaries. In Section 3 we obtain results for complete bipartite graphs, and in Section 4 we obtain some conjectures for incomplete graphs. In Section 5 we try an idea for using a Markov chain to get E(T). In Section 6 we mention future research topics.

2. Review of some related literature and some preliminaries

2.1 Related literature

The literature on this problem is sparse, but there are some papers which focus on a related topic: an upper bound for the expected meeting time. Those papers are inspiring and beneficial for potential research.

In his 1990 paper Meeting times for independent Markov chains, D. J. Aldous studied the expected time at which two independent continuous-time reversible Markov chains first meet. Suppose X_t and Y_t are the two chains in question; then

    T_M = min{ t : X_t = Y_t }

is the first meeting time of X_t and Y_t. The worst-case expected meeting time is

    M = max_{i,j} E(T_M | X_0 = i, Y_0 = j).

Aldous obtained an upper bound for M of the form

    M <= K * max_i ( tau_1 v E_pi(H_i) / (1 - pi_i) ),

where a v b means the maximum of a and b; E(H_k^i) means the expected time for the chain to reach state k from state i; E_pi(H_i) = sum_k pi_k E(H_i^k) is the weighted expected hitting time of state i from the other states; and pi is the stationary distribution.

In 2008, Boaz Nadler obtained a sharper bound for the meeting time: the meeting time of two random walks is O(L). The big-O notation contains a multiplicative constant that depends on the parameters k and q and on the exact degrees of the left and right vertices. Intuitively, the larger k is, the faster the random walk can spread to far-away vertices. He also did simulations to support the conclusion; his graph is very similar to the one in this project. As predicted theoretically, the meeting time is linear in L and decreases with k.

In 2012, Roberto Imbuzeiro Oliveira proved that the expected value of T is at most a constant multiple of the largest hitting time of an element in the state space, which is a sharper bound than Aldous's.

2.2 Some terminologies and properties

2.2.1 Quick review of Markov chains

If we want to model this random walk using Markov chains, we have to make sure that the chain has the Markov property: the future of the process depends only on the present state and not on the past. Suppose X_0, X_1, X_2, ... is a Markov chain; then for all n = 0, 1, 2, ... and all states i, j, the Markov property says

    P(X_{n+1} = j | X_n = i) = P(X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, ..., X_1 = i_1, X_0 = i_0).

Let p_ij^(n) denote the probability that the chain goes from state i to state j in n steps. If the number of states in the state space S is N, then P = [p_ij]_{N x N} is the one-step transition matrix and

P^(2) = [p_ij^(2)]_{N x N} is the two-step transition matrix. P is stochastic because

    sum_{j=1}^{N} p_ij = 1,

and P is doubly stochastic if in addition

    sum_{i=1}^{N} p_ij = 1.

State j is accessible from state i if p_ij^(n) > 0 for some n >= 0; we then say that state i leads to state j and write i -> j. If i -> j and j -> i, then i communicates with j, written i <-> j. If i <-> j for all pairs of states, the chain is irreducible. Let f_i denote the probability of a return to i starting from i. For an irreducible chain, if f_i = 1 the chain is recurrent, and if f_i < 1 the chain is transient.

2.2.2 A Markov chain associated with our taboo random walk

The taboo random walk on a bipartite graph is not itself a Markov chain, because of the taboo condition: the next step of the walk depends on the previous vertex, not just the current one. We therefore define a state to be the pair of vertices occupied by the walk in two consecutive steps. The Markov property is then guaranteed, because under the taboo condition the future state depends only on the current state.

Fact 1: the chains for complete graphs and for the incomplete example in this paper are irreducible and have a finite number of states.

Suppose i and j are any two states in the state space S. One checks easily that i -> j and j -> i, so i communicates with j for any states i and j, and the whole chain is irreducible. The chain therefore has one communicating class and a finite number of states; the number of states is 2Rq (= 2Lk).
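As a sanity check on this construction, here is a small MATLAB sketch (illustrative, not from the thesis) that builds the pair-state transition matrix for the complete graph of Fig. 1 and verifies, anticipating Lemma 1 below, that it is stochastic in both rows and columns:

% pair-state chain for the complete bipartite graph with L = 3, R = 2
% (left vertices 1..3, right vertices 4..5); states are ordered pairs
L = 3; R = 2;
left = 1:L; right = L+1:L+R;
nbrs = cell(L+R,1);
for v = left,  nbrs{v} = right; end
for v = right, nbrs{v} = left;  end
states = [];
for a = 1:L+R
    for b = nbrs{a}
        states = [states; a b];          % one state per directed edge
    end
end
N = size(states,1);                      % N = 2*Lk = 2*Rq = 12
P = zeros(N);
for i = 1:N
    a = states(i,1); b = states(i,2);
    nxt = setdiff(nbrs{b}, a);           % taboo: cannot step back to a
    for w = nxt
        j = find(states(:,1) == b & states(:,2) == w);
        P(i,j) = 1/numel(nxt);
    end
end
[max(abs(sum(P,2)-1)) max(abs(sum(P,1)-1))]   % both ~0: doubly stochastic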

Fact 2: the chains above are positive recurrent.

The chain in question has a finite number of states, so there is at least one recurrent state in the state space. In addition, the chain has only one communicating class. So the chain is positive recurrent.

Lemma 1: the transition matrix for the chain of a complete graph is doubly stochastic.

Proof: The i-th row is stochastic because sum_{j=1}^{N} p_ij = 1. Now fix a column j and write its state as an ordered pair of vertices (m, n).

If j = (m, n) has direction from left to right, then the previous step must have gone from right to left, and because of the taboo condition the possible previous states are (1, m), (2, m), ..., (n-1, m), (n+1, m), ..., (R, m). The corresponding R - 1 entries of the j-th column are nonzero and the rest are zeros, so

    sum_{i=1}^{N} p_ij = (1/(R-1)) * (R-1) = 1.

If j = (m, n) has direction from right to left, then the previous step must have gone from left to right, and because of the taboo condition the possible previous states are (1, m), (2, m), ..., (n-1, m), (n+1, m), ..., (L, m). Therefore,

    sum_{i=1}^{N} p_ij = (1/(L-1)) * (L-1) = 1.

Lemma 2: the above chains have a stationary distribution.

Proof: the chain is irreducible, has a finite number of states, and is doubly stochastic. Hence the chain has a stationary distribution, which in this case is the discrete uniform distribution. Since 2Rq is the number of states,

    pi = (1/(2Rq), 1/(2Rq), ..., 1/(2Rq)).

Lemma 3: the above chains are periodic, and the period is 2 or 4 depending on the graph.

Proof: the chain has only one communicating class, and period is a class property, so we only have to find the period of one state. Because of the bipartite structure, the chain can return to a state only after an even number of steps. By the definition of period,

    d_i = gcd{ n >= 1 : p_ii^(n) > 0 },

n ranges over even numbers only, and d_i is 2 or 4.

Case 1: for a complete graph, the period is 2 as long as k, q >= 3.

The following example is the simplest complete bipartite graph, with just two right vertices (4 and 5). We focus on state 14.

Any return to state 14 takes a multiple of 4 steps, since 4 and 5 must alternate on the right-hand side (there are just two right vertices). So the period of 14 is 4, and the period of the chain is 4.

Next consider a complete graph with k, q >= 3 and focus on state 15: returns in both 6 and 8 steps are possible, so the period of 15 is 2 and the period of the chain is 2. As long as k, q >= 3, we can find a return path of 6 steps like the one above, which means the period is 2.

Case 2: the period for our incomplete graph example is 2.

We focus on state 17:

Returns to state 17 are possible in both 6 and 8 steps, so the period of 17 is 2 and the period of the chain is 2.

Fact 3: if a random variable T ~ GEO(p), that is,

    P(T = k) = (1 - p)^k * p,   k = 0, 1, 2, ...,

then

    P(T = n+1 | T > n) = p   and   P(T = n | T > n-1) = p.

Fig. 3: pmf of GEO(0.2).

3. Results for a complete bipartite graph

A complete graph means each left vertex is connected to each right vertex. For these cases we have obtained formulas for the distribution of T according to whether the two walks meet on an even or an odd step. Our conjecture is that T has a geometric distribution given that T is even or odd. We first ran simulations for two examples, found support for the conjecture, and then derived the formula.
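The conjectured conditional law is the geometric distribution of Fact 3; a two-line MATLAB check of its pmf (the shape plotted in Fig. 3), with illustrative p = 0.2:

% pmf of GEO(0.2): P(T = k) = (1-p)^k * p
p = 0.2; k = 0:20;
pmf = (1-p).^k * p;
stem(k, pmf), xlabel('T'), ylabel('P(T = k)')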

3.1 Simulation results

Let E denote an event and N_n(E) the number of times the event E happens in n trials. According to Bernoulli's law of large numbers,

    N_n(E) / n -> p   as n -> infinity,

where p is the probability that E happens in one trial and the convergence is convergence in probability.

Our simulation has two steps. If we get almost identical conditional probabilities from the simulation, our conjecture at least receives some positive support.

Step 1: I ran 90,000,000 independent trials using MATLAB. In each trial, two independent taboo random walks start from the left side uniformly at random. I then calculated the proportion of trials for each step at which the two walks first meet.

Step 2: I used the frequencies (proportions) to estimate the real probabilities, and used those estimates to calculate the conditional probabilities P(T = n+1 | T > n) and P(T = n | T > n-1), n = 0, 1, 2, 3, ...

For the first example of a complete graph (L = 3, R = 2), we calculated the conditional probabilities (condp) from the frequencies.

Table 2.1: Conditional probabilities for odd steps.
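The condp columns in these tables come from the simulated frequencies; a minimal sketch of that conversion (assuming, as in Program #1 of the appendix, that counter(t+1) holds the number of trials with T = t):

% Step 2: conditional probabilities from frequencies
freq  = counter / sum(counter);        % estimates of P(T = t), t = 0,1,2,...
surv  = 1 - cumsum(freq);              % estimates of P(T > t)
condp = freq(2:end) ./ surv(1:end-1);  % condp(n+1) estimates P(T = n+1 | T > n)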

Table 2.2: Conditional probabilities for even steps.

For the second example of a complete graph (L = 4, R = 3):

Table 2.3: Conditional probabilities for odd steps.

Table 2.4: Conditional probabilities for even steps.

The simulation results support the geometric-distribution conjecture, because the conditional probabilities are almost constant.

3.2 Formula for complete graphs

Lemma 4: for the general complete graph with L left vertices and R right vertices, the conditional probabilities are

    p_1 = P(T = 2n+1 | T > 2n)  = (R-2)/(R-1)^2,
    p_2 = P(T = 2n  | T > 2n-1) = (L-2)/(L-1)^2,   n = 1, 2, 3, ...

Proof: Lk = Rq, and for a complete graph k = R and q = L. The two walks have to obey the taboo condition, and to determine the next step we only need the previous step, nothing earlier. We prove the second formula; the first follows by the same logic.

If T = 2n, the two walks meet on the left-hand side. Because of the taboo condition they can meet at L - 2 left vertices, each walk takes its 2n-th step to any allowed left vertex with probability 1/(L-1), and the two walks are independent, so

    P(T = 2n | T > 2n-1) = (L-2) * (1/(L-1)) * (1/(L-1)) = (L-2)/(L-1)^2.

In the same way we get

    P(T = 2n+1 | T > 2n) = (R-2) * (1/(R-1)) * (1/(R-1)) = (R-2)/(R-1)^2.

From the formula and the simulation results we can build a comparison table; the simulation results are consistent with the formula results.

Table 2.5: Comparison between simulation (50,000,000 trials) and formula results for P(T = n+1 | T > n) and P(T = n | T > n-1) in the two example graphs.
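A quick evaluation of the Lemma 4 probabilities for the two simulated examples (illustrative sketch):

% Lemma 4 conditional probabilities
p1 = @(R) (R-2) / (R-1)^2;   % P(T = 2n+1 | T > 2n): meeting on the right
p2 = @(L) (L-2) / (L-1)^2;   % P(T = 2n | T > 2n-1): meeting on the left
[p1(2) p2(3)]   % first example,  L = 3, R = 2: 0 and 0.25
[p1(3) p2(4)]   % second example, L = 4, R = 3: 0.25 and 0.2222

Note that p1(2) = 0: with only two right vertices the taboo forces the two walks onto different right vertices, so after the first step they can never meet on an odd step.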

Theorem: for the general complete graph with L left vertices and R right vertices, the pdf of T is

    P(T = 2n)   = P(T = 2n | T > 2n-1) * P(T > 2n-1) = p_2 (1-p_1)^{n-1} (1-p_2)^{n-1} C,   n >= 1,
    P(T = 2n+1) = P(T = 2n+1 | T > 2n) * P(T > 2n)   = p_1 (1-p_1)^{n-1} (1-p_2)^{n}  C,   n >= 1,

where

    C = 1 - P(0) - P(1) = (R-1)(L-1) / (RL).

Proof:

Even steps:

    P(T = 0) = L * (1/L) * (1/L) = 1/L,
    P(T = 2) = P(T = 2 | T > 1) * P(T > 1) = p_2 * (1 - P(0) - P(1)) = p_2 C,
    P(T = 4) = P(T = 4 | T > 3) * P(T > 3) = p_2 * (1 - P(0) - P(1) - P(2) - P(3)) = p_2 (1-p_1)(1-p_2) C,
    P(T = 6) = P(T = 6 | T > 5) * P(T > 5) = p_2 * (1 - P(0) - P(1) - ... - P(5)) = p_2 (1-p_1)^2 (1-p_2)^2 C,
    P(T = 8) = P(T = 8 | T > 7) * P(T > 7) = p_2 * (1 - P(0) - P(1) - ... - P(7)) = p_2 (1-p_1)^3 (1-p_2)^3 C,
    ...
    P(T = 2n) = P(T = 2n | T > 2n-1) * P(T > 2n-1) = p_2 (1-p_1)^{n-1} (1-p_2)^{n-1} C.

Odd steps:

    P(T = 1) = R * (L(L-1)/L^2) * (1/R) * (1/R) = (L-1)/(RL),
    P(T = 3) = P(T = 3 | T > 2) * P(T > 2) = p_1 * (1 - P(0) - P(1) - P(2)) = p_1 (1-p_2) C,
    P(T = 5) = P(T = 5 | T > 4) * P(T > 4) = p_1 (1-p_1)(1-p_2)^2 C,
    P(T = 7) = P(T = 7 | T > 6) * P(T > 6) = p_1 (1-p_1)^2 (1-p_2)^3 C,
    P(T = 9) = P(T = 9 | T > 8) * P(T > 8) = p_1 (1-p_1)^3 (1-p_2)^4 C,
    ...
    P(T = 2n+1) = P(T = 2n+1 | T > 2n) * P(T > 2n) = p_1 (1-p_1)^{n-1} (1-p_2)^{n} C.
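As a check, the Theorem's pmf together with P(0) and P(1) sums to one; a short numerical verification (illustrative values L = 4, R = 3):

% the Theorem's pmf sums to 1
L = 4; R = 3;
p1 = (R-2)/(R-1)^2;  p2 = (L-2)/(L-1)^2;
C  = (R-1)*(L-1)/(R*L);                       % C = 1 - P(0) - P(1)
n  = 1:200;                                   % truncate the geometric tail
Peven = p2 * ((1-p1).*(1-p2)).^(n-1) * C;     % P(T = 2n)
Podd  = p1 * (1-p1).^(n-1) .* (1-p2).^n * C;  % P(T = 2n+1)
1/L + (L-1)/(R*L) + sum(Peven) + sum(Podd)    % P(0) + P(1) + rest = 1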

4. Conjecture for incomplete graphs

We did a simulation for an incomplete graph example. In this example L = 6, R = 3, k = 2, q = 4. The construction of the graph is the following:
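The graph can be written down explicitly from Programs #3, #4 and #8 in the appendix; in MATLAB neighbor-list form:

% adjacency of the incomplete example: left vertices 1..6 (degree k = 2),
% right vertices 7..9 (degree q = 4)
nbrs = cell(9,1);
nbrs{1} = [7 8];  nbrs{2} = [7 9];  nbrs{3} = [8 9];
nbrs{4} = [7 9];  nbrs{5} = [8 9];  nbrs{6} = [7 8];
nbrs{7} = [1 2 4 6];  nbrs{8} = [1 3 5 6];  nbrs{9} = [2 3 4 5];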

4.1 Simulation based on Bernoulli's law of large numbers

The conditional probabilities below are based on the simulation results. From those conditional probabilities, we can conjecture that T has a geometric distribution given that T is even or odd.

Fig. 4: histogram of the simulated frequencies of T (T = 1 to 80).

Table 2.6: Conditional probabilities for odd steps.

Table 2.7: Conditional probabilities for even steps.

We got almost constant estimates of the conditional probabilities for odd and even steps from the simulations. Because of the large number of trials, I believe that these estimates are very close to the real conditional probabilities.

4.2 Theoretical conditional probabilities

Can we get the real probabilities for T, and from them the real conditional probabilities? To confirm our conjecture and to answer this question, we tried another way to give some support. The idea is to calculate the theoretical probabilities for the different meeting times.

Step 1: find all the valid meeting pairs, i.e. all pairs of paths along which the two independent random walks can first meet at a given step.

Step 2: add the probabilities of all the meeting pairs to get the theoretical probability for

T = 1, 2, 3, ..., and then calculate the conditional probabilities.

A valid meeting pair satisfies two properties: both paths obey the taboo condition, and the two walks meet for the very first time at the very end of the pair.

The calculation goes like this; we use one meeting pair for T = 2 to explain it. Each walk follows one specific path ending at the common meeting vertex. The probability of one walk is the product of its step probabilities, and the probability of one meeting pair is the product of the two walks' probabilities, because the walks are independent. Every pair of a given length has the same probability, so the probability that T = 2 is the number of such pairs times the probability of one pair.

The following table shows part of the calculations.

Table 2.8: Part of the calculations for comparison (T, number of meeting pairs, theoretical probabilities, simulation results).
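In this graph the per-pair probability has a simple closed form; a sketch (our reading, which includes the uniform choice of starting left vertex in the walk's probability):

% per-pair probability in the incomplete example: the first step picks
% one of k = 2 right neighbors (prob. 1/2); afterwards each
% right-to-left step has q - 1 = 3 choices (prob. 1/3) and each
% left-to-right step is forced by the taboo (k - 1 = 1 choice)
walk_p = @(t) (1/6) * (1/2) * (1/3)^floor(t/2);  % includes the uniform start
pair_p = @(t) walk_p(t)^2;                       % the walks are independent

Multiplying pair_p(t) by the number of valid meeting pairs of length t then gives the theoretical probability for T = t.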

The following table compares the simulation results with the probabilities obtained from the meeting pairs.

Table 2.9: Comparison between the theoretical and simulation results (T, frequency by simulation, number of valid meeting pairs, probability from pairs).

We can tell that the simulation results are consistent with the theoretical probabilities calculated from the valid pairs. Even though we do not have an explicit formula for the conditional probabilities, we can still conjecture that T, given that T is even or odd, has a geometric distribution, and we can calculate the pdf by using the valid pairs.

5. An idea for using Markov chains to get E(T)

A new definition of meeting for two Markov chains can avoid the separate analysis of even and odd steps. As before, we define a state as a pair of vertices, but now we only consider the Markov chain every two steps. The new definition of meeting is that the two chains have the same first vertex or the same second vertex of their states at the same (two-step) time. Let T' denote the first time the two chains meet in this sense; T' = 0, 1, 2, ..., and, for example, T' = 1 corresponds to T = 2 or T = 3. We then have the following.

Proposition: 2T' <= T <= 2T' + 1, and hence 2E(T') <= E(T) <= 2E(T') + 1.

We did the simulation for the following example and estimated the conditional probabilities P(T' = n+1 | T' > n) for the different steps.

Fig. 5: estimated conditional probabilities for the new definition of meeting (approximately constant in T').
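Given simulated samples of T', the Proposition brackets E(T) immediately; a two-line sketch (Tprime is a hypothetical vector of simulated values):

% bracketing E(T) with the Proposition
ETp = mean(Tprime);
fprintf('%.4f <= E(T) <= %.4f\n', 2*ETp, 2*ETp + 1);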

Table 2.10: Conditional probabilities for the new definition.

From the conditional probabilities based on the simulation results, we can say that T' has at least approximately a geometric distribution. From the simulation for the new idea, E(T') = 2.739, and the estimate of E(T) from the previous simulation of the random walks indeed satisfied 2E(T') <= E(T) <= 2E(T') + 1.

From the simulation results based on the new idea, it is quite possible that T' has a geometric distribution. If T' ~ GEO(p), then, since 2T' <= T <= 2T' + 1,

    P(T' = i) = (1 - p)^{i-1} * p   and   P{T' = i} = P{T = 2i or T = 2i+1},

so

    P{T = 2i} + P{T = 2i+1} = (1 - p)^{i-1} * p.   (a)

Also, from the previous conjecture that T has a geometric distribution when restricted to even or odd steps,

    P{T = 2i+1} = c * P{T = 2i}   (b)

for a constant c. We can then actually get P{T = 2i} and P{T = 2i+1} by solving (a) and (b).

Even if T does not have a geometric distribution, so that we cannot get P{T = 2i} and P{T = 2i+1} this way, we can still get a good approximation for E(T). This should be a very useful approximation when L and R are large: the larger L and R are, the more easily the Markov chain spreads away, so T' and T will be larger, and the larger T is, the better an approximation 2E(T') or 2E(T') + 1 is for E(T) in terms of relative error. In all three reference papers, only an upper bound for E(T) was obtained.

6. Further research

Actually, this new idea could give us a very good approximation for E(T). One concern about it: in Aldous's paper Meeting times for independent Markov chains, continuous-time reversible Markov chains are studied, and the upper bound is for that kind of chain. For the complete graph, the two-step transition matrix is symmetric, and therefore the two-step Markov chain is time reversible. Perhaps an analogous result can be obtained for incomplete graphs, and Aldous's result could then be compared with our approximation.

One more topic for future research is to develop a program that calculates the probabilities from the information L, R, k, q alone.

7. References

D. J. Aldous, Meeting times for independent Markov chains, Stochastic Processes and their Applications 38 (1991).

Boaz Nadler, Taboo Random Walk on an Expander Graph, Sept. 2008.

Roberto Imbuzeiro Oliveira, On the coalescence time of reversible random walks, Trans. Amer. Math. Soc. 364 (2012).
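Solving (a) and (b) is immediate; a sketch with illustrative values of p and c (in practice both would be estimated from the simulations):

% pmf of T recovered from (a) and (b)
p = 0.3; c = 0.5;                       % illustrative values only
geo     = @(i) (1-p).^(i-1) * p;        % (a): P{T=2i} + P{T=2i+1}
PT_even = @(i) geo(i) / (1 + c);        % P{T = 2i}
PT_odd  = @(i) c * geo(i) / (1 + c);    % (b): P{T = 2i+1} = c * P{T = 2i}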

Sheldon M. Ross, Introduction to Probability Models, 10th edition, Academic Press (Elsevier), Amsterdam, 2010.

Appendices

Program #1

%%== simulation for the second example =====
clear
clc
counter = zeros(1,100000);
records = [];
N = 2e7;   % number of trials: the original value did not survive transcription
for n = 1:N
    a1 = rand; b1 = rand;
    a = translate(a1); b = translate(b1);
    if a == b
        counter(1) = counter(1) + 1;
    end
    if a ~= b
        recorda = []; recordb = [];
        T = 1;
        flaga = 1; flagb = 1;
        recorda = [recorda a]; recordb = [recordb b];
        while a ~= b
            if a <= 6 && b <= 6   % both walks on the left side
                y1 = rand; [a,c] = translate1a(a,y1,flaga,recorda); recorda = [recorda a]; flaga = c;
                y2 = rand; [b,c] = translate1b(b,y2,flagb,recordb); recordb = [recordb b]; flagb = c; T = T + 1;
            else                  % both walks on the right side
                y3 = rand; a = translatea(a,y3,recorda); recorda = [recorda a];
                y4 = rand; b = translateb(b,y4,recordb); recordb = [recordb b]; T = T + 1;
            end
        end
        counter(T) = counter(T) + 1;
    end
end
counter(1:6) / N   % proportions of trials for the smallest meeting times

Program #2

function y = translate(x)
% map a uniform random number in [0,1] to a left vertex 1..6
if x <= 1/6
    y = 1;
elseif x <= 2/6
    y = 2;
elseif x <= 3/6
    y = 3;
elseif x <= 4/6
    y = 4;
elseif x <= 5/6
    y = 5;
elseif x <= 1

    y = 6;
end

Program #3

function [w,f] = translate1a(x,y,flaga,z)
% left-to-right step for walk A; x is the current left vertex, y a
% uniform random number, z the record of visited vertices
if flaga == 1   % first step: both right neighbors are allowed
    if y <= 1/2 && x == 1
        w = 7;
    elseif y >= 1/2 && x == 1
        w = 8;
    end
    if y <= 1/2 && x == 2
        w = 7;
    elseif y >= 1/2 && x == 2
        w = 9;
    end
    if y <= 1/2 && x == 3
        w = 8;
    elseif y >= 1/2 && x == 3
        w = 9;
    end
    if y <= 1/2 && x == 4
        w = 7;
    elseif y >= 1/2 && x == 4
        w = 9;
    end
    if y <= 1/2 && x == 5
        w = 8;
    elseif y >= 1/2 && x == 5
        w = 9;
    end
    if y <= 1/2 && x == 6
        w = 7;
    elseif y >= 1/2 && x == 6
        w = 8;
    end
end
if flaga == 2

    % later left-to-right steps: the taboo forces the single right
    % neighbor not visited two steps ago, z(end-1)
    if x == 1 && z(end-1) == 7
        w = 8;
    elseif x == 1 && z(end-1) == 8
        w = 7;
    end
    if x == 2 && z(end-1) == 9
        w = 7;
    elseif x == 2 && z(end-1) == 7
        w = 9;
    end
    if x == 3 && z(end-1) == 9
        w = 8;
    elseif x == 3 && z(end-1) == 8
        w = 9;
    end
    if x == 4 && z(end-1) == 9
        w = 7;
    elseif x == 4 && z(end-1) == 7
        w = 9;
    end
    if x == 5 && z(end-1) == 9
        w = 8;
    elseif x == 5 && z(end-1) == 8
        w = 9;
    end
    if x == 6 && z(end-1) == 8
        w = 7;
    elseif x == 6 && z(end-1) == 7
        w = 8;
    end
end
if flaga == 1
    f = 2;
end
if flaga == 2
    f = 2;
end

Program #4

function w = translatea(x,y,z)
% right-to-left step for walk A: from right vertex x, choose uniformly
% among its three left neighbors other than z(end-1), the left vertex
% visited two steps ago (the taboo)
if (x == 7) && (z(end-1) == 1)

    if y <= 1/3
        w = 2;
    elseif y <= 2/3
        w = 4;
    elseif y <= 1
        w = 6;
    end
end
if (x == 7) && (z(end-1) == 2)
    if y <= 1/3
        w = 1;
    elseif y <= 2/3
        w = 4;
    elseif y <= 1
        w = 6;
    end
end
if (x == 7) && (z(end-1) == 4)
    if y <= 1/3
        w = 1;
    elseif y <= 2/3
        w = 2;
    elseif y <= 1
        w = 6;
    end
end
if (x == 7) && (z(end-1) == 6)
    if y <= 1/3
        w = 1;
    elseif y <= 2/3
        w = 2;
    elseif y <= 1
        w = 4;
    end
end
if (x == 8) && (z(end-1) == 1)
    if y <= 1/3
        w = 3;
    elseif y <= 2/3
        w = 5;
    elseif y <= 1
        w = 6;
    end
end
if (x == 8) && (z(end-1) == 3)
    if y <= 1/3
        w = 1;
    elseif y <= 2/3
        w = 5;
    elseif y <= 1
        w = 6;
    end
end
if (x == 8) && (z(end-1) == 5)

    if y <= 1/3
        w = 1;
    elseif y <= 2/3
        w = 3;
    elseif y <= 1
        w = 6;
    end
end
if (x == 8) && (z(end-1) == 6)
    if y <= 1/3
        w = 1;
    elseif y <= 2/3
        w = 3;
    elseif y <= 1
        w = 5;
    end
end
if (x == 9) && (z(end-1) == 2)
    if y <= 1/3
        w = 3;
    elseif y <= 2/3
        w = 4;
    elseif y <= 1
        w = 5;
    end
end
if (x == 9) && (z(end-1) == 3)
    if y <= 1/3
        w = 2;
    elseif y <= 2/3
        w = 4;
    elseif y <= 1
        w = 5;
    end
end
if (x == 9) && (z(end-1) == 4)
    if y <= 1/3
        w = 2;
    elseif y <= 2/3
        w = 3;
    elseif y <= 1
        w = 5;
    end
end
if (x == 9) && (z(end-1) == 5)
    if y <= 1/3
        w = 2;
    elseif y <= 2/3
        w = 3;
    elseif y <= 1
        w = 4;
    end
end

Program #5

%%%---- use a recursive algorithm to get all the possible qualified
%%%---- routes in pairs for A and B ----
clear
clc
global M
M = [];
T = input('PLEASE INPUT THE STEPS YOU WANT:');
if mod(T,2) == 0   % even T: the two walks meet at a left vertex
    A = creatfromlastspot(1); L = size(A);
    for i = 1:L(2)
        b = [1;1]; a = [A(:,i) b];
        while length(a) < T+1
            a = creatfrom(a,T);
        end
    end
    B = creatfromlastspot(2); L = size(B);
    for i = 1:L(2)
        b = [2;2]; a = [B(:,i) b];
        while length(a) < T+1
            a = creatfrom(a,T);
        end
    end
    C = creatfromlastspot(3); L = size(C);
    for i = 1:L(2)
        b = [3;3]; a = [C(:,i) b];
        while length(a) < T+1
            a = creatfrom(a,T);
        end
    end
    D = creatfromlastspot(4); L = size(D);
    for i = 1:L(2)
        b = [4;4]; a = [D(:,i) b];
        while length(a) < T+1
            a = creatfrom(a,T);
        end
    end
    E = creatfromlastspot(5); L = size(E);
    for i = 1:L(2)
        b = [5;5]; a = [E(:,i) b];
        while length(a) < T+1
            a = creatfrom(a,T);
        end
    end
    F = creatfromlastspot(6); L = size(F);
    for i = 1:L(2)
        b = [6;6]; a = [F(:,i) b];
        while length(a) < T+1
            a = creatfrom(a,T);
        end
    end
end

if mod(T,2) == 1   % odd T: the two walks meet at a right vertex
    A = creatfromlastspot(7);
    for i = 1:length(A)
        b = [7;7]; a = [A(:,i) b];
        while length(a) < T+1
            a = creatfrom(a,T);
        end
        if T == 1
            % disp(a);
            M = [M; a];
        end
    end
    B = creatfromlastspot(8);
    for i = 1:length(B)
        b = [8;8]; a = [B(:,i) b];
        while length(a) < T+1
            a = creatfrom(a,T);
        end
        if T == 1
            % disp(a);
            M = [M; a];
        end
    end
    C = creatfromlastspot(9);
    for i = 1:length(C)
        b = [9;9]; a = [C(:,i) b];
        while length(a) < T+1
            a = creatfrom(a,T);
        end
        if T == 1
            % disp(a);
            M = [M; a];
        end
    end
end
disp('all qualified routes in pairs for pig A and pig B are')
M;
disp('the # of all qualified routes in pairs for pig A and pig B is')
s = size(M); s(1)/2   % each route pair occupies two rows of M

Program #6

%%-- to test if there is any invalid route in all the possible routes --
A = M;
s = size(A); m = s(1)-1; n = s(2)-1; flag = 0;
for i = 1:m
    for j = 1:n
        if A(i,j) == A(i,j+1)   % a walk stayed in place: invalid
            if mod(i,2) == 1
                disp(i)
            else
                disp(i-1)
            end

            disp('th is an invalid route')
            flag = flag + 1;
        end
    end
end
for i = 1:2:m
    for j = 1:n
        if A(i,j) == A(i+1,j)   % the two walks met before the end: invalid
            disp(i)
            disp('th is an invalid route')
            flag = flag + 1;
        end
    end
end
if flag == 0
    disp('awesome!!! all routes are valid routes')
end

Program #7

function routes = creatfrom(a,t)
global T
T = t;
global M
c = a(:,1:2);      % the two earliest columns of the partial route pair
b = zeros(2,1);
A = a;
% T = t;
if c(1,1) <= 6     % earliest vertex is on the left: the preceding
                   % right vertex is forced by the taboo
    if c(1,1) == 1
        if c(1,2) == 7
            b(1,1) = 8;
        end
        if c(1,2) == 8
            b(1,1) = 7;
        end
    end
    if c(1,1) == 2
        if c(1,2) == 7
            b(1,1) = 9;
        end
        if c(1,2) == 9
            b(1,1) = 7;
        end
    end
    if c(1,1) == 3
        if c(1,2) == 8
            b(1,1) = 9;
        end
        if c(1,2) == 9
            b(1,1) = 8;
        end
    end
    if c(1,1) == 4
        if c(1,2) == 7
            b(1,1) = 9;
        end
        if c(1,2) == 9
            b(1,1) = 7;
        end
    end
    if c(1,1) == 5
        if c(1,2) == 8
            b(1,1) = 9;
        end
        if c(1,2) == 9
            b(1,1) = 8;
        end
    end
    if c(1,1) == 6

        if c(1,2) == 7
            b(1,1) = 8;
        end
        if c(1,2) == 8
            b(1,1) = 7;
        end
    end
    if c(2,1) == 1
        if c(2,2) == 7
            b(2,1) = 8;
        end
        if c(2,2) == 8
            b(2,1) = 7;
        end
    end
    if c(2,1) == 2
        if c(2,2) == 7
            b(2,1) = 9;
        end
        if c(2,2) == 9
            b(2,1) = 7;
        end
    end
    if c(2,1) == 3
        if c(2,2) == 8
            b(2,1) = 9;
        end
        if c(2,2) == 9
            b(2,1) = 8;
        end
    end
    if c(2,1) == 4
        if c(2,2) == 7
            b(2,1) = 9;
        end
        if c(2,2) == 9
            b(2,1) = 7;
        end
    end
    if c(2,1) == 5
        if c(2,2) == 8
            b(2,1) = 9;
        end
        if c(2,2) == 9
            b(2,1) = 8;
        end
    end
    if c(2,1) == 6
        if c(2,2) == 7
            b(2,1) = 8;
        end
        if c(2,2) == 8
            b(2,1) = 7;
        end
    end
    routes = [b A];   % prepend the forced vertices (this assignment did not survive transcription)
end
if c(1,1) > 6      % earliest vertex is on the right: several
                   % predecessors are possible; recurse over all of them
    [B,f] = creatfromrightmiddlespot(c); m = length(B);
    if f == 1
        for i = 1:m
            b = B(:,i); routes = [b A];
            while length(routes) < T+1
                routes = creatfrom(routes,T);
            end
            if length(routes) == T+1 && length([b A]) == T+1
                % disp(routes);
                M = [M; routes];
            end
            routes = [b A];
        end
    end
end

Program #8

function initialroutes = creatfromlastspot(n)
% possible pairs of distinct previous vertices, given that the two
% walks first meet at vertex n
if n == 7
    initialroutes = combntns([1 2 4 6],2)';
end
if n == 8
    initialroutes = combntns([1 3 5 6],2)';
end
if n == 9
    initialroutes = combntns([2 3 4 5],2)';
end
if n == 1
    initialroutes = combntns([7 8],2)';
end
if n == 2
    initialroutes = combntns([7 9],2)';
end
if n == 3
    initialroutes = combntns([8 9],2)';
end
if n == 4
    initialroutes = combntns([7 9],2)';
end
if n == 5
    initialroutes = combntns([8 9],2)';
end
if n == 6
    initialroutes = combntns([7 8],2)';
end

Program #9

function [routes,flag] = creatfromrightmiddlespot(cc)
% possible pairs of previous left vertices when both walks sit at the
% right vertices c(:,1) and will next move to the left vertices c(:,2)
c = cc; flag = 1;
if c(1,1) == c(2,1)
    flag = 0;   % the walks would already have met: invalid
end
if c(1,1) == 7
    if c(1,2) == 1
        B = [2 4 6];
    end
    if c(1,2) == 2
        B = [1 4 6];
    end
    if c(1,2) == 4
        B = [1 2 6];
    end
    if c(1,2) == 6

        B = [1 2 4];
    end
end
if c(1,1) == 8
    if c(1,2) == 1
        B = [3 5 6];
    end
    if c(1,2) == 3
        B = [1 5 6];
    end
    if c(1,2) == 5
        B = [1 3 6];
    end
    if c(1,2) == 6
        B = [1 3 5];
    end
end
if c(1,1) == 9
    if c(1,2) == 2
        B = [3 4 5];
    end
    if c(1,2) == 3
        B = [2 4 5];
    end
    if c(1,2) == 4
        B = [2 3 5];
    end
    if c(1,2) == 5
        B = [2 3 4];
    end
end
if c(2,1) == 7
    if c(2,2) == 1
        C = [2 4 6];
    end
    if c(2,2) == 2
        C = [1 4 6];
    end
    if c(2,2) == 4
        C = [1 2 6];
    end
    if c(2,2) == 6
        C = [1 2 4];
    end
end
if c(2,1) == 8
    if c(2,2) == 1
        C = [3 5 6];
    end
    if c(2,2) == 3
        C = [1 5 6];
    end
    if c(2,2) == 5
        C = [1 3 6];
    end
    if c(2,2) == 6
        C = [1 3 5];
    end
end
if c(2,1) == 9
    if c(2,2) == 2
        C = [3 4 5];
    end
    if c(2,2) == 3
        C = [2 4 5];
    end
    if c(2,2) == 4

        C = [2 3 5];
    end
    if c(2,2) == 5
        C = [2 3 4];
    end
end
D = [];
for i = 1:3
    for j = 1:3
        if C(j) ~= B(i)   % the two previous vertices must differ
            d = [B(i); C(j)];
            D = [D d];
        end
    end
end
routes = D;

Program #10

%%--- to eliminate the replication in the routes ---
L = length(M);
for i = 1:2:L
    for j = i+2:2:L-2
        if j+2 <= L
            if M(j,:) == M(i,:)
                if M(j+1,:) == M(i+1,:)
                    M = [M(1:j-1,:); M(j+2:end,:)];
                    L = length(M);
                end
            end
        end
    end
end

Program #11

%%-- to test if there is any invalid route in all the possible routes --
A = M;
s = size(A); m = s(1)-1; n = s(2)-1; flag = 0;
for i = 1:m
    for j = 1:n
        if A(i,j) == A(i,j+1)
            if mod(i,2) == 1
                disp(i)
            else
                disp(i-1)
            end

            disp('th is an invalid route')
            flag = flag + 1;
        end
    end
end
for i = 1:2:m
    for j = 1:n
        if A(i,j) == A(i+1,j)
            disp(i)
            disp('th is an invalid route')
            flag = flag + 1;
        end
    end
end
if flag == 0
    disp('awesome!!! all routes are valid routes')
end
