6 Stationary Distributions


6.1 Definition and Examples

Definition 6.1.1 Let {X_n} be a Markov chain on S with transition probability matrix P. A distribution π on S is called stationary (or invariant) if

  π = πP,   (6.1)

or equivalently if

  π_j = Σ_{i∈S} π_i p_{ij},  j ∈ S.   (6.2)

Thus, in order to find a stationary distribution of a Markov chain with a transition probability matrix P = [p_{ij}], we need to solve the linear system (6.1) (or equivalently (6.2)) together with the conditions:

  Σ_{i∈S} π_i = 1 and π_i ≥ 0 for all i ∈ S.

Example 6.1.2 (2-state Markov chain) Consider the transition matrix:

  P = [ 1−p   p  ]
      [  q   1−q ].

Solving the equation πP = π with π = [π_0 π_1], which is equivalent to

  pπ_0 − qπ_1 = 0,

together with π_0 + π_1 = 1, we obtain

  π_0 = q/(p+q),  π_1 = p/(p+q).   (6.3)

Thus we conclude:
(i) If p + q > 0, there exists a unique stationary distribution as in (6.3).
(ii) If p = q = 0, a stationary distribution is not uniquely determined. In fact, any distribution π = [π_0, π_1] is stationary.

Moreover, we saw in an earlier example that if 0 < p + q < 2, or equivalently, if |r| < 1 where r = 1 − p − q, we have

  lim_{n→∞} P^n = (1/(p+q)) [ q  p ]
                            [ q  p ].

Then

  lim_{n→∞} π(n) = lim_{n→∞} π(0) P^n = [π_0(0) π_1(0)] · (1/(p+q)) [ q  p ; q  p ] = [ q/(p+q)  p/(p+q) ].

It is noteworthy that the stationary distribution is obtained as a limit distribution.

Example 6.1.3 (3-state Markov chain) We discuss the Markov chain {X_n} introduced in Example 5.2.7. If q > 0 and b > 0, the stationary distribution is unique and given by π = [0 0 1].

Example 6.1.4 (One-dimensional RW) Consider the one-dimensional random walk with right-move probability p > 0 and left-move probability q = 1 − p > 0. Let [π(k)] be a distribution on Z. If it is stationary, we have

  π(k) = p π(k−1) + q π(k+1),  k ∈ Z.   (6.4)

(Case 1) p ≠ q. Then a general solution to (6.4) is given by

  π(k) = C_1 · 1^k + C_2 (p/q)^k = C_1 + C_2 (p/q)^k,  k ∈ Z.

This never becomes a probability distribution for any choice of C_1 and C_2. Namely, there is no stationary distribution.

(Case 2) p = q. In this case a general solution to (6.4) is given by

  π(k) = (C_1 + C_2 k) · 1^k = C_1 + C_2 k,  k ∈ Z.

This never becomes a probability distribution for any choice of C_1 and C_2. Namely, there is no stationary distribution.

Example 6.1.5 (One-dimensional RW with reflection barrier) There is a unique stationary distribution when p < q. In fact,

  π(0) = C,  π(k) = (C/q)(p/q)^{k−1},  k ≥ 1,  where C = (q−p)/(2q).

If p ≥ q, then there is no stationary distribution.

6.2 Existence and Uniqueness

Theorem 6.2.1 A Markov chain over a finite state space S has a stationary distribution.

A simple proof is based on Brouwer's fixed-point theorem; for details see the textbooks. Note that the stationary distribution mentioned in the above theorem is not necessarily unique.

Definition 6.2.2 We say that a state j can be reached from a state i if there exists some n ≥ 0 such that p_n(i,j) > 0. By definition every state i can be reached from itself. We say that two states i and j intercommunicate if i can be reached from j and j can be reached from i, i.e., there exist m ≥ 0 and n ≥ 0 such that p_n(i,j) > 0 and p_m(j,i) > 0.

For i, j ∈ S we introduce a binary relation i ∼ j when they intercommunicate. Then ∼ becomes an equivalence relation on S:
(i) i ∼ i;
(ii) i ∼ j ⟹ j ∼ i;
(iii) i ∼ j, j ∼ k ⟹ i ∼ k.
In fact, (i) and (ii) are obvious by definition, and (iii) is verified by the Chapman-Kolmogorov equation. Thereby the state space S is classified into a disjoint set of equivalence classes. In each equivalence class any two states intercommunicate with each other.

Definition 6.2.3 A state i is called absorbing if p_ii = 1. In particular, an absorbing state is a state which constitutes an equivalence class by itself.

Definition 6.2.4 A Markov chain is called irreducible if every state can be reached from every other state, i.e., if there is only one equivalence class of intercommunicating states.

Theorem 6.2.5 An irreducible Markov chain on a finite state space S admits a unique stationary distribution π = [π_i]. Moreover, π_i > 0 for all i ∈ S.

In fact, the proof owes to the following two facts:
(1) For an irreducible Markov chain the following assertions are equivalent: (i) it admits a stationary distribution; (ii) every state is positive recurrent. In this case the stationary distribution π is unique and given by

  π_i = 1/E(T_i | X_0 = i),  i ∈ S.

(2) Every state of an irreducible Markov chain on a finite state space is positive recurrent (Theorem 7.3.2).
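For a chain on a finite state space, the system (6.1) together with the normalization can be solved numerically. A minimal sketch in Python with NumPy; the values p = 0.3, q = 0.1 are an arbitrary illustration of Example 6.1.2:

```python
import numpy as np

# Solve pi P = pi together with sum(pi) = 1 as a linear system:
# stack the n equations pi (P - I) = 0 with the normalization row.
def stationary_distribution(P):
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Two-state chain of Example 6.1.2 with p = 0.3, q = 0.1:
p, q = 0.3, 0.1
P = np.array([[1 - p, p],
              [q, 1 - q]])
print(stationary_distribution(P))    # [0.25 0.75]
print(q / (p + q), p / (p + q))      # the closed form (6.3)
```

For an irreducible chain (Theorem 6.2.5) the solution is unique and strictly positive, as the printed values illustrate.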

6.3 Convergence

Example 6.3.1 (2-state Markov chain) We recall the 2-state Markov chain of Example 6.1.2. If p + q > 0, the distribution of the above Markov chain converges to the unique stationary distribution. Consider the case of p = q = 1, i.e., the transition matrix becomes

  P = [ 0 1 ]
      [ 1 0 ].

The stationary distribution is unique. But for a given initial distribution π(0) it is not necessarily true that π(n) converges to the stationary distribution. Roughly speaking, we need to avoid periodic transitions in order to have convergence to a stationary distribution.

Definition 6.3.2 For a state i ∈ S,

  GCD{n ≥ 1; P(X_n = i | X_0 = i) > 0}

is called the period of i. (When the set in the right-hand side is empty, the period is not defined.) A state i ∈ S is called aperiodic if its period is one.

Theorem 6.3.3 For an irreducible Markov chain, every state has a common period.

Theorem 6.3.4 Let π be a stationary distribution of an irreducible Markov chain on a finite state space (it is unique, see Theorem 6.2.5). If {X_n} is aperiodic, for any j ∈ S we have

  lim_{n→∞} P(X_n = j) = π_j.

Problem 10 Find all stationary distributions of the Markov chain determined by the transition diagram below [diagram omitted]. Then discuss convergence of distributions.
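Problem 10 asks about convergence of distributions; the phenomenon can be explored numerically. A minimal sketch in Python (the matrices are arbitrary illustrations, not the chain of Problem 10):

```python
import numpy as np

# Periodic chain P = [[0,1],[1,0]]: pi(n) = pi(0) P^n oscillates.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
pi = np.array([1.0, 0.0])             # start in state 0
for n in range(4):
    print(n, pi)                      # alternates [1,0], [0,1], ...
    pi = pi @ P

# An aperiodic chain with the same stationary distribution [1/2, 1/2]:
Q = np.array([[0.1, 0.9],
              [0.9, 0.1]])
pi = np.array([1.0, 0.0])
for _ in range(200):
    pi = pi @ Q
print(pi)                             # approximately [0.5, 0.5]
```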

Problem 11 Let {X_n} be the Markov chain introduced in Example 5.2.7, with states H, S, D and transition probabilities a, b, p, q, r as in its transition diagram [diagram omitted]. For n = 1, 2, ... let H_n denote the probability of starting from H and terminating at D at the n-th step. Similarly, for n = 1, 2, ... let S_n denote the probability of starting from S and terminating at D at the n-th step.

(1) Show that {H_n} and {S_n} satisfy the following linear system:

  H_n = a H_{n−1} + b S_{n−1},
  S_n = p H_{n−1} + r S_{n−1},  n ≥ 2;  H_1 = 0, S_1 = q.

(2) Let H and S denote the life times starting from the states H and S, respectively. Solving the linear system in (1), prove the following identities for the mean life times:

  E[H] = Σ_{n=1}^∞ n H_n = (b + p + q)/(bq),  E[S] = Σ_{n=1}^∞ n S_n = (b + p)/(bq).

6.4 Page Rank

The hyperlinks among N websites give rise to a digraph (directed graph) G on N vertices. It is natural to consider a Markov chain on G, which is defined by the transition matrix P = [p_{ij}], where

  p_{ij} = 1/deg i,  if i → j,
         = 0,        if i ≠ j and i ↛ j,
         = 1,        if deg i = 0 and j = i,

and deg i = |{j ; i → j}| is the out-degree of i. There exists a stationary distribution, but it is not necessarily unique. Taking 0 ≤ d ≤ 1 we modify the transition matrix:

  Q = [q_{ij}],  q_{ij} = d p_{ij} + ϵ,  ϵ = (1 − d)/N.

If 0 ≤ d < 1, the Markov chain determined by Q necessarily has a unique stationary distribution. Choosing a suitable d < 1, we may understand the stationary distribution π = [π(i)] as the page rank among the websites.

Problem 12 Consider the page rank introduced above.
(1) Let π(i) be the page rank of a site i. Show that π(i) satisfies the following relation and explain its meaning:

  π(i) = (1 − d)/N + d Σ_{j: j→i} π(j)/deg j.

(2) Show more examples of the page rank and discuss the role of sites which have no hyperlinks, that is, deg i = 0 (in terms of P = [p_{ij}] such sites correspond to absorbing states).
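The stationary distribution of Q can be computed by power iteration, which is essentially how page rank is evaluated in practice. A minimal sketch in Python; the 4-page link structure is hypothetical, and d = 0.85 is just a commonly quoted choice of damping factor:

```python
import numpy as np

# Power iteration for the modified chain Q of Section 6.4.
# links[i] = set of pages that i links to; a page with deg i = 0 is given
# a self-loop, matching the rule p_ij = 1 for deg i = 0 and j = i.
def pagerank(links, N, d=0.85, iters=200):
    P = np.zeros((N, N))
    for i in range(N):
        targets = links.get(i) or {i}          # dangling page: j = i
        for j in targets:
            P[i, j] = 1.0 / len(targets)
    Q = d * P + (1.0 - d) / N                  # q_ij = d p_ij + (1-d)/N
    pi = np.full(N, 1.0 / N)
    for _ in range(iters):
        pi = pi @ Q
    return pi

# A toy 4-page web (hypothetical links; page 3 has no outgoing links):
print(pagerank({0: {1, 2}, 1: {2}, 2: {0}, 3: set()}, N=4))
```

The dangling page 3 illustrates the situation of Problem 12 (2): under P it would be absorbing, but the damping term (1 − d)/N keeps the chain irreducible.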

7 Topics in Markov Chains I: Recurrence

7.1 Recurrence

Definition 7.1.1 Let i ∈ S be a state. Define the first hitting time or first passage time to i by

  T_i = inf{n ≥ 1; X_n = i}.

If {n ≥ 1; X_n = i} is an empty set, we define T_i = ∞. A state i is called recurrent if P(T_i < ∞ | X_0 = i) = 1. It is called transient if P(T_i = ∞ | X_0 = i) > 0.

Theorem 7.1.2 A state i ∈ S is recurrent if and only if

  Σ_{n=0}^∞ p_n(i,i) = ∞.

If a state i is transient, we have

  Σ_{n=0}^∞ p_n(i,i) < ∞  and  Σ_{n=0}^∞ p_n(i,i) = 1/(1 − P(T_i < ∞ | X_0 = i)).

Proof We first put

  p_n(i,j) = P(X_n = j | X_0 = i),  n = 0, 1, 2, ...,
  f_n(i,j) = P(T_j = n | X_0 = i) = P(X_1 ≠ j, ..., X_{n−1} ≠ j, X_n = j | X_0 = i),  n = 1, 2, ....

Here p_n(i,j) is nothing else but the n-step transition probability. On the other hand, f_n(i,j) is the probability that the Markov chain starts from i and reaches j for the first time after n steps. Dividing the set of sample paths from i to j in n steps according to the number of steps after which the path reaches j for the first time, we obtain

  p_n(i,j) = Σ_{r=1}^n f_r(i,j) p_{n−r}(j,j),  i, j ∈ S,  n = 1, 2, ....   (7.1)

We next introduce the generating functions:

  G_{ij}(z) = Σ_{n=0}^∞ p_n(i,j) z^n,  F_{ij}(z) = Σ_{n=1}^∞ f_n(i,j) z^n.

In view of (7.1) we see easily that

  G_{ij}(z) = p_0(i,j) + F_{ij}(z) G_{jj}(z).   (7.2)

Setting i = j in (7.2), we obtain

  G_{ii}(z) = 1 + F_{ii}(z) G_{ii}(z),  i.e.,  G_{ii}(z) = 1/(1 − F_{ii}(z)).

On the other hand, since

  G_{ii}(1) = Σ_{n=0}^∞ p_n(i,i),  F_{ii}(1) = lim_{z→1−0} F_{ii}(z) = Σ_{n=1}^∞ f_n(i,i) = P(T_i < ∞ | X_0 = i),

we see that the two conditions F_{ii}(1) = 1 and G_{ii}(1) = ∞ are equivalent. The second statement is readily clear.
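Both the decomposition (7.1) and the interpretation of F_ii(1) as a return probability can be checked numerically on a small chain; a minimal sketch in Python, where the 3-state transition matrix is an arbitrary example:

```python
import numpy as np

# Check the first-passage decomposition (7.1) at i = j = 0.
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.4, 0.0, 0.6]])
i = j = 0
N = 30

p = [np.linalg.matrix_power(P, n)[i, j] for n in range(N + 1)]   # p_n(i,j)

# f_n(i,j): propagate the chain, removing the mass that has already hit j.
f, v = [], P[i].copy()          # v = distribution after one step
for n in range(1, N + 1):
    f.append(v[j])              # mass hitting j for the first time at step n
    v[j] = 0.0
    v = v @ P

for n in (1, 5, 10):            # p_n(i,j) = sum_r f_r(i,j) p_{n-r}(j,j)
    print(p[n], sum(f[r - 1] * p[n - r] for r in range(1, n + 1)))
print(sum(f))                   # close to 1: the state 0 is recurrent
```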

7.2 Random Walks on Lattices

Example 7.2.1 (random walk on Z) Since the random walk starting from the origin 0 returns to it only after an even number of steps, for recurrence we only need to compute the sum of p_{2n}(0,0). We start with the obvious result:

  p_{2n}(0,0) = (2n)!/(n! n!) p^n q^n,  p + q = 1.

Then, using the Stirling formula:

  n! ∼ (n/e)^n √(2πn),   (7.3)

we obtain

  p_{2n}(0,0) ∼ (4pq)^n / √(πn).

Hence,

  Σ_{n=0}^∞ p_{2n}(0,0)  < ∞  if p ≠ q,
                         = ∞  if p = q = 1/2.

Consequently, the one-dimensional random walk is transient if p ≠ q, and it is recurrent if p = q = 1/2.

Remark 7.2.2 Let {a_n} and {b_n} be sequences of positive numbers. We write a_n ∼ b_n if lim_{n→∞} a_n/b_n = 1. In this case, there exist two constant numbers c_1 > 0 and c_2 > 0 such that c_1 a_n ≤ b_n ≤ c_2 a_n. Hence Σ a_n and Σ b_n converge or diverge at the same time.

Example 7.2.3 (random walk on Z²) Obviously, the random walk starting from the origin 0 returns to it only after an even number of steps. Therefore, for recurrence we only need to compute the sum of p_{2n}(0,0). For the two-dimensional isotropic random walk we need to consider two directions, along the x-axis and along the y-axis. We see easily that

  p_{2n}(0,0) = Σ_{i+j=n} (2n)!/(i! i! j! j!) (1/4)^{2n}
              = (2n)!/(n! n!) (1/4)^{2n} Σ_{i+j=n} (n! n!)/(i! i! j! j!)
              = C(2n,n) (1/4)^{2n} Σ_{i=0}^n C(n,i)².

Employing the formula for the binomial coefficients:

  Σ_{i=0}^n C(n,i)² = C(2n,n),   (7.4)

which is a good exercise for the readers, we obtain

  p_{2n}(0,0) = [C(2n,n)]² (1/4)^{2n} ∼ 1/(πn).

Then we have

  Σ_{n=0}^∞ p_{2n}(0,0) = ∞,

which means that the two-dimensional random walk is recurrent.

Example 7.2.4 (random walk on Z³) Let us consider the isotropic random walk in three dimensions. As there are three directions, say, along the x-, y-, z-axes, we have

  p_{2n}(0,0) = Σ_{i+j+k=n} (2n)!/(i! i! j! j! k! k!) (1/6)^{2n}
              = (2n)!/(n! n!) (1/6)^{2n} Σ_{i+j+k=n} (n! n!)/(i! i! j! j! k! k!)
              = C(2n,n) (1/6)^{2n} Σ_{i+j+k=n} ( n!/(i! j! k!) )².

We note the following two facts. First,

  Σ_{i+j+k=n} n!/(i! j! k!) = 3^n.   (7.5)

Second, the maximum value M_n = max_{i+j+k=n} n!/(i! j! k!) is attained when n/3 − 1 ≤ i, j, k ≤ n/3 + 1, so

  M_n ∼ 3^n · 3√3/(2πn)

by the Stirling formula. Since Σ_{i+j+k=n} (n!/(i! j! k!))² ≤ M_n Σ_{i+j+k=n} n!/(i! j! k!) = 3^n M_n, we then have

  p_{2n}(0,0) ≤ C(2n,n) (1/6)^{2n} 3^n M_n ∼ (3√3)/(2√π · π) · n^{−3/2}.

Therefore,

  Σ_{n=0}^∞ p_{2n}(0,0) < ∞,

which implies that the random walk is not recurrent (i.e., transient).

7.3 Positive Recurrence and Null Recurrence

If a state i is recurrent, i.e., P(T_i < ∞ | X_0 = i) = 1, the mean recurrence time is defined:

  E(T_i | X_0 = i) = Σ_{n=1}^∞ n P(T_i = n | X_0 = i).

The state i is called positive recurrent if E(T_i | X_0 = i) < ∞, and null recurrent otherwise.

Theorem 7.3.1 The states in an equivalence class are all positive recurrent, or all null recurrent, or all transient. In particular, for an irreducible Markov chain, the states are all positive recurrent, or all null recurrent, or all transient.

Theorem 7.3.2 For an irreducible Markov chain on a finite state space S, every state is positive recurrent.

Example 7.3.3 The mean recurrence time of the one-dimensional isotropic random walk is infinite, i.e., the one-dimensional isotropic random walk is null recurrent. The proof will be given in Section ??.

Problem 13 Let {X_n} be a Markov chain described by the following transition diagram [diagram omitted], where p > 0 and q > 0. For a state i ∈ S let T_i = inf{n ≥ 1; X_n = i} be the first hitting time to i.

(1) Calculate P(T_0 = 1 | X_0 = 0), P(T_0 = 2 | X_0 = 0), P(T_0 = 3 | X_0 = 0), P(T_0 = 4 | X_0 = 0).

(2) Find P(T_0 = n | X_0 = 0) and calculate

  Σ_{n=1}^∞ P(T_0 = n | X_0 = 0),  Σ_{n=1}^∞ n P(T_0 = n | X_0 = 0).
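The trichotomy of Examples 7.2.1, 7.2.3 and 7.2.4 can be observed numerically: the partial sums of p_{2n}(0,0) diverge for Z and Z² but stabilize for Z³. A minimal sketch in Python; the quoted limiting values are standard facts about the simple random walk:

```python
from math import comb, factorial
from fractions import Fraction

# On Z and Z^2: a_n = C(2n,n)/4^n obeys a_0 = 1, a_n = a_{n-1}(2n-1)/(2n);
# p_{2n}(0,0) = a_n on Z and a_n^2 on Z^2 by (7.4).
a, sZ, sZ2 = 1.0, 1.0, 1.0
for n in range(1, 20001):
    a *= (2 * n - 1) / (2 * n)
    sZ += a
    sZ2 += a * a
print(sZ, sZ2)     # grow like 2*sqrt(N/pi) and log(N)/pi: both diverge

# On Z^3: p_{2n}(0,0) = C(2n,n) 6^{-2n} * sum over i+j+k=n of (n!/(i!j!k!))^2.
def p2n_Z3(n):
    T = sum((factorial(n) // (factorial(i) * factorial(j) * factorial(n - i - j))) ** 2
            for i in range(n + 1) for j in range(n - i + 1))
    return float(Fraction(comb(2 * n, n) * T, 36 ** n))

print(sum(p2n_Z3(n) for n in range(61)))
# about 1.4, increasing slowly towards G = 1/(1 - q) = 1.51...; by
# Theorem 7.1.2 this corresponds to a return probability q = 0.34... < 1.
```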

8 Topics in Markov Chains II: Absorption

8.1 Absorbing States

A state i is called absorbing if p_ii = 1 and p_ij = 0 for all j ≠ i. Once a Markov chain hits an absorbing state, it stays there forever.

Let us consider a Markov chain on a finite state space S with some absorbing states. We set S = S_a ∪ S_0, where S_a denotes the set of absorbing states and S_0 the rest. According to the above partition, the transition matrix is written in block form as

  P = [ I  0 ]
      [ S  T ].

Then

  P^n = [ I    0  ]
        [ S_n  T^n ],

where S_1 = S and S_n = S_{n−1} + T^{n−1} S.

To avoid inessential tediousness we assume the following condition:

(C1) For any i ∈ S_0 there exist j ∈ S_a and n ≥ 1 such that (P^n)_{ij} > 0.

In other words, the Markov chain starting from i ∈ S_0 has a positive probability of absorption. Since S is finite by assumption, the n in (C1) can be chosen independently of i ∈ S_0. Hence (C1) is equivalent to the following:

(C2) There exists N ≥ 1 such that for any i ∈ S_0 there exists j ∈ S_a with (P^N)_{ij} > 0.

Lemma 8.1.1 Notations and assumptions being as above,

  lim_{n→∞} T^n = 0.

Proof We see from the obvious relation

  1 = Σ_{j∈S} (P^N)_{ij} = Σ_{j∈S_0} (P^N)_{ij} + Σ_{j∈S_a} (P^N)_{ij}

and condition (C2) that

  Σ_{j∈S_0} (P^N)_{ij} < 1,  i ∈ S_0.

Note that for i, j ∈ S_0 we have (P^N)_{ij} = (T^N)_{ij}. We choose δ < 1 such that

  Σ_{j∈S_0} (T^N)_{ij} ≤ δ < 1,  i ∈ S_0.

Now let i ∈ S_0 and n ≥ N. We see that

  Σ_{j∈S_0} (T^n)_{ij} = Σ_{j∈S_0} Σ_{k∈S_0} (T^{n−N})_{ik} (T^N)_{kj} ≤ δ Σ_{k∈S_0} (T^{n−N})_{ik} = δ Σ_{j∈S_0} (T^{n−N})_{ij}.

Repeating this procedure, we have

  Σ_{j∈S_0} (T^n)_{ij} ≤ δ^k Σ_{j∈S_0} (T^{n−kN})_{ij} ≤ δ^k Σ_{j∈S} (P^{n−kN})_{ij} = δ^k,

where 0 ≤ n − kN < N. Therefore,

  lim_{n→∞} Σ_{j∈S_0} (T^n)_{ij} = 0,

from which we have lim_{n→∞} (T^n)_{ij} = 0 for all i, j ∈ S_0.

Remark 8.1.2 It is shown that every state i ∈ S_0 is transient.

Theorem 8.1.3 Let π_0 = [α β] be the initial distribution (according to S = S_a ∪ S_0). Then the limit distribution is given by [α + βS_∞  0], where S_∞ = (I − T)^{−1} S.

Proof The limit distribution is given by

  lim_{n→∞} π_0 P^n = lim_{n→∞} [α β] [ I 0 ; S_n T^n ] = lim_{n→∞} [α + βS_n  βT^n].

We see from Lemma 8.1.1 that

  lim_{n→∞} βT^n = 0.

On the other hand, since S_n = S_{n−1} + T^{n−1} S we have

  S_n = (I + T + T² + ... + T^{n−1}) S

and

  (I − T) S_n = (I − T^n) S.

Hence

  lim_{n→∞} S_n = lim_{n→∞} (I − T)^{−1} (I − T^n) S = (I − T)^{−1} S,

which shows the result.

Example 8.1.4 Consider the Markov chain given by the transition diagram [diagram omitted], which is a random walk with absorbing barriers: the states 1 and 2 are absorbing, state 3 moves to 1 with probability q and to 4 with probability p, and state 4 moves to 3 with probability q and to 2 with probability p. The transition matrix is given by

  P = [ I 0 ]   [ 1 0 0 0 ]
      [ S T ] = [ 0 1 0 0 ]
                [ q 0 0 p ]
                [ 0 p q 0 ],

  S = [ q 0 ]   T = [ 0 p ]
      [ 0 p ],      [ q 0 ].

Then

  S_∞ = (I − T)^{−1} S = (1/(1 − pq)) [ q   p² ]
                                      [ q²  p  ].

Suppose that the initial distribution is given by π_0 = [α β γ δ]. Then the limit distribution is

  [ α + (q/(1−pq))γ + (q²/(1−pq))δ   β + (p²/(1−pq))γ + (p/(1−pq))δ   0   0 ].

In particular, if the Markov chain starts at the state 3, setting π_0 = [0 0 1 0], we obtain the limit distribution

  [ q/(1−pq)   p²/(1−pq)   0   0 ],

which means that the Markov chain is absorbed in the states 1 or 2 at the ratio q : p².

Problem 14 Following Example 8.1.4, study the Markov chain given by the following transition diagram [diagram omitted], where p + q = 1.

8.2 Gambler's Ruin

We consider a random walk with absorbing barriers at −A and B, where A > 0 and B > 0. This is a Markov chain on the state space S = {−A, −A+1, ..., B−1, B} with the transition diagram as follows [diagram omitted]. We are interested in the absorption probabilities, i.e.,

  R = P(X_n = −A for some n = 1, 2, ...) = P(∪_{n=1}^∞ {X_n = −A}),
  S = P(X_n = B for some n = 1, 2, ...) = P(∪_{n=1}^∞ {X_n = B}).

Note that the events in the right-hand sides are not unions of disjoint events. A sample path is shown in the following picture [figure omitted].
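Before the closed forms are derived (Theorem 8.2.2 below), R and S can be estimated by direct simulation; a minimal sketch in Python with arbitrary illustrative parameters:

```python
import random

# Monte Carlo estimate of the absorption probabilities R and S:
# a walk started at 0 moves right with probability p, left with
# probability q = 1 - p, until it hits -A or B.
def estimate_RS(p, A, B, trials=100_000, seed=1):
    rng = random.Random(seed)
    hits_A = 0
    for _ in range(trials):
        x = 0
        while -A < x < B:
            x += 1 if rng.random() < p else -1
        hits_A += (x == -A)
    return hits_A / trials, 1 - hits_A / trials

print(estimate_RS(p=0.5, A=1, B=2))   # close to (2/3, 1/3)
```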

A key idea is to introduce a similar random walk starting at k, −A ≤ k ≤ B, which is denoted by X_n^{(k)}. Then the original one is X_n = X_n^{(0)}. Let R_k and S_k be the probabilities that the random walk X_n^{(k)} is absorbed at −A and B, respectively. We wish to find R = R_0 and S = S_0.

Lemma 8.2.1 {R_k ; −A ≤ k ≤ B} fulfills the following difference equation:

  R_k = p R_{k+1} + q R_{k−1},  R_{−A} = 1, R_B = 0.   (8.1)

Similarly, {S_k ; −A ≤ k ≤ B} fulfills the following difference equation:

  S_k = p S_{k+1} + q S_{k−1},  S_{−A} = 0, S_B = 1.   (8.2)

Theorem 8.2.2 Let A ≥ 1 and B ≥ 1. Let {X_n} be the random walk with absorbing barriers at −A and B, and with right-move probability p and left-move probability q (p + q = 1). Then the probabilities that {X_n} is absorbed at the barriers are given by

  P(X_n = −A for some n) = ((q/p)^A − (q/p)^{A+B}) / (1 − (q/p)^{A+B})  if p ≠ q,
                         = B/(A + B)                                    if p = q = 1/2,

  P(X_n = B for some n) = (1 − (q/p)^A) / (1 − (q/p)^{A+B})  if p ≠ q,
                        = A/(A + B)                           if p = q = 1/2.

In particular, the random walk is absorbed at the barriers with probability 1.

An interpretation of Theorem 8.2.2 gives the solution to the gambler's ruin problem. Two players A and B toss a fair coin by turns. Let A and B be their allotted points when the game starts. They exchange one point after each trial. This game is over when one of the players loses all the allotted points and the other gets A + B points. We are interested in the probability of each player's win. For each n ≥ 0 define X_n in such a way that the allotted point of A at time n is given by A + X_n. Then {X_n} becomes a random walk with absorbing barriers at −A and B. It then follows from Theorem 8.2.2 that the winning probabilities of A and B are given by

  P(A) = A/(A + B),  P(B) = B/(A + B),   (8.3)

respectively. As a result, they are proportional to the initial allotted points. For example, if A = 1 and B = 100, we have P(A) = 1/101 and P(B) = 100/101, so that A has almost no chance of winning.

In a fair bet the recurrence is guaranteed by Theorem 2.1.1: even if one has many more losses than wins, by continuing the game one will eventually come back to the zero balance. However, in reality there is the barrier of limited money, and (8.3) tells the effect of that barrier.

It is also interesting to know the expectation of the number of coin tosses until the game is over.

Theorem 8.2.3 Let {X_n} be the same as in Theorem 8.2.2. The expected life time of this random walk until absorption is given by

  A/(q−p) − ((A+B)/(q−p)) · (1 − (q/p)^A)/(1 − (q/p)^{A+B})  if p ≠ q,
  AB                                                          if p = q = 1/2.

Proof Let Y_k be the life time of a random walk starting from the position k (−A ≤ k ≤ B) at time n = 0 until absorption. In other words,

  Y_k = min{ j ≥ 0 ; X_j^{(k)} = −A or X_j^{(k)} = B }.

We wish to compute E(Y_0). We see by definition that

  E(Y_{−A}) = E(Y_B) = 0.   (8.4)

For −A < k < B we have

  E(Y_k) = Σ_{j=1}^∞ j P(Y_k = j).   (8.5)

In a similar manner as in the proof of Theorem 8.2.2 we note that

  P(Y_k = j) = p P(Y_{k+1} = j−1) + q P(Y_{k−1} = j−1).   (8.6)

Inserting (8.6) into (8.5), we obtain

  E(Y_k) = p Σ_{j=1}^∞ j P(Y_{k+1} = j−1) + q Σ_{j=1}^∞ j P(Y_{k−1} = j−1)
         = p E(Y_{k+1}) + q E(Y_{k−1}) + 1.   (8.7)

Thus, E(Y_k) is the solution to the difference equation (8.7) with boundary condition (8.4). This difference equation is solved in a standard manner and we find

  E(Y_k) = (A+k)/(q−p) − ((A+B)/(q−p)) · (1 − (q/p)^{A+k})/(1 − (q/p)^{A+B})  if p ≠ q,
         = (A+k)(B−k)                                                          if p = q = 1/2.

Setting k = 0, we obtain the result.

If p = q = 1/2 and A = 1, B = 100, the expected life time is AB = 100. The gambler A is much inferior to B in the amount of funds (as we have seen already, the probability of A's win is just 1/101); however, the expected life time until the game is over is 100, which is longer than one expects intuitively. Perhaps this is because the gambler cannot quit gambling.

Problem 15 (A bold gambler) In each game a gambler wins the dollars he bets with probability p, and loses with probability q = 1 − p. The goal of the gambler is to get 5 dollars. His strategy is to bet the difference between 5 dollars and what he has. Let X_n be the amount he has just after the n-th bet.
(1) Analyze the Markov chain {X_n} with initial condition X_0 = 1.
(2) Compare with the steady gambler discussed in this section, who bets just 1 dollar in each game.
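The boundary value problems (8.1) and (8.4) with (8.7) are finite linear systems, so the closed forms of Theorems 8.2.2 and 8.2.3 can be cross-checked numerically; a minimal sketch in Python with arbitrary parameters:

```python
import numpy as np

# States k = -A..B are indexed by k + A; rows 0 and A+B are the barriers.
def ruin_exact(p, A, B):
    q, n = 1 - p, A + B + 1
    M = np.zeros((n, n))
    rhs_R, rhs_Y = np.zeros(n), np.zeros(n)
    M[0, 0] = M[-1, -1] = 1.0
    rhs_R[0] = 1.0                     # R_{-A} = 1, R_B = 0; E(Y) = 0 at both
    for k in range(1, n - 1):
        M[k, k], M[k, k + 1], M[k, k - 1] = 1.0, -p, -q   # (8.1) and (8.7)
        rhs_Y[k] = 1.0
    R = np.linalg.solve(M, rhs_R)
    Y = np.linalg.solve(M, rhs_Y)
    return R[A], Y[A]                  # absorption prob. at -A and E(Y_0)

p, A, B = 0.4, 3, 5
q = 1 - p
r = q / p
print(ruin_exact(p, A, B))
print((r**A - r**(A + B)) / (1 - r**(A + B)),                          # Thm 8.2.2
      A / (q - p) - (A + B) / (q - p) * (1 - r**A) / (1 - r**(A + B))) # Thm 8.2.3
```

The two printed lines agree, which is a quick sanity check on the formulas.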

9 Galton-Watson Branching Processes

Consider a simplified family tree where each individual gives birth to offspring (children) and dies. The number of offspring is random. We are interested in whether the family survives or not. A fundamental model was proposed by F. Galton in 1873 and basic properties were derived by Galton and Watson in their joint paper in the next year. The name "Galton-Watson branching process" is quite common in the literature after their paper, but it would be more fair to refer to it as the BGW process. In fact, Irénée-Jules Bienaymé studied the same model independently already in 1845.

9.1 Definition

Let X_n be the number of individuals of the n-th generation. Then {X_n ; n = 0, 1, 2, ...} becomes a discrete-time stochastic process. We assume that the number of children born from each individual obeys a common probability distribution and is independent of individuals and of generations. Under this assumption {X_n} becomes a Markov chain.

Let us find the transition probability. Let Y be the number of children born from an individual and set

  P(Y = k) = p_k,  k = 0, 1, 2, ....

The sequence {p_0, p_1, p_2, ...} describes the distribution of the number of children born from an individual. In fact, what we need is the condition

  p_k ≥ 0,  Σ_{k=0}^∞ p_k = 1.

We refer to {p_0, p_1, ...} as the offspring distribution. Let Y_1, Y_2, ... be independent identically distributed random variables, of which the distribution is the same as that of Y. Then, we define the transition probability by

  p(i, j) = P(X_{n+1} = j | X_n = i) = P( Σ_{k=1}^i Y_k = j ),  i ≥ 1, j ≥ 0,

and

  p(0, j) = 0 if j ≥ 1,  p(0, j) = 1 if j = 0.
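The transition mechanism just described, X_{n+1} being a sum of X_n independent copies of Y, is easy to simulate; a minimal sketch in Python, where the offspring distribution [0.2, 0.5, 0.3] is an arbitrary example:

```python
import random

# One trajectory of a Galton-Watson process: X_{n+1} is a sum of X_n
# independent draws from the offspring distribution pk = [p_0, p_1, ...].
def gw_path(pk, generations, x0=1, seed=None):
    rng = random.Random(seed)
    ks = list(range(len(pk)))
    x, path = x0, [x0]
    for _ in range(generations):
        x = sum(rng.choices(ks, weights=pk, k=x)) if x > 0 else 0
        path.append(x)                 # state 0 is absorbing: extinction
    return path

# Hypothetical offspring distribution with mean m = 0*0.2 + 1*0.5 + 2*0.3 = 1.1:
print(gw_path([0.2, 0.5, 0.3], generations=20, seed=42))
```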

Clearly, the state 0 is an absorbing one. The above Markov chain {X_n} over the state space {0, 1, 2, ...} is called the Galton-Watson branching process with offspring distribution {p_k ; k = 0, 1, 2, ...}. For simplicity we assume that X_0 = 1.

When p_0 + p_1 = 1, the family tree is reduced to just a path without branching, so the situation is much simpler (Problem 16). We will focus on the case where

  p_0 + p_1 < 1,  p_2 < 1, ...,  p_k < 1, ....

From the next section on we will always assume the above conditions.

Problem 16 (One-child policy) Consider the Galton-Watson branching process with offspring distribution satisfying p_0 + p_1 = 1. Calculate the probabilities

  q_1 = P(X_1 = 0),  q_2 = P(X_1 ≠ 0, X_2 = 0), ...,  q_n = P(X_1 ≠ 0, ..., X_{n−1} ≠ 0, X_n = 0), ...

and find the extinction probability

  P(∪_{n=1}^∞ {X_n = 0}) = P(X_n = 0 occurs for some n ≥ 1).

9.2 Generating Functions

Let {X_n} be the Galton-Watson branching process with offspring distribution {p_k ; k = 0, 1, 2, ...}. Let p(i,j) = P(X_{n+1} = j | X_n = i) be the transition probability. We assume that X_0 = 1. Define the generating function of the offspring distribution by

  f(s) = Σ_{k=0}^∞ p_k s^k.   (9.1)

The series in the right-hand side converges for |s| ≤ 1. We set

  f_0(s) = s,  f_1(s) = f(s),  f_n(s) = f(f_{n−1}(s)).

Lemma 9.2.1

  Σ_{j=0}^∞ p(i,j) s^j = [f(s)]^i,  i = 1, 2, ....   (9.2)

Proof By definition,

  p(i,j) = P(Y_1 + ... + Y_i = j) = Σ_{k_1+...+k_i=j, k_1,...,k_i≥0} P(Y_1 = k_1, ..., Y_i = k_i).

Since Y_1, ..., Y_i are independent, we have

  p(i,j) = Σ_{k_1+...+k_i=j} P(Y_1 = k_1) ... P(Y_i = k_i) = Σ_{k_1+...+k_i=j} p_{k_1} ... p_{k_i}.

Hence,

  Σ_{j=0}^∞ p(i,j) s^j = Σ_{j=0}^∞ Σ_{k_1+...+k_i=j} p_{k_1} ... p_{k_i} s^j = ( Σ_{k_1=0}^∞ p_{k_1} s^{k_1} ) ... ( Σ_{k_i=0}^∞ p_{k_i} s^{k_i} ) = [f(s)]^i,

which proves the assertion.
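Lemma 9.2.1 says that the row generating polynomial of p(i, ·) is the i-fold product of f(s); equivalently, p(i, ·) is the i-fold convolution of the offspring distribution. A minimal numerical check in Python (same toy distribution as in the simulation sketch above):

```python
import numpy as np

# Coefficients of [f(s)]^i are the transition probabilities p(i, j),
# obtained by convolving the offspring distribution with itself i times.
pk = np.array([0.2, 0.5, 0.3])
i = 3
row = np.array([1.0])               # generating polynomial of the empty sum
for _ in range(i):
    row = np.convolve(row, pk)      # multiply by f(s) once more
print(row)                          # row[j] = p(3, j) = P(Y1 + Y2 + Y3 = j)
print(row.sum())                    # 1.0
```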

Lemma 9.2.2 Let p_n(i,j) be the n-step transition probability of the Galton-Watson branching process. We have

  Σ_{j=0}^∞ p_n(i,j) s^j = [f_n(s)]^i,  i = 1, 2, ....   (9.3)

Proof We prove the assertion by induction on n. First note that p_1(i,j) = p(i,j) and f_1(s) = f(s) by definition. For n = 1 we need to show that

  Σ_{j=0}^∞ p_1(i,j) s^j = [f_1(s)]^i,  i = 1, 2, ...,   (9.4)

which was shown in Lemma 9.2.1. Suppose that n ≥ 1 and the claim (9.3) is valid up to n. Using the Chapman-Kolmogorov identity, we see that

  Σ_{j=0}^∞ p_{n+1}(i,j) s^j = Σ_{j=0}^∞ Σ_{k=0}^∞ p_1(i,k) p_n(k,j) s^j.

Since

  Σ_{j=0}^∞ p_n(k,j) s^j = [f_n(s)]^k

by the assumption of induction, we obtain

  Σ_{j=0}^∞ p_{n+1}(i,j) s^j = Σ_{k=0}^∞ p_1(i,k) [f_n(s)]^k.

The right-hand side coincides with (9.4) where s is replaced by f_n(s). Consequently, we come to

  Σ_{j=0}^∞ p_{n+1}(i,j) s^j = [f_1(f_n(s))]^i = [f_{n+1}(s)]^i,

which proves the claim for n + 1.

Since X_0 = 1,

  P(X_n = j) = P(X_n = j | X_0 = 1) = p_n(1,j).

In particular,

  P(X_1 = j) = P(X_1 = j | X_0 = 1) = p_1(1,j) = p(1,j) = p_j.

Theorem 9.2.3 Assume that the mean value of the offspring distribution is finite:

  m = Σ_{k=0}^∞ k p_k < ∞.

Then we have E[X_n] = m^n.

Proof Differentiating (9.1), we obtain

  f′(s) = Σ_{k=1}^∞ k p_k s^{k−1},  |s| < 1.   (9.5)

Letting s → 1−0, we have

  lim_{s→1−0} f′(s) = m.

On the other hand, setting i = 1 in (9.3), we have

  Σ_{j=0}^∞ p_n(1,j) s^j = f_n(s) = f_{n−1}(f(s)).   (9.6)

Differentiating both sides, we come to

  f_n′(s) = Σ_{j=1}^∞ j p_n(1,j) s^{j−1} = f_{n−1}′(f(s)) f′(s).   (9.7)

Letting s → 1−0, we have

  lim_{s→1−0} f_n′(s) = Σ_{j=1}^∞ j p_n(1,j) = lim_{s→1−0} f_{n−1}′(f(s)) · lim_{s→1−0} f′(s) = m · lim_{s→1−0} f_{n−1}′(s).

Therefore,

  E(X_n) = lim_{s→1−0} f_n′(s) = m^n,

which means that

  Σ_{j=1}^∞ j P(X_n = j) = Σ_{j=1}^∞ j p_n(1,j) = m^n.

In conclusion, the mean value of the number of individuals in the n-th generation, E(X_n), decreases and converges to 0 if m < 1, and diverges to infinity if m > 1, as n → ∞. It stays at a constant if m = 1. We are thus led to expect that extinction of the family occurs when m < 1.

9.3 Extinction Probability

The event {X_n = 0} means that the family has died out by the n-th generation. So

  q = P(∪_{n=1}^∞ {X_n = 0})

is the probability of extinction of the family. Note that the events in the right-hand side are not mutually exclusive; rather

  {X_1 = 0} ⊂ {X_2 = 0} ⊂ ... ⊂ {X_n = 0} ⊂ ....

Therefore, it holds that

  q = lim_{n→∞} P(X_n = 0).   (9.8)

If q = 1, the family almost surely dies out in some generation. If q < 1, the survival probability is positive: 1 − q > 0. We are interested in whether q = 1 or not.

Lemma 9.3.1 Let f(s) be the generating function of the offspring distribution, and set f_n(s) = f(f_{n−1}(s)) as before. Then we have

  q = lim_{n→∞} f_n(0).

Therefore, q satisfies the equation:

  q = f(q).   (9.9)

Proof It follows from Lemma 9.2.2 that

  f_n(s) = Σ_{j=0}^∞ p_n(1,j) s^j.

Hence,

  f_n(0) = p_n(1,0) = P(X_n = 0 | X_0 = 1) = P(X_n = 0),

where the last identity is by the assumption X_0 = 1. The first assertion is now straightforward by combining (9.8). The second assertion follows since f(s) is a continuous function on [0, 1].

Lemma 9.3.2 Assume that the offspring distribution satisfies the conditions:

  p_0 + p_1 < 1,  p_2 < 1, ...,  p_k < 1, ....

Then the generating function f(s) verifies the following properties:
(1) f(s) is increasing, i.e., f(s_1) ≤ f(s_2) for 0 ≤ s_1 ≤ s_2 ≤ 1.
(2) f(s) is strictly convex, i.e., if 0 ≤ s_1 < s_2 ≤ 1 and 0 < θ < 1 we have

  f(θs_1 + (1−θ)s_2) < θ f(s_1) + (1−θ) f(s_2).

Proof (1) is apparent since the coefficients of the power series f(s) are non-negative. (2) follows since f″(s) > 0.

Lemma 9.3.3 (1) If m ≤ 1, we have f(s) > s for 0 ≤ s < 1. (2) If m > 1, there exists a unique s such that 0 ≤ s < 1 and f(s) = s.

Lemma 9.3.4 f_1(0) ≤ f_2(0) ≤ ....

Theorem 9.3.5 The extinction probability q of the Galton-Watson branching process as above coincides with the smallest s such that s = f(s), 0 ≤ s ≤ 1. Moreover, if m ≤ 1 we have q = 1, and if m > 1 we have q < 1.

The Galton-Watson branching process is called subcritical, critical and supercritical if m < 1, m = 1 and m > 1, respectively. The survival is determined only by the mean value m of the offspring distribution. The situation changes dramatically at m = 1 and, following the terminology of statistical physics, we call this a phase transition.

Problem 17 Let b, p be constant numbers such that b > 0, 0 < p < 1 and b + p < 1. Suppose that the offspring distribution is given by

  p_k = b p^{k−1},  k = 1, 2, ...,  p_0 = 1 − Σ_{k=1}^∞ p_k.

(1) Find the generating function f(s) of the offspring distribution.
(2) Set m = 1 and find f_n(s).

Problem 18 Show your own model based on the Galton-Watson branching process with m = 0.72 (this is motivated by the total fertility rate of Japan in 2016, that is, 1.44). Then, by computer simulation or by numerical computation, estimate the extinction probability of your model.
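Lemma 9.3.1 and Theorem 9.3.5 suggest a simple numerical scheme for the extinction probability: iterate q ← f(q) starting from q = 0. A minimal sketch in Python, first for the toy distribution used earlier and then for one possible model for Problem 18 (Poisson offspring with mean m = 0.72, whose generating function is e^{m(s−1)}; this is only one choice among many):

```python
from math import exp

# Extinction probability as the limit of f_n(0) (Lemma 9.3.1).
def extinction(f, iters=500):
    q = 0.0
    for _ in range(iters):
        q = f(q)
    return q

# Supercritical example: p0, p1, p2 = 0.2, 0.5, 0.3, so m = 1.1 > 1.
f_toy = lambda s: 0.2 + 0.5 * s + 0.3 * s ** 2
print(extinction(f_toy))       # 2/3, the smallest root of s = f(s)

# A possible model for Problem 18: Poisson offspring with m = 0.72 <= 1;
# the generating function is exp(m (s - 1)), and Theorem 9.3.5 gives q = 1.
f_poisson = lambda s: exp(0.72 * (s - 1.0))
print(extinction(f_poisson))   # tends to 1
```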
