1 Random Variables and Probability Distributions


1 Random Variables and Probability Distributions

1.1 Random Variables

1.1.1 Discrete random variables

A random variable $X$ is called discrete if the number of values that $X$ takes is finite or countably infinite. To be more precise, for a discrete random variable $X$ there exist a (finite or infinite) sequence of real numbers $a_1, a_2, \dots$ and corresponding nonnegative numbers $p_1, p_2, \dots$ such that

$$P(X = a_i) = p_i, \qquad p_i \ge 0, \qquad \sum_i p_i = 1.$$

In this case

$$\mu_X(dx) = \sum_i p_i\,\delta_{a_i}(dx) = \sum_i p_i\,\delta(x - a_i)\,dx$$

is called the (probability) distribution of $X$. Obviously,

$$P(a \le X \le b) = \sum_{i:\,a \le a_i \le b} p_i.$$

Example 1.1.1 (coin toss) We set

$$X = \begin{cases} 1, & \text{heads}, \\ 0, & \text{tails}. \end{cases}$$

Then $P(X = 1) = p$ and $P(X = 0) = q = 1 - p$. For a fair coin we set $p = 1/2$.

Example 1.1.2 (waiting time) Flip a coin with success probability $p$ repeatedly until we get the heads. Let $T$ be the number of coin tosses to get the first heads. (If the heads occurs at the first trial, we have $T = 1$; if the tails occurs at the first trial and the heads at the second trial, we have $T = 2$, and so on.) Then

$$P(T = k) = (1 - p)^{k-1} p, \qquad k = 1, 2, \dots.$$

1.1.2 Continuous random variables

A random variable $X$ is called continuous if $P(X = a) = 0$ for all $a \in \mathbb{R}$. We understand intuitively that $X$ varies continuously. If there exists a function $f(x)$ such that

$$P(a \le X \le b) = \int_a^b f(x)\,dx, \qquad a < b,$$

we say that $X$ admits a probability density function. Note that

$$\int_{-\infty}^{+\infty} f(x)\,dx = 1, \qquad f(x) \ge 0.$$
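Example 1.1.2 lends itself to a quick numerical check. The following short simulation (our own sketch, not part of the original notes; the parameter values are arbitrary) estimates $P(T = k)$ empirically and compares it with $(1-p)^{k-1}p$.

    import random

    def waiting_time(p):
        """Number of tosses until the first heads (heads with probability p)."""
        t = 1
        while random.random() >= p:
            t += 1
        return t

    p, trials = 0.5, 100_000
    counts = {}
    for _ in range(trials):
        t = waiting_time(p)
        counts[t] = counts.get(t, 0) + 1

    for k in range(1, 6):
        empirical = counts.get(k, 0) / trials
        exact = (1 - p) ** (k - 1) * p
        print(f"P(T={k}): empirical {empirical:.4f}, exact {exact:.4f}")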

In this case,

$$\mu_X(dx) = f(x)\,dx$$

is called the (probability) distribution of $X$. It is useful to consider the distribution function:

$$F_X(x) = P(X \le x) = \int_{-\infty}^x f_X(t)\,dt, \qquad x \in \mathbb{R}.$$

Then we have

$$f_X(x) = \frac{d}{dx} F_X(x).$$

Remark. (1) A continuous random variable does not necessarily admit a probability density function. But many continuous random variables in practical applications admit probability density functions. (2) There is a random variable which is neither discrete nor continuous. But most random variables in practical applications are either discrete or continuous.

Example (random cut) Divide the interval $[0, L]$ ($L > 0$) into two segments.
(1) Let $X$ be the coordinate of the cutting point (the length of the segment containing 0). Then

$$F_X(x) = \begin{cases} 0, & x < 0; \\ x/L, & 0 \le x \le L; \\ 1, & x > L. \end{cases}$$

(2) Let $M$ be the length of the longer segment. Then

$$F_M(x) = \begin{cases} 0, & x < L/2; \\ (2x - L)/L, & L/2 \le x \le L; \\ 1, & x > L. \end{cases}$$

Example. Let $A$ be a randomly chosen point from the disc with radius $R > 0$. Let $X$ be the distance between the center $O$ and $A$. We have

$$P(a \le X \le b) = \frac{\pi(b^2 - a^2)}{\pi R^2} = \frac{1}{R^2} \int_a^b 2x\,dx, \qquad 0 < a < b < R,$$

so the probability density function is given by

$$f(x) = \begin{cases} 0, & x \le 0, \\ 2x/R^2, & 0 \le x \le R, \\ 0, & x > R. \end{cases}$$

[Figure 1.1: Random choice of a point from the disc]

1.1.3 Mean and variance

Definition. The mean or expectation value of a random variable $X$ is defined by

$$m = E[X] = \int_{-\infty}^{+\infty} x\,\mu_X(dx).$$

If $X$ is discrete, we have

$$E[X] = \sum_i a_i p_i.$$

If $X$ admits a probability density function $f(x)$, we have

$$E[X] = \int_{-\infty}^{+\infty} x f(x)\,dx.$$

Remark. For a function $\varphi(x)$ we have

$$E[\varphi(X)] = \int_{-\infty}^{+\infty} \varphi(x)\,\mu(dx).$$

For example,

$$E[X^m] = \int_{-\infty}^{+\infty} x^m\,\mu(dx) \quad (m\text{th moment}), \qquad E[e^{itX}] = \int_{-\infty}^{+\infty} e^{itx}\,\mu(dx) \quad (\text{characteristic function}).$$

Definition. The variance of a random variable $X$ is defined by

$$\sigma^2 = V[X] = E[(X - E[X])^2] = E[X^2] - E[X]^2,$$

or equivalently,

$$\sigma^2 = V[X] = \int_{-\infty}^{+\infty} (x - E[X])^2\,\mu(dx) = \int_{-\infty}^{+\infty} x^2\,\mu(dx) - \left( \int_{-\infty}^{+\infty} x\,\mu(dx) \right)^2.$$

Exercise (see Example 1.1.2) Calculate the mean and variance of the waiting time $T$.

Exercise. Let $S$ be the length of the shorter segment obtained by randomly cutting the interval $[0, L]$. Calculate the mean and variance of $S$.
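For a discrete distribution, both definitions above can be checked by truncating the defining series. A minimal sketch (our own; the truncation point $N$ is arbitrary, chosen so the tail is negligible) for the waiting time $T$ of Example 1.1.2, whose well-known answers are $m = 1/p$ and $\sigma^2 = (1-p)/p^2$:

    # Approximate E[T] and V[T] by truncating E[T] = sum k * P(T = k).
    p, N = 0.3, 10_000
    pmf = [(1 - p) ** (k - 1) * p for k in range(1, N + 1)]
    mean = sum(k * w for k, w in enumerate(pmf, start=1))
    second = sum(k * k * w for k, w in enumerate(pmf, start=1))
    var = second - mean ** 2
    print(mean, 1 / p)            # both close to 1/p
    print(var, (1 - p) / p ** 2)  # both close to (1-p)/p^2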

1.2 Discrete Distributions

1.2.1 Bernoulli distribution

For $0 \le p \le 1$ the distribution

$$(1 - p)\delta_0 + p\delta_1$$

is called the Bernoulli distribution with success probability $p$. This is the distribution of a coin toss. The mean value and variance are given by

$$m = p, \qquad \sigma^2 = p(1 - p).$$

Exercise. Let $a, b$ be distinct real numbers. A general two-point distribution is defined by $p\delta_a + q\delta_b$, where $0 \le p \le 1$ and $p + q = 1$. Determine the two-point distribution having mean 0 and variance 1.

1.2.2 Binomial distribution

For $0 \le p \le 1$ and $n \ge 1$ the distribution

$$\sum_{k=0}^n \binom{n}{k} p^k (1 - p)^{n-k}\,\delta_k$$

is called the binomial distribution $B(n, p)$. The quantity $\binom{n}{k} p^k (1 - p)^{n-k}$ is the probability that $n$ coin tosses, each with probability $p$ for heads and $q = 1 - p$ for tails, result in $k$ heads and $n - k$ tails.

[Figure: histogram of $B(100, 0.4)$]

Exercise. Verify that $m = np$ and $\sigma^2 = np(1 - p)$ for $B(n, p)$.

1.2.3 Geometric distribution

For $0 \le p \le 1$ the distribution

$$\sum_{k=1}^\infty p(1 - p)^{k-1}\,\delta_k$$

is called the geometric distribution with success probability $p$. This is the distribution of the waiting time for the first heads (Example 1.1.2).

Exercise. Verify that $m = \dfrac{1}{p}$ and $\sigma^2 = \dfrac{1 - p}{p^2}$.
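The binomial exercise can also be verified by direct summation over the pmf. A sketch (our own; $n$ and $p$ are arbitrary):

    from math import comb

    n, p = 100, 0.4
    pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
    m = sum(k * w for k, w in enumerate(pmf))
    var = sum(k * k * w for k, w in enumerate(pmf)) - m**2
    print(m, n * p)               # 40.0  40.0
    print(var, n * p * (1 - p))   # 24.0  24.0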

[Figure 1.2: Geometric distribution with parameter $p = 0.4$]

Remark. In some literature, the geometric distribution with parameter $p$ is defined by

$$\sum_{k=0}^\infty p(1 - p)^k\,\delta_k.$$

1.2.4 Poisson distribution

For $\lambda > 0$ the distribution

$$\sum_{k=0}^\infty e^{-\lambda} \frac{\lambda^k}{k!}\,\delta_k$$

is called the Poisson distribution with parameter $\lambda$. The mean and variance are given by

$$m = \lambda, \qquad \sigma^2 = \lambda.$$

[Figure 1.3: Poisson distribution with $\lambda = 1/2, 1, 3$]

Problem 1 The probability generating function of the Poisson distribution is defined by

$$G(z) = \sum_{k=0}^\infty p_k z^k, \qquad p_k = e^{-\lambda} \frac{\lambda^k}{k!}.$$
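The claim $m = \lambda$, $\sigma^2 = \lambda$ is easy to confirm numerically before attacking Problem 1 analytically. A sketch (our own; the truncation point is arbitrary and kept below the range where float conversion of $k!$ would overflow):

    from math import exp, factorial

    lam, N = 3.0, 100
    pmf = [exp(-lam) * lam**k / factorial(k) for k in range(N)]
    m = sum(k * w for k, w in enumerate(pmf))
    var = sum(k * k * w for k, w in enumerate(pmf)) - m**2
    print(m, var)   # both close to lam = 3.0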

(1) Find a concise expression of $G(z)$.
(2) By using $G'(1)$ and $G''(1)$, find the mean value and the variance of the Poisson distribution with parameter $\lambda$.
(3) Show that

$$\sum_{k\,\text{odd}} p_k < \sum_{k\,\text{even}} p_k.$$

In other words, the probability of taking even values is greater than that of odd values.
(4) Discuss relevant topics.

1.3 Continuous Distributions (Density Functions)

1.3.1 Uniform distribution

For a finite interval $[a, b]$,

$$f(x) = \begin{cases} \dfrac{1}{b - a}, & a \le x \le b, \\ 0, & \text{otherwise} \end{cases}$$

becomes a density function, which determines the uniform distribution on $[a, b]$. The mean value and the variance are given by

$$m = \int_a^b \frac{x\,dx}{b - a} = \frac{a + b}{2}, \qquad \sigma^2 = \int_a^b \frac{x^2\,dx}{b - a} - m^2 = \frac{(b - a)^2}{12}.$$

1.3.2 Exponential distribution

The exponential distribution with parameter $\lambda > 0$ is defined by the density function

$$f(x) = \begin{cases} \lambda e^{-\lambda x}, & x \ge 0, \\ 0, & \text{otherwise}. \end{cases}$$

This is a model for waiting time (in continuous time).

Exercise. Verify that $m = \dfrac{1}{\lambda}$ and $\sigma^2 = \dfrac{1}{\lambda^2}$.
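The exponential distribution can be sampled by the standard inverse-transform construction $F^{-1}(u) = -\log(1-u)/\lambda$ (a textbook device, not stated in the notes above), which gives a quick empirical check of the exercise. A sketch with arbitrary parameters:

    import random
    from math import log

    lam, trials = 2.0, 200_000
    xs = [-log(1.0 - random.random()) / lam for _ in range(trials)]
    mean = sum(xs) / trials
    var = sum(x * x for x in xs) / trials - mean**2
    print(mean, 1 / lam)      # ~0.5
    print(var, 1 / lam**2)    # ~0.25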

1.3.3 Normal distribution

For $m \in \mathbb{R}$ and $\sigma > 0$ one may check that

$$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left\{ -\frac{(x - m)^2}{2\sigma^2} \right\}$$

becomes a density function. The distribution defined by the above density function is called the normal distribution or Gaussian distribution and denoted by $N(m, \sigma^2)$. In particular, $N(0, 1)$ is called the standard normal distribution or the standard Gaussian distribution.

Exercise. Differentiating both sides of the known formula

$$\int_0^{+\infty} e^{-tx^2}\,dx = \frac{1}{2}\sqrt{\frac{\pi}{t}}, \qquad t > 0,$$

find the values

$$\int_{-\infty}^{+\infty} x^{2n} e^{-x^2}\,dx, \qquad n = 0, 1, 2, \dots.$$

Exercise. Prove that the above $f(x)$ is a probability density function. Then prove by integration that the mean is $m$ and the variance is $\sigma^2$:

$$m = \frac{1}{\sqrt{2\pi\sigma^2}} \int_{-\infty}^{+\infty} x \exp\left\{ -\frac{(x - m)^2}{2\sigma^2} \right\} dx,$$

$$\sigma^2 = \frac{1}{\sqrt{2\pi\sigma^2}} \int_{-\infty}^{+\infty} (x - m)^2 \exp\left\{ -\frac{(x - m)^2}{2\sigma^2} \right\} dx.$$

Problem 2 Choose randomly a point $A$ from the disc with radius one and let $X$ be the radius of the inscribed circle with center $A$.
(1) For $x \ge 0$ find the probability $P(X \le x)$.
(2) Find the probability density function $f_X(x)$ of $X$. (Note that $x$ varies over all real numbers.)
(3) Calculate the mean and variance of $X$.
(4) Calculate the mean and variance of the area of the inscribed circle $S = \pi X^2$.
(5) Discuss similar questions for a ball.
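A crude midpoint-rule quadrature (our own sketch; the grid and the $\pm 12\sigma$ cutoff are arbitrary choices) confirms numerically that the Gaussian density integrates to 1 and has mean $m$ and variance $\sigma^2$:

    from math import exp, pi, sqrt

    m, sigma = 1.0, 2.0
    f = lambda x: exp(-(x - m)**2 / (2 * sigma**2)) / sqrt(2 * pi * sigma**2)

    a, b, n = m - 12 * sigma, m + 12 * sigma, 20_000
    h = (b - a) / n
    xs = [a + (i + 0.5) * h for i in range(n)]
    total = sum(f(x) for x in xs) * h
    mean  = sum(x * f(x) for x in xs) * h
    var   = sum((x - m)**2 * f(x) for x in xs) * h
    print(total, mean, var)   # ~1.0, ~1.0, ~4.0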

2 Independence and Dependence

2.1 Independent Events

Definition (Pairwise independence) A (finite or infinite) sequence of events $A_1, A_2, \dots$ is called pairwise independent if any pair of events $A_{i_1}, A_{i_2}$ ($i_1 \ne i_2$) verifies

$$P(A_{i_1} \cap A_{i_2}) = P(A_{i_1}) P(A_{i_2}).$$

Definition (Independence) A (finite or infinite) sequence of events $A_1, A_2, \dots$ is called independent if any choice of finitely many events $A_{i_1}, \dots, A_{i_n}$ ($i_1 < i_2 < \cdots < i_n$) satisfies

$$P(A_{i_1} \cap A_{i_2} \cap \cdots \cap A_{i_n}) = P(A_{i_1}) P(A_{i_2}) \cdots P(A_{i_n}).$$

Example. Consider the trial of randomly drawing a card from a deck of 52 cards. Let $A$ be the event that the result is an ace and $B$ the event that the result is a spade. Then $A, B$ are independent.

Problem 3 An urn contains four balls with numbers 112, 121, 211, 222. We draw a ball at random and let $X_1$ be the first digit, $X_2$ the second digit, and $X_3$ the last digit. For $i = 1, 2, 3$ we define an event $A_i$ by $A_i = \{X_i = 1\}$. Show that $\{A_1, A_2, A_3\}$ is pairwise independent but is not independent.

Remark. It is allowed to consider whether the sequence of events $\{A, A\}$ is independent or not. If they are independent, by definition we have $P(A \cap A) = P(A)P(A)$. Then $P(A) = 0$ or $P(A) = 1$. Notice that $P(A) = 0$ does not imply $A = \emptyset$ (empty event). Similarly, $P(A) = 1$ does not imply $A = \Omega$ (whole event).

Exercise. For an event $A$ we write $A^\#$ for either $A$ itself or its complementary event $A^c$. Prove the following assertions.
(1) If $A$ and $B$ are independent, so are $A^\#$ and $B^\#$.
(2) If $A_1, A_2, \dots$ are independent, so are $A_1^\#, A_2^\#, \dots$.

Definition (Conditional probability) For two events $A, B$ the conditional probability of $A$ relative to $B$ (or on the hypothesis $B$, or for given $B$) is defined by

$$P(A \mid B) = \frac{P(A \cap B)}{P(B)}$$

whenever $P(B) > 0$.

Theorem. Let $A, B$ be events with $P(A) > 0$ and $P(B) > 0$. Then the following assertions are equivalent:
(i) $A, B$ are independent;
(ii) $P(A \mid B) = P(A)$;
(iii) $P(B \mid A) = P(B)$.
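The card example above can be verified by exhaustive enumeration with exact rational arithmetic. A sketch (our own encoding of ranks and suits):

    from fractions import Fraction

    ranks, suits = range(13), range(4)       # say 0 = ace, 0 = spades
    deck = [(r, s) for r in ranks for s in suits]

    def prob(event):
        return Fraction(sum(1 for c in deck if event(c)), len(deck))

    A = lambda c: c[0] == 0                  # ace
    B = lambda c: c[1] == 0                  # spade
    AB = lambda c: A(c) and B(c)
    print(prob(AB) == prob(A) * prob(B))     # True: 1/52 == (1/13) * (1/4)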

2.2 Independent Random Variables

Definition. A (finite or infinite) sequence of random variables $X_1, X_2, \dots$ is independent (resp. pairwise independent) if so is the sequence of events $\{X_1 \le a_1\}, \{X_2 \le a_2\}, \dots$ for any $a_1, a_2, \dots \in \mathbb{R}$. In other words, a (finite or infinite) sequence of random variables $X_1, X_2, \dots$ is independent if for any finitely many $X_{i_1}, \dots, X_{i_n}$ ($i_1 < i_2 < \cdots < i_n$) and constant numbers $a_1, \dots, a_n$,

$$P(X_{i_1} \le a_1, X_{i_2} \le a_2, \dots, X_{i_n} \le a_n) = P(X_{i_1} \le a_1) P(X_{i_2} \le a_2) \cdots P(X_{i_n} \le a_n) \qquad (2.1)$$

holds. A similar assertion holds for pairwise independence. If the random variables $X_1, X_2, \dots$ are discrete, (2.1) may be replaced with

$$P(X_{i_1} = a_1, X_{i_2} = a_2, \dots, X_{i_n} = a_n) = P(X_{i_1} = a_1) P(X_{i_2} = a_2) \cdots P(X_{i_n} = a_n).$$

Example. Choose at random a point from the rectangle $\Omega = \{(x, y) \,;\, a \le x \le b,\ c \le y \le d\}$. Let $X$ denote the $x$-coordinate of the chosen point and $Y$ the $y$-coordinate. Then $X, Y$ are independent.

Example (Bernoulli trials) This is a model of coin toss and is the most fundamental stochastic process. A sequence of random variables (or a discrete-time stochastic process) $\{X_1, X_2, \dots, X_n, \dots\}$ is called the Bernoulli trials with success probability $p$ ($0 \le p \le 1$) if they are independent and have the same distribution:

$$P(X_n = 1) = p, \qquad P(X_n = 0) = q = 1 - p.$$

By definition we have

$$P(X_1 = \xi_1, X_2 = \xi_2, \dots, X_n = \xi_n) = \prod_{k=1}^n P(X_k = \xi_k) \quad \text{for all } \xi_1, \xi_2, \dots, \xi_n \in \{0, 1\}.$$

In general, the statistical quantity in the left-hand side is called the finite dimensional distribution of the stochastic process $\{X_n\}$. The totality of finite dimensional distributions characterizes a stochastic process.

2.3 Covariance and Correlation Coefficient

Recall that the mean of a random variable $X$ is defined by

$$m_X = E(X) = \int_{-\infty}^{+\infty} x\,\mu_X(dx).$$

Theorem (Linearity) For two random variables $X, Y$ and two constant numbers $a, b$ it holds that

$$E(aX + bY) = aE(X) + bE(Y).$$

Theorem (Multiplicativity) If random variables $X_1, X_2, \dots, X_n$ are independent, we have

$$E[X_1 X_2 \cdots X_n] = E[X_1] \cdots E[X_n]. \qquad (2.2)$$

Proof. We first prove the assertion for $X_k = 1_{A_k}$ (indicator random variables). By definition $X_1, \dots, X_n$ are independent if and only if so are $A_1, \dots, A_n$. Therefore,

$$E[X_1 \cdots X_n] = E[1_{A_1 \cap \cdots \cap A_n}] = P(A_1 \cap \cdots \cap A_n) = P(A_1) \cdots P(A_n) = E[X_1] \cdots E[X_n].$$

Thus (2.2) is verified. Then, by linearity the assertion is valid for $X_k$ taking finitely many values (finite linear combinations of indicator random variables). Finally, for general $X_k$, coming back to the definition of the Lebesgue integral, we can prove the assertion by an approximation argument.

Remark. $E[XY] = E[X]E[Y]$ is not a sufficient condition for the random variables $X$ and $Y$ to be independent. It is merely a necessary condition!

The variance of $X$ is defined by

$$\sigma_X^2 = V(X) = E[(X - m_X)^2] = E[X^2] - E[X]^2.$$

By means of the distribution $\mu(dx)$ of $X$ we may write

$$V(X) = \int_{-\infty}^{+\infty} (x - m_X)^2\,\mu(dx) = \int_{-\infty}^{+\infty} x^2\,\mu(dx) - \left( \int_{-\infty}^{+\infty} x\,\mu(dx) \right)^2.$$

Definition. The covariance of two random variables $X, Y$ is defined by

$$\mathrm{Cov}(X, Y) = \sigma_{XY} = E[(X - E(X))(Y - E(Y))] = E[XY] - E[X]E[Y].$$

In particular, $\sigma_{XX} = \sigma_X^2$ becomes the variance of $X$. The correlation coefficient of two random variables $X, Y$ is defined by

$$\rho_{XY} = \frac{\sigma_{XY}}{\sigma_X \sigma_Y},$$

whenever $\sigma_X > 0$ and $\sigma_Y > 0$.

Definition. $X, Y$ are called uncorrelated if $\sigma_{XY} = 0$. They are called positively (resp. negatively) correlated if $\sigma_{XY} > 0$ (resp. $\sigma_{XY} < 0$).

Theorem. If two random variables $X, Y$ are independent, they are uncorrelated.

Remark. The converse of the above theorem is not true in general. Let $X$ be a random variable satisfying

$$P(X = 1) = P(X = -1) = \frac{1}{4}, \qquad P(X = 0) = \frac{1}{2},$$

and set $Y = X^2$. Then $X, Y$ are not independent, but $\sigma_{XY} = 0$. On the other hand, for random variables $X, Y$ each taking only two values, the converse is valid (see Problem 5).

Theorem (Additivity of variance) Let $X_1, X_2, \dots, X_n$ be random variables, any pair of which is uncorrelated. Then

$$V\left[ \sum_{k=1}^n X_k \right] = \sum_{k=1}^n V[X_k].$$

Theorem. $|\rho_{XY}| \le 1$ for two random variables $X, Y$ with $\sigma_X > 0$, $\sigma_Y > 0$.

Proof. Note that $E[\{t(X - m_X) + (Y - m_Y)\}^2] \ge 0$ for all $t \in \mathbb{R}$.

Problem 4 Throw two dice and let $L$ be the larger spot and $S$ the smaller. (If doubles are thrown, set $L = S$.)
(1) Show the joint probability of $(L, S)$ in a table.
(2) Calculate the correlation coefficient $\rho_{LS}$ and explain the meaning of the sign of $\rho_{LS}$.

Problem 5 Let $X$ and $Y$ be random variables such that

$$P(X = a) = p_1, \quad P(X = b) = q_1 = 1 - p_1, \qquad P(Y = c) = p_2, \quad P(Y = d) = q_2 = 1 - p_2,$$

where $a, b, c, d$ are constant numbers and $0 < p_1 < 1$, $0 < p_2 < 1$. Show that $X, Y$ are independent if $\sigma_{XY} = 0$. [Notice: In general, uncorrelated random variables are not necessarily independent. Hence the situation in this problem is a very particular one.]
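The counterexample in the remark can be checked exactly with rational arithmetic. A sketch (our own):

    from fractions import Fraction

    # X with P(X=1) = P(X=-1) = 1/4, P(X=0) = 1/2, and Y = X^2.
    dist = {-1: Fraction(1, 4), 0: Fraction(1, 2), 1: Fraction(1, 4)}
    E = lambda g: sum(pr * g(x) for x, pr in dist.items())

    EX, EY = E(lambda x: x), E(lambda x: x**2)   # E[X], E[Y]
    EXY = E(lambda x: x * x**2)                  # E[XY] = E[X^3]
    print(EXY - EX * EY)                         # 0 -> sigma_XY = 0
    # Yet X and Y are dependent:
    # P(X=0, Y=0) = 1/2 while P(X=0) * P(Y=0) = 1/2 * 1/2 = 1/4.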

3 Markov Chains

3.1 Conditional Probability

For two events $A, B$ we define

$$P(A \mid B) = \frac{P(A \cap B)}{P(B)} \qquad (3.1)$$

whenever $P(B) > 0$. We call $P(A \mid B)$ the conditional probability of $A$ relative to $B$. It is interpreted as the probability of the event $A$ assuming the event $B$ occurs; see Section 2.1. Formula (3.1) is often used in the following form:

$$P(A \cap B) = P(B) P(A \mid B). \qquad (3.2)$$

This is the so-called theorem on compound probabilities, giving a ground to the usage of tree diagrams in the computation of probability. For two events $A, B$ see Fig. 3.1, whose branches carry $P(A)$, $P(A^c)$ at the first level and the conditional probabilities $P(B \mid A)$, $P(B^c \mid A)$, $P(B \mid A^c)$, $P(B^c \mid A^c)$ at the second.

[Figure 3.1: Tree diagram for two events $A$, $B$]

Theorem (Compound probabilities) For events $A_1, A_2, \dots, A_n$ we have

$$P(A_1 \cap A_2 \cap \cdots \cap A_n) = P(A_1) P(A_2 \mid A_1) P(A_3 \mid A_1 \cap A_2) \cdots P(A_n \mid A_1 \cap A_2 \cap \cdots \cap A_{n-1}). \qquad (3.3)$$

Proof. Straightforward by induction on $n$.

3.2 Markov Chains

Let $S$ be a finite or countable set. Consider a discrete-time stochastic process $\{X_n \,;\, n = 0, 1, 2, \dots\}$ taking values in $S$. This $S$ is called a state space and is not necessarily a subset of $\mathbb{R}$ in general. In the following we often meet the cases of $S = \{0, 1\}$, $S = \{1, 2, \dots, N\}$ and $S = \{0, 1, 2, \dots\}$.

Definition. Let $\{X_n \,;\, n = 0, 1, 2, \dots\}$ be a discrete-time stochastic process over $S$. It is called a Markov chain over $S$ if

$$P(X_n = b \mid X_{i_1} = a_1, X_{i_2} = a_2, \dots, X_{i_k} = a_k, X_i = a) = P(X_n = b \mid X_i = a)$$

holds for any $0 \le i_1 < i_2 < \cdots < i_k < i < n$ and $a_1, a_2, \dots, a_k, a, b \in S$.

If $\{X_1, X_2, \dots\}$ are independent random variables with values in $S$, obviously they form a Markov chain. Hence the Markov property is weaker than independence.

Example. Let $r \ge 1$ and $s \ge 1$ be such that $r + s = N$. There are $r$ black balls and $s$ white balls in a box. We pick up the balls in the box one by one and set $X_n = 1$ if a black ball is picked at the $n$th trial and $X_n = 0$ if a white ball is picked at the $n$th trial. Then $\{X_1, X_2, \dots, X_N\}$ is a stochastic process. We note that

$$P(X_n = 1 \mid X_1 = a_1, X_2 = a_2, \dots, X_{n-1} = a_{n-1}) = \frac{r - \sum_{k=1}^{n-1} a_k}{N - (n - 1)} \qquad (3.4)$$

and

$$P(X_n = 1 \mid X_{n-1} = a_{n-1}) = \frac{r - a_{n-1}}{N - 1} \qquad (3.5)$$

for $a_1, \dots, a_{n-1} \in \{0, 1\}$. Hence $\{X_n\}$ is not a Markov chain.

Problem 6 We keep the notations and assumptions of the preceding example.
(1) Prove (3.4) and (3.5).
(2) Let $Y_n$ be the number of black balls picked up during the first $n$ trials, i.e.,

$$Y_n = \sum_{k=1}^n X_k.$$

Show that $\{Y_n\}$ is a Markov chain.

Definition. For a Markov chain $\{X_n\}$ over $S$,

$$P(X_{n+1} = j \mid X_n = i)$$

is called the transition probability at time $n$ from a state $i$ to $j$. If this is independent of $n$, the Markov chain is called time homogeneous. In this case we write

$$p_{ij} = p(i, j) = P(X_{n+1} = j \mid X_n = i)$$

and simply call it the transition probability. Moreover, the matrix

$$P = [p_{ij}]$$

is called the transition matrix.

Obviously, we have for each $i \in S$,

$$\sum_{j \in S} p(i, j) = \sum_{j \in S} P(X_{n+1} = j \mid X_n = i) = 1.$$

Taking this into account, we give the following definition.

Definition. A matrix $P = [p_{ij}]$ with index set $S$ is called a stochastic matrix if

$$p_{ij} \ge 0 \quad \text{and} \quad \sum_{j \in S} p_{ij} = 1.$$

Theorem. The transition matrix of a Markov chain is a stochastic matrix. Conversely, given a stochastic matrix we can construct a Markov chain whose transition matrix coincides with the given stochastic matrix.

It is convenient to use the transition diagram to illustrate a Markov chain. With each state we associate a point, and we draw an arrow from $i$ to $j$ when $p(i, j) > 0$.

Example (2-state Markov chain) A Markov chain over the state space $\{0, 1\}$ is determined by the transition probabilities:

$$p(0, 1) = p, \quad p(0, 0) = 1 - p, \quad p(1, 0) = q, \quad p(1, 1) = 1 - q.$$

The transition matrix is

$$P = \begin{bmatrix} 1 - p & p \\ q & 1 - q \end{bmatrix}.$$

In the transition diagram, the arrow from 0 to 1 carries $p$, the arrow from 1 to 0 carries $q$, and the loops at 0 and 1 carry $1 - p$ and $1 - q$, respectively.

Example 3.2.7 (3-state Markov chain) An animal is healthy, sick or dead, and changes its state every day. Consider a Markov chain on $\{H, S, D\}$ whose transition diagram has a loop $a$ at $H$, an arrow $b$ from $H$ to $S$, an arrow $p$ from $S$ to $H$, a loop $q$ at $S$, and an arrow $r$ from $S$ to the absorbing state $D$. The transition matrix (rows and columns indexed by $H, S, D$ in this order) is

$$P = \begin{bmatrix} a & b & 0 \\ p & q & r \\ 0 & 0 & 1 \end{bmatrix}, \qquad a + b = 1, \quad p + q + r = 1.$$

Example (Random walk on $\mathbb{Z}^1$) The random walk on $\mathbb{Z}^1$ moves one step to the right with probability $p$ and one step to the left with probability $q$. The transition probabilities are given by

$$p(i, j) = \begin{cases} p, & \text{if } j = i + 1, \\ q = 1 - p, & \text{if } j = i - 1, \\ 0, & \text{otherwise}. \end{cases}$$

The transition matrix is the two-sided infinite matrix with $p$ on the superdiagonal, $q$ on the subdiagonal, and 0 elsewhere.
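These small chains are easy to experiment with. A minimal simulation sketch for the chain of Example 3.2.7 (the concrete values of $a, b, p, q, r$ are our own choice, with each row summing to 1):

    import random

    STATES = ["H", "S", "D"]
    P = {"H": [0.7, 0.3, 0.0],    # a, b, 0
         "S": [0.3, 0.5, 0.2],    # p, q, r
         "D": [0.0, 0.0, 1.0]}

    # Sanity check: each row of a stochastic matrix sums to 1.
    assert all(abs(sum(row) - 1.0) < 1e-12 for row in P.values())

    state, path = "H", ["H"]
    for _ in range(15):
        state = random.choices(STATES, weights=P[state])[0]
        path.append(state)
    print("".join(path))   # e.g. HHSHSSDDDD... ; once D is reached it persists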

Example (Random walk with absorbing barriers) Let $A > 0$ and $B > 0$. The state space of the random walk with absorbing barriers at $-A$ and $B$ is $S = \{-A, -A + 1, \dots, B - 1, B\}$. The transition probabilities are given as follows. For $-A < i < B$,

$$p(i, j) = \begin{cases} p, & \text{if } j = i + 1, \\ q = 1 - p, & \text{if } j = i - 1, \\ 0, & \text{otherwise}. \end{cases}$$

For $i = -A$ or $i = B$,

$$p(-A, j) = \begin{cases} 1, & \text{if } j = -A, \\ 0, & \text{otherwise}, \end{cases} \qquad p(B, j) = \begin{cases} 1, & \text{if } j = B, \\ 0, & \text{otherwise}. \end{cases}$$

In matrix form we have

$$P = \begin{bmatrix} 1 & & & & \\ q & 0 & p & & \\ & \ddots & \ddots & \ddots & \\ & & q & 0 & p \\ & & & & 1 \end{bmatrix}.$$

Example (Random walk with reflecting barriers) Let $A > 0$ and $B > 0$. The state space of the random walk with reflecting barriers at $-A$ and $B$ is $S = \{-A, -A + 1, \dots, B - 1, B\}$. The transition probabilities are given as follows. For $-A < i < B$,

$$p(i, j) = \begin{cases} p, & \text{if } j = i + 1, \\ q = 1 - p, & \text{if } j = i - 1, \\ 0, & \text{otherwise}. \end{cases}$$

For $i = -A$ or $i = B$,

$$p(-A, j) = \begin{cases} 1, & \text{if } j = -A + 1, \\ 0, & \text{otherwise}, \end{cases} \qquad p(B, j) = \begin{cases} 1, & \text{if } j = B - 1, \\ 0, & \text{otherwise}. \end{cases}$$

In matrix form we have

$$P = \begin{bmatrix} 0 & 1 & & & \\ q & 0 & p & & \\ & \ddots & \ddots & \ddots & \\ & & q & 0 & p \\ & & & 1 & 0 \end{bmatrix}.$$

3.3 Distribution of a Markov Chain

Let $S$ be a state space as before. In general, a row vector $\pi = [\pi_i]$ indexed by $S$ is called a distribution on $S$ if

$$\pi_i \ge 0 \quad \text{and} \quad \sum_{i \in S} \pi_i = 1. \qquad (3.6)$$

For a Markov chain $\{X_n\}$ on $S$ we set

$$\pi(n) = [\pi_i(n)], \qquad \pi_i(n) = P(X_n = i),$$

which becomes a distribution on $S$. We call $\pi(n)$ the distribution of $X_n$. In particular $\pi(0)$, the distribution of $X_0$, is called the initial distribution. We often take $\pi(0) = [0, \dots, 0, 1, 0, \dots]$, where the 1 occurs at the $i$th position. In this case the Markov chain $\{X_n\}$ starts from the state $i$.

For a Markov chain $\{X_n\}$ with transition matrix $P = [p_{ij}]$ the $n$-step transition probability is defined by

$$p_n(i, j) = P(X_{m+n} = j \mid X_m = i), \qquad i, j \in S.$$

The right-hand side is independent of $m$ because our Markov chain is assumed to be time homogeneous.

Theorem (Chapman–Kolmogorov equation) For $0 \le r \le n$ we have

$$p_n(i, j) = \sum_{k \in S} p_r(i, k)\,p_{n-r}(k, j). \qquad (3.7)$$

Proof. First we note the obvious identity:

$$p_n(i, j) = P(X_{m+n} = j \mid X_m = i) = \sum_{k \in S} P(X_{m+n} = j, X_{m+r} = k \mid X_m = i).$$

Moreover,

$$P(X_{m+n} = j, X_{m+r} = k \mid X_m = i) = \frac{P(X_{m+n} = j, X_{m+r} = k, X_m = i)}{P(X_{m+r} = k, X_m = i)} \cdot \frac{P(X_{m+r} = k, X_m = i)}{P(X_m = i)} = P(X_{m+n} = j \mid X_{m+r} = k, X_m = i)\,P(X_{m+r} = k \mid X_m = i).$$

Using the Markov property, we have

$$P(X_{m+n} = j \mid X_{m+r} = k, X_m = i) = P(X_{m+n} = j \mid X_{m+r} = k),$$

so that

$$P(X_{m+n} = j, X_{m+r} = k \mid X_m = i) = P(X_{m+n} = j \mid X_{m+r} = k)\,P(X_{m+r} = k \mid X_m = i).$$

Finally, by time homogeneity, we come to

$$P(X_{m+n} = j, X_{m+r} = k \mid X_m = i) = p_{n-r}(k, j)\,p_r(i, k).$$

Thus we have obtained (3.7).

Applying (3.7) repeatedly and noting that $p_1(i, j) = p(i, j)$, we obtain

$$p_n(i, j) = \sum_{k_1, \dots, k_{n-1} \in S} p(i, k_1)\,p(k_1, k_2) \cdots p(k_{n-1}, j). \qquad (3.8)$$

The right-hand side is nothing else but matrix multiplication, i.e., the $n$-step transition probability $p_n(i, j)$ is the $(i, j)$-entry of the $n$th power of the transition matrix $P$. Summing up, we obtain the following important result.
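Both the Chapman–Kolmogorov relation $P^n = P^r P^{n-r}$ and the identity $\pi(n) = \pi(0)P^n$ of the next page can be confirmed numerically on a small chain. A sketch (ours; assumes NumPy is available, and the values of $p$, $q$, $\pi(0)$ are arbitrary):

    import numpy as np

    p, q = 0.3, 0.1
    P = np.array([[1 - p, p],
                  [q, 1 - q]])
    pi0 = np.array([0.9, 0.1])

    P5 = np.linalg.matrix_power(P, 5)
    # Chapman-Kolmogorov: P^5 = P^2 P^3
    print(np.allclose(P5, np.linalg.matrix_power(P, 2)
                          @ np.linalg.matrix_power(P, 3)))   # True
    print(pi0 @ P5)   # the distribution of X_5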

Theorem. For $m, n \ge 0$ and $i, j \in S$ we have

$$P(X_{m+n} = j \mid X_m = i) = p_n(i, j) = (P^n)_{ij}.$$

Proof. Immediate from the preceding theorem.

Remark. As a result, the Chapman–Kolmogorov equation is nothing else but an entrywise expression of the obvious relation for the transition matrix:

$$P^n = P^r P^{n-r}.$$

(As usual, $P^0 = E$, the identity matrix.)

Theorem. We have

$$\pi(n) = \pi(n - 1)P, \qquad n \ge 1,$$

or equivalently,

$$\pi_j(n) = \sum_i \pi_i(n - 1)\,p_{ij}.$$

Therefore,

$$\pi(n) = \pi(0)P^n.$$

Proof. We first note that

$$\pi_j(n) = P(X_n = j) = \sum_{i \in S} P(X_n = j \mid X_{n-1} = i)\,P(X_{n-1} = i) = \sum_{i \in S} p_{ij}\,\pi_i(n - 1),$$

which proves $\pi(n) = \pi(n - 1)P$. By repeated application we have

$$\pi(n) = \pi(n - 1)P = (\pi(n - 2)P)P = \pi(n - 2)P^2 = \cdots = \pi(0)P^n,$$

as desired.

Example (2-state Markov chain) Let $\{X_n\}$ be the 2-state Markov chain introduced above. The eigenvalues of the transition matrix

$$P = \begin{bmatrix} 1 - p & p \\ q & 1 - q \end{bmatrix}$$

are $1$ and $1 - p - q$. These are distinct if $p + q > 0$. Omitting the case of $p + q = 0$, i.e., $p = q = 0$, we assume that $p + q > 0$. By a standard argument we obtain

$$P^n = \frac{1}{p + q} \begin{bmatrix} q + p r^n & p - p r^n \\ q - q r^n & p + q r^n \end{bmatrix}, \qquad r = 1 - p - q.$$

Let $\pi(0) = [\pi_0(0)\ \pi_1(0)]$ be the distribution of $X_0$. Then the distribution of $X_n$ is given by

$$\pi(n) = [P(X_n = 0),\ P(X_n = 1)] = [\pi_0(0)\ \pi_1(0)]\,P^n = \pi(0)P^n.$$

Problem 7 There are two parties, say A and B, and a constant ratio of their supporters switch sides at every election. Suppose that just before an election, 25% of the supporters of A change to support B and 20% of the supporters of B change to support A. At the beginning, 85% of the voters support A and 15% support B.
(1) When will party B command a majority?
(2) Find the final ratio of supporters after many elections if the same situation continues.
(3) Discuss relevant topics.

Problem 8 Let $\{X_n\}$ be a Markov chain on $\{0, 1\}$ given by the transition matrix

$$P = \begin{bmatrix} 1 - p & p \\ q & 1 - q \end{bmatrix}$$

with the initial distribution

$$\pi(0) = \left[ \frac{q}{p + q},\ \frac{p}{p + q} \right].$$

Calculate the following statistical quantities:

$$E[X_n], \quad V[X_n], \quad \mathrm{Cov}(X_{m+n}, X_n) = E[X_{m+n} X_n] - E[X_{m+n}]E[X_n], \quad \rho(X_{m+n}, X_n) = \frac{\mathrm{Cov}(X_{m+n}, X_n)}{\sqrt{V[X_{m+n}]\,V[X_n]}}.$$

4 Stationary Distributions

4.1 Definition and Examples

Definition. Let $\{X_n\}$ be a Markov chain on $S$ with transition probability matrix $P$. A distribution $\pi$ on $S$ is called stationary (or invariant) if

$$\pi = \pi P, \qquad (4.1)$$

or equivalently,

$$\pi_j = \sum_{i \in S} \pi_i p_{ij}, \qquad j \in S. \qquad (4.2)$$

Thus, to find a stationary distribution we need to solve (4.1) (or equivalently (4.2)) together with (3.6). If $S$ is a finite set, finding stationary distributions reduces to a simple linear system.

Example (2-state Markov chain) Consider the transition matrix

$$P = \begin{bmatrix} 1 - p & p \\ q & 1 - q \end{bmatrix}.$$

Let $\pi = [\pi_0\ \pi_1]$ and suppose $\pi P = \pi$. Then we have

$$[\pi_0\ \pi_1] \begin{bmatrix} 1 - p & p \\ q & 1 - q \end{bmatrix} = [(1 - p)\pi_0 + q\pi_1,\ p\pi_0 + (1 - q)\pi_1] = [\pi_0\ \pi_1],$$

which is equivalent to

$$p\pi_0 - q\pi_1 = 0.$$

Together with $\pi_0 + \pi_1 = 1$, we obtain

$$\pi_0 = \frac{q}{p + q}, \qquad \pi_1 = \frac{p}{p + q},$$

whenever $p + q > 0$. Indeed, $\pi_0 \ge 0$ and $\pi_1 \ge 0$, so this is a stationary distribution. The following properties are noteworthy:
(i) If $p + q > 0$, the stationary distribution is unique.
(ii) If $p = q = 0$, the stationary distribution is not uniquely determined. In fact, any distribution $\pi = [\pi_0, \pi_1]$ is stationary.
Moreover, we see from the computation of $P^n$ in the previous chapter that if $0 < p + q < 2$, or equivalently if $|r| < 1$ with $r = 1 - p - q$, we have

$$\lim_{n \to \infty} P^n = \frac{1}{p + q} \begin{bmatrix} q & p \\ q & p \end{bmatrix}.$$

Then

$$\lim_{n \to \infty} \pi(n) = \lim_{n \to \infty} \pi(0) P^n = [\pi_0(0)\ \pi_1(0)]\,\frac{1}{p + q} \begin{bmatrix} q & p \\ q & p \end{bmatrix} = \left[ \frac{q}{p + q}\ \ \frac{p}{p + q} \right].$$

Thus we get the stationary distribution as a limit distribution.

Example (3-state Markov chain) We discuss the Markov chain $\{X_n\}$ introduced in Example 3.2.7. If $p > 0$ and $b > 0$, the stationary distribution is unique and given by $\pi = [0\ 0\ 1]$.

Example (One-dimensional random walk) Consider the one-dimensional random walk with right-move probability $p > 0$ and left-move probability $q = 1 - p > 0$. Let $[\pi(k)]$ be a distribution on $\mathbb{Z}$. If it is stationary, we have

$$\pi(k) = p\,\pi(k - 1) + q\,\pi(k + 1), \qquad k \in \mathbb{Z}. \qquad (4.3)$$
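On a finite state space, (4.1)–(4.2) together with (3.6) form a linear system, and solving it numerically is routine. A sketch (ours; assumes NumPy, solved here in the least-squares sense by stacking the normalization constraint):

    import numpy as np

    p, q = 0.3, 0.1
    P = np.array([[1 - p, p],
                  [q, 1 - q]])
    n = P.shape[0]
    # Solve pi (P - I) = 0 together with sum(pi) = 1.
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.concatenate([np.zeros(n), [1.0]])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(pi)                        # [q/(p+q), p/(p+q)] = [0.25, 0.75]
    print(np.allclose(pi @ P, pi))   # True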

The characteristic equation of this difference equation is

$$0 = q\lambda^2 - \lambda + p = q(\lambda - 1)\left(\lambda - \frac{p}{q}\right),$$

so that the roots are $1$ and $p/q$.

(Case 1: $p \ne q$) A general solution to (4.3) is given by

$$\pi(k) = C_1 \cdot 1^k + C_2 \left( \frac{p}{q} \right)^k = C_1 + C_2 \left( \frac{p}{q} \right)^k, \qquad k \in \mathbb{Z}.$$

This never becomes a probability distribution for any choice of $C_1$ and $C_2$. Namely, there is no stationary distribution.

(Case 2: $p = q$) In this case a general solution to (4.3) is given by

$$\pi(k) = (C_1 + C_2 k) \cdot 1^k = C_1 + C_2 k, \qquad k \in \mathbb{Z}.$$

This never becomes a probability distribution for any choice of $C_1$ and $C_2$. Namely, there is no stationary distribution.

Example (One-dimensional random walk with reflecting barrier) Consider the random walk on $\{0, 1, 2, \dots\}$ reflecting at 0, i.e., $p(0, 1) = 1$. There is a unique stationary distribution when $p < q$. In fact,

$$\pi(0) = Cp, \qquad \pi(k) = C \left( \frac{p}{q} \right)^k, \quad k \ge 1,$$

where $C$ is determined in such a way that $\sum_{k=0}^\infty \pi(k) = 1$; namely,

$$C = \frac{q - p}{2pq}.$$

If $p \ge q$, then there is no stationary distribution.

On stationary distributions of a Markov chain we ask:
(1) Is there a stationary distribution?
(2) If yes, is it unique? If not, how do we classify the stationary distributions?
(3) Do the distributions of a Markov chain converge to a stationary distribution?

4.2 Existence

Theorem. A Markov chain over a finite state space $S$ has a stationary distribution.

A simple proof is based on Brouwer's fixed-point theorem, which says that every continuous map from a convex compact subset of a Euclidean space to itself has a fixed point. In fact, the set of distributions on $S$ is a convex compact subset of a Euclidean space and the map $\pi \mapsto \pi P$ is continuous. Note that the stationary distribution mentioned in the above theorem is not necessarily unique.

Definition. We say that a state $j$ can be reached from a state $i$ if there exists some $n \ge 0$ such that $p_n(i, j) > 0$. By definition every state $i$ can be reached from itself. We say that two states $i$ and $j$ intercommunicate if $i$ can be reached from $j$ and $j$ can be reached from $i$, i.e., there exist $m \ge 0$ and $n \ge 0$ such that $p_n(i, j) > 0$ and $p_m(j, i) > 0$.

For $i, j \in S$ we introduce a binary relation $i \sim j$ when they intercommunicate. Then $\sim$ becomes an equivalence relation on $S$: (i) $i \sim i$; (ii) $i \sim j \Rightarrow j \sim i$; (iii) $i \sim j,\ j \sim k \Rightarrow i \sim k$. In fact, (i) and (ii) are obvious by definition, and (iii) is verified by the Chapman–Kolmogorov equation. Thereby the state space $S$ is classified into a disjoint union of equivalence classes. In each equivalence class any two states intercommunicate with each other.

Definition. A state $i$ is called absorbing if $p_{ii} = 1$. In particular, an absorbing state is a state which constitutes an equivalence class by itself.

Definition. A Markov chain is called irreducible if every state can be reached from every other state, i.e., if there is only one equivalence class of intercommunicating states.

Theorem 4.2.5 An irreducible Markov chain on a finite state space $S$ admits a unique stationary distribution $\pi = [\pi_i]$. Moreover, $\pi_i > 0$ for all $i \in S$.

In fact, the proof rests on the following two facts:
(1) For an irreducible Markov chain the following assertions are equivalent: (i) it admits a stationary distribution; (ii) every state is positive recurrent. In this case the stationary distribution $\pi$ is unique and given by

$$\pi_i = \frac{1}{E(T_i \mid X_0 = i)}, \qquad i \in S.$$

(2) Every state of an irreducible Markov chain on a finite state space is positive recurrent (Theorem 5.1.8).

4.3 Convergence

Example (2-state Markov chain) We recall the 2-state examples above. If $p + q > 0$, the distribution of the above Markov chain converges to the unique stationary distribution. Consider the case of $p = q = 1$, i.e., the transition matrix becomes

$$P = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}.$$

The stationary distribution is unique. But for a given initial distribution $\pi(0)$ it is not necessarily true that $\pi(n)$ converges to the stationary distribution. Roughly speaking, we need to avoid periodic transitions in order to have convergence to a stationary distribution.

Definition. For a state $i \in S$,

$$\gcd\{n \ge 1 \,;\, P(X_n = i \mid X_0 = i) > 0\}$$

is called the period of $i$. (When the set in the right-hand side is empty, the period is not defined.) A state $i \in S$ is called aperiodic if its period is one.

Theorem. For an irreducible Markov chain, every state has a common period.

Theorem. Let $\pi$ be the stationary distribution of an irreducible Markov chain on a finite state space (it is unique; see Theorem 4.2.5). If $\{X_n\}$ is aperiodic, then for any $j \in S$ we have

$$\lim_{n \to \infty} P(X_n = j) = \pi_j.$$

Example (page rank) The hyperlinks among $N$ websites give rise to a digraph (directed graph) $G$ on $N$ vertices. It is natural to consider a Markov chain on $G$, defined by the transition matrix $P = [p_{ij}]$, where

$$p_{ij} = \begin{cases} \dfrac{1}{\deg i}, & \text{if } i \to j, \\ 0, & \text{if } i \not\to j \text{ and } i \ne j, \\ 1, & \text{if } \deg i = 0 \text{ and } j = i, \end{cases}$$

where $\deg i = |\{ j \,;\, i \to j \}|$ is the out-degree of $i$.

[Transition diagram of a small link graph]

There exists a stationary distribution, but it is not necessarily unique. Taking $0 \le d \le 1$ we modify the transition matrix:

$$Q = [q_{ij}], \qquad q_{ij} = d\,p_{ij} + \epsilon, \qquad \epsilon = \frac{1 - d}{N}.$$

If $0 \le d < 1$, the Markov chain determined by $Q$ necessarily has a unique stationary distribution. Choosing a suitable $d < 1$, we may understand the stationary distribution $\pi = [\pi(i)]$ as the page rank among the websites.

Problem 9 Consider the page rank introduced in the example above.
(1) Let $\pi(i)$ be the page rank of a site $i$. Show that $\pi(i)$ satisfies the relation

$$\pi(i) = \frac{1 - d}{N} + d \sum_{j:\, j \to i} \frac{\pi(j)}{\deg j}$$

and explain its meaning.
(2) Show more examples of the page rank and discuss the role of sites which have no hyperlinks, that is, $\deg i = 0$ (in terms of $P = [p_{ij}]$ such sites correspond to absorbing states).

Problem 10 Find all stationary distributions of the Markov chain determined by the transition diagram below. Then discuss convergence of distributions.

[Transition diagram]

Problem 11 Let $\{X_n\}$ be the Markov chain introduced in Example 3.2.7 (healthy/sick/dead, with the transition diagram given there). For $n = 1, 2, \dots$ let $H_n$ denote the probability of starting from $H$ and terminating at $D$ at the $n$th step. Similarly, for $n = 1, 2, \dots$ let $S_n$ denote the probability of starting from $S$ and terminating at $D$ at the $n$th step.
(1) Show that $\{H_n\}$ and $\{S_n\}$ satisfy the following linear system:

$$H_n = aH_{n-1} + bS_{n-1}, \qquad S_n = pH_{n-1} + qS_{n-1}, \qquad n \ge 2; \qquad H_1 = 0, \quad S_1 = r.$$

(2) Let $H$ and $S$ denote the life times starting from the states $H$ and $S$, respectively. Solving the linear system in (1), prove the following identities for the mean life times:

$$E[H] = \sum_{n=1}^\infty n H_n = \frac{b + p + r}{br}, \qquad E[S] = \sum_{n=1}^\infty n S_n = \frac{b + p}{br}.$$
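The damped chain $Q$ can be computed by straightforward power iteration $\pi \leftarrow \pi Q$. A toy sketch (the link graph, $d = 0.85$, and the iteration count are our own choices; assumes NumPy):

    import numpy as np

    links = {0: [1, 2], 1: [2], 2: [0], 3: []}   # site 3 has no out-links
    N, d = 4, 0.85
    P = np.zeros((N, N))
    for i, outs in links.items():
        if outs:
            for j in outs:
                P[i, j] = 1.0 / len(outs)        # 1/deg(i)
        else:
            P[i, i] = 1.0                        # deg i = 0: stay put, as defined above
    Q = d * P + (1 - d) / N                      # q_ij = d p_ij + (1-d)/N
    pi = np.full(N, 1.0 / N)
    for _ in range(200):
        pi = pi @ Q
    print(pi, pi.sum())                          # the page ranks; they sum to 1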

5 Topics in Markov Chains

5.1 Recurrence

Definition. Let $i \in S$ be a state. Define the first hitting time or first passage time to $i$ by

$$T_i = \inf\{n \ge 1 \,;\, X_n = i\}.$$

If there exists no $n \ge 1$ such that $X_n = i$, we define $T_i = \infty$. A state $i$ is called recurrent if $P(T_i < \infty \mid X_0 = i) = 1$. It is called transient if $P(T_i = \infty \mid X_0 = i) > 0$.

Theorem. A state $i \in S$ is recurrent if and only if

$$\sum_{n=0}^\infty p_n(i, i) = \infty.$$

If a state $i$ is transient, we have

$$\sum_{n=0}^\infty p_n(i, i) < \infty \quad \text{and} \quad \sum_{n=0}^\infty p_n(i, i) = \frac{1}{1 - P(T_i < \infty \mid X_0 = i)}.$$

Proof. We first put

$$p_n(i, j) = P(X_n = j \mid X_0 = i), \qquad n = 0, 1, 2, \dots,$$

$$f_n(i, j) = P(T_j = n \mid X_0 = i) = P(X_1 \ne j, \dots, X_{n-1} \ne j, X_n = j \mid X_0 = i), \qquad n = 1, 2, \dots.$$

Here $p_n(i, j)$ is nothing else but the $n$-step transition probability, while $f_n(i, j)$ is the probability that the Markov chain starts from $i$ and reaches $j$ for the first time after $n$ steps. Dividing the set of sample paths from $i$ to $j$ in $n$ steps according to the number of steps after which the path reaches $j$ for the first time, we obtain

$$p_n(i, j) = \sum_{r=1}^n f_r(i, j)\,p_{n-r}(j, j), \qquad i, j \in S, \quad n = 1, 2, \dots. \qquad (5.1)$$

We next introduce the generating functions:

$$G_{ij}(z) = \sum_{n=0}^\infty p_n(i, j) z^n, \qquad F_{ij}(z) = \sum_{n=1}^\infty f_n(i, j) z^n.$$

In view of (5.1) we see easily that

$$G_{ij}(z) = p_0(i, j) + F_{ij}(z) G_{jj}(z). \qquad (5.2)$$

Setting $i = j$ in (5.2), we obtain

$$G_{ii}(z) = 1 + F_{ii}(z) G_{ii}(z), \quad \text{i.e.,} \quad G_{ii}(z) = \frac{1}{1 - F_{ii}(z)}.$$

On the other hand, since

$$G_{ii}(1) = \sum_{n=0}^\infty p_n(i, i), \qquad F_{ii}(1) = \sum_{n=1}^\infty f_n(i, i) = P(T_i < \infty \mid X_0 = i),$$

we see that the two conditions $F_{ii}(1) = 1$ and $G_{ii}(1) = \infty$ are equivalent. The second statement is readily clear.

Example (random walk on $\mathbb{Z}$) Since the random walk starting from the origin 0 returns to it only after an even number of steps, for recurrence we only need to compute the sum of $p_{2n}(0, 0)$. We start with the obvious formula:

$$p_{2n}(0, 0) = \frac{(2n)!}{n!\,n!}\,p^n q^n, \qquad p + q = 1.$$

Then, using the Stirling formula

$$n! \sim \left( \frac{n}{e} \right)^n \sqrt{2\pi n}, \qquad (5.3)$$

we obtain

$$p_{2n}(0, 0) \sim \frac{1}{\sqrt{\pi n}}\,(4pq)^n.$$

Hence,

$$\sum_{n=0}^\infty p_{2n}(0, 0) \begin{cases} < \infty, & p \ne q, \\ = \infty, & p = q = 1/2. \end{cases}$$

Consequently, the one-dimensional random walk is transient if $p \ne q$, and it is recurrent if $p = q = \frac{1}{2}$.

Remark. Let $\{a_n\}$ and $\{b_n\}$ be sequences of positive numbers. We write $a_n \sim b_n$ if

$$\lim_{n \to \infty} \frac{a_n}{b_n} = 1.$$

In this case, there exist two constants $c_1 > 0$ and $c_2 > 0$ such that $c_1 a_n \le b_n \le c_2 a_n$. Hence $\sum_{n=1}^\infty a_n$ and $\sum_{n=1}^\infty b_n$ converge or diverge at the same time.

Example (random walk on $\mathbb{Z}^2$) Obviously, the random walk starting from the origin 0 returns to it only after an even number of steps. Therefore, for recurrence we only need to compute the sum of $p_{2n}(0, 0)$. For the two-dimensional isotropic random walk (each of the four moves has probability $1/4$) we need to consider two directions, along the $x$-axis and the $y$-axis. We see easily that

$$p_{2n}(0, 0) = \sum_{i+j=n} \frac{(2n)!}{i!\,i!\,j!\,j!} \left( \frac{1}{4} \right)^{2n} = \frac{(2n)!}{n!\,n!} \left( \frac{1}{4} \right)^{2n} \sum_{i+j=n} \frac{n!\,n!}{i!\,i!\,j!\,j!} = \binom{2n}{n} \left( \frac{1}{4} \right)^{2n} \sum_{i=0}^n \binom{n}{i}^2.$$

Employing the formula for binomial coefficients

$$\sum_{i=0}^n \binom{n}{i}^2 = \binom{2n}{n}, \qquad (5.4)$$

which is a good exercise for the reader, we obtain

$$p_{2n}(0, 0) = \binom{2n}{n}^2 \left( \frac{1}{4} \right)^{2n}.$$

Then, by using the Stirling formula, we see that

$$p_{2n}(0, 0) \sim \frac{1}{\pi n},$$

so that

$$\sum_{n=1}^\infty p_{2n}(0, 0) = \infty.$$

Consequently, the two-dimensional isotropic random walk is recurrent.
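The asymptotics $p_{2n}(0,0) \sim (4pq)^n/\sqrt{\pi n}$ is easy to verify numerically; the only care needed is to avoid overflowing floats with the huge binomial coefficient, which the sketch below (ours) does by working with log-gamma:

    from math import lgamma, log, exp, pi, sqrt

    def p2n(n, p):
        """p_{2n}(0,0) = C(2n,n) p^n q^n, computed via log-gamma."""
        q = 1 - p
        return exp(lgamma(2 * n + 1) - 2 * lgamma(n + 1)
                   + n * (log(p) + log(q)))

    n = 1000
    for p in (0.5, 0.6):
        asym = (4 * p * (1 - p)) ** n / sqrt(pi * n)
        print(p, p2n(n, p) / asym)   # ratio close to 1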

Example (random walk on $\mathbb{Z}^3$) Let us consider the isotropic random walk in three dimensions. As there are three directions, say the $x$-, $y$-, $z$-axes, we have

$$p_{2n}(0, 0) = \sum_{i+j+k=n} \frac{(2n)!}{i!\,i!\,j!\,j!\,k!\,k!} \left( \frac{1}{6} \right)^{2n} = \frac{(2n)!}{n!\,n!} \left( \frac{1}{6} \right)^{2n} \sum_{i+j+k=n} \frac{n!\,n!}{i!\,i!\,j!\,j!\,k!\,k!} = \binom{2n}{n} \left( \frac{1}{6} \right)^{2n} \sum_{i+j+k=n} \left( \frac{n!}{i!\,j!\,k!} \right)^2.$$

We note the following two facts. First,

$$\sum_{i+j+k=n} \frac{n!}{i!\,j!\,k!} = 3^n. \qquad (5.5)$$

Second, the maximum value

$$M_n = \max_{i+j+k=n} \frac{n!}{i!\,j!\,k!}$$

is attained when $i, j, k$ are all close to $n/3$, so

$$M_n \sim 3^n\,\frac{3\sqrt{3}}{2\pi n}$$

by the Stirling formula. Then we have

$$p_{2n}(0, 0) \le \binom{2n}{n} \left( \frac{1}{6} \right)^{2n} 3^n M_n \sim \frac{3\sqrt{3}}{2\pi\sqrt{\pi}}\,n^{-3/2}.$$

Therefore,

$$\sum_{n=1}^\infty p_{2n}(0, 0) < \infty,$$

which implies that the random walk is not recurrent (i.e., transient).

If a state $i$ is recurrent, i.e., $P(T_i < \infty \mid X_0 = i) = 1$, the mean recurrence time is defined:

$$E(T_i \mid X_0 = i) = \sum_{n=1}^\infty n P(T_i = n \mid X_0 = i).$$

The state $i$ is called positive recurrent if $E(T_i \mid X_0 = i) < \infty$, and null recurrent otherwise.

Theorem. The states in an equivalence class are all positive recurrent, or all null recurrent, or all transient. In particular, for an irreducible Markov chain, the states are all positive recurrent, or all null recurrent, or all transient.

Theorem 5.1.8 For an irreducible Markov chain on a finite state space $S$, every state is positive recurrent.

Example. The mean recurrence time of the one-dimensional isotropic random walk is infinite, i.e., the one-dimensional isotropic random walk is null recurrent. The proof will be given in Section 6.1.

Problem 12 Let $\{X_n\}$ be a Markov chain on $\{0, 1\}$ with transition probabilities $p(0, 1) = p$, $p(0, 0) = 1 - p$, $p(1, 0) = q$, $p(1, 1) = 1 - q$, where $p > 0$ and $q > 0$. For a state $i \in S$ let $T_i$ be the first hitting time to $i$ defined by $T_i = \inf\{n \ge 1 \,;\, X_n = i\}$.
(1) Calculate $P(T_0 = 1 \mid X_0 = 0)$, $P(T_0 = 2 \mid X_0 = 0)$, $P(T_0 = 3 \mid X_0 = 0)$, $P(T_0 = 4 \mid X_0 = 0)$.
(2) Find $P(T_0 = n \mid X_0 = 0)$ and calculate

$$\sum_{n=1}^\infty P(T_0 = n \mid X_0 = 0), \qquad \sum_{n=1}^\infty n P(T_0 = n \mid X_0 = 0).$$

5.2 Absorption

A state $i$ is called absorbing if $p_{ii} = 1$ and $p_{ij} = 0$ for all $j \ne i$. Once a Markov chain hits an absorbing state, it stays there forever.

Let us consider a Markov chain on a finite state space $S$ with some absorbing states. We set $S = S_a \cup S_0$, where $S_a$ denotes the set of absorbing states and $S_0$ the rest. According to this partition, the transition matrix is written as

$$P = \begin{bmatrix} I & 0 \\ S & T \end{bmatrix}.$$

Then

$$P^n = \begin{bmatrix} I & 0 \\ S_n & T^n \end{bmatrix}, \qquad S_1 = S, \quad S_n = S_{n-1} + T^{n-1} S.$$

To avoid inessential tediousness we assume the following condition:

(C1) For any $i \in S_0$ there exist $j \in S_a$ and $n \ge 1$ such that $(P^n)_{ij} > 0$.

In other words, the Markov chain starting from $i \in S_0$ has a positive probability of absorption. Since $S$ is finite by assumption, the $n$ in (C1) can be chosen independently of $i \in S_0$. Hence (C1) is equivalent to the following condition:

(C2) There exists $N \ge 1$ such that for any $i \in S_0$ there exists $j \in S_a$ with $(P^N)_{ij} > 0$.

Lemma. Notations and assumptions being as above, $\lim_{n \to \infty} T^n = 0$.

Proof. We see from the obvious relation

$$1 = \sum_{j \in S} (P^N)_{ij} = \sum_{j \in S_0} (P^N)_{ij} + \sum_{j \in S_a} (P^N)_{ij}$$

and condition (C2) that

$$\sum_{j \in S_0} (P^N)_{ij} < 1, \qquad i \in S_0.$$

Note that for $i, j \in S_0$ we have $(P^N)_{ij} = (T^N)_{ij}$. We choose $\delta < 1$ such that

$$\sum_{j \in S_0} (T^N)_{ij} \le \delta < 1, \qquad i \in S_0.$$

Now let $i \in S_0$ and $n \ge N$. We see that

$$\sum_{j \in S_0} (T^n)_{ij} = \sum_{j, k \in S_0} (T^{n-N})_{ik} (T^N)_{kj} = \sum_{k \in S_0} (T^{n-N})_{ik} \sum_{j \in S_0} (T^N)_{kj} \le \delta \sum_{k \in S_0} (T^{n-N})_{ik}.$$

Repeating this procedure, we have

$$\sum_{j \in S_0} (T^n)_{ij} \le \delta^k \sum_{j \in S_0} (T^{n-kN})_{ij} \le \delta^k \sum_{j \in S} (P^{n-kN})_{ij} = \delta^k,$$

where $0 \le n - kN < N$. Therefore,

$$\lim_{n \to \infty} \sum_{j \in S_0} (T^n)_{ij} = 0,$$

from which we have $\lim_{n \to \infty} (T^n)_{ij} = 0$ for all $i, j \in S_0$.

Remark. It can be shown that every state $i \in S_0$ is transient.

Theorem. Let $\pi(0) = [\alpha\ \beta]$ be the initial distribution (partitioned according to $S = S_a \cup S_0$). Then the limit distribution is given by $[\alpha + \beta\bar{S}\ \ 0]$, where

$$\bar{S} = (I - T)^{-1} S.$$

Proof. The limit distribution is given by

$$\lim_{n \to \infty} \pi(0) P^n = \lim_{n \to \infty} [\alpha\ \beta] \begin{bmatrix} I & 0 \\ S_n & T^n \end{bmatrix} = \lim_{n \to \infty} [\alpha + \beta S_n\ \ \beta T^n].$$

We see from the Lemma that $\lim_{n \to \infty} \beta T^n = 0$. On the other hand, since $S_n = S_{n-1} + T^{n-1} S$ we have

$$S_n = (I + T + T^2 + \cdots + T^{n-1}) S \quad \text{and} \quad (I - T) S_n = (I - T^n) S.$$

Hence

$$\lim_{n \to \infty} S_n = \lim_{n \to \infty} (I - T)^{-1} (I - T^n) S = (I - T)^{-1} S,$$

which shows the result.

Example 5.2.4 Consider the Markov chain on $\{1, 2, 3, 4\}$ given by the following transition diagram: states 1 and 2 are absorbing; from state 3 the chain moves to 1 with probability $q$ and to 4 with probability $p$; from state 4 it moves to 3 with probability $q$ and to 2 with probability $p$. This is a random walk with absorbing barriers. The transition matrix is

$$P = \begin{bmatrix} I & 0 \\ S & T \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ q & 0 & 0 & p \\ 0 & p & q & 0 \end{bmatrix}, \qquad S = \begin{bmatrix} q & 0 \\ 0 & p \end{bmatrix}, \quad T = \begin{bmatrix} 0 & p \\ q & 0 \end{bmatrix}.$$

Then

$$\bar{S} = (I - T)^{-1} S = \frac{1}{1 - pq} \begin{bmatrix} q & p^2 \\ q^2 & p \end{bmatrix}.$$

Suppose that the initial distribution is given by $\pi(0) = [\alpha\ \beta\ \gamma\ \delta]$. Then the limit distribution is

$$\left[ \alpha + \frac{q\gamma + q^2\delta}{1 - pq}\ \ \ \beta + \frac{p^2\gamma + p\delta}{1 - pq}\ \ \ 0\ \ \ 0 \right].$$

In particular, if the Markov chain starts at state 3, setting $\pi(0) = [0\ 0\ 1\ 0]$, we obtain the limit distribution

$$\left[ \frac{q}{1 - pq}\ \ \ \frac{p^2}{1 - pq}\ \ \ 0\ \ \ 0 \right],$$

which means that the Markov chain is absorbed in the states 1 or 2 in the ratio $q : p^2$.
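The matrix computation of Example 5.2.4 can be reproduced in a few lines. A sketch (ours; assumes NumPy, and the value of $p$ is arbitrary):

    import numpy as np

    p = 0.6; q = 1 - p
    S = np.array([[q, 0.0],
                  [0.0, p]])
    T = np.array([[0.0, p],
                  [q, 0.0]])
    S_bar = np.linalg.inv(np.eye(2) - T) @ S
    print(S_bar)
    # Starting at state 3 (first row of S_bar): absorption probabilities
    print(S_bar[0], (q / (1 - p * q), p**2 / (1 - p * q)))   # they match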

Problem 13 Following Example 5.2.4, study the Markov chain given by the analogous transition diagram with more interior states, where $p + q = 1$.

5.3 Gambler's Ruin

We consider a random walk with absorbing barriers at $-A$ and $B$, where $A > 0$ and $B > 0$. This is a Markov chain on the state space $S = \{-A, -A + 1, \dots, B - 1, B\}$, where $-A$ and $B$ are absorbing and each interior state moves one step to the right with probability $p$ and one step to the left with probability $q$. We are interested in the absorption probabilities, i.e.,

$$R = P(X_n = -A \text{ for some } n = 1, 2, \dots) = P\left( \bigcup_{n=1}^\infty \{X_n = -A\} \right),$$

$$S = P(X_n = B \text{ for some } n = 1, 2, \dots) = P\left( \bigcup_{n=1}^\infty \{X_n = B\} \right).$$

Note that the events in the right-hand sides are not unions of disjoint events.

[Figure: a sample path of the random walk between the barriers $-A$ and $B$]

A key idea is to introduce a similar random walk starting at $k$, $-A \le k \le B$, which is denoted by $X_n^{(k)}$. Then the original one is $X_n = X_n^{(0)}$. Let $R_k$ and $S_k$ be the probabilities that the random walk $X_n^{(k)}$ is absorbed at $-A$ and $B$, respectively. We wish to find $R = R_0$ and $S = S_0$.

Lemma. $\{R_k \,;\, -A \le k \le B\}$ fulfills the following difference equation:

$$R_k = p R_{k+1} + q R_{k-1}, \qquad R_{-A} = 1, \quad R_B = 0. \qquad (5.6)$$

Similarly, $\{S_k \,;\, -A \le k \le B\}$ fulfills the following difference equation:

$$S_k = p S_{k+1} + q S_{k-1}, \qquad S_{-A} = 0, \quad S_B = 1. \qquad (5.7)$$
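Before solving (5.6)–(5.7) analytically, one can estimate $R_0$ and $S_0$ by direct simulation. A sketch (ours; the values of $A$, $B$, $p$ and the trial count are arbitrary), whose output agrees with the closed formulas of the theorem on the next page:

    import random

    def absorbed_at(k, A, B, p):
        """Run the walk from k until it hits -A or B; return the barrier hit."""
        x = k
        while -A < x < B:
            x += 1 if random.random() < p else -1
        return x

    A, B, p, trials = 3, 4, 0.5, 100_000
    hits = [absorbed_at(0, A, B, p) for _ in range(trials)]
    R0 = sum(1 for h in hits if h == -A) / trials
    S0 = sum(1 for h in hits if h == B) / trials
    print(R0, S0)   # for p = 1/2 these approach B/(A+B) = 4/7 and A/(A+B) = 3/7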

Theorem. Let $A \ge 1$ and $B \ge 1$. Let $\{X_n\}$ be the random walk with absorbing barriers at $-A$ and $B$, with right-move probability $p$ and left-move probability $q$ ($p + q = 1$). Then the probabilities that $\{X_n\}$ is absorbed at the barriers are given by

$$P(X_n = -A \text{ for some } n) = \begin{cases} \dfrac{(q/p)^A - (q/p)^{A+B}}{1 - (q/p)^{A+B}}, & p \ne q, \\[2mm] \dfrac{B}{A + B}, & p = q = \dfrac{1}{2}, \end{cases}$$

$$P(X_n = B \text{ for some } n) = \begin{cases} \dfrac{1 - (q/p)^A}{1 - (q/p)^{A+B}}, & p \ne q, \\[2mm] \dfrac{A}{A + B}, & p = q = \dfrac{1}{2}. \end{cases}$$

In particular, the random walk is absorbed at the barriers with probability 1.

An interpretation of this theorem gives the solution to the gambler's ruin problem. Two players A and B toss a fair coin by turns. Let $A$ and $B$ be their allotted points when the game starts. They exchange 1 point after each trial. The game is over when one of the players loses all the allotted points and the other gets $A + B$ points. We are interested in the probability of each player's win. For each $n \ge 0$ define $X_n$ in such a way that the allotted point of player A at time $n$ is given by $A + X_n$. Then $\{X_n\}$ becomes a random walk with absorbing barriers at $-A$ and $B$. It then follows from the theorem above that the winning probabilities of A and B are given by

$$P(\text{A}) = \frac{A}{A + B}, \qquad P(\text{B}) = \frac{B}{A + B}, \qquad (5.8)$$

respectively. As a result, they are proportional to the initial allotted points. For example, if $A = 1$ and $B = 100$, we have $P(\text{A}) = 1/101$ and $P(\text{B}) = 100/101$, so A has almost no chance of winning.

In a fair bet, recurrence is guaranteed by the recurrence of the isotropic random walk (Section 5.1): even if one has many more losses than wins, by continuing the game one will come back to zero balance. However, in reality there is the barrier of limited money; (5.8) tells us the effect of this barrier.

It is also interesting to know the expectation of the number of coin tosses until the game is over.

Theorem. Let $\{X_n\}$ be the same as in the previous theorem. The expected life time of this random walk until absorption is given by

$$\begin{cases} \dfrac{A}{q - p} - \dfrac{A + B}{q - p} \cdot \dfrac{1 - (q/p)^A}{1 - (q/p)^{A+B}}, & p \ne q, \\[2mm] AB, & p = q = \dfrac{1}{2}. \end{cases}$$

Proof. Let $Y_k$ be the life time of the random walk starting from position $k$ ($-A \le k \le B$) at time $n = 0$ until absorption. In other words,

$$Y_k = \min\{ j \ge 0 \,;\, X_j^{(k)} = -A \text{ or } X_j^{(k)} = B \}.$$

We wish to compute $E(Y_0)$. We see by definition that

$$E(Y_{-A}) = E(Y_B) = 0. \qquad (5.9)$$

For $-A < k < B$ we have

$$E(Y_k) = \sum_{j=1}^\infty j\,P(Y_k = j). \qquad (5.10)$$

In a similar manner as in the proof of the preceding lemma we note that

$$P(Y_k = j) = p\,P(Y_{k+1} = j - 1) + q\,P(Y_{k-1} = j - 1). \qquad (5.11)$$

Inserting (5.11) into (5.10), we obtain

$$E(Y_k) = p \sum_{j=1}^\infty j\,P(Y_{k+1} = j - 1) + q \sum_{j=1}^\infty j\,P(Y_{k-1} = j - 1) = p\,E(Y_{k+1}) + q\,E(Y_{k-1}) + 1. \qquad (5.12)$$

Thus, $E(Y_k)$ is the solution of the difference equation (5.12) with boundary condition (5.9). This difference equation is solved in a standard manner and we find

$$E(Y_k) = \begin{cases} \dfrac{A + k}{q - p} - \dfrac{A + B}{q - p} \cdot \dfrac{1 - (q/p)^{A+k}}{1 - (q/p)^{A+B}}, & p \ne q, \\[2mm] (A + k)(B - k), & p = q = \dfrac{1}{2}. \end{cases}$$

Setting $k = 0$, we obtain the result.

If $p = q = 1/2$ and $A = 1$, $B = 100$, the expected life time is $AB = 100$. Gambler A is much inferior to B in the amount of funds (as we have seen already, the probability of A's win is just $1/101$), yet the expected life time until the game is over is 100, which sounds longer than one expects intuitively. Perhaps this is because the gambler cannot quit gambling.

Problem 14 (A bold gambler) In each game a gambler wins the dollars he bets with probability $p$ and loses them with probability $q = 1 - p$. The goal of the gambler is to get 5 dollars. His strategy is to bet the difference between 5 dollars and what he has. Let $X_n$ be the amount he has just after the $n$th bet.
(1) Analyze the Markov chain $\{X_n\}$ with initial condition $X_0 = 1$.
(2) Compare with the steady gambler discussed in this section, who bets just 1 dollar in each game.

6 Topics in Random Walks

6.1 The Catalan Number

The Catalan number is a famous number known in combinatorics (Eugène Charles Catalan, 1814–1894). Richard P. Stanley (MIT) collected many appearances of the Catalan numbers (R. P. Stanley: Catalan Numbers, Cambridge University Press, 2015; http://www-math.mit.edu/~rstan/ec/).

We start with the definition. Let $n \ge 1$ and consider a sequence $(\epsilon_1, \epsilon_2, \dots, \epsilon_n)$ of $\pm 1$, that is, an element of $\{-1, 1\}^n$. This sequence is called a Catalan path if

$$\epsilon_1 \ge 0, \quad \epsilon_1 + \epsilon_2 \ge 0, \quad \dots, \quad \epsilon_1 + \epsilon_2 + \cdots + \epsilon_{n-1} \ge 0, \quad \epsilon_1 + \epsilon_2 + \cdots + \epsilon_{n-1} + \epsilon_n = 0.$$

It is apparent that there is no Catalan path of odd length.

Definition. The $n$th Catalan number is defined to be the number of Catalan paths of length $2n$ and is denoted by $C_n$. For convenience we set $C_0 = 1$.

The first Catalan numbers, for $n = 0, 1, 2, 3, \dots$, are

$$1,\ 1,\ 2,\ 5,\ 14,\ 42,\ 132,\ 429,\ 1430,\ 4862,\ 16796,\ 58786,\ 208012,\ 742900,\ 2674440,\ \dots$$

We will derive a concise expression for the Catalan numbers by using a graphical representation. Consider the $n \times n$ grid with the bottom-left corner given the coordinate $(0, 0)$. With each sequence $(\epsilon_1, \epsilon_2, \dots, \epsilon_{2n})$ consisting of $\pm 1$ we associate vectors

$$\epsilon_k = +1 \;\longleftrightarrow\; u_k = (1, 0), \qquad \epsilon_k = -1 \;\longleftrightarrow\; u_k = (0, 1),$$

and consider the polygonal line connecting

$$(0, 0),\ u_1,\ u_1 + u_2,\ \dots,\ u_1 + \cdots + u_{2n-1},\ u_1 + \cdots + u_{2n}$$

in order. If $\epsilon_1 + \epsilon_2 + \cdots + \epsilon_{2n} = 0$, the final vertex becomes $u_1 + \cdots + u_{2n} = (n, n)$, so the obtained polygonal line is a shortest path connecting $(0, 0)$ and $(n, n)$ in the grid.

Lemma. There is a one-to-one correspondence between the Catalan paths of length $2n$ and the shortest paths connecting $(0, 0)$ and $(n, n)$ which do not pass through the region above the diagonal $y = x$.

Theorem (Catalan number)

$$C_n = \frac{(2n)!}{(n + 1)!\,n!}, \qquad n = 0, 1, 2, \dots.$$

Proof. For $n = 0$ it is apparent by the convention $0! = 1$. Suppose $n \ge 1$. By the reflection argument on the grid we see that

$$C_n = \binom{2n}{n} - \binom{2n}{n + 1} = \frac{(2n)!}{n!\,(n + 1)!},$$

as desired.
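The closed formula can be cross-checked against a brute-force count of Catalan paths for small $n$. A sketch (ours; the brute force is exponential and only sensible for small $n$):

    from math import comb
    from itertools import product

    def catalan_by_paths(n):
        """Count Catalan paths of length 2n directly from the definition."""
        count = 0
        for eps in product((1, -1), repeat=2 * n):
            partial, ok = 0, True
            for e in eps:
                partial += e
                if partial < 0:      # a partial sum went negative
                    ok = False
                    break
            if ok and partial == 0:  # all partial sums >= 0, total = 0
                count += 1
        return count

    for n in range(1, 8):
        print(n, catalan_by_paths(n), comb(2 * n, n) // (n + 1))  # the two agree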

An alternative representation of the Catalan paths: consider in the $xy$-plane the polygonal line connecting the vertices

$$(0, 0),\ (1, \epsilon_1),\ (2, \epsilon_1 + \epsilon_2),\ \dots,\ (2n - 1, \epsilon_1 + \cdots + \epsilon_{2n-1}),\ (2n, \epsilon_1 + \cdots + \epsilon_{2n})$$

in order. Then there is a one-to-one correspondence between the Catalan paths of length $2n$ and the sample paths of a random walk starting at 0 at time 0, returning to 0 at time $2n$, and staying in the half line $[0, +\infty)$ throughout. Therefore:

Lemma. Let $n \ge 1$. The number of sample paths of a random walk starting at 0 at time 0 and returning to 0 at time $2n$ while staying in the half line $[0, +\infty)$ is the Catalan number $C_n$.

Let $\{X_n\}$ be a random walk on $\mathbb{Z}$ with right-move probability $p$ and left-move probability $q = 1 - p$. We assume that the random walk starts at the origin, i.e., $X_0 = 0$. Since the random walker returns to the origin only after an even number of steps, the return probability is given by

$$R = P\left( \bigcup_{n=1}^\infty \{X_{2n} = 0\} \right). \qquad (6.1)$$

It is important to note that $\bigcup_{n=1}^\infty \{X_{2n} = 0\}$ is not a union of disjoint events. Let $p_{2n}$ be the probability that the random walker is found at the origin at time $2n$, that is,

$$p_{2n} = P(X_{2n} = 0) = \binom{2n}{n} p^n q^n = \frac{(2n)!}{n!\,n!}\,p^n q^n, \qquad n = 1, 2, \dots. \qquad (6.2)$$

For convenience set $p_0 = 1$. Note that the right-hand side of (6.1) is not the sum of the $p_{2n}$. Instead, we need to consider the probability that the random walker returns to the origin after $2n$ steps but not before:

$$q_{2n} = P(X_2 \ne 0, X_4 \ne 0, \dots, X_{2n-2} \ne 0, X_{2n} = 0), \qquad n = 1, 2, \dots.$$

Notice the essential difference between $p_{2n}$ and $q_{2n}$.

Definition. We set

$$T = \inf\{n \ge 1 \,;\, X_n = 0\}, \qquad (6.3)$$

where $T = +\infty$ when $\{n \ge 1 \,;\, X_n = 0\} = \emptyset$. We call $T$ the first hitting time to the origin. (Strictly speaking, $T$ is not a random variable according to our definition in Chapter 1. It is, however, commonly accepted that a random variable may take values in $(-\infty, +\infty) \cup \{\pm\infty\}$.)

By definition we have

$$P(T = 2n) = q_{2n}, \qquad (6.4)$$

and therefore the return probability is given by

$$R = P(T < \infty) = \sum_{n=1}^\infty q_{2n}. \qquad (6.5)$$

We will calculate $P(T = 2n)$ and thereby $R$.

Theorem 6.1.6 Let $\{X_n\}$ be the random walk starting from 0 with right-move probability $p$ and left-move probability $q$. Let $T$ be the first hitting time to 0. Then

$$q_{2n} = P(T = 2n) = 2C_{n-1}(pq)^n, \qquad n = 1, 2, \dots.$$

Proof. Obviously, we have

$$q_{2n} = P(X_2 \ne 0, X_4 \ne 0, \dots, X_{2n-2} \ne 0, X_{2n} = 0) = P(X_1 > 0, X_2 > 0, \dots, X_{2n-1} > 0, X_{2n} = 0) + P(X_1 < 0, X_2 < 0, \dots, X_{2n-1} < 0, X_{2n} = 0).$$

Counting the sample paths (Fig. 6.1), we see that

$$P(X_1 > 0, X_2 > 0, \dots, X_{2n-1} > 0, X_{2n} = 0) = C_{n-1}(pq)^n,$$

and similarly for the negative side. Then the result is immediate.

[Figure 6.1: Calculating $P(X_1 > 0, X_2 > 0, \dots, X_{2n-1} > 0, X_{2n} = 0)$: the path stays positive between times 0 and $2n$]

Remark. There are some noticeable relations between $\{p_{2n}\}$ and $\{q_{2n}\}$:

$$q_{2n} = \frac{2pq}{n}\,p_{2n-2}, \qquad n \ge 1; \qquad q_{2n} = 4pq\,p_{2n-2} - p_{2n}, \qquad n \ge 1.$$

Lemma 6.1.8 The generating function of the Catalan numbers $C_n$ is given by

$$f(z) = \sum_{n=0}^\infty C_n z^n = \frac{1 - \sqrt{1 - 4z}}{2z}. \qquad (6.6)$$

Proof. Problem 17.

Theorem. Let $R$ be the probability that a random walker starting from the origin returns to the origin in finite time. Then we have

$$R = 1 - |p - q|.$$

Proof. We know from Theorem 6.1.6 that the return probability $R$ is given by

$$R = \sum_{n=1}^\infty P(T = 2n) = \sum_{n=1}^\infty 2C_{n-1}(pq)^n.$$

Using the generating function of the Catalan numbers in Lemma 6.1.8, we obtain

$$R = 2pq \sum_{n=0}^\infty C_n (pq)^n = 2pq \cdot \frac{1 - \sqrt{1 - 4pq}}{2pq} = 1 - \sqrt{1 - 4pq}.$$

Since $p + q = 1$ we have

$$1 - 4pq = (p + q)^2 - 4pq = (p - q)^2,$$

so $\sqrt{1 - 4pq} = |p - q|$, which completes the proof.

Definition. A random walk is called recurrent if $R = 1$; otherwise it is called transient.

Theorem. The one-dimensional random walk is recurrent if and only if $p = q = 1/2$ (isotropic). It is transient if and only if $p \ne q$.

When a random walk is recurrent, it is meaningful to consider the mean recurrence time defined by

$$E(T) = \sum_{n=1}^\infty 2n\,P(T = 2n) = \sum_{n=1}^\infty 2n\,q_{2n},$$

where $T$ is the first hitting time to the origin.

Theorem (Null recurrence) The mean recurrence time of the isotropic one-dimensional random walk is infinite:

$$E[T] = +\infty.$$

Proof. In view of Theorem 6.1.6, setting $p = q = 1/2$, we obtain

$$E(T) = \sum_{n=1}^\infty 2n \cdot 2C_{n-1} \left( \frac{1}{4} \right)^n = \sum_{n=0}^\infty (n + 1) C_n \left( \frac{1}{4} \right)^n. \qquad (6.7)$$

On the other hand, the generating function of the Catalan numbers is given by

$$f(z) = \sum_{n=0}^\infty C_n z^n = \frac{1 - \sqrt{1 - 4z}}{2z}.$$

Then

$$z f(z) = \sum_{n=0}^\infty C_n z^{n+1} = \frac{1 - \sqrt{1 - 4z}}{2},$$

and differentiating with respect to $z$, we have

$$(z f(z))' = \sum_{n=0}^\infty (n + 1) C_n z^n = \frac{1}{\sqrt{1 - 4z}}.$$

Letting $z \uparrow 1/4$ we have

$$\sum_{n=0}^\infty (n + 1) C_n \left( \frac{1}{4} \right)^n = \lim_{z \uparrow 1/4} \frac{1}{\sqrt{1 - 4z}} = +\infty,$$

and hence $E[T] = +\infty$ as desired; see also the remark below.
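Partial sums of the two series above behave exactly as the theorems predict, and can be computed without huge binomial coefficients by iterating the term ratio $q_{2(n+1)}/q_{2n} = \frac{2(2n-1)}{n+1}\,pq$ (which follows from $C_n/C_{n-1} = \frac{2(2n-1)}{n+1}$). A numerical sketch (ours; all cutoffs are arbitrary):

    def return_prob(p, N):
        """Partial sum of R = sum_{n<=N} 2 C_{n-1} (pq)^n."""
        q = 1 - p
        term, R = 2 * p * q, 0.0          # term = q_2
        for n in range(1, N + 1):
            R += term
            term *= 2 * (2 * n - 1) / (n + 1) * p * q   # -> q_{2(n+1)}
        return R

    print(return_prob(0.6, 5000), 1 - abs(0.6 - 0.4))   # ~0.8 vs 0.8
    print(return_prob(0.5, 5000))                       # creeps up toward 1

    def mean_T_partial(N):
        term, s = 0.5, 0.0                # q_2 = 1/2 at p = q = 1/2
        for n in range(1, N + 1):
            s += 2 * n * term
            term *= 2 * (2 * n - 1) / (n + 1) * 0.25
        return s

    for N in (100, 400, 1600):
        print(N, mean_T_partial(N))   # roughly doubles as N quadruples: ~ c sqrt(N)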

Remark. Let $a_n \ge 0$ for $n = 0, 1, 2, \dots$ and consider the power series $f(x) = \sum_{n=0}^\infty a_n x^n$. If the radius of convergence of this power series is 1, we have

$$\lim_{x \uparrow 1} f(x) = \sum_{n=0}^\infty a_n,$$

including the case where the sum is $\infty$. The verification is by elementary calculus, based on the following two inequalities:

$$\liminf_{x \uparrow 1} f(x) \ge \sum_{n=0}^N a_n, \quad N \ge 1; \qquad f(x) \le \sum_{n=0}^\infty a_n, \quad 0 \le x < 1.$$

Problem 15 Find the Catalan numbers $C_n$ in the following steps.
(1) Prove that $C_n = \sum_{k=1}^n C_{k-1} C_{n-k}$ by using graphical expressions.
(2) Using (1), prove that the generating function $f(z) = \sum_{n=0}^\infty C_n z^n$ of the Catalan numbers satisfies

$$f(z) - 1 = z\{f(z)\}^2.$$

(3) Find $f(z)$.
(4) Using the Taylor expansion of $f(z)$ obtained in (3), find $C_n$.

Problem 16 Let $\{X_n\}$ be a random walk starting from 0 with right-move probability $p$ and left-move probability $q$. Show that

$$P(X_1 \ge 0, X_2 \ge 0, \dots, X_{2n-1} \ge 0) = P(X_1 \ge 0, X_2 \ge 0, \dots, X_{2n} \ge 0) = 1 - q \sum_{k=0}^{n-1} C_k (pq)^k$$

for $n = 1, 2, \dots$, where $C_k$ is the Catalan number. Using this result, show next that

$$P(X_n \ge 0 \text{ for all } n \ge 1) = \begin{cases} 1 - \dfrac{q}{p}, & p > q, \\[1mm] 0, & p \le q. \end{cases}$$

Problem 17 (Lemma 6.1.8)
(1) Using the well-known binomial expansion

$$(1 + x)^\alpha = \sum_{n=0}^\infty \binom{\alpha}{n} x^n, \qquad |x| < 1,$$

prove that

$$\sum_{n=0}^\infty \binom{2n}{n} z^n = \frac{1}{\sqrt{1 - 4z}}, \qquad |z| < \frac{1}{4}.$$

(2) Let $C_n$ be the Catalan number given by

$$C_n = \frac{(2n)!}{n!\,(n + 1)!}, \qquad n = 0, 1, 2, \dots.$$

Prove that

$$\sum_{n=0}^\infty C_n z^n = \frac{1 - \sqrt{1 - 4z}}{2z}, \qquad |z| < \frac{1}{4}.$$


More information

Elementary theory of L p spaces

Elementary theory of L p spaces CHAPTER 3 Elementary theory of L saces 3.1 Convexity. Jensen, Hölder, Minkowski inequality. We begin with two definitions. A set A R d is said to be convex if, for any x 0, x 1 2 A x = x 0 + (x 1 x 0 )

More information

RANDOM WALKS AND PERCOLATION: AN ANALYSIS OF CURRENT RESEARCH ON MODELING NATURAL PROCESSES

RANDOM WALKS AND PERCOLATION: AN ANALYSIS OF CURRENT RESEARCH ON MODELING NATURAL PROCESSES RANDOM WALKS AND PERCOLATION: AN ANALYSIS OF CURRENT RESEARCH ON MODELING NATURAL PROCESSES AARON ZWIEBACH Abstract. In this aer we will analyze research that has been recently done in the field of discrete

More information

p-adic Measures and Bernoulli Numbers

p-adic Measures and Bernoulli Numbers -Adic Measures and Bernoulli Numbers Adam Bowers Introduction The constants B k in the Taylor series exansion t e t = t k B k k! k=0 are known as the Bernoulli numbers. The first few are,, 6, 0, 30, 0,

More information

On Isoperimetric Functions of Probability Measures Having Log-Concave Densities with Respect to the Standard Normal Law

On Isoperimetric Functions of Probability Measures Having Log-Concave Densities with Respect to the Standard Normal Law On Isoerimetric Functions of Probability Measures Having Log-Concave Densities with Resect to the Standard Normal Law Sergey G. Bobkov Abstract Isoerimetric inequalities are discussed for one-dimensional

More information

On split sample and randomized confidence intervals for binomial proportions

On split sample and randomized confidence intervals for binomial proportions On slit samle and randomized confidence intervals for binomial roortions Måns Thulin Deartment of Mathematics, Usala University arxiv:1402.6536v1 [stat.me] 26 Feb 2014 Abstract Slit samle methods have

More information

Solution sheet ξi ξ < ξ i+1 0 otherwise ξ ξ i N i,p 1 (ξ) + where 0 0

Solution sheet ξi ξ < ξ i+1 0 otherwise ξ ξ i N i,p 1 (ξ) + where 0 0 Advanced Finite Elements MA5337 - WS7/8 Solution sheet This exercise sheets deals with B-slines and NURBS, which are the basis of isogeometric analysis as they will later relace the olynomial ansatz-functions

More information

P i [B k ] = lim. n=1 p(n) ii <. n=1. V i :=

P i [B k ] = lim. n=1 p(n) ii <. n=1. V i := 2.7. Recurrence and transience Consider a Markov chain {X n : n N 0 } on state space E with transition matrix P. Definition 2.7.1. A state i E is called recurrent if P i [X n = i for infinitely many n]

More information

Estimation of the large covariance matrix with two-step monotone missing data

Estimation of the large covariance matrix with two-step monotone missing data Estimation of the large covariance matrix with two-ste monotone missing data Masashi Hyodo, Nobumichi Shutoh 2, Takashi Seo, and Tatjana Pavlenko 3 Deartment of Mathematical Information Science, Tokyo

More information

A Note on Guaranteed Sparse Recovery via l 1 -Minimization

A Note on Guaranteed Sparse Recovery via l 1 -Minimization A Note on Guaranteed Sarse Recovery via l -Minimization Simon Foucart, Université Pierre et Marie Curie Abstract It is roved that every s-sarse vector x C N can be recovered from the measurement vector

More information

MATH 56A: STOCHASTIC PROCESSES CHAPTER 2

MATH 56A: STOCHASTIC PROCESSES CHAPTER 2 MATH 56A: STOCHASTIC PROCESSES CHAPTER 2 2. Countable Markov Chains I started Chapter 2 which talks about Markov chains with a countably infinite number of states. I did my favorite example which is on

More information

CHAPTER-II Control Charts for Fraction Nonconforming using m-of-m Runs Rules

CHAPTER-II Control Charts for Fraction Nonconforming using m-of-m Runs Rules CHAPTER-II Control Charts for Fraction Nonconforming using m-of-m Runs Rules. Introduction: The is widely used in industry to monitor the number of fraction nonconforming units. A nonconforming unit is

More information

1 1 c (a) 1 (b) 1 Figure 1: (a) First ath followed by salesman in the stris method. (b) Alternative ath. 4. D = distance travelled closing the loo. Th

1 1 c (a) 1 (b) 1 Figure 1: (a) First ath followed by salesman in the stris method. (b) Alternative ath. 4. D = distance travelled closing the loo. Th 18.415/6.854 Advanced Algorithms ovember 7, 1996 Euclidean TSP (art I) Lecturer: Michel X. Goemans MIT These notes are based on scribe notes by Marios Paaefthymiou and Mike Klugerman. 1 Euclidean TSP Consider

More information

Lectures on Markov Chains

Lectures on Markov Chains Lectures on Markov Chains David M. McClendon Department of Mathematics Ferris State University 2016 edition 1 Contents Contents 2 1 Markov chains 4 1.1 The definition of a Markov chain.....................

More information

Elements of Asymptotic Theory. James L. Powell Department of Economics University of California, Berkeley

Elements of Asymptotic Theory. James L. Powell Department of Economics University of California, Berkeley Elements of Asymtotic Theory James L. Powell Deartment of Economics University of California, Berkeley Objectives of Asymtotic Theory While exact results are available for, say, the distribution of the

More information

MATHEMATICS 154, SPRING 2009 PROBABILITY THEORY Outline #11 (Tail-Sum Theorem, Conditional distribution and expectation)

MATHEMATICS 154, SPRING 2009 PROBABILITY THEORY Outline #11 (Tail-Sum Theorem, Conditional distribution and expectation) MATHEMATICS 154, SPRING 2009 PROBABILITY THEORY Outline #11 (Tail-Sum Theorem, Conditional distribution and expectation) Last modified: March 7, 2009 Reference: PRP, Sections 3.6 and 3.7. 1. Tail-Sum Theorem

More information

CHAPTER 2: SMOOTH MAPS. 1. Introduction In this chapter we introduce smooth maps between manifolds, and some important

CHAPTER 2: SMOOTH MAPS. 1. Introduction In this chapter we introduce smooth maps between manifolds, and some important CHAPTER 2: SMOOTH MAPS DAVID GLICKENSTEIN 1. Introduction In this chater we introduce smooth mas between manifolds, and some imortant concets. De nition 1. A function f : M! R k is a smooth function if

More information

Uniform Law on the Unit Sphere of a Banach Space

Uniform Law on the Unit Sphere of a Banach Space Uniform Law on the Unit Shere of a Banach Sace by Bernard Beauzamy Société de Calcul Mathématique SA Faubourg Saint Honoré 75008 Paris France Setember 008 Abstract We investigate the construction of a

More information

19th Bay Area Mathematical Olympiad. Problems and Solutions. February 28, 2017

19th Bay Area Mathematical Olympiad. Problems and Solutions. February 28, 2017 th Bay Area Mathematical Olymiad February, 07 Problems and Solutions BAMO- and BAMO- are each 5-question essay-roof exams, for middle- and high-school students, resectively. The roblems in each exam are

More information

Approximating min-max k-clustering

Approximating min-max k-clustering Aroximating min-max k-clustering Asaf Levin July 24, 2007 Abstract We consider the roblems of set artitioning into k clusters with minimum total cost and minimum of the maximum cost of a cluster. The cost

More information

Chapter 1: PROBABILITY BASICS

Chapter 1: PROBABILITY BASICS Charles Boncelet, obability, Statistics, and Random Signals," Oxford University ess, 0. ISBN: 978-0-9-0005-0 Chater : PROBABILITY BASICS Sections. What Is obability?. Exeriments, Outcomes, and Events.

More information

Lecture Notes 7 Random Processes. Markov Processes Markov Chains. Random Processes

Lecture Notes 7 Random Processes. Markov Processes Markov Chains. Random Processes Lecture Notes 7 Random Processes Definition IID Processes Bernoulli Process Binomial Counting Process Interarrival Time Process Markov Processes Markov Chains Classification of States Steady State Probabilities

More information

Improvement on the Decay of Crossing Numbers

Improvement on the Decay of Crossing Numbers Grahs and Combinatorics 2013) 29:365 371 DOI 10.1007/s00373-012-1137-3 ORIGINAL PAPER Imrovement on the Decay of Crossing Numbers Jakub Černý Jan Kynčl Géza Tóth Received: 24 Aril 2007 / Revised: 1 November

More information

Note that in the example in Lecture 1, the state Home is recurrent (and even absorbing), but all other states are transient. f ii (n) f ii = n=1 < +

Note that in the example in Lecture 1, the state Home is recurrent (and even absorbing), but all other states are transient. f ii (n) f ii = n=1 < + Random Walks: WEEK 2 Recurrence and transience Consider the event {X n = i for some n > 0} by which we mean {X = i}or{x 2 = i,x i}or{x 3 = i,x 2 i,x i},. Definition.. A state i S is recurrent if P(X n

More information

1 Random Experiments from Random Experiments

1 Random Experiments from Random Experiments Random Exeriments from Random Exeriments. Bernoulli Trials The simlest tye of random exeriment is called a Bernoulli trial. A Bernoulli trial is a random exeriment that has only two ossible outcomes: success

More information

Sample Spaces, Random Variables

Sample Spaces, Random Variables Sample Spaces, Random Variables Moulinath Banerjee University of Michigan August 3, 22 Probabilities In talking about probabilities, the fundamental object is Ω, the sample space. (elements) in Ω are denoted

More information

Lecture 5. If we interpret the index n 0 as time, then a Markov chain simply requires that the future depends only on the present and not on the past.

Lecture 5. If we interpret the index n 0 as time, then a Markov chain simply requires that the future depends only on the present and not on the past. 1 Markov chain: definition Lecture 5 Definition 1.1 Markov chain] A sequence of random variables (X n ) n 0 taking values in a measurable state space (S, S) is called a (discrete time) Markov chain, if

More information

Split the integral into two: [0,1] and (1, )

Split the integral into two: [0,1] and (1, ) . A continuous random variable X has the iecewise df f( ) 0, 0, 0, where 0 is a ositive real number. - (a) For any real number such that 0, rove that the eected value of h( X ) X is E X. (0 ts) Solution:

More information

Random variables. Lecture 5 - Discrete Distributions. Discrete Probability distributions. Example - Discrete probability model

Random variables. Lecture 5 - Discrete Distributions. Discrete Probability distributions. Example - Discrete probability model Random Variables Random variables Lecture 5 - Discrete Distributions Sta02 / BME02 Colin Rundel Setember 8, 204 A random variable is a numeric uantity whose value deends on the outcome of a random event

More information

BOUNDS FOR THE COUPLING TIME IN QUEUEING NETWORKS PERFECT SIMULATION

BOUNDS FOR THE COUPLING TIME IN QUEUEING NETWORKS PERFECT SIMULATION BOUNDS FOR THE COUPLING TIME IN QUEUEING NETWORKS PERFECT SIMULATION JANTIEN G. DOPPER, BRUNO GAUJAL AND JEAN-MARC VINCENT Abstract. In this aer, the duration of erfect simulations for Markovian finite

More information

Probability and Statistics

Probability and Statistics Probability and Statistics Jane Bae Stanford University hjbae@stanford.edu September 16, 2014 Jane Bae (Stanford) Probability and Statistics September 16, 2014 1 / 35 Overview 1 Probability Concepts Probability

More information

2 n k In particular, using Stirling formula, we can calculate the asymptotic of obtaining heads exactly half of the time:

2 n k In particular, using Stirling formula, we can calculate the asymptotic of obtaining heads exactly half of the time: Chapter 1 Random Variables 1.1 Elementary Examples We will start with elementary and intuitive examples of probability. The most well-known example is that of a fair coin: if flipped, the probability of

More information

MATH 3510: PROBABILITY AND STATS June 15, 2011 MIDTERM EXAM

MATH 3510: PROBABILITY AND STATS June 15, 2011 MIDTERM EXAM MATH 3510: PROBABILITY AND STATS June 15, 2011 MIDTERM EXAM YOUR NAME: KEY: Answers in Blue Show all your work. Answers out of the blue and without any supporting work may receive no credit even if they

More information

Course: ESO-209 Home Work: 1 Instructor: Debasis Kundu

Course: ESO-209 Home Work: 1 Instructor: Debasis Kundu Home Work: 1 1. Describe the sample space when a coin is tossed (a) once, (b) three times, (c) n times, (d) an infinite number of times. 2. A coin is tossed until for the first time the same result appear

More information

Introduction to Probability and Statistics

Introduction to Probability and Statistics Introduction to Probability and Statistics Chater 8 Ammar M. Sarhan, asarhan@mathstat.dal.ca Deartment of Mathematics and Statistics, Dalhousie University Fall Semester 28 Chater 8 Tests of Hyotheses Based

More information

Robust hamiltonicity of random directed graphs

Robust hamiltonicity of random directed graphs Robust hamiltonicity of random directed grahs Asaf Ferber Rajko Nenadov Andreas Noever asaf.ferber@inf.ethz.ch rnenadov@inf.ethz.ch anoever@inf.ethz.ch Ueli Peter ueter@inf.ethz.ch Nemanja Škorić nskoric@inf.ethz.ch

More information

1.1 Review of Probability Theory

1.1 Review of Probability Theory 1.1 Review of Probability Theory Angela Peace Biomathemtics II MATH 5355 Spring 2017 Lecture notes follow: Allen, Linda JS. An introduction to stochastic processes with applications to biology. CRC Press,

More information

4 Branching Processes

4 Branching Processes 4 Branching Processes Organise by generations: Discrete time. If P(no offspring) 0 there is a probability that the process will die out. Let X = number of offspring of an individual p(x) = P(X = x) = offspring

More information

ANALYTIC NUMBER THEORY AND DIRICHLET S THEOREM

ANALYTIC NUMBER THEORY AND DIRICHLET S THEOREM ANALYTIC NUMBER THEORY AND DIRICHLET S THEOREM JOHN BINDER Abstract. In this aer, we rove Dirichlet s theorem that, given any air h, k with h, k) =, there are infinitely many rime numbers congruent to

More information

MAS275 Probability Modelling Exercises

MAS275 Probability Modelling Exercises MAS75 Probability Modelling Exercises Note: these questions are intended to be of variable difficulty. In particular: Questions or part questions labelled (*) are intended to be a bit more challenging.

More information

ON THE LEAST SIGNIFICANT p ADIC DIGITS OF CERTAIN LUCAS NUMBERS

ON THE LEAST SIGNIFICANT p ADIC DIGITS OF CERTAIN LUCAS NUMBERS #A13 INTEGERS 14 (014) ON THE LEAST SIGNIFICANT ADIC DIGITS OF CERTAIN LUCAS NUMBERS Tamás Lengyel Deartment of Mathematics, Occidental College, Los Angeles, California lengyel@oxy.edu Received: 6/13/13,

More information

Homework 4 Solution, due July 23

Homework 4 Solution, due July 23 Homework 4 Solution, due July 23 Random Variables Problem 1. Let X be the random number on a die: from 1 to. (i) What is the distribution of X? (ii) Calculate EX. (iii) Calculate EX 2. (iv) Calculate Var

More information

Use of Transformations and the Repeated Statement in PROC GLM in SAS Ed Stanek

Use of Transformations and the Repeated Statement in PROC GLM in SAS Ed Stanek Use of Transformations and the Reeated Statement in PROC GLM in SAS Ed Stanek Introduction We describe how the Reeated Statement in PROC GLM in SAS transforms the data to rovide tests of hyotheses of interest.

More information

Extension of Minimax to Infinite Matrices

Extension of Minimax to Infinite Matrices Extension of Minimax to Infinite Matrices Chris Calabro June 21, 2004 Abstract Von Neumann s minimax theorem is tyically alied to a finite ayoff matrix A R m n. Here we show that (i) if m, n are both inite,

More information

MAZUR S CONSTRUCTION OF THE KUBOTA LEPOLDT p-adic L-FUNCTION. n s = p. (1 p s ) 1.

MAZUR S CONSTRUCTION OF THE KUBOTA LEPOLDT p-adic L-FUNCTION. n s = p. (1 p s ) 1. MAZUR S CONSTRUCTION OF THE KUBOTA LEPOLDT -ADIC L-FUNCTION XEVI GUITART Abstract. We give an overview of Mazur s construction of the Kubota Leooldt -adic L- function as the -adic Mellin transform of a

More information

CMSC 425: Lecture 4 Geometry and Geometric Programming

CMSC 425: Lecture 4 Geometry and Geometric Programming CMSC 425: Lecture 4 Geometry and Geometric Programming Geometry for Game Programming and Grahics: For the next few lectures, we will discuss some of the basic elements of geometry. There are many areas

More information

Lecture 20 : Markov Chains

Lecture 20 : Markov Chains CSCI 3560 Probability and Computing Instructor: Bogdan Chlebus Lecture 0 : Markov Chains We consider stochastic processes. A process represents a system that evolves through incremental changes called

More information

EE/Stats 376A: Information theory Winter Lecture 5 Jan 24. Lecturer: David Tse Scribe: Michael X, Nima H, Geng Z, Anton J, Vivek B.

EE/Stats 376A: Information theory Winter Lecture 5 Jan 24. Lecturer: David Tse Scribe: Michael X, Nima H, Geng Z, Anton J, Vivek B. EE/Stats 376A: Information theory Winter 207 Lecture 5 Jan 24 Lecturer: David Tse Scribe: Michael X, Nima H, Geng Z, Anton J, Vivek B. 5. Outline Markov chains and stationary distributions Prefix codes

More information

SUMMARY OF PROBABILITY CONCEPTS SO FAR (SUPPLEMENT FOR MA416)

SUMMARY OF PROBABILITY CONCEPTS SO FAR (SUPPLEMENT FOR MA416) SUMMARY OF PROBABILITY CONCEPTS SO FAR (SUPPLEMENT FOR MA416) D. ARAPURA This is a summary of the essential material covered so far. The final will be cumulative. I ve also included some review problems

More information

Lecture 2: Review of Basic Probability Theory

Lecture 2: Review of Basic Probability Theory ECE 830 Fall 2010 Statistical Signal Processing instructor: R. Nowak, scribe: R. Nowak Lecture 2: Review of Basic Probability Theory Probabilistic models will be used throughout the course to represent

More information

. Find E(V ) and var(v ).

. Find E(V ) and var(v ). Math 6382/6383: Probability Models and Mathematical Statistics Sample Preliminary Exam Questions 1. A person tosses a fair coin until she obtains 2 heads in a row. She then tosses a fair die the same number

More information

Chapter 7 Rational and Irrational Numbers

Chapter 7 Rational and Irrational Numbers Chater 7 Rational and Irrational Numbers In this chater we first review the real line model for numbers, as discussed in Chater 2 of seventh grade, by recalling how the integers and then the rational numbers

More information

Recap of Basic Probability Theory

Recap of Basic Probability Theory 02407 Stochastic Processes? Recap of Basic Probability Theory Uffe Høgsbro Thygesen Informatics and Mathematical Modelling Technical University of Denmark 2800 Kgs. Lyngby Denmark Email: uht@imm.dtu.dk

More information

PETER J. GRABNER AND ARNOLD KNOPFMACHER

PETER J. GRABNER AND ARNOLD KNOPFMACHER ARITHMETIC AND METRIC PROPERTIES OF -ADIC ENGEL SERIES EXPANSIONS PETER J. GRABNER AND ARNOLD KNOPFMACHER Abstract. We derive a characterization of rational numbers in terms of their unique -adic Engel

More information

FINITE MARKOV CHAINS

FINITE MARKOV CHAINS Treball final de grau GRAU DE MATEMÀTIQUES Facultat de Matemàtiques Universitat de Barcelona FINITE MARKOV CHAINS Lidia Pinilla Peralta Director: Realitzat a: David Márquez-Carreras Departament de Probabilitat,

More information

Robustness of classifiers to uniform l p and Gaussian noise Supplementary material

Robustness of classifiers to uniform l p and Gaussian noise Supplementary material Robustness of classifiers to uniform l and Gaussian noise Sulementary material Jean-Yves Franceschi Ecole Normale Suérieure de Lyon LIP UMR 5668 Omar Fawzi Ecole Normale Suérieure de Lyon LIP UMR 5668

More information

4. Score normalization technical details We now discuss the technical details of the score normalization method.

4. Score normalization technical details We now discuss the technical details of the score normalization method. SMT SCORING SYSTEM This document describes the scoring system for the Stanford Math Tournament We begin by giving an overview of the changes to scoring and a non-technical descrition of the scoring rules

More information

Chapter 2: Random Variables

Chapter 2: Random Variables ECE54: Stochastic Signals and Systems Fall 28 Lecture 2 - September 3, 28 Dr. Salim El Rouayheb Scribe: Peiwen Tian, Lu Liu, Ghadir Ayache Chapter 2: Random Variables Example. Tossing a fair coin twice:

More information

Mathematical Methods for Computer Science

Mathematical Methods for Computer Science Mathematical Methods for Computer Science Computer Science Tripos, Part IB Michaelmas Term 2016/17 R.J. Gibbens Problem sheets for Probability methods William Gates Building 15 JJ Thomson Avenue Cambridge

More information

Stochastic processes. MAS275 Probability Modelling. Introduction and Markov chains. Continuous time. Markov property

Stochastic processes. MAS275 Probability Modelling. Introduction and Markov chains. Continuous time. Markov property Chapter 1: and Markov chains Stochastic processes We study stochastic processes, which are families of random variables describing the evolution of a quantity with time. In some situations, we can treat

More information

Markov Chains on Countable State Space

Markov Chains on Countable State Space Markov Chains on Countable State Space 1 Markov Chains Introduction 1. Consider a discrete time Markov chain {X i, i = 1, 2,...} that takes values on a countable (finite or infinite) set S = {x 1, x 2,...},

More information

Math 416 Lecture 3. The average or mean or expected value of x 1, x 2, x 3,..., x n is

Math 416 Lecture 3. The average or mean or expected value of x 1, x 2, x 3,..., x n is Math 416 Lecture 3 Expected values The average or mean or expected value of x 1, x 2, x 3,..., x n is x 1 x 2... x n n x 1 1 n x 2 1 n... x n 1 n 1 n x i p x i where p x i 1 n is the probability of x i

More information

Journal of Mathematical Analysis and Applications

Journal of Mathematical Analysis and Applications J. Math. Anal. Al. 44 (3) 3 38 Contents lists available at SciVerse ScienceDirect Journal of Mathematical Analysis and Alications journal homeage: www.elsevier.com/locate/jmaa Maximal surface area of a

More information

Math Homework 5 Solutions

Math Homework 5 Solutions Math 45 - Homework 5 Solutions. Exercise.3., textbook. The stochastic matrix for the gambler problem has the following form, where the states are ordered as (,, 4, 6, 8, ): P = The corresponding diagram

More information

Lecture 4: Probability and Discrete Random Variables

Lecture 4: Probability and Discrete Random Variables Error Correcting Codes: Combinatorics, Algorithms and Applications (Fall 2007) Lecture 4: Probability and Discrete Random Variables Wednesday, January 21, 2009 Lecturer: Atri Rudra Scribe: Anonymous 1

More information

Elements of Asymptotic Theory. James L. Powell Department of Economics University of California, Berkeley

Elements of Asymptotic Theory. James L. Powell Department of Economics University of California, Berkeley Elements of Asymtotic Theory James L. Powell Deartment of Economics University of California, Berkeley Objectives of Asymtotic Theory While exact results are available for, say, the distribution of the

More information