RECURRENCE IN COUNTABLE STATE MARKOV CHAINS

JIN WOO SUNG

Abstract. This paper investigates the recurrence and transience of countable state irreducible Markov chains. Recurrence is the property that a chain returns to its initial state within finite time with probability 1; a non-recurrent chain is said to be transient. After establishing these concepts, we suggest a technique for identifying recurrence or transience of some special Markov chains by building an analogy with electrical networks. We display a use of this technique by identifying recurrence/transience for the simple random walk on Z^d.

Contents
1. Introduction to Markov Chains
2. Recurrent Markov Chains
2.1. Why is it important to distinguish between positive recurrence and null recurrence?
3. Identifying a Recurrent Markov Chain
3.1. Harmonic Functions
3.2. Voltages and Current Flows
3.3. Effective Resistance and Escape Probability
Acknowledgments
References

1. Introduction to Markov Chains

The central topic of this paper is an instance of the long-time behavior of Markov chains on discrete state spaces. Throughout this paper, we seek to answer the following question.

Question 1.1 (Recurrence). What is the probability that a Markov chain will return to its initial state in finite time? If the answer is 1, what is the expected time at which the chain returns to its initial state?

In this section, we introduce what Markov chains are and how to describe them.

Definition 1.2. Let P : Ω × Ω → [0, 1] be a map such that Σ_{y∈Ω} P(x, y) = 1 for every x ∈ Ω. A sequence of random variables (X_t)_{t≥0} with values in the state space Ω is a Markov chain if for every t ∈ N and x_0, x_1, …, x_t ∈ Ω, we have
    P{X_t = x_t | C_{t−1}} = P{X_t = x_t | X_{t−1} = x_{t−1}} = P(x_{t−1}, x_t),
where C_{t−1} is the event {X_0 = x_0, X_1 = x_1, …, X_{t−1} = x_{t−1}}.

Date: August 28, 2017.

That is, the probability distribution of the state at time t conditioned on the past depends only on the state at time t − 1, namely X_{t−1}. Throughout this paper, we assume that Ω is either finite or countably infinite. The simple random walk on Z^d is a classic example of a countable state Markov chain.

Definition 1.3. A simple random walk on Z^d is a Markov chain with the d-dimensional integer lattice as the state space and with the map P given by
    P(x, y) = 1/(2d) if |x − y|₁ = 1, and P(x, y) = 0 otherwise,
for any x, y ∈ Z^d. Here, |·|₁ denotes the 1-norm on R^d given by |x|₁ = |x_1| + |x_2| + ⋯ + |x_d| if the coordinates of x ∈ R^d are (x_1, x_2, …, x_d).

Since Ω is discrete, we may order the elements of Ω in a sequence. Fix this ordering and arrange the values of the map P into a matrix, such that if x, y ∈ Ω are respectively the ith and jth entries of the sequence, then P(x, y) is located at the ith row and jth column. It is particularly useful to consider the map P in this matrix form, and from now on we call P the transition matrix of the Markov chain.

There are three types of matrix multiplication performed in this paper involving transition matrices. Let (X_t)_{t≥0} be a Markov chain on state space Ω with transition matrix P.

(1) Multiplying P by itself: Let x, y ∈ Ω be any states and t ≥ 0 be any time. Then for any s ≥ 1,
    P^s(x, y) = Σ_{z_1,…,z_{s−1}∈Ω} P(x, z_1) P(z_1, z_2) ⋯ P(z_{s−1}, y)
              = Σ_{z_1,…,z_{s−1}∈Ω} P{X_{t+1} = z_1, …, X_{t+s−1} = z_{s−1}, X_{t+s} = y | X_t = x}
              = P{X_{t+s} = y | X_t = x}.
Hence, the sth power of P gives the transition probabilities over a time interval of length s. We define P^0 to be the identity matrix I.

(2) Multiplying a row vector on the left:

Definition 1.4. We say that a function µ : Ω → [0, 1] is a probability distribution on Ω if Σ_{x∈Ω} µ(x) = 1.

Suppose that µ_t is the probability distribution at time t ≥ 0. That is, µ_t(x) = P{X_t = x} for all x ∈ Ω. Arrange the values of µ_t into a row vector in the same order that we arranged the entries of the transition matrix. Then for any y ∈ Ω and t ≥ 0,
    (µ_t P)(y) = Σ_{x∈Ω} µ_t(x) P(x, y) = Σ_{x∈Ω} P{X_t = x} P(x, y) = P{X_{t+1} = y} = µ_{t+1}(y).
Hence, multiplying the probability distribution at time t on the left of the transition matrix P gives the probability distribution at time t + 1.
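The two operations above can be checked numerically; a minimal sketch with a hypothetical 3-state chain (the matrix below is an arbitrary illustration, not taken from the paper):

```python
import numpy as np

# A hypothetical 3-state transition matrix; each row sums to 1.
P = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.5, 0.25],
              [0.0, 0.5, 0.5]])

# (1) The sth power of P gives the s-step transition probabilities,
# and its rows still sum to 1.
P3 = np.linalg.matrix_power(P, 3)

# (2) Multiplying a distribution row vector on the left advances it one step:
# mu_{t+1} = mu_t P.
mu0 = np.array([1.0, 0.0, 0.0])   # start deterministically in state 0
mu1 = mu0 @ P                     # distribution at time 1
mu3 = mu0 @ P3                    # distribution at time 3, in one multiplication
```

Here mu0 @ P3 agrees with applying @ P three times, reflecting the Chapman-Kolmogorov identity P^{s+t} = P^s P^t.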

(3) Multiplying a column vector on the right: Let f : Ω → R be a real-valued function. Arrange the values of f into a column vector in the same order that we arranged the entries of the transition matrix. Then for any x ∈ Ω and t ≥ 0,
    (Pf)(x) = Σ_{y∈Ω} P(x, y) f(y) = Σ_{y∈Ω} P{X_{t+1} = y | X_t = x} f(y) = E[f(X_{t+1}) | X_t = x].
Hence, multiplying a real-valued function on Ω on the right of the transition matrix P gives the expected value of f at time t + 1 conditioned on X_t = x.

If we combine these three operations, we obtain
    µ_t P^s f = E[f(X_{t+s}) | X_t ∼ µ_t],
where X_t ∼ µ_t means that µ_t is the probability distribution of X_t.

One key property shared by many interesting Markov chains is irreducibility, which means that starting from any state x ∈ Ω, every state y ∈ Ω is eventually reachable.

Definition 1.5. A transition matrix P of a Markov chain is irreducible if for every x, y ∈ Ω, there exists some t ∈ N such that
    P^t(x, y) = Σ_{z_1,…,z_{t−1}∈Ω} P(x, z_1) P(z_1, z_2) ⋯ P(z_{t−1}, y) > 0.
If a Markov chain has an irreducible transition matrix, we also call the chain itself irreducible.

2. Recurrent Markov Chains

As we introduced in Question 1.1, one of the main interests regarding countable state Markov chains is to analyze whether the chain returns to its initial state. This notion is defined precisely as recurrence, which in turn is defined through the notion of a hitting time.

Definition 2.1. Let (X_t)_{t≥0} be a Markov chain on a state space Ω, and let A ⊆ Ω be a subset of the state space. The hitting time for A is defined as τ_A := inf{t ≥ 0 : X_t ∈ A}. If A = {x} is a singleton set, then we denote the hitting time for x by τ_x. For situations where X_0 ∈ A, we also consider the first return time to A, defined as τ_A⁺ := inf{t ≥ 1 : X_t ∈ A}.

For any event E, we use the notation P_x(E) to denote the probability of E occurring for a Markov chain with the initial state X_0 = x. We also use E_x[Y] to denote the expected value of a real-valued random variable Y for a Markov chain with the initial state X_0 = x.

Definition 2.2. We say that a state x ∈ Ω is recurrent if P_x{τ_x⁺ < ∞} = 1. Otherwise, x is said to be transient.

Proposition 2.3. Suppose that for an irreducible Markov chain defined on Ω, P_x{τ_x⁺ < ∞} = 1 for some x ∈ Ω. Then P_y{τ_z < ∞} = 1 for all y, z ∈ Ω.
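As a concrete illustration of transience, a sketch using standard facts about the biased walk on Z that are not derived in this paper: if the walk steps +1 with probability q > 1/2 and −1 with probability 1 − q, then p = P_0{τ_0⁺ < ∞} = 2(1 − q) < 1, the number of visits to 0 is geometric, and the expected number of visits is 1/(1 − p) < ∞. The expected number of visits can also be summed directly, since the walk is back at 0 at time 2n with probability C(2n, n)(q(1 − q))^n:

```python
from math import comb

# Biased walk on Z: step +1 w.p. q, -1 w.p. 1-q (q > 1/2, so transient).
q = 0.7
p_return = 2 * (1 - q)            # P_0{tau_0^+ < infinity}, a standard fact

# Expected number of visits to 0 = sum_t P^t(0,0); only even times contribute,
# and P^{2n}(0,0) = C(2n, n) * (q*(1-q))**n.  The terms decay geometrically
# like (4q(1-q))^n, so 200 terms are far more than enough here.
G = sum(comb(2 * n, n) * (q * (1 - q)) ** n for n in range(200))

expected_visits = 1 / (1 - p_return)   # geometric number of visits, mean 1/(1-p)
```

Both computations give 1/(2q − 1), consistent with the geometric-visits picture used in Lemma 2.4 below.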

This proposition implies that for an irreducible Markov chain, either all states are recurrent or all states are transient. Hence, we can classify an irreducible chain as either a recurrent chain or a transient chain. We introduce the following useful lemma from Section 2.2 of [1], which captures the idea behind the proof of Proposition 2.3. The proof below is based on the proof in [1], but fills in the details omitted there.

Lemma 2.4. Let (X_t)_{t≥0} be a Markov chain with an irreducible transition matrix P. The Green's function, defined as
    G(x, y) := E_x[Σ_{t≥0} 1_{X_t = y}] = Σ_{t≥0} P^t(x, y),
where 1_{X_t = y} is the indicator function of {X_t = y}, is the expected number of visits to y starting from x. Then the following statements are equivalent:
(i) P_x{τ_x⁺ < ∞} = 1 for some x ∈ Ω.
(ii) G(x, x) = ∞ for some x ∈ Ω.
(iii) G(y, z) = ∞ for all y, z ∈ Ω.

Proof. (i) ⇔ (ii): Suppose X_t = x at some time t ≥ 0. By the Markov property, the probability that the chain never visits x again (i.e. X_s ≠ x for any s > t) is given by
    p := P_x{τ_x⁺ = ∞} = 1 − P_x{τ_x⁺ < ∞},
independent of the time t. We claim that if p > 0, then G(x, x) is the expectation of a geometric random variable with success probability p. Let N = Σ_{t≥0} 1_{X_t = x} be the number of visits to x for the chain with initial state X_0 = x. We compute P{N = k} by induction on k.
(1) Since X_0 = x, the smallest possible value of N is 1. N = 1 if and only if X_t ≠ x for all t ≥ 1, so P{N = 1} = P_x{τ_x⁺ = ∞} = p.
(2) Suppose that for some K ∈ N we have P{N = k} = p(1 − p)^{k−1} for every integer k ≤ K. Then
    P{N ≥ K + 1} = 1 − Σ_{k=1}^{K} p(1 − p)^{k−1} = (1 − p)^K.
N = K + 1 if and only if the chain does not revisit x after visiting x for the (K+1)st time. That is, if τ is the time such that Σ_{t=0}^{τ} 1_{X_t = x} = K + 1, then X_s ≠ x for all s > τ. Hence, by the Markov property,
    P{N = K + 1} = P{N ≥ K + 1} P_x{τ_x⁺ = ∞} = p(1 − p)^K.
Note that P{N < ∞} = Σ_{n≥1} P{N = n} = Σ_{n≥1} p(1 − p)^{n−1} = 1, and hence P{N = ∞} = 0, for all 0 < p ≤ 1. Therefore,
    G(x, x) = E_x[N] = Σ_{n≥1} n p(1 − p)^{n−1} = Σ_{n≥1} Σ_{m≥n} p(1 − p)^{m−1} = Σ_{n≥1} (1 − p)^{n−1} = 1/p.
This implies that G(x, x) = ∞ if and only if p = 0, if and only if P_x{τ_x⁺ < ∞} = 1.

(ii) ⇒ (iii): Let y, z ∈ Ω be any states. Since the Markov chain is irreducible, there exist positive integers m, n such that P^m(y, x) > 0 and P^n(x, z) > 0. Since P^{m+t+n}(y, z) ≥ P^m(y, x) P^t(x, x) P^n(x, z),
    G(y, z) = Σ_{s≥0} P^s(y, z) ≥ Σ_{t≥0} P^m(y, x) P^t(x, x) P^n(x, z) = P^m(y, x) P^n(x, z) G(x, x).

Hence, G(x, x) = ∞ implies G(y, z) = ∞. The other direction, (iii) ⇒ (ii), is trivial. □

Looking at the first part of the proof of Lemma 2.4, we see that the probability that a chain starting from x revisits x infinitely often is
    P_x{N = ∞} = 1 − Σ_{n≥1} p(1 − p)^{n−1} = 0 if p > 0, and 1 if p = 0.
That is, if P_x{τ_x⁺ < ∞} = 1, then the probability that the chain visits x infinitely often is 1.

Proof of Proposition 2.3. The event where the chain visits y before returning to x and then never reaches x is contained in {τ_x⁺ = ∞}, so
    0 ≤ P_x{τ_y < τ_x⁺} P_y{τ_x = ∞} ≤ P_x{τ_x⁺ = ∞} = 0.
We claim that P_x{τ_y < τ_x⁺} > 0. If so, P_y{τ_x = ∞} = 0 and P_y{τ_x < ∞} = 1.

Let us now show that P_x{τ_y < τ_x⁺} > 0. If x = y, then τ_y = 0 < τ_x⁺, so P_x{τ_y < τ_x⁺} = 1. Otherwise, let t_min := inf{t ≥ 1 : P^t(x, y) > 0}. Since P is irreducible, t_min is finite. If a path from x to y of length t_min returned to x at some intermediate time s ≥ 1, then P^{t_min − s}(x, y) > 0, contradicting the minimality of t_min; similarly, no such path can visit y before time t_min. Hence,
    P_x{τ_y < τ_x⁺} ≥ P_x{τ_y = t_min, τ_y < τ_x⁺} = P^{t_min}(x, y) > 0.

Similarly, P_x{τ_z < τ_x⁺} > 0. If a chain starting from x never visits z, then at every time the chain visits x, the next return to x must occur before hitting z. Since the chain starting from x revisits x infinitely often with probability 1, the probability that it never visits z is
    lim_{n→∞} (1 − P_x{τ_z < τ_x⁺})^n = 0.
That is, P_x{τ_z < ∞} = 1. Therefore, P_y{τ_z < ∞} ≥ P_y{τ_x < ∞} P_x{τ_z < ∞} = 1, and P_y{τ_z < ∞} = 1 for every y, z ∈ Ω. □

Given that an irreducible Markov chain is recurrent, the next step is of course to calculate E_x[τ_y⁺]. However, it turns out that P_x{τ_y⁺ < ∞} = 1 does not always imply E_x[τ_y⁺] < ∞. Examples of such chains are the simple random walks on Z and Z² (Corollary 3.23). We say that a state x ∈ Ω is positive recurrent if E_x[τ_x⁺] is finite. The following proposition (Proposition 2.1 in [1]) says that positive recurrence is a property uniform to all states in an irreducible chain. Again, the proof is based on that in the reference, but it has been supplemented using the concept of a geometric random variable.

Proposition 2.5. Let (X_t)_{t≥0} be an irreducible Markov chain which is recurrent. If E_x[τ_x⁺] < ∞ for some x ∈ Ω, then E_y[τ_z] < ∞ for all y, z ∈ Ω.

Proof. Let us denote the transition matrix by P. The first part of the proof is to show that E_y[τ_x] < ∞ for every y ∈ Ω. Define m := inf{n ∈ N : P^n(x, y) > 0}; the set is nonempty due to irreducibility, so m is finite. Then for every integer t such that 0 < t < m, P^{m−t}(x, y) = 0. So,
    0 ≤ P_x{τ_x⁺ = t, τ_y = m} ≤ P_x{X_t = x, X_m = y} = P^t(x, x) P^{m−t}(x, y) = 0;
that is, on the event {τ_y = m}, the chain does not return to x before time m. Then, by our choice of m, P_x{τ_x⁺ > τ_y = m} ≥ P^m(x, y) > 0, and since the Markov property gives E_x[τ_x⁺ | τ_x⁺ > τ_y = m] = m + E_y[τ_x],
    P^m(x, y)(m + E_y[τ_x]) ≤ P_x{τ_x⁺ > τ_y = m} E_x[τ_x⁺ | τ_x⁺ > τ_y = m] = E_x[τ_x⁺ 1_{τ_x⁺ > τ_y = m}] ≤ E_x[τ_x⁺] < ∞.
Hence, E_y[τ_x] < ∞.

The next part is to show E_x[τ_z] < ∞. Again due to irreducibility, we can choose m := inf{n ∈ N : P^n(x, z) > 0} < ∞, so that if X_t = x and X_{t+m} = z, then X_s ∉ {x, z} for every t < s < t + m. Define
    τ'_z := inf{t ≥ m : X_{t−m} = x, X_t = z},
the hitting time for z with the added condition that the chain has just taken a minimal-time path from x to z. Each time the chain is at x, it follows such a minimal-time path with probability P^m(x, z) > 0, independently of the past. By Wald's identity, E_x[τ'_z − m] is at most E_x[τ_x⁺] multiplied by the expectation of a geometric random variable with success probability P^m(x, z). Clearly, τ_z ≤ τ'_z. Since E_x[τ_x⁺] < ∞,
    E_x[τ_z] ≤ E_x[τ'_z] = m + E_x[τ'_z − m] ≤ m + Σ_{k=0}^{∞} (k + 1) E_x[τ_x⁺] (1 − P^m(x, z))^k P^m(x, z) = m + E_x[τ_x⁺]/P^m(x, z) < ∞.
Thus, we have proved the proposition for the cases where x = y or x = z. Finally, note that τ_z for a chain starting from y is less than or equal to the time it takes for the same chain to visit z after first visiting x. That is, assuming x ≠ y and x ≠ z,
    E_y[τ_z] ≤ E_y[τ_x] + E_x[τ_z] < ∞. □

Definition 2.6. Let (X_t)_{t≥0} be an irreducible, recurrent Markov chain. We say the chain is positive recurrent if E_x[τ_y⁺] < ∞ for every x, y ∈ Ω. Otherwise, no states are positive recurrent, and we say that the chain is null recurrent.

Example 2.7 (Renewal shift). Let Ω = {0} ∪ N and let the transition matrix P be given by
    P(0, 0) = 0 and P(0, n) = 2^{−n} for every n ∈ N;
    P(n, n − 1) = 1 and P(n, m) = 0 for every n ∈ N and m ≠ n − 1.
We have Σ_{n≥1} P(0, n) = Σ_{n≥1} 2^{−n} = 1 and Σ_n P(m, n) = P(m, m − 1) = 1 for every positive integer m, so P is a transition matrix. For every m, n ∈ N, P^m(m, 0) = 1 and P^{m+1}(m, n) = P(0, n) = 2^{−n} > 0, so P is irreducible. E_0[τ_0⁺] = Σ_{n≥1} (n + 1) 2^{−n} = 3, so the chain is positive recurrent.

Beware that positive recurrence does not imply that the expected time to return to the state at time t = 0 is finite for arbitrary initial distributions. Let µ be the probability distribution on {0} ∪ N with µ(0) = 0 and µ(n) = (6/π²) n^{−2} for n ∈ N. Then
    E_µ[τ_0] = Σ_{n≥1} µ(n) E_n[τ_0] = (6/π²) Σ_{n≥1} 1/n = ∞.

2.1. Why is it important to distinguish between positive recurrence and null recurrence? Positive recurrence is a necessary condition for a Markov chain to display convergence, which is another important topic regarding Markov chains, introduced in the following question. As this is a topic that merits extensive research, we state two theorems that highlight the noteworthiness of positive recurrence and direct the reader to the references for further information.

Question 2.8 (Convergence). In the limit t → ∞, does the distribution of X_t converge to some fixed distribution on Ω?

The fixed distribution here is the stationary distribution, defined as a probability distribution π on Ω such that π = πP, where P is the transition matrix of the given Markov chain. It turns out that the stationary distribution of an irreducible Markov chain exists if and only if the chain is positive recurrent. Refer to Theorem 2.2 in [1] for the proof of the following theorem.

Theorem 2.9. An irreducible Markov chain on Ω with transition matrix P is positive recurrent if and only if there exists a stationary distribution π on Ω, i.e. a probability distribution such that π = πP.

Hence, it is natural that positive recurrence is a necessary condition in Theorem 2.12, called the convergence theorem, which provides an answer to Question 2.8. The following definitions are necessary for the precise statement of the theorem.

Definition 2.10. Let µ and ν be two probability distributions on Ω. The total variation distance between µ and ν is defined as
    ‖µ − ν‖_TV := sup_{A⊆Ω} |µ(A) − ν(A)|.

Definition 2.11. Let (X_t)_{t≥0} be a Markov chain on Ω with transition matrix P (not necessarily irreducible). For a state x ∈ Ω, let T(x) := {t ≥ 1 : P^t(x, x) > 0} be the set of positive times at which a chain starting from x can return to x. We define the period of x to be the greatest common divisor of T(x). If all states in Ω have period 1, then we call the chain aperiodic. Otherwise, the chain is periodic.

Theorem 2.12 (Convergence). Let (X_t)_{t≥0} be an irreducible, aperiodic, and positive recurrent Markov chain on Ω with transition matrix P. Then there exists a unique stationary distribution π with respect to P such that for any probability distribution µ on Ω,
    lim_{t→∞} ‖µP^t − π‖_TV = 0.
Therefore, the distribution of X_t converges to π in total variation distance regardless of the initial distribution of X_0.

Proof. Refer to Chapter 4 of [1] (Exercise 4.3 in particular). □

Remark 2.13. Aperiodicity is crucial for Theorem 2.12 to hold.
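The two-state counterexample described next can be checked numerically; a minimal sketch (the initial distribution below is an arbitrary choice):

```python
import numpy as np

# Deterministic flip chain on {0, 1}: P(0,1) = P(1,0) = 1.  It is irreducible
# and positive recurrent with stationary distribution pi = (1/2, 1/2), but it
# has period 2, and the TV distance to pi does not decay.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
pi = np.array([0.5, 0.5])

mu = np.array([1.0, 0.0])          # arbitrary initial distribution
tv = []
for t in range(8):
    # On a countable space, ||mu - pi||_TV = (1/2) * sum_x |mu(x) - pi(x)|.
    tv.append(0.5 * float(np.abs(mu - pi).sum()))
    mu = mu @ P

# Every entry of tv equals ||mu_0 - pi||_TV = 1/2: no convergence.
```

Replacing P by any aperiodic perturbation, e.g. P(0,0) = ε > 0, makes the recorded distances decay instead.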
A simple counterexample is the Markov chain on Ω = {0, 1} whose transition matrix is given by P(0, 1) = P(1, 0) = 1 and P(0, 0) = P(1, 1) = 0. The stationary distribution is π(0) = π(1) = 1/2, but given an initial distribution µ_0, the probability distribution of the two states at time t is (µ_t(0), µ_t(1)) = (µ_0(0), µ_0(1)) whenever t is even and (µ_t(0), µ_t(1)) = (µ_0(1), µ_0(0)) whenever t is odd. Hence, ‖µ_t − π‖_TV = ‖µ_0 − π‖_TV for all t ≥ 0.

3. Identifying a Recurrent Markov Chain

In this section, we develop a technique for classifying an irreducible Markov chain as a transient, null recurrent, or positive recurrent chain. An analogy is made between random walks on networks and electrical networks by identifying the voltage as a harmonic function. Our technique applies to special Markov chains that are called weighted random walks on networks. We consider an undirected, positively weighted graph G. We assume that there is at most one edge between any pair of vertices. If x and y share an edge, we write x ∼ y. Because we consider undirected graphs, x ∼ y and y ∼ x are equivalent.

The weights on edges are called conductances. For an edge between vertices x ∼ y, we denote its conductance by c(x, y) = c(y, x) > 0; the equality holds since the graph is undirected. We also call the reciprocal r(x, y) = 1/c(x, y) of the conductance the resistance. Let c(x) := Σ_{y∼x} c(x, y).

Definition 3.1. A weighted random walk on G is a Markov chain on the vertex set Ω as the state space with the transition matrix given by P(x, y) = c(x, y)/c(x).

The simple random walk on Z^d is an example of a weighted random walk on a network. We will exploit it as the model with which we display the usefulness of the technique. It will turn out that the simple random walk on Z^d is null recurrent if d = 1 or 2 and transient if d ≥ 3.

Definition 3.2 (Restatement of Definition 1.3). A simple random walk on Z^d is a Markov chain with the d-dimensional integer lattice as the state space and with the map P given by
    P(x, y) = 1/(2d) if |x − y|₁ = 1, and P(x, y) = 0 otherwise,
for any x, y ∈ Z^d, where |·|₁ denotes the 1-norm on R^d given by |x|₁ = |x_1| + |x_2| + ⋯ + |x_d| for x = (x_1, x_2, …, x_d).

We see that a simple random walk on Z^d can be thought of as a weighted random walk on the graph G whose vertex set is Z^d and whose edge set consists only of the edges connecting nearest neighbors (points at distance 1 in the 1-norm), with uniform conductance c > 0 (and uniform resistance r = 1/c > 0). The transition matrix of the chain does not depend on the specific value of c as long as it is positive. There is a uniform probability of advancing exactly one unit in each of the positive or negative axial directions. Any connected path of length l traced out on the d-dimensional integer lattice is realized with probability (2d)^{−l}, so the simple random walk is clearly irreducible.

3.1. Harmonic Functions. The virtue of considering the simple random walk in terms of an electrical network is that any harmonic function on Z^d has a natural analogy to the electric voltage.

Definition 3.3. Let P be the transition matrix of a Markov chain on Ω (not necessarily irreducible). A function f : Ω → R is harmonic at x ∈ Ω if f(x) = Σ_{y∈Ω} P(x, y) f(y). If f is harmonic at every x ∈ A ⊆ Ω, then we say that it is harmonic on A.

Let ∂Ω ⊆ Ω be a subset (called the boundary) of Ω, and let g : ∂Ω → R be a given real-valued function. Our goal is to find an extension f : Ω → R of g that is harmonic on Ω ∖ ∂Ω. That is, f is a solution to the equation
    (3.4)    f(x) = Σ_{y∈Ω} P(x, y) f(y) if x ∈ Ω ∖ ∂Ω;    f(x) = g(x) if x ∈ ∂Ω.

Proposition 3.5. Let (X_t)_{t≥0} be an irreducible and recurrent Markov chain with transition matrix P. Let τ_∂Ω := inf{t ≥ 0 : X_t ∈ ∂Ω} be the hitting time for ∂Ω. Then f(x) = E_x[g(X_{τ_∂Ω})], i.e. the expected value of g for a chain starting in

X_0 = x at the first time it lands on ∂Ω, is a solution to Equation (3.4). If Ω ∖ ∂Ω is a finite set, then this is the unique solution.

Proof. First, note that P_x{τ_∂Ω < ∞} = 1 because the chain is irreducible and recurrent (Proposition 2.3). Let us show that f(x) = E_x[g(X_{τ_∂Ω})] satisfies Equation (3.4). If x ∈ ∂Ω, then τ_∂Ω = 0. Hence, E_x[g(X_{τ_∂Ω})] = g(X_0) = g(x). If x ∈ Ω ∖ ∂Ω, then τ_∂Ω ≥ 1. Expressing the Markov property in the form
    P({X_0 = x, X_1 = y, …, X_n = z} ∩ {τ_∂Ω = n}) = P(x, y) P({X_0 = y, …, X_{n−1} = z} ∩ {τ_∂Ω = n − 1})
shows that E_x[g(X_{τ_∂Ω})] is a weighted average of E_{X_1}[g(X_{τ_∂Ω})]. Precisely,
    E_x[g(X_{τ_∂Ω})] = Σ_{y∈Ω} P(x, y) E_y[g(X_{τ_∂Ω})],
which is exactly in the form of Equation (3.4).

Now, let us prove that if Ω ∖ ∂Ω is a finite set, then f(x) = E_x[g(X_{τ_∂Ω})] is the unique solution. Suppose that there exists another solution f̃ to Equation (3.4). Let h = f − f̃. Then h satisfies
    (3.6)    h(x) = Σ_{y∈Ω} P(x, y) h(y) if x ∈ Ω ∖ ∂Ω;    h(x) = 0 if x ∈ ∂Ω.
Since Ω ∖ ∂Ω is finite, the set {h(x) : x ∈ Ω ∖ ∂Ω} has a maximum and a minimum. Let z ∈ Ω ∖ ∂Ω be a maximizer of h in Ω ∖ ∂Ω, and suppose that h(z) > 0. If there exists z̃ ∈ Ω such that P(z, z̃) > 0 and h(z̃) < h(z), then
    0 = h(z) − Σ_{y∈Ω} P(z, y) h(y) = P(z, z̃)(h(z) − h(z̃)) + Σ_{y∈Ω∖{z̃}} P(z, y)(h(z) − h(y)) > 0,
since h(z) ≥ h(y) for every y ∈ Ω. Hence, for every y ∈ Ω, if P(z, y) > 0 then h(y) = h(z). But since P is irreducible, for every x ∈ Ω there exists t ∈ N such that P^t(z, x) > 0. In particular, there exist y_1, y_2, …, y_{t−1} ∈ Ω such that P(z, y_1) > 0, P(y_1, y_2) > 0, …, P(y_{t−1}, x) > 0. Hence, h(z) = h(y_1) = ⋯ = h(y_{t−1}) = h(x), which means that h(x) = h(z) > 0 for every x ∈ Ω, even for x ∈ ∂Ω. Since this contradicts Equation (3.6), we must have max_{y∈Ω} h(y) ≤ 0. Similarly, min_{y∈Ω} h(y) ≥ 0. Then h = 0 everywhere on Ω, which implies that f = f̃. (This is the discrete version of the maximum principle for harmonic functions.) □

Remark 3.7. Suppose V ⊆ Ω is a finite subset, and let its boundary be
    ∂V := {x ∈ V : P(x, y) > 0 for some y ∈ Ω ∖ V}.
Consider a Markov chain on V with the transition matrix P̃ such that
    P̃(x, y) = P(x, y) if x ∈ V ∖ ∂V;
    P̃(x, y) = P(x, y) if x ∈ ∂V and y ∈ V ∖ {x};
    P̃(x, x) = 1 − Σ_{z∈V∖{x}} P(x, z) if x ∈ ∂V.
That is, P̃ agrees with P inside V, and at each boundary state all transition probability leading out of V is collected into a self-loop. Let (X_t)_{t≥0} be the chain on Ω with transition matrix P and (X̃_t)_{t≥0} be the chain on V with transition matrix P̃. Then for any t, x_0, x_1, …, x_{t−1} ∈ V ∖ ∂V, and

x_t ∈ V,
    P_{x_0}{X_1 = x_1, …, X_t = x_t} = P_{x_0}{X̃_1 = x_1, …, X̃_t = x_t}.
Let f̃ : V → R be a function which solves
    (3.8)    f̃(x) = Σ_{y∈V} P̃(x, y) f̃(y) if x ∈ V ∖ ∂V;    f̃(x) = g(x) if x ∈ ∂V.
Then the solution f of Equation (3.4) also satisfies Equation (3.8) at points of V. Since f̃ is the unique solution of Equation (3.8), any f which solves Equation (3.4) must satisfy f(x) = f̃(x) for every x ∈ V.

3.2. Voltages and Current Flows. We now return to our discussion of random walks on networks. By Proposition 3.5, the harmonic extension of a function g defined on the boundary states is uniquely determined if there are finitely many non-boundary points. Therefore, in the rest of this section, we consider a subgraph G' of the graph G, where the vertex subset V of G' is finite and the edge subset E of G' consists of all edges in G that connect vertices in V. By Remark 3.7, we may restrict our attention to the unique harmonic extension of g, defined on the boundary ∂V, to vertices in V only.

Definition 3.9. Given g : ∂V → R, we call the unique harmonic extension of g onto V the voltage, denoted W : V → R.

Because our interest in voltage is related to identifying the recurrence of a Markov chain, it is especially useful to consider the case where the boundary values are
    (3.10)    g(x) = v if x = a;    g(x) = 0 if x ∈ Z,
for some v ∈ R. Here, a ∈ ∂V is a vertex which we call the source of the network; it is considered as the initial state of the Markov chain. Z := ∂V ∖ {a} is called the sink of the network, which is considered as the set on which we terminate the chain. The sink may contain more than one vertex, just as we often ground an electric circuit at more than one point.

Definition 3.11. A flow from a to Z is a function θ : V × V → R such that for every x, y ∈ V:
(i) θ(x, x) = 0;
(ii) θ(x, y) = 0 if x ≁ y, or if x, y ∈ Z;
(iii) θ(x, y) = −θ(y, x);
(iv) div θ(x) = 0 if x ∈ V ∖ ∂V,
where div θ(x) := Σ_{y∈V} θ(x, y), the net flow coming out of x, is called the divergence of θ at x. The fourth condition is known as Kirchhoff's node law. Finally, if div θ(a) = 1, then we call θ a unit flow from a to Z.

Remark 3.12. Equivalently, we can define a flow from a to Z as a real-valued function on E. Since the flow must be antisymmetric, we assign a fixed direction to each edge. If e ∈ E is an edge between x, y ∈ V in which the positive direction is from x to y, then θ_E : E → R is defined so that θ_E(e) = θ(x, y) = −θ(y, x). In this perspective, the only two conditions necessary for θ_E to be a flow from a to Z are (i) θ_E(e) = 0 if e ∈ E connects two vertices in Z, and (ii) div θ_E(x) = 0 for every x ∈ V ∖ ∂V.

Definition 3.13. Suppose W is a voltage on V. The current flow I associated with W is the flow from a to Z given by
    I(x, y) = c(x, y)[W(x) − W(y)] = [W(x) − W(y)]/r(x, y)
for all x, y ∈ V such that x ∼ y, and I(x, y) = 0 for all x, y ∈ V such that x ≁ y.

Let us check that the current flow is indeed a flow from a to Z. For all x, y ∈ V:
(i) If x = y, then I(x, y) = c(x, y)[W(x) − W(y)] = c(x, x) · 0 = 0.
(ii) If x ≁ y, then I(x, y) = 0 by definition. If x, y ∈ Z and x ∼ y, then W(x) = W(y) = 0 implies I(x, y) = 0.
(iii) I(x, y) = c(x, y)[W(x) − W(y)] = −c(x, y)[W(y) − W(x)] = −I(y, x).
(iv) If x ∈ V ∖ ∂V, then
    div I(x) = Σ_{y∼x} c(x, y)[W(x) − W(y)] = c(x)[W(x) − Σ_{y∈V} P(x, y) W(y)] = 0,
since W is harmonic at x.

The following proposition shows that our definition of voltage as the unique harmonic extension is equivalent to how voltage is defined in physics in terms of Kirchhoff's laws and Ohm's law.

Proposition 3.14 (Cycle law). Let θ be a flow from a to Z. Then θ is a current flow if and only if
    (3.15)    Σ_{i=1}^{k} r(x_{i−1}, x_i) θ(x_{i−1}, x_i) = 0
for every k ≥ 1 and x_0, x_1, …, x_k ∈ V such that (i) x_0 = x_k or x_0, x_k ∈ Z, and (ii) x_0 ∼ x_1, x_1 ∼ x_2, …, x_{k−1} ∼ x_k.

Proof. Let x_0, …, x_k be vertices that satisfy the above two conditions. For the current flow I,
    Σ_{i=1}^{k} r(x_{i−1}, x_i) I(x_{i−1}, x_i) = Σ_{i=1}^{k} r(x_{i−1}, x_i) · (W(x_{i−1}) − W(x_i))/r(x_{i−1}, x_i) = Σ_{i=1}^{k} (W(x_{i−1}) − W(x_i)) = W(x_0) − W(x_k) = 0.

On the other hand, suppose that θ satisfies Equation (3.15) but is not a current flow. We claim that there exists a current flow I from a to Z such that div θ(a) = div I(a). For all x ∈ V, W(x) = E_x[g(X_{τ_∂V})] = v P_x{τ_a < τ_Z} by Proposition 3.5. Then W(x) = κ(x) v, where κ(x) := P_x{τ_a < τ_Z} is a constant for each vertex x ∈ V. So,
    div I(a) = Σ_{x∼a} I(a, x) = Σ_{x∼a} c(a, x)[W(a) − W(x)] = v (c(a) − Σ_{x∼a} c(a, x) κ(x)),
and we may obtain the (unique) current flow I with div θ(a) = div I(a) by choosing the right value of v. Now, it is easy to check that η = θ − I is a flow from a to Z that satisfies Equation (3.15). Furthermore, div η(a) = 0 and Σ_{z∈Z} div η(z) = 0.

Let us now construct a sequence of nodes which violates Equation (3.15). Since η ≠ 0 and η is antisymmetric, there exist some x_0, x_1 ∈ V such that x_0 ∼ x_1 and η(x_0, x_1) > 0. Now, there are two possibilities for x_1.
(1) x_1 ∉ Z: since div η(x_1) = 0, there exists x_2 ∈ V such that x_1 ∼ x_2 and η(x_1, x_2) > 0.
(2) x_1 ∈ Z: since Σ_{z∈Z} div η(z) = 0, there exist x_2 ∈ Z and x_3 ∈ V ∖ Z such that x_2 ∼ x_3 and η(x_2, x_3) > 0.
Let us continue to choose x_3, x_4, … through this process. Since V is finite, by the pigeonhole principle there exist some m, n ∈ N (m < n) such that n is the smallest integer greater than m where x_m = x_n or x_m, x_n ∈ Z. Then η(x_{i−1}, x_i) ≥ 0 for every m < i ≤ n. Due to our method of construction, the sequence x_m, x_{m+1}, …, x_n satisfies the two required conditions, and η(x_{i−1}, x_i) > 0 for all m < i ≤ n. Therefore, we reach the following contradiction:
    0 < Σ_{i=m+1}^{n} r(x_{i−1}, x_i) η(x_{i−1}, x_i) = Σ_{i=m+1}^{n} r(x_{i−1}, x_i) θ(x_{i−1}, x_i) − Σ_{i=m+1}^{n} r(x_{i−1}, x_i) I(x_{i−1}, x_i) = 0 − 0 = 0. □

3.3. Effective Resistance and Escape Probability. We are now ready to utilize techniques from network theory to identify the recurrence or transience of the simple random walk on Z^d. Proposition 3.17 is the key; it identifies the relationship between effective resistance and escape probability. Effective resistance can be calculated using ideas from network theory, namely Thomson's Principle (Proposition 3.19) and the Nash-Williams Inequality (Proposition 3.22). Escape probability is related to recurrence/transience of the simple random walk on Z^d, as explained in Remark 3.18.

Definition 3.16. Let I be the current flow from a to Z. Define the effective resistance between a and Z by
    R(a ↔ Z) := W(a)/div I(a),
and the effective conductance C(a ↔ Z) between a and Z to be the reciprocal of the effective resistance.

Proposition 3.17. Define the escape probability from a to be P_a{τ_Z < τ_a⁺}. The escape probability from a is given by
    P_a{τ_Z < τ_a⁺} = 1/(c(a) R(a ↔ Z)) = C(a ↔ Z)/c(a).

Proof. The proof follows from Proposition 3.5 and the succeeding remark: f(x) = E_x[g(X_{τ_∂V})] is the unique harmonic extension of the boundary condition given by Equation (3.10). Hence, the voltage is determined uniquely by W(x) = E_x[g(X_{τ_∂V})] for every x ∈ V. Since g(y) = 0 for every y ∈ Z,
    W(x) = E_x[g(X_{τ_∂V})] = g(a) P_x{X_{τ_∂V} = a} + 0 · P_x{X_{τ_∂V} ∈ Z} = P_x{τ_a < τ_Z} W(a).

Therefore,
    P_a{τ_Z < τ_a⁺} = Σ_{x∼a} P(a, x) P_x{τ_Z < τ_a} = Σ_{x∼a} P(a, x)(1 − P_x{τ_a < τ_Z})
        = Σ_{x∼a} (c(a, x)/c(a))(1 − W(x)/W(a)) = Σ_{x∼a} c(a, x)(W(a) − W(x)) / (c(a) W(a))
        = Σ_{x∼a} I(a, x) / (c(a) W(a)) = div I(a)/(c(a) W(a)) = C(a ↔ Z)/c(a). □

Remark 3.18. We can now reduce the problem of identifying recurrence and transience of the simple random walk on Z^d to calculating the effective resistance between the origin as the source and points infinitely far away as the sink. In our discussion of simple random walks on Z^d, we would like Z to be the boundary {x ∈ V : x ∼ y for some y ∈ Z^d ∖ V} of a large but finite subset V ⊆ Z^d. In this way, we can approximate a random walker who proceeds infinitely far away from the origin in the following manner: move Z farther away from the origin and consider whether the walker starting from the origin hits Z in finite time. Our formulation is to set Z_n := {x ∈ Z^d : |x|₁ = n} for n ∈ N, and let n grow to infinity.

Since a random walker starting from the origin must first visit Z_n in order to visit Z_{n+1}, we have {τ_{Z_{n+1}} < τ_0⁺} ⊆ {τ_{Z_n} < τ_0⁺} for every positive integer n. Hence, (P_0{τ_{Z_n} < τ_0⁺})_{n∈N} is a nonnegative decreasing sequence and lim_{n→∞} P_0{τ_{Z_n} < τ_0⁺} exists. We consider the following two possibilities:

(1) If lim_{n→∞} P_0{τ_{Z_n} < τ_0⁺} = 0, then the random walk is recurrent. First, we claim that P_0{τ_{Z_n} = ∞} = 0. Since the region bounded by Z_n contains finitely many points, an application of the pigeonhole principle tells us that for the simple random walk starting from the origin, there always exists some y ∈ Z^d with |y|₁ < n such that the random walker visits y infinitely often. But for any such y, since the minimum distance between y and Z_n is n − |y|₁,
    P_y{τ_y⁺ > τ_{Z_n}} ≥ P_y{τ_{Z_n} = n − |y|₁} ≥ (2d)^{−(n−|y|₁)} > 0.
Hence,
    P_y{τ_{Z_n} = ∞} ≤ lim_{k→∞} (P_y{τ_y⁺ < τ_{Z_n}})^k = 0.
This implies P_0{τ_{Z_n} = ∞} = 0. That is, conditioned on the event {τ_0⁺ = ∞}, we have τ_{Z_n} < τ_0⁺ with probability 1 for every n ∈ N. Therefore,
    P_0{τ_0⁺ = ∞} ≤ P_0{τ_{Z_n} < τ_0⁺} → 0 as n → ∞,
and the random walk is recurrent.

(2) If lim_{n→∞} P_0{τ_{Z_n} < τ_0⁺} > 0, then the random walk is transient. For a random walker starting at the origin, τ_{Z_n} → ∞ as n → ∞ because τ_{Z_n} ≥ n. Hence, ∩_n {τ_{Z_n} < τ_0⁺} ⊆ {τ_0⁺ = ∞}. This implies that
    P_0{τ_0⁺ = ∞} ≥ P_0{∩_n {τ_{Z_n} < τ_0⁺}} = lim_{n→∞} P_0{τ_{Z_n} < τ_0⁺} > 0.
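Proposition 3.17 can be verified numerically on a small network. The following sketch (an arbitrary 4-vertex example, not from the paper) computes the voltage by solving the harmonic equations of Proposition 3.5, reads off R(a ↔ Z) = W(a)/div I(a), and independently computes the escape probability P_a{τ_Z < τ_a⁺} from the hitting probabilities h(x) = P_x{τ_Z < τ_a}:

```python
import numpy as np

# Hypothetical network: vertices {0,1,2,3}, source a = 0, sink Z = {3},
# with the edge conductances below (an arbitrary illustration).
c = {(0, 1): 1.0, (0, 2): 2.0, (1, 2): 1.0, (1, 3): 3.0, (2, 3): 1.0}
n, a, z = 4, 0, 3
C = np.zeros((n, n))
for (x, y), w in c.items():
    C[x, y] = C[y, x] = w
deg = C.sum(axis=1)                  # c(x) = sum of conductances at x
P = C / deg[:, None]                 # weighted random walk: P(x,y) = c(x,y)/c(x)

interior = [1, 2]                    # V minus the boundary {a, z}
A = np.eye(len(interior)) - P[np.ix_(interior, interior)]

# Voltage: harmonic on the interior, with W(a) = 1 and W(z) = 0.
W = np.zeros(n)
W[a] = 1.0
W[interior] = np.linalg.solve(A, P[interior, a] * W[a])
div_I_a = sum(C[a, y] * (W[a] - W[y]) for y in range(n))
R_eff = W[a] / div_I_a               # R(a <-> Z) = W(a)/div I(a)

# Hitting probabilities h(x) = P_x{tau_Z < tau_a}: h(a) = 0, h(z) = 1.
h = np.zeros(n)
h[z] = 1.0
h[interior] = np.linalg.solve(A, P[interior, z] * h[z])
escape = sum(P[a, x] * h[x] for x in range(n))   # P_a{tau_Z < tau_a^+}

# Proposition 3.17: escape equals 1/(c(a) * R(a <-> Z)).
```

Changing the conductances changes W, R_eff, and escape, but the identity P_a{τ_Z < τ_a⁺} = 1/(c(a) R(a ↔ Z)) continues to hold.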

The following proposition is useful for finding an upper bound on lim_{n→∞} R(0 ↔ Z_n), hence a lower bound on lim_{n→∞} P_0{τ_{Z_n} < τ_0⁺}.

Proposition 3.19 (Thomson's Principle). Define the energy of a flow θ from a to Z by
    E(θ) := Σ_{e∈E} r(e) θ(e)² = (1/2) Σ_{x,y∈V, x∼y} r(x, y) θ(x, y)².
(The ambiguity in the sign of θ(e) does not cause a problem because it is squared.) Then
    R(a ↔ Z) = E(I) = inf{E(θ) : θ a unit flow from a to Z},
where I is the unit current flow from a to Z.

Proof. Fix a direction on each edge, so that we may consider a flow from a to Z as a real-valued function on E. Let Φ be the real Hilbert space of functions from E to R with
    ⟨φ_1, φ_2⟩ := Σ_{e∈E} r(e) φ_1(e) φ_2(e)
as the inner product. For every x ∈ V ∖ ∂V, let f_x : Φ → R be the linear map defined by f_x(φ) = div φ(x). If we denote by Θ ⊆ Φ the subset of flows from a to Z, then Θ is the intersection of the kernels ker f_x for x ∈ V ∖ ∂V with the subspace of functions vanishing on edges connecting two vertices of Z. This shows that Θ is a subspace of Φ. Moreover, for any θ ∈ Θ, E(θ) = ⟨θ, θ⟩ = ‖θ‖².

Let I ∈ Θ be the unit current flow from a to Z. For any φ ∈ Θ,
    ⟨I, φ⟩ = (1/2) Σ_{x,y∈V, x∼y} r(x, y) I(x, y) φ(x, y)
           = (1/2) Σ_{x,y∈V, x∼y} (W(x) − W(y)) φ(x, y)
           = (1/2) Σ_{x∈V} W(x) Σ_{y∼x} φ(x, y) − (1/2) Σ_{y∈V} W(y) Σ_{x∼y} φ(x, y)
           = Σ_{x∈V} W(x) div φ(x) = W(a) div φ(a),
where the last equality holds because div φ vanishes on V ∖ ∂V and W vanishes on Z.

For any unit flow θ ∈ Θ, we have θ − I ∈ Θ and div(θ − I)(a) = 0. So ⟨I, θ − I⟩ = 0 and
    E(θ) = ‖I + (θ − I)‖² = ‖I‖² + 2⟨I, θ − I⟩ + ‖θ − I‖² = E(I) + ‖θ − I‖² ≥ E(I).
In particular, E(θ) = E(I) if and only if θ = I. Finally, since div I(a) = 1 for the unit current flow,
    E(I) = ⟨I, I⟩ = W(a) div I(a) = W(a)/div I(a) = R(a ↔ Z). □

Corollary 3.20. The simple random walk on Z^d is transient for d ≥ 3.

Proof. Since Z_n is finite for every n ∈ N, we can apply Thomson's Principle to find an upper bound for R(0 ↔ Z_n). We construct a unit flow θ from 0 to Z_n based on a stochastic process known as Polya's urn. The idea is that the

random walker starts from the origin, and at time t the random walker is on
    Z_t⁺ := {x ∈ Z^d : |x|₁ = t and all coordinates of x are nonnegative}.
The walker moves from X_t = x ∈ Z_t⁺ to X_{t+1} ∈ Z_{t+1}⁺ by taking the edge in the direction of the ith unit vector with probability proportional to x_i + 1, where x_i is the ith coordinate of x. Finally, the flow on an edge is defined to be the probability that the random walker passes through it.

Let us now construct the flow θ formally. It shall be constructed so that the flow is nonzero only between points whose coordinates are all nonnegative. Let x, y be points in Z^d with |x|₁, |y|₁ ≤ n. We define θ(x, y) = 0 if either x or y contains a negative coordinate. Clearly, div θ(x) = 0 whenever x ∈ Z^d with |x|₁ < n and x has a negative coordinate. This leaves the edges between points of the form x = (x_1, …, x_d), where x_1, …, x_d ≥ 0 and x_1 + ⋯ + x_d ≤ n − 1, and y = (x_1, …, x_{i−1}, x_i + 1, x_{i+1}, …, x_d) = x + u_i for some 1 ≤ i ≤ d, where u_i is the unit vector in the ith coordinate.

Let Z⁺_{≤n} := ∪_{0≤m≤n} Z_m⁺. Define a function φ : Z⁺_{≤n} → [0, 1] whose values satisfy
    φ(x) = 1 if x = 0;
    φ(x) = Σ_{i: x_i ≥ 1} φ(x − u_i) · x_i/(|x|₁ + d − 1) otherwise.
Now, we construct the flow such that for every x ∈ Z_m⁺ with m < n and 1 ≤ i ≤ d,
    θ(x, x + u_i) = φ(x) · (x_i + 1)/(|x|₁ + d).
This determines all values of θ for the edges between points in {x ∈ Z^d : |x|₁ ≤ n}. θ is a unit flow from 0 to Z_n because
    div θ(0) = Σ_{i=1}^{d} θ(0, u_i) = Σ_{i=1}^{d} 1/d = 1,
and for every x ∈ Z_m⁺ where 0 < m < n,
    div θ(x) = Σ_{i=1}^{d} θ(x, x + u_i) − Σ_{i: x_i ≥ 1} θ(x − u_i, x)
             = φ(x) Σ_{i=1}^{d} (x_i + 1)/(|x|₁ + d) − Σ_{i: x_i ≥ 1} φ(x − u_i) · x_i/(|x|₁ − 1 + d)
             = φ(x) · (|x|₁ + d)/(|x|₁ + d) − φ(x) = 0.

We claim that for every 0 ≤ m ≤ n, φ is uniform on Z_m⁺. The proof is by induction on m. The case m = 0 is trivial, because Z_0⁺ is a singleton set. Suppose the claim is true for m = k for some integer k ≥ 0, so that for every y ∈ Z_k⁺,

$\varphi(y) = c_k$. Then for every nonnegative $x \in Z_{k+1}$,
$$\varphi(x) = \sum_{i \,:\, x_i \ne 0} \frac{x_i}{k+d}\,\varphi(x - u_i) = c_k \sum_{i \,:\, x_i \ne 0} \frac{x_i}{k+d} = c_k\,\frac{k+1}{k+d}.$$
In fact, we see that
$$c_m = \prod_{k=0}^{m-1} \frac{k+1}{k+d} = \frac{m!\,(d-1)!}{(m+d-1)!} = \binom{d-1+m}{m}^{-1}.$$

Suppose that the uniform resistance between immediate neighbors of $\mathbb{Z}^d$ is $r > 0$ (the specific value of $r$ does not matter as long as it is positive), and consider the network with node set $V = \bigcup_{m=0}^{n} Z_m$ and sink set $Z = Z_n$. Since the number of points of $Z_m$ with all coordinates nonnegative is $\binom{d-1+m}{m}$, and since $\theta(x, x+u_i) \le \varphi(x)$, we obtain
$$R(0 \leftrightarrow Z_n) \le E(\theta) = \sum_{e \in E} r(e)\theta(e)^2 = r\sum_{m=0}^{n-1}\sum_{x \in Z_m}\sum_{i=1}^{d}\theta(x, x+u_i)^2 \le rd\sum_{m=0}^{n-1}\sum_{x \in Z_m}\varphi(x)^2 = rd\sum_{m=0}^{n-1}\binom{d-1+m}{m}^{-1}.$$
Since
$$\binom{d-1+m}{m} = \frac{(m+1)(m+2)\cdots(m+d-1)}{(d-1)!},$$
the series $\sum_{m=0}^{\infty}\binom{d-1+m}{m}^{-1}$ is convergent for $d \ge 3$ by the limit comparison test with $\sum_{m=1}^{\infty} m^{-(d-1)}$. Therefore,
$$0 \le \lim_{n\to\infty} R(0 \leftrightarrow Z_n) \le rd\sum_{m=0}^{\infty}\binom{d-1+m}{m}^{-1} < \infty.$$
By Proposition 3.17 (recall that $c(0) = 2d/r$, as the node $0$ has $2d$ incident edges of conductance $1/r$),
$$\lim_{n\to\infty} P_0\{\tau_{Z_n} < \tau_0^+\} = \lim_{n\to\infty}\frac{1}{c(0)\,R(0 \leftrightarrow Z_n)} = \frac{r}{2d\,\lim_{n\to\infty} R(0 \leftrightarrow Z_n)} > 0.$$
We refer to Remark 3.18 to conclude that the simple random walk on $\mathbb{Z}^d$ is transient for $d \ge 3$. $\square$

Remark 3.21. The above proof provides an upper bound on the growth rate of $R(0 \leftrightarrow Z_n)$ even when $d = 1$ or $2$. It shows that
$$0 \le R(0 \leftrightarrow Z_n) \le rd\sum_{m=0}^{n-1}\binom{d-1+m}{m}^{-1}.$$
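As a numerical aside (not part of the paper), the series bound above can be tabulated directly; the helper name below is ours, and we take $r = 1$.

```python
from math import comb, log

def resistance_upper_bound(d: int, n: int, r: float = 1.0) -> float:
    """Evaluate the bound r*d*sum_{m=0}^{n-1} 1/C(d-1+m, m) on R(0 <-> Z_n)
    obtained from the Polya-urn flow and Thomson's Principle."""
    return r * d * sum(1 / comb(d - 1 + m, m) for m in range(n))

# d = 3: the partial sums stabilize below r*d*2 = 6, reflecting transience.
for n in (10, 100, 1000):
    print(n, resistance_upper_bound(3, n))

# d = 1 and d = 2: the bound grows like n and like 2(1 + log n) respectively.
for n in (10, 100, 1000):
    print(n, resistance_upper_bound(1, n), resistance_upper_bound(2, n), 2 * (1 + log(n)))
```

For $d = 3$ the printed values increase toward $6r$, while for $d = 1, 2$ they grow without bound, matching the dichotomy in the surrounding text.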

For $d = 1$, $\binom{d-1+m}{m} = 1$, so $R(0 \leftrightarrow Z_n) \le nr$. For $d = 2$, $\binom{d-1+m}{m} = m+1$, so
$$R(0 \leftrightarrow Z_n) \le 2r\sum_{m=0}^{n-1}\frac{1}{m+1} \le 2(1 + \log n)r.$$
Hence,
$$P_0\{\tau_{Z_n} < \tau_0^+\} = \frac{1}{c(0)\,R(0 \leftrightarrow Z_n)} \ge \begin{cases} 1/2n & \text{if } d = 1, \\ 1/8(1+\log n) & \text{if } d = 2. \end{cases}$$

To find a lower bound on the growth rate of $R(0 \leftrightarrow Z_n)$, the following proposition is useful.

Proposition 3.22 (Nash-Williams Inequality). Call a collection of edges $\Pi \subseteq E$ an edge-cutset separating $a$ from $Z$ if every path from $a$ to $Z$ includes some edge in $\Pi$. If $\{\Pi_k\}$ is a family of disjoint edge-cutsets separating $a$ from $Z$ such that $\Pi_k$ is finite for every index $k$, then
$$R(a \leftrightarrow Z) \ge \sum_{k}\Big(\sum_{e \in \Pi_k} c(e)\Big)^{-1}.$$

Proof. Let $I$ be the unit current flow from $a$ to $Z$. For any index $k$, due to the Cauchy-Schwarz inequality,
$$\Big(\sum_{e \in \Pi_k} c(e)\Big)\Big(\sum_{e \in \Pi_k} r(e)I(e)^2\Big) \ge \Big(\sum_{e \in \Pi_k} |I(e)|\Big)^2.$$
Let $V_k$ be the set of nodes that can be reached from $a$ without passing through any edge in $\Pi_k$ (including $a$ itself). Also let $\partial V_k \subseteq V_k$ be the subset of nodes that lie at one end of some edge in $\Pi_k$: if $x \in \partial V_k$, then $(x,y) \in \Pi_k$ for some $y \in V$. Then, due to the antisymmetry of $I$, a discrete version of the divergence theorem applies: the contributions of $I$ over edges joining two nodes of $V_k$ cancel in pairs, every edge leaving $V_k$ belongs to $\Pi_k$, and $\operatorname{div} I(x) = 0$ for $x \in V_k \setminus \{a\}$, so
$$\sum_{e \in \Pi_k} |I(e)| \ge \sum_{\substack{x \in \partial V_k,\ y \notin V_k \\ (x,y) \in \Pi_k}} I(x,y) = \sum_{x \in V_k} \operatorname{div} I(x) = \operatorname{div} I(a) = 1.$$
By Thomson's Principle (Proposition 3.19), since the cutsets are disjoint,
$$R(a \leftrightarrow Z) = E(I) = \sum_{e \in E} r(e)I(e)^2 \ge \sum_{k}\sum_{e \in \Pi_k} r(e)I(e)^2 \ge \sum_{k}\Big(\sum_{e \in \Pi_k} c(e)\Big)^{-1}. \qquad \square$$

Corollary 3.23. The simple random walks on $\mathbb{Z}$ and $\mathbb{Z}^2$ are null recurrent.

Proof. Let $n$ be an arbitrary positive integer. For every $0 \le k < n$, we choose the edge-cutset $\Pi_k$ separating $0$ from $Z_n$ to be the set of all edges between points in $Z_k$ and points in $Z_{k+1}$. Again, suppose that the uniform resistance of the edges is $r > 0$.
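As a sanity check on the quantities about to be bounded, the escape probability $P_0\{\tau_{Z_n} < \tau_0^+\}$ can be estimated by direct simulation. The sketch below is only an illustration; the function name, trial count, and seed are our choices, not from the paper.

```python
import random

def escape_probability(d: int, n: int, trials: int = 20000, seed: int = 0) -> float:
    """Monte Carlo estimate of P_0{tau_{Z_n} < tau_0^+}: the probability that a
    simple random walk on Z^d reaches l1-distance n before returning to 0."""
    rng = random.Random(seed)
    escapes = 0
    for _ in range(trials):
        x = [0] * d
        while True:
            i = rng.randrange(d)            # choose a coordinate uniformly
            x[i] += rng.choice((-1, 1))     # step +-1 along it
            dist = sum(abs(c) for c in x)
            if dist == 0:                   # returned to the origin first
                break
            if dist >= n:                   # reached Z_n first
                escapes += 1
                break
    return escapes / trials

print(escape_probability(1, 10))  # near 1/10: the exact value is 1/n by gambler's ruin
```

For $d = 1$ the estimate sits near $1/n$, consistent with $P_0\{\tau_{Z_n} < \tau_0^+\} = \Theta(1/n)$; running the same function with $d = 2$ over several $n$ exhibits the much slower $\Theta(1/\log n)$ decay.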

$d = 1$: For the one-dimensional case, $|\Pi_k| = 2$. Then by the Nash-Williams Inequality,
$$R(0 \leftrightarrow Z_n) \ge \sum_{k=0}^{n-1}\Big(\sum_{e \in \Pi_k}\frac{1}{r}\Big)^{-1} = \sum_{k=0}^{n-1}\frac{r}{2} = \frac{nr}{2}.$$
Combining with the result from Remark 3.21,
$$\frac{nr}{2} \le R(0 \leftrightarrow Z_n) \le nr \quad\text{and}\quad P_0\{\tau_{Z_n} < \tau_0^+\} = \Theta\Big(\frac{1}{n}\Big).$$

$d = 2$: For the two-dimensional case, $|\Pi_k| = 4(2k+1)$. By the Nash-Williams Inequality,
$$R(0 \leftrightarrow Z_n) \ge \sum_{k=0}^{n-1}\frac{r}{4(2k+1)} \ge \frac{r}{8}\log(2n+1).$$
Combining with the result from Remark 3.21,
$$\frac{r}{8}\log(2n+1) \le R(0 \leftrightarrow Z_n) \le 2r(1 + \log n).$$
An application of L'Hôpital's rule shows $\lim_{n\to\infty}(1 + \log n)/\log(2n+1) = 1$, so
$$P_0\{\tau_{Z_n} < \tau_0^+\} = \Theta\Big(\frac{1}{\log n}\Big).$$
We refer to Remark 3.18 to conclude that the simple random walks on $\mathbb{Z}$ and $\mathbb{Z}^2$ are both recurrent.

Let us now prove that the simple random walks on $\mathbb{Z}$ and $\mathbb{Z}^2$ are not positive recurrent. For every $n \in \mathbb{N}$, the random walker starting from the origin must visit $Z_n$ before visiting $Z_{n+1}$, so $\tau_{Z_n} < \tau_{Z_{n+1}}$. Thus
$$\{\tau_{Z_1} < \tau_0^+\} \supseteq \{\tau_{Z_2} < \tau_0^+\} \supseteq \{\tau_{Z_3} < \tau_0^+\} \supseteq \cdots,$$
and
$$P_0\{\tau_{Z_n} < \tau_0^+ < \tau_{Z_{n+1}}\} = P_0\{\tau_{Z_n} < \tau_0^+\} - P_0\{\tau_{Z_{n+1}} < \tau_0^+\}.$$
Also, $\tau_0^+ \ge 2n$ on the event $\{\tau_{Z_n} < \tau_0^+\}$, since for a random walker starting from the origin it takes at least $n$ steps to reach $Z_n$ and at least $n$ further steps to return to the origin. Since the events $\{\tau_{Z_n} < \tau_0^+ < \tau_{Z_{n+1}}\}$, $n \in \mathbb{N}$, are disjoint,
$$E_0[\tau_0^+] \ge \sum_{n=1}^{\infty} E_0\big[\tau_0^+\,\mathbf{1}\{\tau_{Z_n} < \tau_0^+ < \tau_{Z_{n+1}}\}\big] \ge \sum_{n=1}^{\infty} 2n\big(P_0\{\tau_{Z_n} < \tau_0^+\} - P_0\{\tau_{Z_{n+1}} < \tau_0^+\}\big)$$
$$= \lim_{N\to\infty}\sum_{n=1}^{N} 2n\big(P_0\{\tau_{Z_n} < \tau_0^+\} - P_0\{\tau_{Z_{n+1}} < \tau_0^+\}\big) = \lim_{N\to\infty}\sum_{n=1}^{N} 2\big(P_0\{\tau_{Z_n} < \tau_0^+\} - P_0\{\tau_{Z_{N+1}} < \tau_0^+\}\big).$$
For each $N$, choose $M_N$ to be the largest integer such that $P_0\{\tau_{Z_{M_N}} < \tau_0^+\} \ge 2P_0\{\tau_{Z_{N+1}} < \tau_0^+\}$. Since $P_0\{\tau_{Z_n} < \tau_0^+\} \to 0$ as $n \to \infty$ for both dimensions,

$M_N \to \infty$ as $N \to \infty$. For every $n \le M_N$ we have $P_0\{\tau_{Z_n} < \tau_0^+\} - P_0\{\tau_{Z_{N+1}} < \tau_0^+\} \ge \frac{1}{2}P_0\{\tau_{Z_n} < \tau_0^+\}$, so
$$\lim_{N\to\infty}\sum_{n=1}^{N} 2\big(P_0\{\tau_{Z_n} < \tau_0^+\} - P_0\{\tau_{Z_{N+1}} < \tau_0^+\}\big) \ge \lim_{N\to\infty}\sum_{n=1}^{M_N} P_0\{\tau_{Z_n} < \tau_0^+\} = \sum_{n=1}^{\infty} P_0\{\tau_{Z_n} < \tau_0^+\} = \infty,$$
where the last series diverges because $P_0\{\tau_{Z_n} < \tau_0^+\}$ is $\Theta(1/n)$ for $d = 1$ and $\Theta(1/\log n)$ for $d = 2$. Hence, $E_0[\tau_0^+] = \infty$ for both dimensions. $\square$

Remark 3.24. In fact, we can use the Nash-Williams Inequality to find a lower bound on $R(0 \leftrightarrow Z_n)$ for $d \ge 3$ as well. The vertices in $Z_k$ with all coordinates nonzero each share edges with exactly $d$ vertices in $Z_{k+1}$. The vertices in $Z_k$ that have at least one zero coordinate share edges with at most $2d$ vertices in $Z_{k+1}$. But we count the latter kind of vertices more than once if we multiply the count $\binom{d-1+k}{k}$ of nonnegative points of $Z_k$ by $2^d$, so
$$|\Pi_k| \le d\,2^d\binom{d-1+k}{k} \quad\text{and}\quad R(0 \leftrightarrow Z_n) \ge \sum_{k=0}^{n-1}\frac{r}{d\,2^d}\binom{d-1+k}{k}^{-1}.$$
Combining this with the result from Corollary 3.20,
$$\frac{r}{d\,2^d}\sum_{k=0}^{n-1}\binom{d-1+k}{k}^{-1} \le R(0 \leftrightarrow Z_n) \le rd\sum_{k=0}^{n-1}\binom{d-1+k}{k}^{-1}.$$
Since we calculated $\binom{d-1+k}{k}^{-1} = \Theta(1/k^{d-1})$ in Corollary 3.20, we conclude that $R(0 \leftrightarrow Z_n)$ grows on the order of $\sum_{k=1}^{n} k^{-(d-1)}$, approaching its finite limit at rate $\Theta(n^{2-d})$.

Remark 3.25. Since the simple random walks on $\mathbb{Z}^d$ are either null recurrent or transient for every dimension $d$, there do not exist any stationary distributions for these chains.

Acknowledgments. I wish to thank my mentor Peter Morfe for his guidance and support throughout the REU. His help was crucial during every stage of writing this paper. I would also like to thank Professor Peter May for his dedication to this program.

References

[1] David A. Levin, Yuval Peres, and Elizabeth L. Wilmer. Markov Chains and Mixing Times. American Mathematical Society.
[2] Peter G. Doyle and J. Laurie Snell. Random Walks and Electrical Networks.
[3] S. R. S. Varadhan. Probability Theory. American Mathematical Society, 2001.
[4] Greg Lawler. Notes on Probability. lawler/probnotes.pdf
[5] Avner Friedman. Foundations of Modern Analysis. Dover, 1982.


Introduction to Real Analysis Alternative Chapter 1 Christopher Heil Introduction to Real Analysis Alternative Chapter 1 A Primer on Norms and Banach Spaces Last Updated: March 10, 2018 c 2018 by Christopher Heil Chapter 1 A Primer on Norms and Banach Spaces

More information

Capacitary inequalities in discrete setting and application to metastable Markov chains. André Schlichting

Capacitary inequalities in discrete setting and application to metastable Markov chains. André Schlichting Capacitary inequalities in discrete setting and application to metastable Markov chains André Schlichting Institute for Applied Mathematics, University of Bonn joint work with M. Slowik (TU Berlin) 12

More information

A D VA N C E D P R O B A B I L - I T Y

A D VA N C E D P R O B A B I L - I T Y A N D R E W T U L L O C H A D VA N C E D P R O B A B I L - I T Y T R I N I T Y C O L L E G E T H E U N I V E R S I T Y O F C A M B R I D G E Contents 1 Conditional Expectation 5 1.1 Discrete Case 6 1.2

More information

SMSTC (2007/08) Probability.

SMSTC (2007/08) Probability. SMSTC (27/8) Probability www.smstc.ac.uk Contents 12 Markov chains in continuous time 12 1 12.1 Markov property and the Kolmogorov equations.................... 12 2 12.1.1 Finite state space.................................

More information

Markov Chains Handout for Stat 110

Markov Chains Handout for Stat 110 Markov Chains Handout for Stat 0 Prof. Joe Blitzstein (Harvard Statistics Department) Introduction Markov chains were first introduced in 906 by Andrey Markov, with the goal of showing that the Law of

More information

Math 456: Mathematical Modeling. Tuesday, March 6th, 2018

Math 456: Mathematical Modeling. Tuesday, March 6th, 2018 Math 456: Mathematical Modeling Tuesday, March 6th, 2018 Markov Chains: Exit distributions and the Strong Markov Property Tuesday, March 6th, 2018 Last time 1. Weighted graphs. 2. Existence of stationary

More information

Lecture: Modeling graphs with electrical networks

Lecture: Modeling graphs with electrical networks Stat260/CS294: Spectral Graph Methods Lecture 16-03/17/2015 Lecture: Modeling graphs with electrical networks Lecturer: Michael Mahoney Scribe: Michael Mahoney Warning: these notes are still very rough.

More information

Two Proofs of Commute Time being Proportional to Effective Resistance

Two Proofs of Commute Time being Proportional to Effective Resistance Two Proofs of Commute Time being Proportional to Effective Resistance Sandeep Nuckchady Report for Prof. Peter Winkler February 17, 2013 Abstract Chandra et al. have shown that the effective resistance,

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information

Markov Chains. Andreas Klappenecker by Andreas Klappenecker. All rights reserved. Texas A&M University

Markov Chains. Andreas Klappenecker by Andreas Klappenecker. All rights reserved. Texas A&M University Markov Chains Andreas Klappenecker Texas A&M University 208 by Andreas Klappenecker. All rights reserved. / 58 Stochastic Processes A stochastic process X tx ptq: t P T u is a collection of random variables.

More information

3 (Due ). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure?

3 (Due ). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure? MA 645-4A (Real Analysis), Dr. Chernov Homework assignment 1 (Due ). Show that the open disk x 2 + y 2 < 1 is a countable union of planar elementary sets. Show that the closed disk x 2 + y 2 1 is a countable

More information

THEOREMS, ETC., FOR MATH 515

THEOREMS, ETC., FOR MATH 515 THEOREMS, ETC., FOR MATH 515 Proposition 1 (=comment on page 17). If A is an algebra, then any finite union or finite intersection of sets in A is also in A. Proposition 2 (=Proposition 1.1). For every

More information

ALGEBRAIC GEOMETRY COURSE NOTES, LECTURE 2: HILBERT S NULLSTELLENSATZ.

ALGEBRAIC GEOMETRY COURSE NOTES, LECTURE 2: HILBERT S NULLSTELLENSATZ. ALGEBRAIC GEOMETRY COURSE NOTES, LECTURE 2: HILBERT S NULLSTELLENSATZ. ANDREW SALCH 1. Hilbert s Nullstellensatz. The last lecture left off with the claim that, if J k[x 1,..., x n ] is an ideal, then

More information

Powerful tool for sampling from complicated distributions. Many use Markov chains to model events that arise in nature.

Powerful tool for sampling from complicated distributions. Many use Markov chains to model events that arise in nature. Markov Chains Markov chains: 2SAT: Powerful tool for sampling from complicated distributions rely only on local moves to explore state space. Many use Markov chains to model events that arise in nature.

More information

Signed Measures. Chapter Basic Properties of Signed Measures. 4.2 Jordan and Hahn Decompositions

Signed Measures. Chapter Basic Properties of Signed Measures. 4.2 Jordan and Hahn Decompositions Chapter 4 Signed Measures Up until now our measures have always assumed values that were greater than or equal to 0. In this chapter we will extend our definition to allow for both positive negative values.

More information

Chapter 2: Linear Independence and Bases

Chapter 2: Linear Independence and Bases MATH20300: Linear Algebra 2 (2016 Chapter 2: Linear Independence and Bases 1 Linear Combinations and Spans Example 11 Consider the vector v (1, 1 R 2 What is the smallest subspace of (the real vector space

More information

REAL AND COMPLEX ANALYSIS

REAL AND COMPLEX ANALYSIS REAL AND COMPLE ANALYSIS Third Edition Walter Rudin Professor of Mathematics University of Wisconsin, Madison Version 1.1 No rights reserved. Any part of this work can be reproduced or transmitted in any

More information

Notes 6 : First and second moment methods

Notes 6 : First and second moment methods Notes 6 : First and second moment methods Math 733-734: Theory of Probability Lecturer: Sebastien Roch References: [Roc, Sections 2.1-2.3]. Recall: THM 6.1 (Markov s inequality) Let X be a non-negative

More information