
Markov Chains. Andreas Klappenecker, Texas A&M University. © 2018 by Andreas Klappenecker. All rights reserved.

Stochastic Processes

A stochastic process $X = \{X(t) : t \in T\}$ is a collection of random variables. The index $t$ usually represents time. We call $X(t)$ the state of the process at time $t$. If $T$ is countably infinite, then we call $X$ a discrete time process. We will mainly choose $T$ to be the set of nonnegative integers.

Markov Chains

Definition. A discrete time process $X = \{X_0, X_1, X_2, X_3, \dots\}$ is called a Markov chain if and only if the state at time $t$ depends only on the state at time $t-1$. More precisely, the transition probabilities satisfy
$$\Pr[X_t = a_t \mid X_{t-1} = a_{t-1}, \dots, X_0 = a_0] = \Pr[X_t = a_t \mid X_{t-1} = a_{t-1}]$$
for all values $a_0, a_1, \dots, a_t$ and all $t \ge 1$.

In other words, Markov chains are memoryless discrete time processes. This means that the current state (at time $t-1$) is sufficient to determine the probability distribution of the next state (at time $t$). All knowledge of the past states is contained in the current state.

Homogeneous Markov Chains

Definition. A Markov chain is called homogeneous if and only if the transition probabilities are independent of the time $t$, that is, there exist constants $P_{i,j}$ such that
$$P_{i,j} = \Pr[X_t = j \mid X_{t-1} = i]$$
holds for all times $t$.

Assumption. We will assume that Markov chains are homogeneous unless stated otherwise.

Discrete State Space

Definition. We say that a Markov chain has a discrete state space if and only if the set of values of the random variables is countably infinite, say $\{v_0, v_1, v_2, \dots\}$. For ease of presentation we will assume that the discrete state space is given by the set of nonnegative integers $\{0, 1, 2, \dots\}$.

Finite State Space

Definition. We say that a Markov chain is finite if and only if the set of values of the random variables is a finite set $\{v_0, v_1, v_2, \dots, v_{n-1}\}$. For ease of presentation we will assume that finite Markov chains have values in $\{0, 1, 2, \dots, n-1\}$.

Transition Probabilities

The transition probabilities
$$P_{i,j} = \Pr[X_t = j \mid X_{t-1} = i]$$
determine the Markov chain. The transition matrix
$$P = (P_{i,j}) = \begin{pmatrix} P_{0,0} & P_{0,1} & \cdots & P_{0,j} & \cdots \\ P_{1,0} & P_{1,1} & \cdots & P_{1,j} & \cdots \\ \vdots & \vdots & & \vdots & \\ P_{i,0} & P_{i,1} & \cdots & P_{i,j} & \cdots \\ \vdots & \vdots & & \vdots & \end{pmatrix}$$
comprises all transition probabilities.

Multiple Step Transition Probabilities

For any $m \ge 0$, we define the $m$-step transition probability
$$P^m_{i,j} = \Pr[X_{t+m} = j \mid X_t = i].$$
This is the probability that the chain moves from state $i$ to state $j$ in exactly $m$ steps. If $P = (P_{i,j})$ denotes the transition matrix, then the $m$-step transition matrix is given by $(P^m_{i,j}) = P^m$.

Small Example

Example. For
$$P = \begin{pmatrix} 0 & 1/4 & 0 & 3/4 \\ 1/2 & 0 & 1/3 & 1/6 \\ 0 & 0 & 1 & 0 \\ 0 & 1/2 & 1/4 & 1/4 \end{pmatrix},$$
every row of $P^{20}$ is already close to $(0, 0, 1, 0)$: the entries in column 2 are larger than $0.97$, and all other entries are smaller than $0.02$. Almost all of the probability mass ends up in state 2, which the chain can never leave (the sketch below reproduces the computation).
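A minimal sketch of this computation, assuming NumPy is available; the matrix is the one above, nothing else is assumed:

```python
import numpy as np

# Transition matrix of the small example; state 2 is absorbing.
P = np.array([
    [0,   1/4, 0,   3/4],
    [1/2, 0,   1/3, 1/6],
    [0,   0,   1,   0  ],
    [0,   1/2, 1/4, 1/4],
])

# The m-step transition matrix is the m-th matrix power of P.
P20 = np.linalg.matrix_power(P, 20)
print(np.round(P20, 4))  # every row is close to (0, 0, 1, 0)
```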

Graphical Representation

A Markov chain with state space $V$ and transition matrix $P$ can be represented by a labeled directed graph $G = (V, E)$, where the edges are given by the transitions with nonzero probability,
$$E = \{(u, v) : P_{u,v} > 0\}.$$
The edge $(u, v)$ is labeled by the probability $P_{u,v}$. Self-loops are allowed in these directed graphs, since we might have $P_{u,u} > 0$.

Example of a Graphical Representation

$$P = \begin{pmatrix} 0 & 1/4 & 0 & 3/4 \\ 1/2 & 0 & 1/3 & 1/6 \\ 0 & 0 & 1 & 0 \\ 0 & 1/2 & 1/4 & 1/4 \end{pmatrix}$$
The corresponding graph has vertices $0, 1, 2, 3$ and labeled edges $0 \to 1$ ($1/4$), $0 \to 3$ ($3/4$), $1 \to 0$ ($1/2$), $1 \to 2$ ($1/3$), $1 \to 3$ ($1/6$), $2 \to 2$ ($1$), $3 \to 1$ ($1/2$), $3 \to 2$ ($1/4$), and $3 \to 3$ ($1/4$).
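As a small illustration, the labeled edge set $E = \{(u, v) : P_{u,v} > 0\}$ can be read off mechanically from the matrix; a sketch in plain Python using exact fractions:

```python
from fractions import Fraction as Fr

# The transition matrix above, with exact entries.
P = [
    [Fr(0),    Fr(1, 4), Fr(0),    Fr(3, 4)],
    [Fr(1, 2), Fr(0),    Fr(1, 3), Fr(1, 6)],
    [Fr(0),    Fr(0),    Fr(1),    Fr(0)],
    [Fr(0),    Fr(1, 2), Fr(1, 4), Fr(1, 4)],
]

# Edges are the transitions with nonzero probability.
edges = {(u, v): p for u, row in enumerate(P)
         for v, p in enumerate(row) if p > 0}
for (u, v), p in sorted(edges.items()):
    print(f"{u} -> {v} with probability {p}")
# Note the self-loops (2, 2) and (3, 3).
```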

Irreducible Markov Chains

Accessible States

We say that a state $j$ is accessible from state $i$ if and only if there exists some integer $n \ge 0$ such that $P^n_{i,j} > 0$. If two states $i$ and $j$ are accessible from each other, then we say that they communicate, and we write $i \leftrightarrow j$. In the graph representation of the chain, we have $i \leftrightarrow j$ if and only if there are directed paths from $i$ to $j$ and from $j$ to $i$.

Irreducible Markov Chains

Proposition. The communication relation is an equivalence relation.

By definition, the communication relation is reflexive and symmetric. Transitivity follows by composing paths.

Definition. A Markov chain is called irreducible if and only if all states belong to one communication class. A Markov chain is called reducible if and only if there are two or more communication classes.

Irreducible Markov Chains

Proposition. A finite Markov chain is irreducible if and only if its graph representation is a strongly connected graph.
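A sketch of the corresponding irreducibility test in plain Python: the chain is irreducible exactly when every state is reachable from every state along positive-probability edges (breadth-first search; nothing beyond the standard collections module is assumed):

```python
from collections import deque

def reachable(P, s):
    """Return the set of states reachable from s along edges with P[u][v] > 0."""
    seen, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        for v, p in enumerate(P[u]):
            if p > 0 and v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def is_irreducible(P):
    """A finite chain is irreducible iff its graph is strongly connected."""
    return all(len(reachable(P, s)) == len(P) for s in range(len(P)))

P = [[0, 1/4, 0, 3/4], [1/2, 0, 1/3, 1/6], [0, 0, 1, 0], [0, 1/2, 1/4, 1/4]]
print(is_irreducible(P))  # False: only state 2 is reachable from state 2
```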

Exercise

$$P = \begin{pmatrix} 0 & 1/4 & 0 & 3/4 \\ 1/2 & 0 & 1/3 & 1/6 \\ 0 & 0 & 1 & 0 \\ 0 & 1/2 & 1/4 & 1/4 \end{pmatrix}$$

Question. Is this Markov chain irreducible?

Answer. No, since no other state can be reached from state 2.

Exercise

Now modify the chain so that state 2 is no longer absorbing: the self-loop at state 2 is replaced by an edge from 2 back to 0 with probability 1,
$$P = \begin{pmatrix} 0 & 1/4 & 0 & 3/4 \\ 1/2 & 0 & 1/3 & 1/6 \\ 1 & 0 & 0 & 0 \\ 0 & 1/2 & 1/4 & 1/4 \end{pmatrix}$$

Question. Is this Markov chain irreducible?

Answer. Yes.

Periodic and Aperiodic Markov Chains

Period

Definition. The period $d(k)$ of a state $k$ of a homogeneous Markov chain with transition matrix $P$ is given by
$$d(k) = \gcd\{m \ge 1 : P^m_{k,k} > 0\}.$$
If $d(k) = 1$, then we call the state $k$ aperiodic. A Markov chain is aperiodic if and only if all its states are aperiodic.
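A sketch that computes $d(k)$ numerically by scanning return times up to a cutoff; the bound $2n^2$ is an illustrative assumption that is generous for small examples, since for a finite chain the gcd stabilizes after finitely many terms (NumPy assumed):

```python
from math import gcd
import numpy as np

def period(P, k, max_steps=None):
    """gcd of the return times {m >= 1 : (P^m)[k][k] > 0}, scanned up to a cutoff."""
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    steps = max_steps or 2 * n * n
    d, Pm = 0, np.eye(n)
    for m in range(1, steps + 1):
        Pm = Pm @ P
        if Pm[k, k] > 0:
            d = gcd(d, m)
    return d

P = [[0, 1], [1, 0]]                     # deterministic 2-cycle
print([period(P, k) for k in range(2)])  # [2, 2]: both states have period 2
```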

Exercise

$$P = \begin{pmatrix} 0 & 1/4 & 0 & 3/4 \\ 1/2 & 0 & 1/3 & 1/6 \\ 1 & 0 & 0 & 0 \\ 0 & 1/2 & 1/4 & 1/4 \end{pmatrix}$$

Question. What is the period of each state?

Answer. $d(0) = 1$, $d(1) = 1$, $d(2) = 1$, $d(3) = 1$, so the chain is aperiodic.

Aperiodic Markov Chains

Aperiodicity leads to the following useful result.

Proposition. Suppose that we have an aperiodic Markov chain with finite state space and transition matrix $P$. Then there exists a positive integer $N$ such that $(P^m)_{i,i} > 0$ for all states $i$ and all $m \ge N$.

Before we prove this result, let us explore the claim in an exercise.

Exercise

$$P = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 3/4 & 0 & 0 & 1/4 \end{pmatrix}$$
The chain moves deterministically $0 \to 1 \to 2 \to 3$; from state 3 it returns to state 0 with probability $3/4$ and stays at 3 with probability $1/4$.

Question. What is the smallest number of steps $N_i$ such that $P^m_{i,i} > 0$ for all $m \ge N_i$, for $i \in \{0, 1, 2, 3\}$?

Answer. $N_0 = N_1 = N_2 = 4$: a return to state 0, 1, or 2 consists of one trip around the cycle plus possibly some waiting at state 3, so the possible return times are exactly $4, 5, 6, \dots$ For state 3, the self-loop gives $P^m_{3,3} > 0$ for every $m \ge 1$, so $N_3 = 1$.

Aperiodic Markov Chains

Now back to the general statement.

Proposition. Suppose that we have an aperiodic Markov chain with finite state space and transition matrix $P$. Then there exists a positive integer $N$ such that $(P^m)_{i,i} > 0$ for all states $i$ and all $m \ge N$.

Let us now prove this claim.

Proof. We will use the following fact from number theory.

Lemma. If a subset $A$ of the set of nonnegative integers (1) is closed under addition, $A + A \subseteq A$, and (2) satisfies $\gcd\{a : a \in A\} = 1$, then it contains all but finitely many nonnegative integers; in other words, there exists a positive integer $n$ such that $\{n, n+1, n+2, \dots\} \subseteq A$.
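A quick numerical sanity check of the lemma; the generators 3 and 5 are an illustrative choice with gcd 1, and their additive closure misses only finitely many nonnegative integers:

```python
# Additive closure of {3, 5}: all sums 3a + 5b with a + b >= 1.
A = {3*a + 5*b for a in range(30) for b in range(30) if a + b > 0}
missing = [n for n in range(40) if n not in A]
print(missing)  # [0, 1, 2, 4, 7]; every n >= 8 is contained in A
```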

Proof of the Lemma. Suppose that $A = \{a_1, a_2, a_3, \dots\}$. Since $\gcd A = 1$, there must exist some positive integer $k$ such that $\gcd(a_1, a_2, \dots, a_k) = 1$. Thus, there exist integers $n_1, n_2, \dots, n_k$ such that
$$n_1 a_1 + n_2 a_2 + \cdots + n_k a_k = 1.$$
We can split this sum into a positive part $P$ and a negative part $N$ such that $P - N = 1$. As sums of elements in $A$, both $P$ and $N$ are contained in $A$.

Proof of the Lemma (Continued). Suppose that $n$ is a positive integer such that $n \ge N(N-1)$. We can express $n$ in the form $n = aN + r$ for some integer $a$ and a nonnegative integer $r$ in the range $0 \le r \le N-1$. We must have $a \ge N-1$. Indeed, if $a$ were less than $N-1$, then we would have $n = aN + r < N(N-1)$, contradicting our choice of $n$. Since $P - N = 1$, we can express $n$ in the form
$$n = aN + r = aN + r(P - N) = (a - r)N + rP.$$
Since $a \ge N-1 \ge r$, the factor $(a - r)$ is nonnegative. As $N$ and $P$ are in $A$, we must have $n = (a - r)N + rP \in A$. We can conclude that all sufficiently large integers $n$ are contained in $A$.

Proof of the Proposition. For each state $i$, consider the set $A_i$ of possible return times,
$$A_i = \{m \ge 1 : P^m_{i,i} > 0\}.$$
Since the Markov chain is aperiodic, the state $i$ is aperiodic, so $\gcd A_i = 1$. If $m$ and $m'$ are elements of $A_i$, then $\Pr[X_m = i \mid X_0 = i] > 0$ and $\Pr[X_{m+m'} = i \mid X_m = i] > 0$. Therefore,
$$\Pr[X_{m+m'} = i \mid X_0 = i] \ge \Pr[X_{m+m'} = i \mid X_m = i] \Pr[X_m = i \mid X_0 = i] > 0.$$
So $m + m'$ is an element of $A_i$, and therefore $A_i + A_i \subseteq A_i$. By the lemma, each $A_i$ contains all but finitely many nonnegative integers. Since there are only finitely many states, there exists a positive integer $N$ that works for all the sets $A_i$ simultaneously.

Aperiodic Irreducible Markov Chains

Proposition. Let $X$ be an irreducible and aperiodic Markov chain with finite state space and transition matrix $P$. Then there exists an $M < \infty$ such that $(P^m)_{i,j} > 0$ for all states $i$ and $j$ and all $m \ge M$. In other words, in an irreducible, aperiodic, and finite Markov chain, one can reach each state from each other state in an arbitrary number of steps, with a finite number of exceptions.

Proof. Since the Markov chain is aperiodic, there exists a positive integer $N$ such that $(P^n)_{i,i} > 0$ for all states $i$ and all $n \ge N$. Since $P$ is irreducible, for each pair of states $i$ and $j$ there exists a positive integer $n_{i,j}$ such that $P^{n_{i,j}}_{i,j} > 0$. For $m \ge N + n_{i,j}$ steps, we have
$$\Pr[X_m = j \mid X_0 = i] \ge \Pr[X_m = j \mid X_{m - n_{i,j}} = i] \cdot \Pr[X_{m - n_{i,j}} = i \mid X_0 = i] > 0,$$
since the first factor on the right-hand side equals $P^{n_{i,j}}_{i,j} > 0$ and the second equals $P^{m - n_{i,j}}_{i,i} > 0$. In other words, $P^m_{i,j} > 0$, so we can take $M = N + \max_{i,j} n_{i,j}$, as claimed.

Stationary Distributions

Stationary Distribution

Definition. Suppose that $X$ is a finite Markov chain with transition matrix $P$. A row vector $v = (p_0, p_1, \dots, p_{n-1})$ is called a stationary distribution for $P$ if and only if
(1) the $p_k$ are nonnegative real numbers such that $\sum_{k=0}^{n-1} p_k = 1$, and
(2) $vP = v$.

Examples

Example. Every probability distribution on the states is a stationary probability distribution when $P$ is the identity matrix.

Example. If
$$P = \begin{pmatrix} 1/2 & 1/2 \\ 1/10 & 9/10 \end{pmatrix},$$
then $v = (1/6, 5/6)$ satisfies $vP = v$.
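A sketch, assuming NumPy, that computes a stationary distribution by solving the linear system $v(P - I) = 0$ together with the normalization $\sum_k p_k = 1$ (least squares handles the redundant equations):

```python
import numpy as np

def stationary(P):
    """Solve v P = v with the entries of v summing to 1."""
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])  # stationarity + normalization
    b = np.concatenate([np.zeros(n), [1.0]])
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v

P = [[1/2, 1/2], [1/10, 9/10]]
print(stationary(P))  # approximately [0.1667, 0.8333], i.e. (1/6, 5/6)
```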

Existence and Uniqueness of Stationary Distribution

Proposition. Any aperiodic and irreducible finite Markov chain has precisely one stationary distribution.

Total Variation Distance

Definition. If $p = (p_0, p_1, \dots, p_{n-1})$ and $q = (q_0, q_1, \dots, q_{n-1})$ are probability distributions on a finite state space, then
$$d_{TV}(p, q) = \frac{1}{2} \sum_{k=0}^{n-1} |p_k - q_k|$$
is called the total variation distance between $p$ and $q$. In general, $0 \le d_{TV}(p, q) \le 1$. If $p = q$, then $d_{TV}(p, q) = 0$.
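The definition translates directly into code; a one-function sketch in plain Python:

```python
def d_tv(p, q):
    """Total variation distance between two distributions on the same finite space."""
    return 0.5 * sum(abs(pk - qk) for pk, qk in zip(p, q))

print(d_tv([1/6, 5/6], [1/2, 1/2]))  # 1/3
print(d_tv([1/6, 5/6], [1/6, 5/6]))  # 0.0
```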

Convergence in Total Variation

Definition. If $p^{(m)} = (p^{(m)}_0, p^{(m)}_1, \dots, p^{(m)}_{n-1})$ is a probability distribution for each $m \ge 1$ and $p = (p_0, p_1, \dots, p_{n-1})$ is a probability distribution, then we say that $p^{(m)}$ converges to $p$ in total variation if and only if
$$\lim_{m \to \infty} d_{TV}(p^{(m)}, p) = 0.$$

Convergence Theorem

Proposition. Let $X$ be a finite irreducible aperiodic Markov chain with transition matrix $P$. If $p^{(0)}$ is some initial probability distribution on the states and $p$ is a stationary distribution, then $p^{(m)} = p^{(0)} P^m$ converges in total variation to the stationary distribution,
$$\lim_{m \to \infty} d_{TV}(p^{(m)}, p) = 0.$$
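A sketch (assuming NumPy) illustrating the theorem on the two-state example from before: iterate $p^{(m)} = p^{(0)} P^m$ and watch the total variation distance to $\pi = (1/6, 5/6)$ shrink.

```python
import numpy as np

P = np.array([[1/2, 1/2], [1/10, 9/10]])
pi = np.array([1/6, 5/6])      # the stationary distribution
p = np.array([1.0, 0.0])       # an arbitrary initial distribution

for m in range(1, 21):
    p = p @ P
    if m % 5 == 0:
        print(m, 0.5 * np.abs(p - pi).sum())
# The distance shrinks geometrically, here by the factor 2/5 per step
# (the second eigenvalue of P).
```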

Reversible Markov Chains

Definition. Suppose that $X$ is a Markov chain with finite state space $S$ and transition matrix $P$. A probability distribution $\pi$ on $S$ is called reversible for the chain if and only if
$$\pi_i P_{i,j} = \pi_j P_{j,i}$$
holds for all states $i$ and $j$ in $S$. A Markov chain is called reversible if and only if there exists a reversible distribution for it.

Reversible Markov Chains

Proposition. Suppose that $X$ is a Markov chain with finite state space and transition matrix $P$. If $\pi$ is a reversible distribution for the Markov chain, then it is also a stationary distribution for it.

Proof. We need to show that $\pi P = \pi$; in other words, we need to show that
$$\pi_j = \sum_{k \in S} \pi_k P_{k,j}$$
holds for all states $j$. This is straightforward, since
$$\pi_j = \pi_j \sum_{k \in S} P_{j,k} = \sum_{k \in S} \pi_j P_{j,k} = \sum_{k \in S} \pi_k P_{k,j},$$
where we used the reversibility condition $\pi_j P_{j,k} = \pi_k P_{k,j}$ in the last equality.
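A sketch of the corresponding numerical check: $\pi$ is reversible for $P$ exactly when the flow matrix $F_{i,j} = \pi_i P_{i,j}$ is symmetric (NumPy assumed):

```python
import numpy as np

def is_reversible(pi, P, tol=1e-12):
    """Check the detailed balance condition pi_i P[i,j] == pi_j P[j,i]."""
    pi, P = np.asarray(pi, dtype=float), np.asarray(P, dtype=float)
    F = pi[:, None] * P   # F[i, j] = pi_i * P[i, j]
    return np.allclose(F, F.T, atol=tol)

P = [[1/2, 1/2], [1/10, 9/10]]
print(is_reversible([1/6, 5/6], P))  # True, so (1/6, 5/6) is stationary
```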

Random Walks

Random Walks

Definition. A random walk on an undirected graph $G = (V, E)$ is given by the transition matrix $P$ with
$$P_{u,v} = \begin{cases} 1/d(u) & \text{if } (u, v) \in E, \\ 0 & \text{otherwise}, \end{cases}$$
where $d(u)$ denotes the degree of the vertex $u$.
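A sketch building this transition matrix from an adjacency list (the small example graph is an illustrative choice of mine; NumPy assumed), together with a check of the degree distribution that the propositions below identify as reversible and stationary:

```python
import numpy as np

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}  # an illustrative graph

n = len(adj)
P = np.zeros((n, n))
for u, nbrs in adj.items():
    for v in nbrs:
        P[u, v] = 1 / len(nbrs)   # P[u, v] = 1/d(u) for every edge (u, v)

d = sum(len(nbrs) for nbrs in adj.values())          # total degree 2|E|
pi = np.array([len(adj[u]) for u in range(n)]) / d   # pi_u = d(u)/d
print(np.allclose(pi @ P, pi))    # True: the degree distribution is stationary
```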

Properties

Proposition. For a random walk on an undirected graph $G$ with transition matrix $P$, we have:
(1) $P$ is irreducible if and only if $G$ is connected;
(2) $P$ is aperiodic if and only if $G$ is not bipartite.

Proof. (1) If $P$ is irreducible, then the graphical representation is a strongly connected directed graph, so the underlying undirected graph is connected. The converse is clear.

Proof (Continued). (2) The Markov chain corresponding to a random walk on an undirected graph has either period 1 or period 2. It has period 2 if and only if $G$ is bipartite. In other words, $P$ is aperiodic if and only if $G$ is not bipartite.

Reversible Markov Chain

Proposition. A random walk on a graph with vertex set $V = \{v_1, v_2, \dots, v_n\}$ is a Markov chain with reversible distribution
$$\pi = \left( \frac{d(v_1)}{d}, \frac{d(v_2)}{d}, \dots, \frac{d(v_n)}{d} \right),$$
where $d = \sum_{v \in V} d(v)$ is the total degree of the graph.

Proof. Suppose that $u$ and $v$ are adjacent vertices. Then
$$\pi_u P_{u,v} = \frac{d(u)}{d} \cdot \frac{1}{d(u)} = \frac{1}{d} = \frac{d(v)}{d} \cdot \frac{1}{d(v)} = \pi_v P_{v,u}.$$
If $u$ and $v$ are non-adjacent vertices, then
$$\pi_u P_{u,v} = 0 = \pi_v P_{v,u},$$
since $P_{u,v} = 0 = P_{v,u}$.

Example

(The figure shows a graph on the vertices $v_1, \dots, v_8$.) Here $|V| = 8$ and $|E| = 12$, so
$$\sum_{k=1}^{8} d(v_k) = 2|E| = 24 \quad \text{and} \quad \pi = \left( \frac{2}{24}, \frac{3}{24}, \frac{5}{24}, \frac{3}{24}, \frac{2}{24}, \frac{3}{24}, \frac{3}{24}, \frac{3}{24} \right).$$

Markov Chain Monte Carlo Algorithms

Markov Chain Monte Carlo Algorithms

The Idea. Given a probability distribution $\pi$ on a set $S$, we want to be able to sample from this probability distribution. In MCMC, we define a Markov chain that has $\pi$ as a stationary distribution. We run the chain for some number of iterations and then sample from it.

Why? Sometimes it is easier to construct such a Markov chain than to sample from the probability distribution $\pi$ directly.

Hardcore Model

Definition. Let $G = (V, E)$ be a graph. The hardcore model on $G$ randomly assigns either 0 or 1 to each vertex such that no two neighboring vertices both have the value 1. Assignments of the values 0 or 1 to the vertices are called configurations, so a configuration is a map in $\{0, 1\}^V$. A configuration is called feasible if and only if no two adjacent vertices have the value 1. In the hardcore model, a feasible configuration is chosen uniformly at random.

Hardcore Model

Question. For a given graph $G$, how can you directly choose a feasible configuration uniformly at random?

Since the vertices with value 1 in a feasible configuration form an independent set of $G$, an equivalent question is:

Question. For a given graph $G$, how can you directly choose an independent set of $G$ uniformly at random?

Grid Graph Example

Observation. In an $n \times n$ grid graph, there are $2^{n^2}$ configurations.

Observation. There are at least $2^{n^2/2}$ feasible configurations in the grid graph. Indeed, set every other node in the grid graph to 0: if we label the vertices by $\{(x, y) : 0 \le x < n,\ 0 \le y < n\}$, then set all vertices with $x + y \equiv 0 \pmod{2}$ to 0. The values of the remaining $n^2/2$ vertices can be chosen arbitrarily, since no two of them are adjacent, giving at least $2^{n^2/2}$ feasible configurations.

Direct sampling from the feasible configurations seems difficult.

Hardcore Model Markov Chain

Given a graph $G = (V, E)$ with a set $F$ of feasible configurations, we can define a Markov chain with state space $F$ and the following transitions. Let $X_n$ be the current feasible configuration.
(1) Pick a vertex $v \in V$ uniformly at random.
(2) For all vertices $w \in V \setminus \{v\}$, the value of the configuration does not change: $X_{n+1}(w) = X_n(w)$.
(3) Toss a fair coin. If the coin shows heads and all neighbors of $v$ have the value 0, then $X_{n+1}(v) = 1$; otherwise $X_{n+1}(v) = 0$.
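A minimal simulation sketch of one transition of this chain (plain Python; the adjacency-list representation and names are illustrative, not from the slides):

```python
import random

def hardcore_step(adj, x):
    """One transition X_n -> X_{n+1}; x maps each vertex to 0 or 1."""
    v = random.choice(list(adj))                  # step 1: pick v uniformly
    # step 2: all other vertices keep their value (x is updated in place)
    heads = random.random() < 0.5                 # step 3: toss a fair coin
    if heads and all(x[w] == 0 for w in adj[v]):
        x[v] = 1
    else:
        x[v] = 0
    return x

adj = {0: [1], 1: [0, 2], 2: [1]}   # a path on three vertices
x = {v: 0 for v in adj}             # start from the all-zero configuration
for _ in range(1000):
    hardcore_step(adj, x)
print(x)  # after many steps, close to a uniformly random feasible configuration
```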

Hardcore Model

Proposition. The hardcore model Markov chain is irreducible.

Proof. Given an arbitrary feasible configuration with $m$ ones, it is possible to reach the all-zero configuration in $m$ steps with positive probability. Similarly, it is possible to go from the all-zero configuration to an arbitrary feasible configuration with positive probability in a finite number of steps. Therefore, it is possible to go from any feasible configuration to any other in a finite number of steps with positive probability.

Hardcore Model

Proposition. The hardcore model Markov chain is aperiodic.

Proof. For each state, there is a small but nonzero probability that the Markov chain stays in the same state. Thus, each state is aperiodic. Therefore, the Markov chain is aperiodic.

Hardcore Model

Proposition. Let $\pi$ denote the uniform distribution on the set $F$ of feasible configurations, and let $P$ denote the transition matrix. Then
$$\pi_f P_{f,g} = \pi_g P_{g,f}$$
for all feasible configurations $f$ and $g$.

Proof. Since $\pi_f = \pi_g = 1/|F|$, it suffices to show that $P_{f,g} = P_{g,f}$.
(1) This is trivial if $f = g$.
(2) If $f$ and $g$ differ in more than one vertex, then $P_{f,g} = 0 = P_{g,f}$.
(3) If $f$ and $g$ differ only at a vertex $v$ and $G$ has $k$ vertices, then $P_{f,g} = \frac{1}{2k} = P_{g,f}$, since the transition requires picking $v$ (probability $1/k$) and the matching outcome of the coin toss (probability $1/2$).

Hardcore Model

Corollary. The stationary distribution of the hardcore model Markov chain is the uniform distribution on the set of feasible configurations. Indeed, the uniform distribution is reversible for the chain by the previous proposition, hence stationary; since the chain is irreducible and aperiodic, it is the unique stationary distribution.