Mixing times and hitting times: lecture notes


Yuval Peres, Microsoft Research, Redmond, Washington, USA; peres@microsoft.com
Perla Sousi, University of Cambridge, Cambridge, UK; p.sousi@statslab.cam.ac.uk

1 Introduction

Mixing times and hitting times are among the most fundamental notions associated with a finite Markov chain. A variety of tools have been developed to estimate both these notions; in particular, hitting times are closely related to potential theory and they can be determined by solving a system of linear equations. In this paper we establish a new connection between mixing times and hitting times for reversible Markov chains (Theorem 1.1).

Let $(X_t)_{t \ge 0}$ be an irreducible Markov chain on a finite state space with transition matrix $P$ and stationary distribution $\pi$. For $x, y$ in the state space we write
$$P^t(x, y) = \mathbb{P}_x(X_t = y)$$
for the transition probability in $t$ steps. Let $d(t) = \max_x \|P^t(x, \cdot) - \pi\|$, where $\|\mu - \nu\|$ stands for the total variation distance between the two probability measures $\mu$ and $\nu$. Let $\varepsilon > 0$. The total variation mixing time is defined as follows:
$$t_{\mathrm{mix}}(\varepsilon) = \min\{t \ge 0 : d(t) \le \varepsilon\}.$$
We write $P_L^t$ for the transition probability in $t$ steps of the lazy version of the chain, i.e. the chain with transition matrix $\frac{P + I}{2}$. If we now let $d_L(t) = \max_x \|P_L^t(x, \cdot) - \pi\|$, then we can define the mixing time of the lazy chain as follows:
$$t_L(\varepsilon) = \min\{t \ge 0 : d_L(t) \le \varepsilon\}. \qquad (1.1)$$
For notational convenience we will simply write $t_L$ and $t_{\mathrm{mix}}$ when $\varepsilon = 1/4$.

Before stating our first theorem, we introduce the maximum hitting time of big sets. Let $\alpha < 1/2$; then we define
$$t_H(\alpha) = \max_{x,\, A :\, \pi(A) \ge \alpha} \mathbb{E}_x[\tau_A],$$
where $\tau_A$ stands for the first hitting time of the set $A$. It is clear (and we prove it later) that if the Markov chain has not hit a big set, then it cannot have mixed. Thus for every $\alpha > 0$, there is a positive constant $c_\alpha$ so that $t_L \le c_\alpha t_H(\alpha)$. In the following theorem, we show that the converse is also true when the chain is reversible.
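To make these definitions concrete, here is a minimal numerical sketch (a toy illustration; the chain, the choice $\alpha = 0.4$ and all helper names are arbitrary): it computes $t_L$ and $t_H(\alpha)$ by brute force for simple random walk on a small cycle, using the fact that hitting times solve a linear system.

import numpy as np
from itertools import combinations

def lazy(P):
    return (P + np.eye(len(P))) / 2

def tv(mu, nu):
    return 0.5 * np.abs(mu - nu).sum()

def mixing_time(P, pi, eps=0.25):
    # smallest t with max_x ||P^t(x,.) - pi|| <= eps
    Pt, t = np.eye(len(P)), 0
    while max(tv(Pt[x], pi) for x in range(len(P))) > eps:
        Pt, t = Pt @ P, t + 1
    return t

def hitting_times(P, A):
    # E_x[tau_A] for every x, via the linear system (I - P restricted to A^c) h = 1
    n = len(P)
    rest = [x for x in range(n) if x not in A]
    full = np.zeros(n)
    if rest:
        Q = P[np.ix_(rest, rest)]
        full[rest] = np.linalg.solve(np.eye(len(rest)) - Q, np.ones(len(rest)))
    return full

def t_H(P, pi, alpha):
    # brute force over all sets A with pi(A) >= alpha (feasible only for tiny chains)
    n, best = len(P), 0.0
    for r in range(1, n + 1):
        for A in combinations(range(n), r):
            if pi[list(A)].sum() >= alpha:
                best = max(best, hitting_times(P, set(A)).max())
    return best

n = 8
P = np.zeros((n, n))
for i in range(n):
    P[i, (i + 1) % n] = P[i, (i - 1) % n] = 0.5   # simple random walk on the n-cycle
pi = np.ones(n) / n
print(mixing_time(lazy(P), pi), t_H(P, pi, alpha=0.4))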

Theorem 1.1. Let $\alpha < 1/2$. Then there exist positive constants $c_\alpha$ and $c'_\alpha$ so that for every reversible chain
$$c'_\alpha\, t_H(\alpha) \le t_L \le c_\alpha\, t_H(\alpha).$$

Remark 1.2. The proof of Theorem 1.1 is somewhat technical and is given in [7]. Instead, in Theorem 5.1 we will prove the equivalence of $t_H(\alpha)$ to $t_G$, mixing at a geometric time, defined in the next section.

Remark 1.3. Aldous in [2] showed that the mixing time, $t_{\mathrm{cts}}$, of a continuous time reversible chain is equivalent to $t_{\mathrm{prod}} = \max_{x,\, A :\, \pi(A) > 0} \pi(A)\,\mathbb{E}_x[\tau_A]$. The inequality $t_{\mathrm{prod}} \le c_1 t_{\mathrm{cts}}$, for a positive constant $c_1$, which was the hard part in Aldous' proof, follows from Theorem 1.1 and the equivalence $t_L \asymp t_{\mathrm{cts}}$ (see [4, Theorem 20.3]).

2 Preliminaries and further equivalences

In this section we first introduce some more notions of mixing. We will then state some further equivalences between them, mostly in the reversible case, and will prove them in later sections. These equivalences will be useful for the proofs of the main results, but are also of independent interest.

The following notion of mixing was first introduced by Aldous in [2] in the continuous time case and later studied in discrete time by Lovász and Winkler in [5, 6]. It is defined as follows:
$$t_{\mathrm{stop}} = \max_x \min\{\mathbb{E}_x[\Lambda] : \Lambda \text{ is a stopping time s.t. } \mathbb{P}_x(X_\Lambda \in \cdot) = \pi(\cdot)\}. \qquad (2.1)$$
The definition does not make it clear why stopping times achieving the minimum always exist. We will recall the construction of such a stopping time in Section 3.

Definition 2.1. We say that two mixing parameters $s$ and $r$ are equivalent for a class of Markov chains $\mathcal{M}$, and write $s \asymp r$, if there exist universal positive constants $c$ and $c'$ so that $c s \le r \le c' s$ for every chain in $\mathcal{M}$.

We will now define the notion of mixing in a geometric time. The idea of using this notion of mixing to prove Theorem 1.1 was suggested to us by Oded Schramm (private communication, June 2008). This notion is also of independent interest, because of its properties that we will prove in this section.

For each $t$, let $Z_t$ be a Geometric random variable taking values in $\{1, 2, \ldots\}$ of mean $t$ and success probability $\frac{1}{t}$. We first define
$$d_G(t) = \max_x \|\mathbb{P}_x(X_{Z_t} \in \cdot) - \pi\|.$$
The geometric mixing time is then defined as follows:
$$t_G = t_G(1/4) = \min\{t \ge 0 : d_G(t) \le 1/4\}.$$
We start by establishing the monotonicity property of $d_G(t)$.

Lemma 2.2. The total variation distance $d_G(t)$ is decreasing as a function of $t$.
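As a quick numerical sanity check of this monotonicity (an illustration only; the biased 6-cycle and the truncation tolerance below are arbitrary choices), one can evaluate $d_G(t)$ directly from $\mathbb{P}_x(X_{Z_t} = y) = \sum_{s \ge 1} \frac{1}{t}\big(1 - \frac{1}{t}\big)^{s-1} P^s(x, y)$:

import numpy as np

def geometric_kernel(P, t, tol=1e-12):
    # Q(x,y) = P_x(X_{Z_t} = y), truncating the geometric sum once its tail mass < tol
    n = len(P)
    Q, weight, Ps, tail = np.zeros((n, n)), 1.0 / t, P.copy(), 1.0
    while tail > tol:
        Q += weight * Ps
        tail -= weight
        weight *= 1 - 1.0 / t
        Ps = Ps @ P
    return Q / Q.sum(axis=1, keepdims=True)   # renormalise away the truncation error

def d_G(P, pi, t):
    Q = geometric_kernel(P, t)
    return max(0.5 * np.abs(Q[x] - pi).sum() for x in range(len(P)))

n = 6
P = np.zeros((n, n))
for i in range(n):
    P[i, (i + 1) % n] = 2 / 3
    P[i, (i - 1) % n] = 1 / 3                  # biased walk on the 6-cycle
pi = np.ones(n) / n
print([round(d_G(P, pi, t), 3) for t in range(1, 30)])   # non-increasing, as Lemma 2.2 asserts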

Before proving this lemma, we note the following standard fact.

Claim 2.1. Let $T$ and $T'$ be two independent positive random variables, also independent of the Markov chain. Then for all $x$,
$$\|\mathbb{P}_x(X_{T + T'} \in \cdot) - \pi\| \le \|\mathbb{P}_x(X_T \in \cdot) - \pi\|.$$

Proof of Lemma 2.2. We first describe a coupling between the two Geometric random variables, $Z_t$ and $Z_{t+1}$. Let $(U_i)_{i \ge 1}$ be a sequence of i.i.d. random variables uniform on $[0, 1]$. We now define
$$Z_t = \min\Big\{i \ge 1 : U_i \le \frac{1}{t}\Big\} \quad \text{and} \quad Z_{t+1} = \min\Big\{i \ge 1 : U_i \le \frac{1}{t+1}\Big\}.$$
It is easy to see that $Z_{t+1} - Z_t$ is independent of $Z_t$. Indeed, $\mathbb{P}(Z_{t+1} = Z_t \mid Z_t) = \frac{t}{t+1}$ and similarly for every $k \ge 1$ we have
$$\mathbb{P}(Z_{t+1} = Z_t + k \mid Z_t) = \Big(1 - \frac{1}{t+1}\Big)^{k-1} \frac{1}{(t+1)^2}.$$
We can thus write $Z_{t+1} = (Z_{t+1} - Z_t) + Z_t$, where the two terms are independent. Claim 2.1 and the independence of $Z_{t+1} - Z_t$ and $Z_t$ give the desired monotonicity of $d_G(t)$.

Lemma 2.3. For all chains we have that $t_G \le 4 t_{\mathrm{stop}} + 1$.

The converse of Lemma 2.3 is true for reversible chains in a more general setting. Namely, let $N_t$ be a random variable independent of the Markov chain and of mean $t$. We define the total variation distance $d_N(t)$ in this setting as follows:
$$d_N(t) = \max_x \|\mathbb{P}_x(X_{N_t} \in \cdot) - \pi\|.$$
Defining $t_N = t_N(1/4) = \min\{t \ge 0 : d_N(t) \le 1/4\}$ we have the following:

Lemma 2.4. There exists a positive constant $c_4$ such that for all reversible chains $t_{\mathrm{stop}} \le c_4 t_N$. In particular, $t_{\mathrm{stop}} \le c_4 t_G$.

We will give the proofs of Lemmas 2.3 and 2.4 in Section 4.

3 Stationary stopping times

In this section we will first give the construction of a stopping time $T$ that achieves stationarity, i.e. for all $x, y$ we have that $\mathbb{P}_x(X_T = y) = \pi(y)$, and also, for a fixed $x$, attains the minimum in the definition of $t_{\mathrm{stop}}$ in (2.1), i.e.
$$\mathbb{E}_x[T] = \min\{\mathbb{E}_x[\Lambda] : \Lambda \text{ is a stopping time s.t. } \mathbb{P}_x(X_\Lambda \in \cdot) = \pi(\cdot)\}. \qquad (3.1)$$

The stopping time that we will construct is called the filling rule and it was first discussed in [3]. This construction can also be found in [1, Chapter 9], but we include it here for completeness.

First, for any stopping time $S$ and any starting distribution $\mu$ one can define a sequence of vectors
$$\theta_t(x) = \mathbb{P}_\mu(X_t = x, S \ge t), \qquad \sigma_t(x) = \mathbb{P}_\mu(X_t = x, S = t). \qquad (3.2)$$
These vectors clearly satisfy
$$0 \le \sigma(t) \le \theta(t), \qquad (\theta(t) - \sigma(t))P = \theta(t+1); \qquad \theta(0) = \mu. \qquad (3.3)$$
We can also do the converse, namely given vectors $(\theta(t), \sigma(t);\, t \ge 0)$ satisfying (3.3) we can construct a stopping time $S$ satisfying (3.2). We want to define $S$ so that
$$\mathbb{P}(S = t \mid S > t - 1, X_t = x_t, X_{t-1} = x_{t-1}, \ldots, X_0 = x_0) = \frac{\sigma_t(x_t)}{\theta_t(x_t)}. \qquad (3.4)$$
Formally we define the random variable $S$ as follows. Let $(U_i)_{i \ge 0}$ be a sequence of independent random variables uniform on $[0, 1]$. We now define $S$ via
$$S = \inf\Big\{t \ge 0 : U_t \le \frac{\sigma_t(X_t)}{\theta_t(X_t)}\Big\}.$$
From this definition it is clear that (3.4) is satisfied and that $S$ is a stopping time with respect to an enlarged filtration containing also the random variables $(U_i)_{i \ge 0}$, namely $\mathcal{F}_s = \sigma(X_0, U_0, \ldots, X_s, U_s)$. Also, equations (3.2) are satisfied. Indeed, setting $x_t = x$ we have
$$\mathbb{P}_\mu(X_t = x, S \ge t) = \sum_{x_0, x_1, \ldots, x_{t-1}} \mu(x_0) \prod_{k=0}^{t-1} \Big(1 - \frac{\sigma_k(x_k)}{\theta_k(x_k)}\Big) P(x_k, x_{k+1}) = \theta_t(x),$$
since $\theta_0(y) = \mu(y)$ for all $y$ and also $\theta(t+1) = (\theta(t) - \sigma(t))P$, so cancellations happen. Similarly we get the other equality of (3.2).

We are now ready to give the construction of the filling rule $T$. Before defining it formally, we give the intuition behind it. Every state $x$ has a quota which is equal to $\pi(x)$. Starting from an initial distribution $\mu$ we want to calculate inductively the probability that we have stopped so far at each state. When we reach a new state, we decide to stop there if doing so does not increase the probability of stopping at that state above the quota. Otherwise we stop there with the right probability to exactly fill the quota and we continue with the complementary probability.

We will now give the rigorous construction by defining the sequence of vectors $(\theta(t), \sigma(t);\, t \ge 0)$. Let more generally the starting distribution be $\mu$; if we start from $x$, then simply $\mu = \delta_x$. First we set $\theta(0) = \mu$. We now introduce another sequence of vectors $(\Sigma_t;\, t \ge -1)$. Let $\Sigma_{-1}(x) = 0$ for all $x$. We define inductively
$$\sigma_t(x) = \begin{cases} \theta_t(x), & \text{if } \Sigma_{t-1}(x) + \theta_t(x) \le \pi(x); \\ \pi(x) - \Sigma_{t-1}(x), & \text{otherwise,} \end{cases}$$
and next we let $\Sigma_t(x) = \sum_{s \le t} \sigma_s(x)$. Then $\sigma$ will satisfy (3.2) and $\Sigma_t(x) = \mathbb{P}_\mu(X_T = x, T \le t)$. Also note from the description above it follows that $\Sigma_t(x) \le \pi(x)$, for all $t$ and all $x$. Thus we get that $\mathbb{P}_\mu(X_T = x) = \lim_{t \to \infty} \Sigma_t(x) \le \pi(x)$, and since both $\mathbb{P}_\mu(X_T \in \cdot)$ and $\pi(\cdot)$ are probability distributions, we get that they must be equal. Hence the above construction yields a stationary stopping time.
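Since this recursion is completely explicit, the filling rule can be computed deterministically. The following sketch (an illustration on an assumed toy chain; the function name, the chain and the truncation horizon are arbitrary) iterates $\theta_t$, $\sigma_t$, $\Sigma_t$ directly from the definitions, checks that $\Sigma_t \to \pi$, and returns $\mathbb{E}_\mu[T]$:

import numpy as np

def filling_rule(P, pi, mu, horizon=500):
    n = len(P)
    theta = mu.copy()                    # theta_0 = mu
    Sigma = np.zeros(n)                  # Sigma_{-1} = 0
    expected_T = 0.0
    for t in range(horizon):
        sigma = np.where(Sigma + theta <= pi, theta, pi - Sigma)
        Sigma += sigma
        expected_T += t * sigma.sum()    # P_mu(T = t) equals sigma_t summed over states
        theta = (theta - sigma) @ P      # theta_{t+1} = (theta_t - sigma_t) P
    return Sigma, expected_T

n = 5
P = np.zeros((n, n))
for i in range(n):
    P[i, (i + 1) % n] = 0.7
    P[i, (i - 1) % n] = 0.3              # toy chain: biased walk on a 5-cycle
pi = np.ones(n) / n
mu = np.zeros(n); mu[0] = 1.0            # start from state 0
Sigma, ET = filling_rule(P, pi, mu)
print(np.round(Sigma, 4), round(ET, 3))  # Sigma is (numerically) equal to pi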

It only remains to prove the mean-optimality (3.1). Before doing so we give a definition.

Definition 3.1. Let $S$ be a stopping time. A state $z$ is called a halting state for the stopping time if $S \le T_z$ a.s., where $T_z$ is the first hitting time of the state $z$.

We will now show that the filling rule has a halting state, and then the following theorem gives the mean-optimality.

Theorem 3.2 (Lovász and Winkler). Let $\mu$ and $\rho$ be two distributions. Let $S$ be a stopping time such that $\mathbb{P}_\mu(X_S = x) = \rho(x)$ for all $x$. Then $S$ is mean optimal in the sense that
$$\mathbb{E}_\mu[S] = \min\{\mathbb{E}_\mu[U] : U \text{ is a stopping time s.t. } \mathbb{P}_\mu(X_U \in \cdot) = \rho(\cdot)\}$$
if and only if it has a halting state.

Now we will prove that there exists $z$ such that $T \le T_z$ a.s. For each $x$ we define $t_x = \min\{t : \Sigma_t(x) = \pi(x)\}$. Take $z$ such that $t_z = \max_x t_x$. We will show that $T \le T_z$ a.s. If there exists a $t$ such that $\mathbb{P}_\mu(T > t, T_z = t) > 0$, then $\Sigma_t(x) = \pi(x)$ for all $x$, since the state $z$ is the last one to be filled. So if the above probability is positive, then we get that $\mathbb{P}_\mu(T \le t) = \sum_x \Sigma_t(x) = 1$, which is a contradiction. Hence, we obtain that $\mathbb{P}_\mu(T > t, T_z = t) = 0$ and thus by summing over all $t$ we deduce that $\mathbb{P}_\mu(T \le T_z) = 1$.

Proof of Theorem 3.2. We define the exit frequencies for $S$ via
$$\nu_x = \mathbb{E}_\mu\Big[\sum_{k=0}^{S-1} \mathbf{1}(X_k = x)\Big], \quad \text{for all } x.$$
Since $\mathbb{P}_\mu(X_S = x) = \rho(x)$, we can write
$$\mathbb{E}_\mu\Big[\sum_{k=0}^{S} \mathbf{1}(X_k = x)\Big] = \mathbb{E}_\mu\Big[\sum_{k=0}^{S-1} \mathbf{1}(X_k = x)\Big] + \rho(x) = \nu_x + \rho(x).$$
We also have that
$$\mathbb{E}_\mu\Big[\sum_{k=0}^{S} \mathbf{1}(X_k = x)\Big] = \mu(x) + \mathbb{E}_\mu\Big[\sum_{k=1}^{S} \mathbf{1}(X_k = x)\Big].$$
Since $S$ is a stopping time, it is easy to see that
$$\mathbb{E}_\mu\Big[\sum_{k=1}^{S} \mathbf{1}(X_k = x)\Big] = \sum_y \nu_y P(y, x).$$
Hence we get that
$$\nu_x + \rho(x) = \mu(x) + \sum_y \nu_y P(y, x). \qquad (3.5)$$

Let $T'$ be another stopping time with $\mathbb{P}_\mu(X_{T'} = x) = \rho(x)$ and let $\nu'_x$ be its exit frequencies. Then they would satisfy (3.5), i.e.
$$\nu'_x + \rho(x) = \mu(x) + \sum_y \nu'_y P(y, x).$$
Thus if we set $d_x = \nu'_x - \nu_x$, then $d$ as a vector satisfies $d = dP$, and hence $d$ must be a multiple of the stationary distribution, i.e. for a constant $\alpha$ we have that $d = \alpha \pi$. Suppose first that $S$ has a halting state, i.e. there exists a state $z$ such that $\nu_z = 0$. Therefore we get that $\nu'_z = \alpha \pi(z)$, and hence $\alpha \ge 0$. Thus $\nu'_x \ge \nu_x$ for all $x$ and
$$\mathbb{E}_\mu[T'] = \sum_x \nu'_x \ge \sum_x \nu_x = \mathbb{E}_\mu[S],$$
hence proving mean-optimality.

We will now show the converse, namely that if $S$ is mean-optimal then it should have a halting state. The filling rule was proved to have a halting state and thus is mean-optimal. Hence using the same argument as above we get that $S$ is mean optimal if and only if $\min_x \nu_x = 0$, which is the definition of a halting state.

4 Mixing at a geometric time

Proof of Lemma 2.3. We fix $x$. Let $\tau$ be a stationary time, i.e. $\mathbb{P}_x(X_\tau \in \cdot) = \pi(\cdot)$. Then $\tau + s$ is also a stationary time for all $s \ge 1$. Hence we have that
$$\pi(y) = \sum_{s=1}^{\infty} \frac{1}{t}\Big(1 - \frac{1}{t}\Big)^{s-1} \mathbb{P}_x(X_{\tau + s} = y) = \sum_{s=1}^{\infty} \sum_{l=0}^{\infty} \frac{1}{t}\Big(1 - \frac{1}{t}\Big)^{s-1} \mathbb{P}_x(X_{l+s} = y, \tau = l)$$
$$= \sum_{m=1}^{\infty} \sum_{l=0}^{m-1} \frac{1}{t}\Big(1 - \frac{1}{t}\Big)^{m-l-1} \mathbb{P}_x(X_m = y, \tau = l) \ge \sum_{m=1}^{\infty} \frac{1}{t}\Big(1 - \frac{1}{t}\Big)^{m-1} \mathbb{P}_x(X_m = y, \tau < m).$$
As in Section 2, let $Z_t$ be a geometric random variable of success probability $\frac{1}{t}$. Then
$$\nu_t(y) := \mathbb{P}_x(X_{Z_t} = y) = \sum_{s=1}^{\infty} \frac{1}{t}\Big(1 - \frac{1}{t}\Big)^{s-1} \mathbb{P}_x(X_s = y).$$
Thus we obtain
$$\nu_t(y) - \pi(y) \le \sum_{s=1}^{\infty} \frac{1}{t}\Big(1 - \frac{1}{t}\Big)^{s-1} \big(\mathbb{P}_x(X_s = y) - \mathbb{P}_x(X_s = y, \tau < s)\big) = \sum_{s=1}^{\infty} \frac{1}{t}\Big(1 - \frac{1}{t}\Big)^{s-1} \mathbb{P}_x(X_s = y, \tau \ge s).$$

Summing over all $y$ such that $\nu_t(y) - \pi(y) > 0$, we get that
$$\|\nu_t - \pi\| \le \sum_{s=1}^{\infty} \frac{1}{t}\Big(1 - \frac{1}{t}\Big)^{s-1} \mathbb{P}_x(\tau \ge s) \le \frac{\mathbb{E}_x[\tau]}{t}.$$
Therefore, taking $\tau$ to be a stopping time attaining the minimum in (2.1) and $t \ge 4\mathbb{E}_x[\tau]$, we get that
$$\|\nu_t - \pi\| \le \frac{1}{4},$$
and hence $t_G \le 4 t_{\mathrm{stop}} + 1$.

Recall from Section 2 the definition of $N_t$ as a random variable independent of the Markov chain and of mean $t$. We also defined $d_N(t) = \max_x \|\mathbb{P}_x(X_{N_t} \in \cdot) - \pi\|$. We now introduce some notation and a preliminary result that will be used in the proof of Lemma 2.4. For any $t$ we let
$$s(t) = \max_{x,y}\Big[1 - \frac{P^t(x, y)}{\pi(y)}\Big] \quad \text{and} \quad \bar{d}(t) = \max_{x,y} \|P^t(x, \cdot) - P^t(y, \cdot)\|.$$
We will call $s(t)$ the total separation distance from stationarity.

Lemma 4.1. For a reversible Markov chain we have that $d(t) \le \bar{d}(t) \le 2 d(t)$ and $s(2t) \le 1 - (1 - \bar{d}(t))^2$.

Proof. A proof of this result can be found in [1, Chapter 4, Lemma 7] or [4, Lemma 4.11 and Lemma 19.3].

Let $N^{(1)}_t, N^{(2)}_t$ be i.i.d. random variables distributed as $N_t$ and set $V_t = N^{(1)}_t + N^{(2)}_t$. We now define
$$s_N(t) = \max_{x,y}\Big[1 - \frac{\mathbb{P}_x(X_{V_t} = y)}{\pi(y)}\Big] \quad \text{and} \quad \bar{d}_N(t) = \max_{x,y} \|\mathbb{P}_x(X_{N_t} \in \cdot) - \mathbb{P}_y(X_{N_t} \in \cdot)\|.$$
When $N_t$ is a geometric random variable we will write $\bar{d}_G(t)$ and $d_G(t)$.

Lemma 4.2. For all $t$ we have that $d_N(t) \le \bar{d}_N(t) \le 2 d_N(t)$ and $s_N(t) \le 1 - (1 - \bar{d}_N(t))^2$.

Proof. Fix $t$ and consider the chain $Y$ with transition matrix $Q(x, y) = \mathbb{P}_x(X_{N_t} = y)$. Then $Q^2(x, y) = \mathbb{P}_x(X_{V_t} = y)$, where $V_t$ is as defined above. Thus, if we let
$$s_Y(u) = \max_{x,y}\Big[1 - \frac{Q^u(x, y)}{\pi(y)}\Big] \quad \text{and} \quad \bar{d}_Y(u) = \max_{x,y} \|\mathbb{P}_x(Y_u \in \cdot) - \mathbb{P}_y(Y_u \in \cdot)\|,$$
then we get that $s_N(t) = s_Y(2)$ and $\bar{d}_N(t) = \bar{d}_Y(1)$. Hence, the lemma follows from Lemma 4.1.

We now define
$$t_{\mathrm{sep}} = \min\Big\{t \ge 0 : s_N(t) \le \tfrac{3}{4}\Big\}.$$
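The quantities $s(t)$, $\bar{d}(t)$ and the inequalities of Lemma 4.1 are easy to verify numerically on a small reversible chain; the sketch below (an illustration only, with the lazy cycle chosen arbitrarily) does exactly this:

import numpy as np

def tv(mu, nu):
    return 0.5 * np.abs(mu - nu).sum()

n = 6
P = np.zeros((n, n))
for i in range(n):
    P[i, (i + 1) % n] = P[i, (i - 1) % n] = 0.25
    P[i, i] = 0.5                                  # lazy walk on a cycle (reversible)
pi = np.ones(n) / n

Pt = np.eye(n)
for t in range(1, 11):
    Pt = Pt @ P                                    # P^t
    P2t = Pt @ Pt                                  # P^{2t}
    d = max(tv(Pt[x], pi) for x in range(n))
    dbar = max(tv(Pt[x], Pt[y]) for x in range(n) for y in range(n))
    s2t = max(1 - P2t[x, y] / pi[y] for x in range(n) for y in range(n))
    assert d <= dbar <= 2 * d + 1e-12              # d(t) <= dbar(t) <= 2 d(t)
    assert s2t <= 1 - (1 - dbar) ** 2 + 1e-12      # s(2t) <= 1 - (1 - dbar(t))^2
    print(t, round(d, 4), round(dbar, 4), round(s2t, 4))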

Lemma 4.3. There exists a positive constant $c$ so that for every chain $t_{\mathrm{stop}} \le c\, t_{\mathrm{sep}}$.

Proof. Fix $t = t_{\mathrm{sep}}$. Consider the chain $Y$ with transition kernel $Q(x, y) = \mathbb{P}_x(X_{V_t} = y)$, where $V_t$ is as defined above. By the definition of $s_N(t)$ we have that for all $x$ and $y$
$$Q(x, y) \ge (1 - s_N(t))\,\pi(y) \ge \tfrac{1}{4}\pi(y).$$
Hence, we can write $Q(x, y) = \frac{1}{4}\pi(y) + \frac{3}{4}\nu_x(y)$, where $\nu_x(\cdot)$ is a probability measure. We can thus construct a stopping time $S \in \{1, 2, \ldots\}$ such that for all $x$
$$\mathbb{P}_x(Y_S \in \cdot, S = 1) = (1 - 3/4)\,\pi(\cdot)$$
and, by induction on $k$, such that
$$\mathbb{P}_x(Y_S \in \cdot, S = k) = (3/4)^{k-1}(1/4)\,\pi(\cdot).$$
Hence, it is clear that $Y_S$ is distributed according to $\pi$ and $\mathbb{E}_x[S] = 4$ for all $x$. Let $V^{(1)}_t, V^{(2)}_t, \ldots$ be i.i.d. random variables distributed as $V_t$. Then we can write $Y_u = X_{V^{(1)}_t + \cdots + V^{(u)}_t}$. If we let $T = V^{(1)}_t + \cdots + V^{(S)}_t$, then $T$ is a stopping time for $X$ such that $\mathcal{L}(X_T) = \pi$ and by Wald's identity for stopping times we get that for all $x$
$$\mathbb{E}_x[T] = \mathbb{E}_x[S]\,\mathbb{E}[V_t] = 8t.$$
Therefore we proved that $t_{\mathrm{stop}} \le 8 t_{\mathrm{sep}}$.

Proof of Lemma 2.4. From Lemma 4.2 we get that $t_{\mathrm{sep}} \le 2 t_N$. Finally Lemma 4.3 completes the proof.

Remark 4.4. Let $N_t$ be a uniform random variable in $\{1, \ldots, t\}$ independent of the Markov chain. The mixing time associated to $N_t$ is called the Cesàro mixing time and it has been analyzed by Lovász and Winkler in [6]. From [4, Theorem 6.15] and the lemmas above we get the equivalence between the Cesàro mixing time and the mixing time of the lazy chain.

Remark 4.5. From the remark above we see that the mixing at a geometric time and the Cesàro mixing are equivalent for a reversible chain. The mixing at a geometric time though has the advantage that its total variation distance, namely $d_G(t)$, has the monotonicity property of Lemma 2.2, which is not true for the corresponding total variation distance of the Cesàro mixing.
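A tiny computation illustrates this last point (the two-state flip chain and the truncation of the geometric sum below are arbitrary illustrative choices): for the chain that deterministically flips between two states, the Cesàro distance oscillates in $t$, while $d_G(t)$ is decreasing.

import numpy as np

P = np.array([[0.0, 1.0], [1.0, 0.0]])
pi = np.array([0.5, 0.5])

def tv_from_pi(Q):
    return max(0.5 * np.abs(Q[x] - pi).sum() for x in range(2))

def cesaro_distance(t):
    acc, Ps = np.zeros((2, 2)), np.eye(2)
    for _ in range(t):
        Ps = Ps @ P
        acc += Ps
    return tv_from_pi(acc / t)                 # distance of (1/t) sum_{s=1}^t P^s from pi

def geometric_distance(t, smax=5000):
    Q, w, Ps = np.zeros((2, 2)), 1.0 / t, P.copy()
    for _ in range(smax):
        Q += w * Ps
        w *= 1 - 1.0 / t
        Ps = Ps @ P
    return tv_from_pi(Q / Q.sum(axis=1, keepdims=True))

print([round(cesaro_distance(t), 3) for t in range(1, 9)])    # 0.5, 0.0, 0.167, 0.0, 0.1, ... (oscillates)
print([round(geometric_distance(t), 3) for t in range(1, 9)]) # 0.5, 0.167, 0.1, 0.071, ... (decreasing)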

Recall that $\bar{d}(t) = \max_{x,y} \|\mathbb{P}_x(X_t \in \cdot) - \mathbb{P}_y(X_t \in \cdot)\|$ is submultiplicative as a function of $t$ (see for instance [4, Lemma 4.12]). In the following lemma and corollary, which will be used in the proof of Theorem 1.1, we show that $d_G$ satisfies some sort of submultiplicativity.

Lemma 4.6. Let $\beta < 1$ and let $t$ be such that $\bar{d}_G(t) \le \beta$. Then for all $k \in \mathbb{N}$ we have that
$$\bar{d}_G(2^k t) \le \Big(\frac{1 + \beta}{2}\Big)^k \bar{d}_G(t).$$

Proof. As in the proof of Lemma 2.2 we can write $Z_{2t} = (Z_{2t} - Z_t) + Z_t$, where $Z_{2t} - Z_t$ and $Z_t$ are independent. Hence it is easy to show (similarly to the case of deterministic times) that
$$\bar{d}_G(2t) \le \bar{d}_G(t) \max_{x,y} \|\mathbb{P}_x(X_{Z_{2t} - Z_t} \in \cdot) - \mathbb{P}_y(X_{Z_{2t} - Z_t} \in \cdot)\|. \qquad (4.1)$$
By the coupling of $Z_{2t}$ and $Z_t$ it is easy to see that $Z_{2t} - Z_t$ can be expressed as follows:
$$Z_{2t} - Z_t = (1 - \xi)\cdot 0 + \xi\, G_{2t},$$
where $\xi$ is a Bernoulli($\tfrac{1}{2}$) random variable and $G_{2t}$ is a Geometric random variable of mean $2t$ independent of $\xi$. By the triangle inequality we get that
$$\|\mathbb{P}_x(X_{Z_{2t} - Z_t} \in \cdot) - \mathbb{P}_y(X_{Z_{2t} - Z_t} \in \cdot)\| \le \frac{1}{2} + \frac{1}{2}\|\mathbb{P}_x(X_{G_{2t}} \in \cdot) - \mathbb{P}_y(X_{G_{2t}} \in \cdot)\| \le \frac{1}{2} + \frac{1}{2}\bar{d}_G(2t),$$
and hence (4.1) becomes
$$\bar{d}_G(2t) \le \bar{d}_G(t)\Big(\frac{1}{2} + \frac{1}{2}\bar{d}_G(2t)\Big) \le \frac{1}{2}\bar{d}_G(t)\big(1 + \bar{d}_G(t)\big),$$
where for the second inequality we used the monotonicity property of $\bar{d}_G$ (same proof as for $d_G(t)$). Thus, since $t$ satisfies $\bar{d}_G(t) \le \beta$, we get that
$$\bar{d}_G(2t) \le \Big(\frac{1 + \beta}{2}\Big)\bar{d}_G(t),$$
and hence iterating we deduce the desired inequality.

Combining Lemma 4.6 with Lemma 4.2 we get the following:

Corollary 4.7. If $t$ is such that $\bar{d}_G(t) \le \beta < 1$, then for all $k$ we have that
$$d_G(2^k t) \le 2\Big(\frac{1 + \beta}{2}\Big)^k d_G(t).$$
Also if $d_G(t) \le \alpha < 1/2$, then there exists a constant $c = c(\alpha)$ depending only on $\alpha$, such that $d_G(ct) \le 1/4$.

5 Hitting large sets

In this section we are going to give the proof of Theorem 1.1. We first prove an equivalence that does not require reversibility.

Theorem 5.1. Let $\alpha < 1/2$. For every chain $t_G \asymp t_H(\alpha)$. (The implied constants depend on $\alpha$.)

Proof. We will first show that $t_G \ge c\, t_H(\alpha)$. By Corollary 4.7 there exists $k = k(\alpha)$ so that $d_G(2^k t_G) \le \frac{\alpha}{2}$. Let $t = 2^k t_G$. Then for any starting point $x$ and any set $A$ with $\pi(A) \ge \alpha$ we have that
$$\mathbb{P}_x(X_{Z_t} \in A) \ge \pi(A) - \alpha/2 \ge \alpha/2.$$
Thus, by performing independent experiments, we deduce that $\tau_A$ is stochastically dominated by $\sum_{i=1}^{N} G_i$, where $N$ is a Geometric random variable of success probability $\alpha/2$ and the $G_i$'s are independent Geometric random variables of success probability $\frac{1}{t}$. Therefore for any starting point $x$ we get that
$$\mathbb{E}_x[\tau_A] \le \frac{2t}{\alpha},$$
and hence this gives that
$$\max_{x,\, A :\, \pi(A) \ge \alpha} \mathbb{E}_x[\tau_A] \le \frac{2}{\alpha}\, 2^k\, t_G.$$

In order to show the other direction, let $t < t_G$. Then $d_G(t) > 1/4$. For a given $\alpha < 1/2$, we fix $\gamma \in (\alpha, 1/2)$. From Corollary 4.7 we have that there exists a constant $c = c(\gamma)$ such that $d_G(ct) > \gamma$. Set $t' = ct$. Then there exist a set $A$ and a starting point $x$ such that
$$\pi(A) - \mathbb{P}_x(X_{Z_{t'}} \in A) > \gamma,$$
and hence $\pi(A) > \gamma$, or equivalently
$$\mathbb{P}_x(X_{Z_{t'}} \in A) < \pi(A) - \gamma.$$
We now define a set $B$ as follows:
$$B = \{y : \mathbb{P}_y(X_{Z_{t'}} \in A) \ge \pi(A) - \alpha\}.$$
Since $\pi$ is a stationary distribution, we have that
$$\pi(A) = \sum_{y \in B} \mathbb{P}_y(X_{Z_{t'}} \in A)\,\pi(y) + \sum_{y \notin B} \mathbb{P}_y(X_{Z_{t'}} \in A)\,\pi(y) \le \pi(B) + \pi(A) - \alpha,$$
and hence rearranging, we get that $\pi(B) \ge \alpha$. We will now show that for a constant $\theta$ to be determined later we have that
$$\max_z \mathbb{E}_z[\tau_B] > \theta t'. \qquad (5.1)$$
We will show that, for a $\theta$ to be specified later, assuming
$$\max_z \mathbb{E}_z[\tau_B] \le \theta t' \qquad (5.2)$$

will yield a contradiction. By Markov's inequality, (5.2) implies that
$$\mathbb{P}_x(\tau_B \ge 2\theta t') \le \frac{1}{2}. \qquad (5.3)$$
For any positive integer $M$ we have that
$$\mathbb{P}_x(\tau_B \ge 2M\theta t') = \mathbb{P}_x(\tau_B \ge 2M\theta t' \mid \tau_B \ge 2(M-1)\theta t')\,\mathbb{P}_x(\tau_B \ge 2(M-1)\theta t'),$$
and hence iterating we get that
$$\mathbb{P}_x(\tau_B \ge 2M\theta t') \le \frac{1}{2^M}. \qquad (5.4)$$
By the memoryless property of the Geometric distribution and the strong Markov property applied at the stopping time $\tau_B$, we get that
$$\mathbb{P}_x(X_{Z_{t'}} \in A) \ge \mathbb{P}_x(\tau_B \le 2\theta M t',\, Z_{t'} \ge \tau_B,\, X_{Z_{t'}} \in A) \ge \mathbb{P}_x(\tau_B \le 2\theta M t',\, Z_{t'} \ge \tau_B)\,\mathbb{P}_x(X_{Z_{t'}} \in A \mid \tau_B \le 2\theta M t',\, Z_{t'} \ge \tau_B)$$
$$\ge \mathbb{P}_x(\tau_B \le 2\theta M t')\,\mathbb{P}(Z_{t'} \ge 2\theta M t')\,\inf_{w \in B} \mathbb{P}_w(X_{Z_{t'}} \in A).$$
But since $Z_{t'}$ is a Geometric random variable, we obtain that $\mathbb{P}(Z_{t'} \ge 2\theta M t') = \big(1 - \frac{1}{t'}\big)^{2\theta M t'}$, which for $2\theta M t' > 1$ gives that
$$\mathbb{P}(Z_{t'} \ge 2\theta M t') \ge 1 - 2\theta M. \qquad (5.5)$$
((5.2) implies that $\theta t' \ge 1$, so certainly $2\theta M t' > 1$.) We now set $\theta = \frac{1}{2M 2^M}$. Using (5.4) and (5.5) we deduce that
$$\mathbb{P}_x(X_{Z_{t'}} \in A) \ge \big(1 - 2^{-M}\big)^2\big(\pi(A) - \alpha\big).$$
Since $\gamma > \alpha$, we can take $M$ large enough so that $(1 - 2^{-M})^2(\pi(A) - \alpha) > \pi(A) - \gamma$, and we get a contradiction to (5.2). Thus (5.1) holds; since $\pi(B) \ge \alpha$, this completes the proof.

6 Examples and Questions

We start this section with examples that show that the reversibility assumption is essential.

Example 6.1. Biased random walk on the cycle. Let $\mathbb{Z}_n = \{1, 2, \ldots, n\}$ denote the $n$-cycle and let $P(i, i+1) = \frac{2}{3}$ for all $1 \le i < n$ and $P(n, 1) = \frac{2}{3}$. Also $P(i, i-1) = \frac{1}{3}$ for all $1 < i \le n$, and $P(1, n) = \frac{1}{3}$. Then it is easy to see that the mixing time of the lazy random walk is of order $n^2$, while the maximum hitting time of large sets is of order $n$. Also, in this case $t_{\mathrm{stop}} = O(n)$, since for any starting point, the stopping time that chooses a random target according to the stationary distribution and waits until it hits it is stationary and has mean of order $n$. This example demonstrates that for non-reversible chains, $t_H$ and $t_{\mathrm{stop}}$ can be much smaller than $t_L$.
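A rough numerical illustration of this example (with the big set replaced, for simplicity, by one fixed arc of roughly $n/3$ states, so the hitting-time column is only a lower bound on $t_H$; the names and parameters are arbitrary): the lazy mixing time grows roughly like $n^2$, while the hitting time of the arc grows like $n$.

import numpy as np

def tv(mu, nu):
    return 0.5 * np.abs(mu - nu).sum()

def lazy_mixing_time(P, pi, eps=0.25):
    L = (P + np.eye(len(P))) / 2
    Pt, t = np.eye(len(P)), 0
    while max(tv(Pt[x], pi) for x in range(len(P))) > eps:
        Pt, t = Pt @ L, t + 1
    return t

def max_hitting_time(P, A):
    # max_x E_x[tau_A], via the linear system on the states outside A
    n = len(P)
    rest = [x for x in range(n) if x not in A]
    Q = P[np.ix_(rest, rest)]
    return np.linalg.solve(np.eye(len(rest)) - Q, np.ones(len(rest))).max()

for n in (10, 20, 40):
    P = np.zeros((n, n))
    for i in range(n):
        P[i, (i + 1) % n] = 2 / 3
        P[i, (i - 1) % n] = 1 / 3            # biased walk on the n-cycle
    pi = np.ones(n) / n
    A = set(range(n // 3))                   # an arc with pi(A) roughly 1/3, standing in for a big set
    print(n, lazy_mixing_time(P, pi), round(max_hitting_time(P, A), 1))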

Example 6.2. The greasy ladder. Let $S = \{1, \ldots, n\}$ and $P(i, i+1) = \frac{1}{2} = 1 - P(i, 1)$ for $i = 1, \ldots, n-1$ and $P(n, 1) = 1$. Then it is easy to check that
$$\pi(i) = \frac{2^{-i}}{1 - 2^{-n}}$$
is the stationary distribution and that $t_L$ and $t_H$ are both of order $1$. This example was presented in Aldous [2], who wrote that $t_{\mathrm{stop}}$ is of order $n$. We give an easy proof here. Essentially the same example is discussed by Lovász and Winkler [6] under the name "the winning streak".

Let $\tau_\pi$ be the first hitting time of a stationary target, i.e. a target chosen according to the stationary distribution. Then starting from $1$, this stopping time achieves the minimum in the definition of $t_{\mathrm{stop}}$, i.e.
$$\mathbb{E}_1[\tau_\pi] = \min\{\mathbb{E}_1[\Lambda] : \Lambda \text{ is a stopping time s.t. } \mathbb{P}_1(X_\Lambda \in \cdot) = \pi(\cdot)\}.$$
Indeed, starting from $1$ the stopping time $\tau_\pi$ has a halting state, which is $n$, and hence from Theorem 3.2 we get the mean optimality.

By the random target lemma ([1] and [4]) we get that $\mathbb{E}_i[\tau_\pi] = \mathbb{E}_1[\tau_\pi]$ for all $i \le n$. Since for all $i$ we have that
$$\min\{\mathbb{E}_i[\Lambda] : \Lambda \text{ is a stopping time s.t. } \mathbb{P}_i(X_\Lambda \in \cdot) = \pi(\cdot)\} \le \mathbb{E}_i[\tau_\pi],$$
it follows that $t_{\mathrm{stop}} \le \mathbb{E}_1[\tau_\pi]$. But also $\mathbb{E}_1[\tau_\pi] \le t_{\mathrm{stop}}$, and hence $t_{\mathrm{stop}} = \mathbb{E}_1[\tau_\pi]$.

By straightforward calculations, we get that $\mathbb{E}_1[T_i] = 2^i - 2$ for all $i \ge 2$, and hence
$$t_{\mathrm{stop}} = \mathbb{E}_1[\tau_\pi] = \sum_{i=2}^{n} \frac{2^{-i}}{1 - 2^{-n}}\,(2^i - 2) = \frac{n - 2 + 2^{1-n}}{1 - 2^{-n}},$$
which is of order $n$. This example shows that for a non-reversible chain $t_{\mathrm{stop}}$ can be much bigger than $t_L$ or $t_H$.

Question 6.3. The equivalence $t_H(\alpha) \asymp t_L$ in Theorem 1.1 is not valid for $\alpha > \frac{1}{2}$, since for two $n$-vertex complete graphs with a single edge connecting them, $t_L$ is of order $n^2$ and $t_H(\alpha)$ is at most $n$ for any $\alpha > 1/2$. Does the equivalence $t_H(1/2) \asymp t_L$ hold for all reversible chains?
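To close the section, the quantities in Example 6.2 are easy to check numerically. The sketch below (an illustration; state $1$ of the text is index $0$ here, and the helper names are arbitrary) computes $t_L$ and $t_{\mathrm{stop}} = \mathbb{E}_1[\tau_\pi] = \sum_i \pi(i)\,\mathbb{E}_1[T_i]$ for the greasy ladder and exhibits the linear growth in $n$:

import numpy as np

def lazy_mixing_time(P, pi, eps=0.25):
    L = (P + np.eye(len(P))) / 2
    Pt, t = np.eye(len(P)), 0
    while max(0.5 * np.abs(Pt[x] - pi).sum() for x in range(len(P))) > eps:
        Pt, t = Pt @ L, t + 1
    return t

def hitting_time_from_start(P, target):
    # E_0[T_target] via the linear system on the states other than the target
    n = len(P)
    if target == 0:
        return 0.0
    rest = [x for x in range(n) if x != target]
    Q = P[np.ix_(rest, rest)]
    h = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
    return h[0]                                   # rest[0] is state 0, the starting state

for n in (5, 10, 20):
    P = np.zeros((n, n))
    for i in range(n - 1):
        P[i, i + 1] = 0.5
        P[i, 0] = 0.5
    P[n - 1, 0] = 1.0                             # the greasy ladder on n states
    pi = np.array([2.0 ** (-(i + 1)) for i in range(n)]) / (1 - 2.0 ** (-n))
    t_stop = sum(pi[i] * hitting_time_from_start(P, i) for i in range(n))
    print(n, lazy_mixing_time(P, pi), round(t_stop, 3))   # t_L stays small, t_stop is about n - 2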

References

[1] David Aldous and J. Fill. Reversible Markov Chains and Random Walks on Graphs. In preparation, http://www.stat.berkeley.edu/~aldous/rwg/book.html.

[2] David J. Aldous. Some inequalities for reversible Markov chains. J. London Math. Soc. (2), 25(3):564-576, 1982.

[3] J. R. Baxter and R. V. Chacon. Stopping times for recurrent Markov processes. Illinois J. Math., 20(3):467-475, 1976.

[4] David A. Levin, Yuval Peres, and Elizabeth L. Wilmer. Markov chains and mixing times. American Mathematical Society, Providence, RI, 2009. With a chapter by James G. Propp and David B. Wilson.

[5] László Lovász and Peter Winkler. Efficient stopping rules for Markov chains. In Proceedings of the twenty-seventh annual ACM symposium on Theory of Computing, STOC '95, pages 76-82, New York, NY, USA, 1995. ACM.

[6] László Lovász and Peter Winkler. Mixing times. In Microsurveys in discrete probability (Princeton, NJ, 1997), volume 41 of DIMACS Ser. Discrete Math. Theoret. Comput. Sci., pages 85-133. Amer. Math. Soc., Providence, RI, 1998.

[7] Y. Peres and P. Sousi. Mixing times are hitting times of large sets. Preprint, 2011.