
Lectures 16-17: Random walks on graphs
CSE 525, Winter 2015
Instructor: James R. Lee

1 Random walks

Let G = (V, E) be an undirected graph. The random walk on G is a Markov chain on V that, at each time step, moves to a uniformly random neighbor of the current vertex. For x ∈ V, use d_x to denote the degree of vertex x. Then, more formally, the random walk on G is the following process {X_t}. We start at some node X_0 = v_0 ∈ V. Then if X_t = v, we put X_{t+1} = w with probability 1/d_v for every neighbor w of v.

1.1 Hitting times and cover times

One can study many natural properties of the random walk. For two vertices u, v ∈ V, we define the hitting time H_{uv} from u to v as the expected number of steps for the random walk to hit v when started at u. Formally, define the random variable T = min{t ≥ 0 : X_t = v}. Then H_{uv} = E[T | X_0 = u].

The cover time of G starting from u is the quantity cov_u(G), the expected number of steps needed to visit every vertex of G starting from u. Again, we can define this formally: let T = min{t ≥ 0 : {X_0, X_1, ..., X_t} = V}. Then cov_u(G) = E[T | X_0 = u]. Finally, we define the cover time of G as cov(G) = max_{u ∈ V} cov_u(G).

1.2 Random walks and electrical networks

It turns out that random walks (on undirected graphs) are very closely related to electrical networks. We recall the basics of such networks now. Again, we let G = (V, E) be a connected, undirected graph, which we think of as an electrical circuit with unit resistors on every edge. If we create a potential difference at two vertices (by, say, connecting the positive and negative terminals of a battery), then we induce an electrical flow in the graph. Between every two nodes u, v there is a potential difference φ_{u,v}. Electrical networks satisfy the following three laws:

(K1) The flow into every node equals the flow out.

(K2) The sum of the potential differences around any cycle is equal to zero.

(Ohm) The current flowing from u to v on an edge e = {u, v} is precisely φ_{u,v}/r_{uv}, where r_{uv} is the resistance of {u, v}. [In other words, V = IR.]

In our setting, all resistances are equal to one, but one can define things more generally. [If we put conductances c_{uv} on the edges {u, v} ∈ E, then the corresponding random walk would operate as follows: if X_t = u, then X_{t+1} = v with probability c_{uv} / Σ_{w : {u,w} ∈ E} c_{uw} for every neighbor v of u. In that case, we would have r_{uv} = 1/c_{uv}.]

Remark 1.1. In fact, (K2) is related to a somewhat more general fact: the potential differences arise naturally as differences of a potential. There exists a map ϕ : V → ℝ such that φ_{u,v} = ϕ(u) − ϕ(v). If G is connected, then the potential ϕ is uniquely defined up to a translation. To define the potential ϕ, put ϕ(v_0) = 0 for some fixed node v_0. Now for any v ∈ V and any path γ = (v_0, v_1, v_2, ..., v_k = v) in G, we can define ϕ(v) = φ_{v_1,v_0} + φ_{v_2,v_1} + ⋯ + φ_{v_k,v_{k−1}}. This is well-defined (independent of the choice of path γ) since, by (K2), the potential differences around every cycle sum to zero.
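The walk just defined is easy to simulate directly. The following sketch (pure Python; the adjacency-dict representation and all function names are my own, not from the notes) implements both the unit-resistor walk and the weighted variant with conductances c_{uv}, and estimates a hitting time by Monte Carlo:

```python
import random

def random_walk_step(G, u, c=None, rng=random):
    """One step of the random walk on G (an adjacency dict: vertex -> neighbor
    list).  With no conductances, move to a uniformly random neighbor; with a
    conductance dict c[(u, w)], move to w with probability proportional to
    c[(u, w)], as in the weighted walk described above."""
    nbrs = G[u]
    if c is None:
        return rng.choice(nbrs)
    return rng.choices(nbrs, weights=[c[(u, w)] for w in nbrs], k=1)[0]

def estimate_hitting_time(G, u, v, trials, rng):
    """Monte Carlo estimate of H_uv: average number of steps to reach v from u."""
    total = 0
    for _ in range(trials):
        x, steps = u, 0
        while x != v:
            x = random_walk_step(G, x, rng=rng)
            steps += 1
        total += steps
    return total / trials

# Path on {0, 1, ..., n}; the examples later in the notes show H_{0n} = n^2.
n = 8
path = {i: [j for j in (i - 1, i + 1) if 0 <= j <= n] for i in range(n + 1)}
est = estimate_hitting_time(path, 0, n, trials=2000, rng=random.Random(0))
print(est)  # close to n^2 = 64
```

With a conductance dict, the same `random_walk_step` realizes the weighted walk from the bracketed remark above.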

Finally, we make an important definition. The effective resistance R_eff(u, v) between two nodes u, v ∈ V is defined to be the potential difference that must be created between u and v to induce a current of one unit to flow between them. If we imagine the entire graph G acting as a single wire between u and v, then R_eff(u, v) is the resistance of that single wire (recall Ohm's law). We will now prove the following.

Theorem 1.2. If G = (V, E) has m edges, then for any two nodes u, v ∈ V, we have H_{uv} + H_{vu} = 2m R_eff(u, v).

In order to prove this, we will set up four electrical networks corresponding to the graph G. We label these networks (A)-(D).

(A) We inject d_x units of flow at every vertex x ∈ V, and extract Σ_{x ∈ V} d_x = 2m units of flow at vertex v.

(B) We inject d_x units of flow at every vertex x ∈ V, and extract 2m units of flow at vertex u.

(C) We inject 2m units of flow at vertex u and extract d_x units of flow at every vertex x ∈ V.

(D) We inject 2m units of flow at vertex u and extract 2m units of flow at vertex v.

We will use the notation φ^{(A)}_{x,y}, φ^{(B)}_{x,y}, etc. to denote the potential differences in each of these networks.

Lemma 1.3. For any vertex u ∈ V, we have H_{uv} = φ^{(A)}_{u,v}.

Proof. Calculate: for u ≠ v,

  d_u = Σ_{w : {u,w} ∈ E} φ^{(A)}_{u,w} = Σ_{w : {u,w} ∈ E} (φ^{(A)}_{u,v} − φ^{(A)}_{w,v}) = d_u φ^{(A)}_{u,v} − Σ_{w : {u,w} ∈ E} φ^{(A)}_{w,v},

where we have first used (K1) (the d_u units injected at u must flow out along its edges), then (K2) (in the form of Remark 1.1). Rearranging yields

  φ^{(A)}_{u,v} = 1 + (1/d_u) Σ_{w : {u,w} ∈ E} φ^{(A)}_{w,v}.

But now the hitting times satisfy the same set of linear equations: for u ≠ v,

  H_{uv} = 1 + (1/d_u) Σ_{w : {u,w} ∈ E} H_{wv}.

We conclude that H_{uv} = φ^{(A)}_{u,v} as long as this system of linear equations has a unique solution. But consider some other solution H'_{uv} and define f(u) = H_{uv} − H'_{uv}. Plugging this into the preceding family of equations yields f(u) = (1/d_u) Σ_{w : {u,w} ∈ E} f(w). Such a map f is called harmonic, and it is a well-known fact that every harmonic function on a finite, connected graph is constant. Since f(v) = H_{vv} − H'_{vv} = 0, this implies that f ≡ 0, and hence the family of equations has a unique solution, completing the proof.
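The linear system in the proof of Lemma 1.3 can be solved exactly for small graphs. The sketch below (pure Python with exact rational arithmetic; the function and variable names are my own, not from the notes) sets up one equation per vertex u ≠ v and solves by Gaussian elimination:

```python
from fractions import Fraction

def hitting_times_to(G, v):
    """Solve the system from Lemma 1.3 exactly:
       H[v] = 0 and, for u != v,  H[u] = 1 + (1/d_u) * sum_{w ~ u} H[w].
    G is an adjacency dict.  Returns {u: H_uv} as exact fractions.
    (Plain Gaussian elimination; fine for small graphs.)"""
    verts = [u for u in G if u != v]
    idx = {u: i for i, u in enumerate(verts)}
    m = len(verts)
    # Row for u encodes  H[u] - (1/d_u) * sum_{w ~ u, w != v} H[w] = 1
    # (neighbors equal to v drop out since H[v] = 0).
    A = [[Fraction(0)] * (m + 1) for _ in range(m)]
    for u in verts:
        i = idx[u]
        A[i][i] = Fraction(1)
        for w in G[u]:
            if w != v:
                A[i][idx[w]] -= Fraction(1, len(G[u]))
        A[i][m] = Fraction(1)
    # Forward elimination with pivot search, then back-substitution.
    for col in range(m):
        piv = next(r for r in range(col, m) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for k in range(col, m + 1):
                A[r][k] -= f * A[col][k]
    H = {v: Fraction(0)}
    for i in reversed(range(m)):
        s = A[i][m] - sum(A[i][k] * H[verts[k]] for k in range(i + 1, m))
        H[verts[i]] = s / A[i][i]
    return H

n = 6
path = {i: [j for j in (i - 1, i + 1) if 0 <= j <= n] for i in range(n + 1)}
print(hitting_times_to(path, n)[0])  # H_{0n} = n^2 = 36 exactly
```

Uniqueness of the solution, argued via harmonic functions in the proof, is what guarantees that this solve returns the hitting times and not some other vector.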

Remark 1.4. To prove that every harmonic function on a finite, connected graph is constant, we can look at the corresponding Laplace operator: (Lf)(u) = f(u) − (1/d_u) Σ_{w : {u,w} ∈ E} f(w). A function f is harmonic if and only if Lf = 0. But we have already seen that, on a connected graph, the Laplacian has rank n − 1 and ker(L) = span(1, ..., 1), i.e. the only harmonic functions on our graph are multiples of the constant function.

Define now the commute time between u and v as the quantity C_{uv} = H_{uv} + H_{vu}. We restate and prove Theorem 1.2.

Theorem 1.5. In any connected graph with m edges, we have C_{uv} = 2m R_eff(u, v) for every pair of vertices u, v.

Proof. From Lemma 1.3, we have H_{uv} = φ^{(A)}_{u,v}. By symmetry, we have H_{vu} = φ^{(B)}_{v,u}. Since network (C) is the reverse of network (B), this yields H_{vu} = φ^{(C)}_{u,v}. Finally, since network (D) is the sum of networks (A) and (C), by linearity we have

  φ^{(D)}_{u,v} = φ^{(A)}_{u,v} + φ^{(C)}_{u,v} = H_{uv} + H_{vu} = C_{uv}.

Finally, note that φ^{(D)}_{u,v} = 2m R_eff(u, v) by definition, since network (D) has exactly 2m units of current flowing from u to v. This yields the claim of the theorem.

1.3 Cover times

We can now use Theorem 1.5 to give a universal upper bound on the cover time of any graph.

Theorem 1.6. For any connected graph G = (V, E), we have cov(G) ≤ 2|E|(|V| − 1).

Proof. Fix a spanning tree T of G. Then we have

  cov(G) ≤ Σ_{{x,y} ∈ E(T)} C_{xy}.

The right-hand side can be interpreted as a very particular way of covering the graph G: start at some node x_0 and walk around the edges of the spanning tree in order x_0, x_1, x_2, ..., x_{2(n−1)} = x_0, traversing each tree edge once in each direction. If we require the walk to first go from x_0 to x_1, then from x_1 to x_2, etc., we get the sum

  Σ_{i=0}^{2(n−1)−1} H_{x_i x_{i+1}} = Σ_{{x,y} ∈ E(T)} C_{xy}.

This is one particular way to visit every node of G, so it gives an upper bound on the cover time. Finally, we note that if {x, y} is an edge of the graph, then by Theorem 1.5, we have C_{xy} = 2|E| R_eff(x, y) ≤ 2|E|. Here we use the fact that for every edge {x, y} of a graph, the effective resistance is at most the resistance, which is at most one. Since T has |V| − 1 edges, this completes the proof.

Remark 1.7. The last stated fact is a special case of the Rayleigh monotonicity principle. This states that adding edges to the graph (or, more generally, decreasing the resistance of any edge) cannot increase any effective resistance. In the other direction, removing edges from the graph (or, more generally, increasing the resistance of any edge) cannot decrease any effective resistance. A similar fact is false for hitting times and commute times, as we will see in the next few examples.
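Theorem 1.5 is easy to sanity-check by simulation. In the sketch below (pure Python; the helper names are my own), we estimate C_{uv} for adjacent vertices of an n-cycle: the two arcs between them act as resistances 1 and n−1 in parallel, so R_eff = (n−1)/n and the theorem predicts C_{uv} = 2n · (n−1)/n = 2(n−1).

```python
import random

def estimate_commute_time(G, u, v, trials, rng):
    """Monte Carlo estimate of C_uv = H_uv + H_vu for the unit-resistor walk."""
    def hit(a, b):
        x, steps = a, 0
        while x != b:
            x = rng.choice(G[x])
            steps += 1
        return steps
    return sum(hit(u, v) + hit(v, u) for _ in range(trials)) / trials

# n-cycle: adjacent vertices are joined by arcs of resistance 1 and n-1 in
# parallel, so R_eff = (n-1)/n and Theorem 1.5 gives C_uv = 2(n-1).
n = 8
cycle = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
est = estimate_commute_time(cycle, 0, 1, trials=4000, rng=random.Random(1))
print(est)  # close to 2(n-1) = 14
```

Note how the answer depends on the whole graph through m = |E|, not just on the local structure near u and v.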

Examples.

1. The path. Consider first G to be the path on vertices {0, 1, ..., n}. Then H_{0n} + H_{n0} = C_{0n} = 2n R_eff(0, n) = 2n², since the path has n edges and R_eff(0, n) = n. Since H_{0n} = H_{n0} by symmetry, we conclude that H_{0n} = n². Note that Theorem 1.6 implies that cov(G) ≤ 2n², and clearly cov(G) ≥ H_{0n} = n², so the upper bound is off by at most a factor of 2.

2. The lollipop. Consider next the lollipop graph, which is a path of length n/2 from u to v with an (n/2)-clique attached to v. We have H_{uv} + H_{vu} = C_{uv} = Θ(n²) R_eff(u, v) = Θ(n³), since the graph has Θ(n²) edges and R_eff(u, v) = n/2. On the other hand, we have already seen that H_{uv} = Θ(n²). We conclude that H_{vu} = Θ(n³), hence cov(G) = Ω(n³). Again, the bound of Theorem 1.6 is cov(G) = O(n³), so it is tight up to a constant factor here as well.

3. The complete graph. Finally, consider the complete graph G on n nodes. In this case, Theorem 1.6 gives cov(G) = O(n³), which is way off from the actual value cov(G) = Θ(n log n) (since this is just the coupon collector problem in flimsy disguise).

1.4 Matthews bound

The last example shows that sometimes Theorem 1.6 doesn't give such a great upper bound. Fortunately, a relatively simple bound gets us within an O(log n) factor of the cover time.

Theorem 1.8. If G = (V, E) is a connected graph and R = max_{x,y ∈ V} R_eff(x, y) is the maximum effective resistance in G, then

  |E| R ≤ cov(G) ≤ O(log n) |E| R.

Proof. One direction is easy:

  cov(G) ≥ max_{u,v} H_{uv} ≥ (1/2) max_{u,v} C_{uv} = (1/2) max_{u,v} 2|E| R_eff(u, v) = |E| R.

For the other direction, we will examine a random walk of length 2c|E|R log n, divided into log n epochs of length 2c|E|R. Note that for any vertex v and any epoch i, we have

  Pr[v unvisited in epoch i] ≤ 1/c.

This is because no matter what vertex is the first of epoch i, we know that the hitting time to v is at most max_u H_{uv} ≤ max_u C_{uv} ≤ 2|E|R. Now Markov's inequality tells us that the probability it takes more than 2c|E|R steps to hit v is at most 1/c.

Therefore the probability we don't visit v in any epoch is at most c^{−log n} = n^{−log c}, and by a union bound, the probability that there is some vertex left unvisited after all the epochs is at most n^{1−log c}. We conclude that

  cov(G) ≤ 2c|E|R log n + n^{1−log c} · 2n³,

where, in the unlikely event that some vertex remains unvisited, we have used the trivial bound on the cover time from Theorem 1.6. Choosing c to be a large enough constant makes the second term negligible, yielding cov(G) = O(|E| R log n), as desired.

One can make some improvements to this soft proof, yielding the following stronger bounds.

Theorem 1.9. For any connected graph G = (V, E), the following holds. Let t_hit denote the maximum hitting time in G. Then

  cov(G) ≤ t_hit (1 + 1/2 + ⋯ + 1/n).

Moreover, if we define for any subset A ⊆ V the quantity t^A_min = min_{u,v ∈ A, u ≠ v} H_{uv}, then

  cov(G) ≥ max_{A ⊆ V} t^A_min (1 + 1/2 + ⋯ + 1/(|A| − 1)).    (1.1)

For the proofs, consult Chapter 11 of the Levin-Peres-Wilmer book: http://pages.uoregon.edu/dlevin/markov/markovmixing.pdf

Kahn, Kim, Lovász, and Vu showed that the best lower bound in (1.1) is within an O(log log n)² factor of cov(G), improving over the O(log n)-approximation of Theorem 1.8. In a paper with Jian Ding and Yuval Peres, we showed that one can keep going and compute an O(1) approximation using a multi-scale generalization of the bound (1.1) based on Talagrand's majorizing measures theory.
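As a concrete check of the upper bound in Theorem 1.9, consider the complete graph K_n, where everything has a closed form: the hitting time between any two distinct vertices is geometric with success probability 1/(n−1), so t_hit = n − 1, while the exact cover time is the coupon-collector value (n−1)(1 + 1/2 + ⋯ + 1/(n−1)). The short sketch below (plain Python; variable names are my own) compares the two:

```python
def harmonic(k):
    """H_k = 1 + 1/2 + ... + 1/k."""
    return sum(1.0 / i for i in range(1, k + 1))

n = 100
t_hit = n - 1                              # max hitting time in K_n (geometric mean)
matthews_upper = t_hit * harmonic(n)       # upper bound from Theorem 1.9
exact_cover = (n - 1) * harmonic(n - 1)    # coupon-collector cover time of K_n
print(matthews_upper / exact_cover)        # ratio just barely above 1
```

So on the complete graph, where Theorem 1.6 was off by a factor of n²/log n, the Matthews bound is tight up to a vanishing relative error.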