Markov chain methods for small-set expansion


Ryan O'Donnell    David Witmer

November 4, 2013

Abstract

Consider a finite irreducible Markov chain with invariant distribution π. We use the inner product induced by π and the associated heat operator to simplify and generalize some results related to graph partitioning and the small-set expansion problem. For example, Steurer showed a tight connection between the number of small eigenvalues of a graph's Laplacian and the expansion of small sets in that graph. We give a simplified proof which generalizes to the nonregular, directed case. This result implies an approximation algorithm for an analytic version of the Small-Set Expansion Problem, which, in turn, immediately gives an approximation algorithm for Small-Set Expansion. We also give a simpler proof of a lower bound on the probability that a random walk stays within a set; this result was used in some recent works on finding small sparse cuts.

1 Overview

Graph partitioning using spectral methods has recently been the subject of intensive study. Many results in this area have been proven using discrete-time random walks. However, these techniques work best when applied to regular graphs with nonnegative eigenvalues. As a result, it has become standard to move to a lazy version of a graph by adding self-loops, i.e., using (I + K)/2 instead of K as the adjacency matrix. Much work has also focused on regular graphs only, or has considered the normalized Laplacian D^{-1/2} L D^{-1/2}. In this work we show that these problems can be avoided using Markov chain techniques, leading to simpler and more general proofs of results related to spectral graph partitioning. Rather than using discrete-time random walks, we consider continuous-time random walks and the associated heat operator. Smoothing out the random walk makes the eigenvalues nonnegative, avoiding the need to move to lazy graphs and allowing our techniques to be directly applied to the original instance.
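As a quick numerical illustration of this point (a sketch of ours, not part of the paper's formal development; the 4-cycle example and all variable names are ad hoc), the following snippet contrasts the spectrum of the walk operator K with that of the heat operator H_t = exp(−tL):

```python
import numpy as np

# Random walk on the 4-cycle: K has eigenvalue -1, so powers of K do not
# converge, and one would normally pass to the lazy walk (I + K)/2.
n = 4
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0
K = A / A.sum(axis=1, keepdims=True)
L = np.eye(n) - K                      # Laplacian L = id - K

# The heat operator H_t = exp(-tL), computed by diagonalizing L
# (K is symmetric for this regular graph, so eigh applies).
w, U = np.linalg.eigh(L)
t = 1.0
H = U @ np.diag(np.exp(-t * w)) @ U.T

print(np.linalg.eigvalsh(K).min())     # -1.0: the problematic eigenvalue
print(np.linalg.eigvalsh(H).min())     # positive: no laziness needed
```

On the 4-cycle, K has eigenvalue −1, which is exactly what laziness is usually introduced to fix; exponentiating L instead yields a strictly positive spectrum without modifying the instance.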
In addition, we use the inner product defined with respect to the invariant distribution π of the Markov chain representing a random walk on the graph. We are then able to use our methods directly on nonregular graphs.

We will now give a brief description of some previous results in spectral graph partitioning. Let G = (V, E) be a graph on n vertices. Let K be its (normalized) adjacency matrix, let L be its (normalized) Laplacian matrix (namely I − K), and let 0 = λ_1 ≤ λ_2 ≤ ⋯ ≤ λ_n be the eigenvalues of L. The conductance Φ[S] of a set S ⊆ V is defined to be

    Φ[S] = E(S, S̄) / Σ_{v∈S} deg(v).

The conductance profile of G, denoted Φ_G, was defined by Lovász and Kannan [LK99] as Φ_G(r) = min{Φ[S] : S ⊆ V, µ[S] ≤ r}.

Department of Computer Science, Carnegie Mellon University. Supported by NSF grants CCF-074750 and CCF-095893, and by a Sloan fellowship. Department of Computer Science, Carnegie Mellon University.

Cheeger's inequality for graphs [AM85, Alo86, SJ89] states that V can be partitioned into nonempty S_1, S_2 such that Φ[S_i] ≤ O(√λ_2). Very recently, Louis, Raghavendra, Tetali, and Vempala [LRTV12] and Lee, Oveis Gharan, and Trevisan [LOT12] have given a higher-order Cheeger inequality involving higher eigenvalues. Specifically, the two results show that for any k, one can partition V into Ω(k) disjoint nonempty sets S_i, each of which has conductance Φ[S_i] ≤ O(√(λ_k log k)). Since one of these parts has volume µ[S_i] := |S_i|/|V| ≤ O(1/k), we may conclude that

    Φ_G(const/k) ≤ O(√(λ_k log k)).    (1)

As noted in these works, for a fixed k the extra factor of Θ(√(log k)) in (1) is necessary; indeed this is true [LOT12] for all k ≤ log n. However, somewhat intriguingly, the extra factor becomes unnecessary once k is as large as n^{Ω(1)}, at least if one is willing to compromise somewhat on the volume parameter. Specifically, Arora, Barak, and Steurer [ABS10] showed for regular graphs that

    Φ_G(O(k^{−1/100})) ≤ O(√(λ_k log_k n)).    (2)

In his thesis, Steurer [Ste10] improved this bound to

    Φ_G(k^{−1+1/A}) ≤ O(√(A λ_k log_k n)) for any (sufficiently large) constant A.    (3)

Using Markov chain methods, we give what we feel is a much simpler proof of this result, which also works for the nonregular (and also directed) case. Our result also implies an approximation algorithm for an analytic version of the Small-Set Expansion problem. This, in turn, immediately gives an approximation algorithm for Small-Set Expansion by a standard version of Cheeger's inequality. In somewhat related recent work, Oveis Gharan and Trevisan [OT12] proved a weaker version of this bound, with k^{−1/3} in place of k^{−1+1/A}.
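To keep the definitions above concrete, here is a brute-force computation of the conductance profile on a toy graph (the two-triangles-with-a-bridge graph is our own ad hoc example, not from the paper):

```python
import itertools
import numpy as np

# Two triangles {0,1,2} and {3,4,5} joined by the bridge edge (2,3).
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
n = 6
deg = np.zeros(n)
for u, v in edges:
    deg[u] += 1.0
    deg[v] += 1.0

def phi(S):
    """Phi[S] = E(S, S-bar) / (sum of degrees in S)."""
    cut = sum(1 for u, v in edges if (u in S) != (v in S))
    return cut / deg[list(S)].sum()

def mu(S):
    """Volume fraction of S (stationary measure of the random walk)."""
    return deg[list(S)].sum() / deg.sum()

def conductance_profile(r):
    """Phi_G(r) = min{Phi[S] : mu[S] <= r}, by exhaustive search."""
    return min(phi(set(S))
               for k in range(1, n)
               for S in itertools.combinations(range(n), k)
               if mu(set(S)) <= r)

print(conductance_profile(0.5))   # 1/7: cut the bridge; one side has volume 7
```

The minimizer at r = 1/2 is one triangle: the bridge contributes one cut edge against volume 7, giving Φ_G(1/2) = 1/7.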
The main point of that work, along with the independent work of Kwok and Lau [KL12], was to give a polynomial-time algorithm for the Small-Set Expansion problem in an unweighted (nonregular) graph G = (V, E) with the following guarantee: if there exists S ⊆ V with µ[S] ≤ δ and Φ[S] ≤ ɛ, the algorithm finds T ⊆ V with µ[T] ≤ O(δ · (δ|E|)^α) and Φ[T] ≤ O(√(ɛ/α)) (for any small α > 0). To achieve this, both papers prove a theorem stating that for any S ⊆ V and integer t > 0, the probability that a t-step random walk starting from a random x ∈ S stays entirely within S is at least (1 − Φ[S]/2)^t. We also give a simpler proof of this result for continuous-time random walks.

1.1 Our results

1.1.1 Bounding the spectral profile

In this work we provide a different, simple proof of Steurer's improved result using continuous-time random walks instead of lazy discrete-time random walks:

Theorem 1.1. In any strongly connected graph G, Φ_G(16k^{−1+1/A}) ≤ 2√(A λ_k log_k n) for any real A ≥ 3. For example, Φ_G(k^{−.999}) ≤ O(√(λ_k log_k n)) for k sufficiently large.

See Section 2 for the appropriate definitions of Φ_G, L, λ_i, etc. in the context of general graphs G. In fact, our result is stronger than this in that we are able to directly bound the spectral profile of G. (The same is true of the result in Arora–Barak–Steurer [ABS10] and in Steurer's

thesis [Ste10].) Recall that the spectral profile Λ_G of G, introduced by Goel, Montenegro, and Tetali [GMT06], is defined by

    Λ_G(r) = min { ⟨f, Lf⟩/⟨f, f⟩ : nonzero f : V → R^{≥0} with π(supp(f)) ≤ r }.

Goel, Montenegro, and Tetali showed that the Cheeger rounding analysis yields the following relationship with the conductance profile: Φ_G(r) ≤ √(2 Λ_G(r)) for all r. As in [ABS10] we work with a slightly different definition of spectral profile, for technical convenience:

    Λ′_G(r) = min{Φ[f] : µ[f] ≤ r},    where    Φ[f] = ⟨f, Lf⟩/⟨f, f⟩,    µ[f] = ‖f‖_1²/‖f‖_2²

are appropriate generalizations of boundary size and volume to functions f : V → R. (These definitions agree with our earlier ones when f is the 0-1 indicator of a set S ⊆ V.) As noted in [ABS10, Lemma A.1] we have Λ_G(4r) ≤ 2 Λ′_G(r) for all r. (A similar reverse connection also holds.) Thus:

Theorem 1.2. (Essentially from [GMT06].) Φ_G(4r) ≤ 2√(Λ′_G(r)) for all r.

We use this connection to obtain Theorem 1.1; our main theorem is in fact:

Theorem 1.3. In any strongly connected graph G, Λ′_G(4k^{−1+1/A}) ≤ A λ_k log_k n for any real A ≥ 3.

This route to bounding the conductance profile is somewhat in contrast to the works [LRTV12, LOT12], both of which combine their spectral analysis and rounding algorithm. Indeed, in this work we consider the analytic version of the Raghavendra–Steurer [RS10] Small-Set Expansion problem: given a graph G = (V, E) with the promise that there is a function f : V → R which has µ[f] ≤ δ and Φ[f] ≤ ɛ, find a function g : V → R with µ[g] ≤ O(δ) and Φ[g] as small as possible. Following [ABS10], we provide an eigenspace enumeration lemma which, when combined with Theorem 1.3, yields the following:

Theorem 1.4. For any α ≤ 1/3 and C ≥ 1, there exists an algorithm running in time exp(O(n^α) · (1/δ) · log(C/δ)) with the following guarantee: If there exists f : V → R with µ[f] ≤ δ ≤ 1/2 and Φ[f] ≤ ɛ ≤ δ/4, the algorithm finds g : V → R with µ[g] ≤ δ(1 + 1/C) and Φ[g] ≤ O(C²/(αδ)) · ɛ.

As a byproduct, using Theorem 1.2 we can immediately deduce the following approximation algorithm for Small-Set Expansion:

Corollary 1.5.
Fix any small constants α, δ > 0. Then there is an algorithm running in time exp(O(n^α)) with the following guarantee: If there exists S ⊆ V with µ[S] ≤ δ and Φ[S] ≤ ɛ, the algorithm finds T ⊆ V with µ[T] ≤ 5δ and Φ[T] ≤ O(√ɛ). More generally, one can obtain Φ[T] ≤ O(ɛ^{β/2}) in time exp(O(n^{αɛ^β})) for any 0 < β ≤ 1.

This result is incomparable with the Arora–Barak–Steurer Small-Set Expansion algorithm: their work had O(ɛ^{β/3}) in place of O(ɛ^{β/2}) and was analyzed only for regular graphs. On the other hand, our Corollary 1.5 holds only for δ a constant, whereas their algorithm works for δ as small as n^{−ɛ^β} (which is the more interesting parameter range).

Actually, [GMT06] defined Λ_G(r) as the minimization of ⟨f, Lf⟩/(‖f‖_2² − ‖f‖_1²). But their proof of this relationship still goes through.

1.2 Continuous-time random walks

In [OT12], Oveis Gharan and Trevisan prove a lower bound on the probability that a random walk stays within a set. (Kwok and Lau [KL12] prove a similar but somewhat weaker bound.) Specifically, they show:

Theorem 1.6. Let G = (V, E) be an undirected graph with invariant distribution π. Let S ⊆ V and let t > 0 be an integer. Choose x ∼ π conditioned on x ∈ S, and then perform a t-step discrete-time random walk from x. Then the probability that the walk stays entirely within S is at least (1 − Φ[S]/2)^t.

We provide a simple proof of a similar theorem using Markov chain methods.

Theorem 1.7. In the setting of Theorem 1.6, if we instead perform a time-t continuous-time random walk, the probability that the walk stays entirely within S is at least exp(−tΦ[S]).

2 Preliminaries

Instead of directed graphs, we will use the language of Markov chains; for background, see e.g. [DSC96, MT06]. Throughout this work, G will denote an irreducible Markov chain on state space V of cardinality n, with no isolated states. We will be considering elements f in the vector space of functions V → R. We write K for the adjacency matrix operator: Kf(x) = E_{y∼x}[f(y)], where y ∼ x denotes that y is obtained by taking one step from x in the chain. K has a unique invariant probability distribution π on V which is nowhere 0. It gives rise to an inner product on functions, ⟨f, g⟩ = E_{x∼π}[f(x)g(x)]. We write L = id − K for the Laplacian operator and H_t = exp(−tL) for the heat kernel (continuous-time transition) operator.

Definition 2.1. Given nonzero f : V → R we define its analytic boundary size/conductance to be

    Φ[f] = ⟨f, Lf⟩/⟨f, f⟩ = 1 − ⟨f, Kf⟩/⟨f, f⟩.

Note that if f is the 0-1 indicator of a set S ⊆ V then Φ[f] = Pr_{x∼π, y∼x}[y ∉ S | x ∈ S]. We will also write Φ[S] in this case.

Definition 2.2. Given a nonzero f : V → R we define its analytic sparsity to be

    µ[f] = ‖f‖_1²/‖f‖_2².

Note that if f is the 0-1 indicator of a set S ⊆ V then µ[f] = π(S).
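The π-weighted quantities of Definitions 2.1 and 2.2 are straightforward to compute for a concrete chain. The following sketch (the 3-state path chain is an arbitrary example of ours) checks that on a 0-1 indicator they collapse to Pr[y ∉ S | x ∈ S] and π(S):

```python
import numpy as np

# Random walk on the path 0 - 1 - 2; invariant distribution pi ~ degrees.
K = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])
pi = np.array([0.25, 0.5, 0.25])        # invariant: pi K = pi
L = np.eye(3) - K

def inner(f, g):                         # <f, g> = E_{x~pi}[f(x) g(x)]
    return np.sum(pi * f * g)

def Phi(f):                              # analytic boundary size
    return inner(f, L @ f) / inner(f, f)

def mu(f):                               # analytic sparsity ||f||_1^2 / ||f||_2^2
    return np.sum(pi * np.abs(f)) ** 2 / inner(f, f)

f = np.array([1.0, 0.0, 0.0])            # 0-1 indicator of S = {0}
print(mu(f))                             # pi(S) = 0.25
print(Phi(f))                            # Pr[y not in S | x in S] = 1.0
```

From state 0 the walk always leaves S = {0}, so Φ[S] = 1, while µ[f] recovers π(S) = 1/4.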
These definitions motivate consideration of an analytic version of the Small-Set Expansion Problem: Assuming there is an analytically sparse f with small analytic boundary, find such an f. More precisely:

Analytic Small-Set Expansion Problem: Given as input G with the promise that there exists f : V → R with µ[f] ≤ δ ≤ 1/2 and Φ[f] ≤ ɛ, find f′ : V → R with µ[f′] ≤ δ′ and Φ[f′] ≤ ɛ′.

In this bicriteria problem, we typically insist that δ′ = O(δ) and then try to minimize ɛ′. Note that the standard Small-Set Expansion problem is the above problem with the additional restriction that f and f′ should be 0-1-valued functions.

For the remainder of this work we will assume that G is reversible. However, this is without loss of generality since, given a non-reversible Markov chain G with adjacency matrix operator K, we can replace it with the reversible Markov chain G′ having adjacency matrix operator K′ = (K + K†)/2, where K† denotes the adjoint of K with respect to ⟨·, ·⟩. The chain G′ has the same invariant distribution π as G, which means that the notion of analytic sparsity is unchanged. Further, if L and L′ are the Laplacians of G and G′, respectively, then ⟨f, Lf⟩ = ⟨f, L′f⟩ for any f : V → R; hence the notion of analytic boundary is also unchanged.

Given a reversible chain G, the operators K, L, and H_t have a common orthogonal basis of eigenfunctions. We will write 0 = λ_1 ≤ λ_2 ≤ ⋯ ≤ λ_n for the eigenvalues of L; note that the ith eigenvalue of K is 1 − λ_i and the ith eigenvalue of H_t is exp(−tλ_i). All of our theorems which mention the eigenvalues λ_i hold also for non-reversible chains G, with the λ_i's being those for the associated reversible chain G′.

Following [ABS10], our algorithm for the Analytic Small-Set Expansion problem (Theorem 1.4) breaks into two cases, depending on the analytic nullity of L (called threshold rank in [ABS10]):

Definition 2.3. We define nullity_η(L) = #{i : λ_i ≤ η}. Note that nullity_0(L) is the usual nullity.

Remark 2.4. Throughout we will present algorithms in the model of exact arithmetic; e.g., we will assume that given G, the eigenvalues and eigenfunctions of L can be computed exactly. We believe (but have not verified) that our results can be extended to standard computational models (e.g., Turing machines).
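The reversibilization step can be checked numerically. In the sketch below (the 3-state chain is an arbitrary non-reversible example of ours; the adjoint is formed entrywise as K†(x, y) = π(y) K(y, x)/π(x)), the averaged chain keeps both the invariant distribution and the Laplacian quadratic form:

```python
import numpy as np

# An arbitrary irreducible, non-reversible 3-state chain.
K = np.array([[0.0, 0.7, 0.3],
              [0.2, 0.1, 0.7],
              [0.5, 0.4, 0.1]])

# Invariant distribution: left eigenvector of K with eigenvalue 1.
w, V = np.linalg.eig(K.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()

# Adjoint w.r.t. <.,.>_pi: Kdag[x, y] = pi[y] K[y, x] / pi[x],
# and the reversible averaging K' = (K + Kdag)/2.
Kdag = (pi[None, :] * K.T) / pi[:, None]
Krev = (K + Kdag) / 2
L, Lrev = np.eye(3) - K, np.eye(3) - Krev

rng = np.random.default_rng(0)
f = rng.standard_normal(3)
quad = lambda M: np.sum(pi * f * (M @ f))   # <f, M f>_pi

print(np.allclose(pi @ Krev, pi))           # True: same invariant distribution
print(np.isclose(quad(L), quad(Lrev)))      # True: <f, Lf> = <f, L'f>
```

Both checks hold for any f, since ⟨f, Kf⟩ = ⟨f, K†f⟩ by the definition of the adjoint.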
3 A new bound on the spectral profile

Here we give our new spectral criterion, based on the trace of the heat kernel, which ensures the existence of an analytically sparse function with small analytic boundary.

Theorem 3.1. Fix 0 < γ ≤ 1 and ∆ > 0, and suppose there exists t > 0 such that

    tr(H_t) − (1/γ) tr(L H_t) ≥ ∆.    (4)

Then in poly(n) time one can find g : V → R^{≥0} satisfying µ[g] ≤ 1/∆ and Φ[g] ≤ γ.

Proof. Let φ_x = (1/π(x)) 1_x for x ∈ V, so E[φ_x] = 1. Write φ′_x = √(π(x)) φ_x, so the collection (φ′_x)_{x∈V} forms an orthonormal basis. Since trace is the sum of the diagonal entries, we have

    tr(H_t) = Σ_{x∈V} ⟨φ′_x, H_t φ′_x⟩ = Σ_{x∈V} π(x) ⟨φ_x, H_t φ_x⟩ = E_{x∼π}[⟨H_{t/2} φ_x, H_{t/2} φ_x⟩].

Similarly, tr(L H_t) = E_{x∼π}[⟨H_{t/2} φ_x, L H_{t/2} φ_x⟩]. Thus the assumption (4) implies

    E_{x∼π}[⟨H_{t/2} φ_x, H_{t/2} φ_x⟩ − (1/γ) ⟨H_{t/2} φ_x, L H_{t/2} φ_x⟩] ≥ ∆.

Select (in poly(n) time) a particular x_0 ∈ V achieving at least ∆ in this expectation. We define g = H_{t/2} φ_{x_0} and therefore we have

    ⟨g, g⟩ − (1/γ) ⟨g, Lg⟩ ≥ ∆.    (5)

Note that g ≥ 0 since φ_{x_0} ≥ 0 and H_{t/2} is positivity-preserving. Thus ‖g‖_1 = E[g] = E[φ_{x_0}] = 1. Further, from (5) we deduce ⟨g, g⟩ ≥ ∆; thus µ[g] ≤ 1/∆ as desired. Finally, (5) certainly implies ⟨g, g⟩ − (1/γ) ⟨g, Lg⟩ ≥ 0, which is equivalent to Φ[g] ≤ γ.

A straightforward calculation now shows that if L has large analytic nullity then we can get good bounds from Theorem 3.1:

Corollary 3.2. Fix 0 < γ ≤ 1. Let 0 < α ≤ 1/3 and let k = nullity_{αγ}(L). Assume k ≥ n^α/ln n. Then in poly(n) time one can find g : V → R^{≥0} satisfying Φ[g] ≤ γ and µ[g] ≤ 1/∆, where ∆ = k/(4n^α).

Proof. We show that (4) from Theorem 3.1 holds with γ, ∆, and t = (1/γ) ln n. We have

    tr(H_t) − (1/γ) tr(L H_t) = Σ_{i=1}^n (1 − λ_i/γ) exp(−tλ_i) = Σ_{i=1}^n (1 − λ_i/γ) n^{−λ_i/γ}.    (6)

The expression (1 − r) n^{−r} is decreasing for r ∈ [0, 1]; for larger r, it attains its minimum at r = 1 + 1/ln n, where it has value −1/(e n ln n). Thus by distinguishing whether or not r = λ_i/γ ≤ α in (6) we may obtain

    (6) ≥ #{i : λ_i ≤ αγ} · (1 − α) n^{−α} − #{i : λ_i > αγ} · 1/(e n ln n) ≥ k (1 − α) n^{−α} − 1/(e ln n).

Using α ≤ 1/3 and k ≥ n^α/ln n, the above is indeed at least ∆ = k/(4n^α).

Restating the parameters yields:

Corollary 3.3. Let 0 < δ ≤ 1. If there exists α ≤ 1/3 such that nullity_{αγ}(L) ≥ (4/δ) n^α, then in poly(n) time one can find g : V → R^{≥0} satisfying µ[g] ≤ δ and Φ[g] ≤ γ.

An alternative restatement of the parameters yields our main Theorem 1.3: simply take α = 1/(A log_k n) and γ = A λ_k log_k n in Corollary 3.3.

4 An algorithm for Analytic Small-Set Expansion

In [ABS10] it is shown that when L has small analytic nullity, one can find sparse sets by brute-force search through the low-eigenvalue eigenspace. We present a very similar algorithm for finding analytically sparse sets.

Lemma 4.1. Let ɛ ≤ η/4. Suppose there exists f : V → R satisfying µ[f] ≤ δ ≤ 1/2 and Φ[f] ≤ ɛ. Then in time exp(O(nullity_η(L) · log(η/ɛ))) · poly(n) one can find g : V → R satisfying

    µ[g] ≤ δ + O(ɛ/η + √(δɛ/η)) ≤ O(δ + ɛ/η),    Φ[g] ≤ η.

Remark 4.2.
It is also quite easy to show that g will satisfy Φ[g] ≤ O(ɛ/η), which is useful if η ≥ ɛ^{2/3}. We will not need this parameter setting, so we omit the proof.
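Before turning to the proof of Lemma 4.1, the rounding step of Theorem 3.1 is simple enough to simulate numerically. In the sketch below (graph, t, and γ are ad hoc choices of ours, not parameters from the paper), we pick the start x_0 maximizing the expression from the proof and form g = H_{t/2} φ_{x_0}:

```python
import numpy as np

# Two sparse clusters joined by a bridge; all parameters are ad hoc.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
n = 6
A = np.zeros((n, n))
for a, b in edges:
    A[a, b] = A[b, a] = 1.0
deg = A.sum(axis=1)
K = A / deg[:, None]
pi = deg / deg.sum()
L = np.eye(n) - K

inner = lambda f, g: np.sum(pi * f * g)
Phi = lambda f: inner(f, L @ f) / inner(f, f)
mu = lambda f: np.sum(pi * np.abs(f)) ** 2 / inner(f, f)

# H_{t/2} via the symmetrized generator D^{1/2} (I - K) D^{-1/2}.
t, gamma = 4.0, 0.5
Ksym = np.diag(np.sqrt(pi)) @ K @ np.diag(1 / np.sqrt(pi))
w, U = np.linalg.eigh(np.eye(n) - Ksym)
Ht2 = (np.diag(1 / np.sqrt(pi)) @ U @ np.diag(np.exp(-t / 2 * w))
       @ U.T @ np.diag(np.sqrt(pi)))

# Pick x0 maximizing <g,g> - (1/gamma)<g,Lg> over g = H_{t/2} phi_x.
def score(x):
    g = Ht2[:, x] / pi[x]    # H_{t/2} applied to phi_x = (1/pi(x)) 1_x
    return inner(g, g) - inner(g, L @ g) / gamma

x0 = max(range(n), key=score)
g = Ht2[:, x0] / pi[x0]
print(round(mu(g), 3), round(Phi(g), 3))   # analytically sparse, small boundary
```

As the proof guarantees, g is nonnegative with ‖g‖_1 = 1, and on this example its analytic sparsity drops well below 1 while Φ[g] stays below γ.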

Proof. Let ψ_1, …, ψ_n be an orthonormal basis of eigenfunctions for L, corresponding to eigenvalues λ_1, …, λ_n. Without loss of generality, assume ‖f‖_2 = 1. Write m = nullity_η(L) and write U for the dimension-m subspace spanned by ψ_1, …, ψ_m. Express f = Σ_{i=1}^n c_i ψ_i, so Σ_i c_i² = 1 by the orthonormality of the ψ_i's. We have

    ɛ ≥ Φ[f] = ⟨f, Lf⟩ = Σ_{i=1}^n λ_i c_i² ≥ Σ_{i>m} λ_i c_i² ≥ η Σ_{i>m} c_i².

In other words, if f_U denotes Σ_{i≤m} c_i ψ_i then ‖f − f_U‖ ≤ √(ɛ/η) (which is at most 1/2 by the assumption on η). If we define u ∈ U to be the unit vector f_U/‖f_U‖, it follows that ‖f − u‖ ≤ 2√(ɛ/η). As in [ABS10] we can now consider all g in a .5√(ɛ/η)-net for the unit sphere of U. The cardinality of this net is exp(O(m log(η/ɛ))). One such g will satisfy ‖u − g‖ ≤ .5√(ɛ/η) and hence ‖f − g‖ ≤ 2.5√(ɛ/η). For this g we have

    ‖g‖_1 ≤ ‖f‖_1 + ‖f − g‖_1 ≤ √(µ[f]) + ‖f − g‖_2 ≤ √δ + 2.5√(ɛ/η)

and hence µ[g] ≤ δ + O(ɛ/η + √(δɛ/η)), as desired. Since g is a unit vector in U we may also immediately conclude Φ[g] ≤ η.

From Corollary 3.3 we know that if L has large analytic nullity then there is automatically an (easily findable) f : V → R which is analytically sparse and has small analytic boundary. On the other hand, if L has small analytic nullity, the above lemma can solve the Analytic Small-Set Expansion problem in not too much time. Combining these facts lets us prove our Theorem 1.4, restated here for convenience:

Theorem 1.4. For any α ≤ 1/3 and C ≥ 1, there exists an algorithm running in time exp(O(n^α) · (1/δ) · log(C/δ)) with the following guarantee: If there exists f : V → R with µ[f] ≤ δ ≤ 1/2 and Φ[f] ≤ ɛ ≤ δ/4, the algorithm finds g : V → R with µ[g] ≤ δ(1 + 1/C) and Φ[g] ≤ O(C²/(αδ)) · ɛ.

Proof. Set γ = (B/(αδ)) ɛ; we will eventually take B = O(C²). If nullity_{αγ}(L) ≥ (4/δ) n^α, then from Corollary 3.3 we can find g with µ[g] ≤ δ, Φ[g] ≤ γ in poly(n) time; in fact, here we don't even need to assume the existence of f. Otherwise, Lemma 4.1 (with η = αγ) tells us that in time exp(O(n^α) · (1/δ) · log(B/δ)) we can find a g satisfying

    µ[g] ≤ δ + O(δ/B + δ/√B) = δ · (1 + O(1/√B)),    Φ[g] ≤ αγ ≤ γ.

Thus the result follows by taking B = O(C²).
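The projection step at the heart of Lemma 4.1's proof can be checked numerically. The sketch below (toy graph and threshold η are our own choices) verifies that ‖f − f_U‖ ≤ √(ɛ/η) for ɛ = Φ[f], and that the unit vector u = f_U/‖f_U‖ automatically satisfies Φ[u] ≤ η:

```python
import numpy as np

# Toy graph: two triangles joined by a bridge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
n = 6
A = np.zeros((n, n))
for a, b in edges:
    A[a, b] = A[b, a] = 1.0
deg = A.sum(axis=1)
pi = deg / deg.sum()
K = A / deg[:, None]

inner = lambda f, g: np.sum(pi * f * g)
Phi = lambda f: inner(f, f - K @ f) / inner(f, f)

# pi-orthonormal eigenfunctions of L from the symmetrized generator.
Sym = np.diag(np.sqrt(pi)) @ (np.eye(n) - K) @ np.diag(1 / np.sqrt(pi))
lam, Q = np.linalg.eigh(Sym)                  # eigenvalues in ascending order
psi = np.diag(1 / np.sqrt(pi)) @ Q            # columns psi_1, ..., psi_n

f = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])  # indicator of one triangle
f = f / np.sqrt(inner(f, f))                  # normalize so ||f||_2 = 1
eta = 0.5
m = int(np.sum(lam <= eta))                   # dimension of the subspace U
c = np.array([inner(f, psi[:, i]) for i in range(n)])
fU = psi[:, :m] @ c[:m]                       # projection of f onto U
u = fU / np.sqrt(inner(fU, fU))

print(np.sqrt(inner(f - fU, f - fU)) <= np.sqrt(Phi(f) / eta) + 1e-12)  # True
print(Phi(u) <= eta + 1e-12)                                            # True
```

The full algorithm would additionally enumerate an ɛ-net of the unit sphere of U around u; we omit that (exponential-size) search here.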
5 The probability a random walk stays entirely within a set

In [OT12] the authors show that a t-step discrete-time random walk starting from a random vertex in S ⊆ V stays entirely within S with probability at least (1 − Φ[S]/2)^t. We give a proof of a similar result for continuous-time random walks using Markov chain methods.

Theorem 1.7 restated. For any S ⊆ V and real t > 0, let C(t, S) denote the probability that a continuous-time-t random walk, started from a random x ∈ S, stays entirely within S. Then C(t, S) ≥ exp(−tΦ[S]).

Proof. Let us define an operator K_S on functions f : V → R as follows:

    K_S f(x) = E_{y∼x}[1_S(x) 1_S(y) f(y)],

where 1_S is the indicator function for S. It is easy to see that K_S is self-adjoint; thus it has n real eigenvalues and n linearly independent eigenvectors. We also define L_S = id − K_S and H_{t,S} = exp(−t L_S). Let v_1, …, v_n be an orthonormal basis of eigenvectors of L_S (which are also eigenvectors of K_S and H_{t,S}) and let λ_1, …, λ_n be the corresponding eigenvalues of L_S. Finally, we define φ_S = 1_S/√(π(S)) and write φ_S = Σ_i c_i v_i for some constants c_i. Since ‖φ_S‖_2 = 1 it follows that Σ_i c_i² = 1.

First, we will show that Φ[S] = Σ_i c_i² λ_i:

    Φ[S] = Pr_{x∼π, y∼x}[y ∉ S | x ∈ S]
         = Pr_{x∼π, y∼x}[y ∉ S, x ∈ S] / Pr_{x∼π}[x ∈ S]
         = (1/π(S)) E_{x∼π, y∼x}[1_S(x)(1_S(x) − 1_S(y))]
         = (1/π(S)) E_{x∼π, y∼x}[1_S(x)(1_S(x) − 1_S(x) 1_S(y))]
         = (1/π(S)) E_{x∼π}[1_S(x)(1_S(x) − 1_S(x) E_{y∼x}[1_S(y)])]
         = (1/π(S)) E_{x∼π}[1_S(x) ((id − K_S) 1_S)(x)]
         = ⟨φ_S, L_S φ_S⟩ = Σ_i c_i² λ_i.

Now we show that C(t, S) = Σ_i c_i² exp(−tλ_i). Let w_0, …, w_τ be the states of a time-t continuous-time random walk in G; note that this is the same as a τ-step discrete-time random walk, where τ ∼ Poisson(t). Let W denote the set of all states visited. Then:

    C(t, S) = Pr[W ⊆ S | w_0 ∈ S]
            = (1/π(S)) E[1_S(w_0) 1_S(w_1) ⋯ 1_S(w_τ)]
            = (1/π(S)) E_{τ∼Poisson(t)} E_{x∼π}[1_S(x) (K_S^τ 1_S)(x)]
            = (1/π(S)) E_{x∼π}[1_S(x) (H_{t,S} 1_S)(x)]
            = ⟨φ_S, H_{t,S} φ_S⟩ = Σ_i c_i² exp(−tλ_i).

To complete the proof, we need to show that Σ_i c_i² exp(−tλ_i) ≥ exp(−t Σ_i c_i² λ_i). This follows immediately from the convexity of the exponential function and Jensen's inequality.

Acknowledgments

We thank James Lee, David Steurer, Yu Wu, and Yuan Zhou for helpful discussions.
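The identity C(t, S) = ⟨φ_S, H_{t,S} φ_S⟩ from the proof also gives a direct way to evaluate the staying probability exactly, with H_{t,S} expanded as the Poisson average of powers of K_S. The following sketch (toy graph, set S, and time t are our own choices) confirms Theorem 1.7's bound numerically:

```python
import numpy as np
from math import exp, factorial

# Toy graph: two triangles joined by a bridge; S is one triangle.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
n = 6
A = np.zeros((n, n))
for a, b in edges:
    A[a, b] = A[b, a] = 1.0
deg = A.sum(axis=1)
pi = deg / deg.sum()
K = A / deg[:, None]

S = [0, 1, 2]
D = np.diag([1.0 if x in S else 0.0 for x in range(n)])
KS = D @ K @ D                          # K_S f = 1_S . K(1_S . f)
phiS = D @ np.ones(n) / np.sqrt(pi[S].sum())

inner = lambda f, g: np.sum(pi * f * g)
PhiS = inner(phiS, phiS - KS @ phiS)    # <phi_S, L_S phi_S> = Phi[S]

# H_{t,S} = exp(-t L_S) = e^{-t} sum_k (t^k / k!) K_S^k:
# Poisson averaging over the number of discrete steps (truncated series).
t = 3.0
H = sum(exp(-t) * t**k / factorial(k) * np.linalg.matrix_power(KS, k)
        for k in range(60))
C = inner(phiS, H @ phiS)               # exact staying probability C(t, S)

print(PhiS)                             # 1/7 for this S
print(C >= exp(-t * PhiS))              # True: Theorem 1.7's bound holds
```

Here Φ[S] = 1/7 (the walk leaves the triangle only through the bridge vertex), and the exact staying probability indeed dominates exp(−t/7).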

References

[ABS10] Sanjeev Arora, Boaz Barak, and David Steurer. Subexponential algorithms for Unique Games and related problems. In Proceedings of the 51st Annual IEEE Symposium on Foundations of Computer Science, pages 563–572, 2010.

[Alo86] Noga Alon. Eigenvalues and expanders. Combinatorica, 6(2):83–96, 1986.

[AM85] Noga Alon and Vitali Milman. λ_1, isoperimetric inequalities for graphs, and superconcentrators. Journal of Combinatorial Theory, Series B, 38(1):73–88, 1985.

[DSC96] Persi Diaconis and Laurent Saloff-Coste. Logarithmic Sobolev inequalities for finite Markov chains. Annals of Applied Probability, 6(3):695–750, 1996.

[GMT06] Sharad Goel, Ravi Montenegro, and Prasad Tetali. Mixing time bounds via the spectral profile. Electronic Journal of Probability, 11(1):1–26, 2006.

[KL12] Tsz Chiu Kwok and Lap Chi Lau. Finding small sparse cuts by random walk. In Proceedings of the 16th Annual International Workshop on Randomized Techniques in Computation, pages 615–626, 2012.

[LK99] László Lovász and Ravi Kannan. Faster mixing via average conductance. In Proceedings of the 31st Annual ACM Symposium on Theory of Computing, pages 282–287, 1999.

[LOT12] James Lee, Shayan Oveis Gharan, and Luca Trevisan. Multi-way spectral partitioning and higher-order Cheeger inequalities. In Proceedings of the 44th Annual ACM Symposium on Theory of Computing, 2012.

[LRTV12] Anand Louis, Prasad Raghavendra, Prasad Tetali, and Santosh Vempala. Many sparse cuts via higher eigenvalues. In Proceedings of the 44th Annual ACM Symposium on Theory of Computing, 2012.

[MT06] Ravi Montenegro and Prasad Tetali. Mathematical aspects of mixing times in Markov chains. Foundations and Trends in Theoretical Computer Science, 1(3):237–354, 2006.

[OT12] Shayan Oveis Gharan and Luca Trevisan. Approximating the expansion profile and almost optimal local graph clustering. http://arxiv.org/abs/04.0, 2012.

[RS10] Prasad Raghavendra and David Steurer. Graph expansion and the Unique Games Conjecture. In Proceedings of the 42nd Annual ACM Symposium on Theory of Computing, pages 755–764, 2010.

[SJ89] Alistair Sinclair and Mark Jerrum. Approximate counting, uniform generation and rapidly mixing Markov chains. Information and Computation, 82(1):93–133, 1989.

[Ste10] David Steurer. On the Complexity of Unique Games and Graph Expansion. PhD thesis, Princeton University, 2010.