Stanford University CS359G: Graph Partitioning and Expanders Handout 4 Luca Trevisan January 13, 2011

Lecture 4

In which we prove the difficult direction of Cheeger's inequality.

As in the past lectures, consider an undirected $d$-regular graph $G = (V, E)$, call $A$ its adjacency matrix, and $M := \frac{1}{d} A$ its scaled adjacency matrix. Let $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n$ be the eigenvalues of $M$, with multiplicities, in non-increasing order. We have been studying the edge expansion of a graph, which is the minimum of $h(S)$ over all nontrivial cuts $(S, V - S)$ of the vertex set (a cut is trivial if $S = \emptyset$ or $S = V$), where the expansion $h(S)$ of a cut is

\[ h(S) := \frac{Edges(S, V - S)}{d \cdot \min\{ |S|, |V - S| \}} \]

We have also been studying the (uniform) sparsest cut problem, which is the problem of finding the non-trivial cut that minimizes $\phi(S)$, where the sparsity $\phi(S)$ of a cut is

\[ \phi(S) := \frac{Edges(S, V - S)}{\frac{d}{n} \cdot |S| \cdot |V - S|} \]

We are proving Cheeger's inequalities:

\[ \frac{1 - \lambda_2}{2} \leq h(G) \leq \sqrt{2 \cdot (1 - \lambda_2)} \tag{1} \]

and we established the left-hand side inequality in the previous lecture, showing that the quantity $1 - \lambda_2$ can be seen as the optimum of a continuous relaxation of $\phi(G)$, so that $1 - \lambda_2 \leq \phi(G)$, and $\phi(G) \leq 2 h(G)$ follows by the definitions (if $|S| \leq |V - S|$, then $\frac{1}{n} |S| \cdot |V - S| \geq \frac{1}{2} |S|$). Today we prove the more difficult, and interesting, direction. The proof will be constructive and algorithmic. The proof can be seen as an analysis of the following algorithm.

Algorithm: SpectralPartitioning

Input: graph $G = (V, E)$ and vector $x \in \mathbb{R}^V$

- Sort the vertices of $V$ in non-decreasing order of their entries in $x$, that is, let $V = \{ v_1, \ldots, v_n \}$ where $x_{v_1} \leq x_{v_2} \leq \cdots \leq x_{v_n}$
- Let $i \in \{ 1, \ldots, n - 1 \}$ be such that $h(\{ v_1, \ldots, v_i \})$ is minimal
- Output $S = \{ v_1, \ldots, v_i \}$

We note that the algorithm can be implemented to run in time $O(|V| \log |V| + |E|)$, assuming arithmetic operations and comparisons take constant time: the sorting step takes time $O(|V| \log |V|)$, and once we have computed $h(\{ v_1, \ldots, v_i \})$ it only takes time $O(\mathrm{degree}(v_{i+1}))$ to compute $h(\{ v_1, \ldots, v_{i+1} \})$. We have the following analysis of the quality of the solution:

Lemma 1 (Analysis of Spectral Partitioning) Let $G = (V, E)$ be a $d$-regular graph, $x \in \mathbb{R}^V$ be a vector such that $x \perp \mathbf{1}$, let $M$ be the normalized adjacency matrix of $G$, define

\[ \delta := \frac{ \sum_{i,j} M_{i,j} (x_i - x_j)^2 }{ \frac{1}{n} \sum_{i,j} (x_i - x_j)^2 } \]

and let $S$ be the output of algorithm SpectralPartitioning on input $G$ and $x$. Then

\[ h(S) \leq \sqrt{2 \delta} \]

Remark 2 If we apply the lemma to the case in which $x$ is an eigenvector of $\lambda_2$, then $\delta = 1 - \lambda_2$, and so we have

\[ h(G) \leq h(S) \leq \sqrt{2 \cdot (1 - \lambda_2)} \]

which is the difficult direction of Cheeger's inequalities.

Remark 3 If we run the SpectralPartitioning algorithm with the eigenvector $x$ of the second eigenvalue $\lambda_2$, we find a set $S$ whose expansion is

\[ h(S) \leq \sqrt{2 \cdot (1 - \lambda_2)} \leq 2 \sqrt{h(G)} \]

Even though this doesn't give a constant-factor approximation to the edge expansion, it gives a very efficient, and non-trivial, approximation. As we will see in a later lecture, there is a nearly linear time algorithm that finds a vector $x$ for which the expression $\delta$ in the lemma is very close to $1 - \lambda_2$, so, overall, for any graph $G$ we can find a cut of expansion $O(\sqrt{h(G)})$ in nearly linear time.
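The following is one possible Python implementation of the algorithm, using the incremental update from the running-time analysis above: when a vertex joins the prefix $S$, only its incident edges change status between crossing and internal. It is an illustrative sketch, not code from the handout; the helper `expansion`, the adjacency-list format `adj`, and the 8-cycle test graph are all choices made here.

```python
import numpy as np

def expansion(adj, d, S):
    """h(S) = Edges(S, V - S) / (d * min(|S|, |V - S|)) for a nontrivial cut S."""
    n = len(adj)
    crossing = sum(1 for u in S for v in adj[u] if v not in S)
    return crossing / (d * min(len(S), n - len(S)))

def spectral_partitioning(adj, d, x):
    """Sweep over prefixes of the vertices sorted by x; returns the prefix cut
    of smallest expansion. adj[v] lists the neighbors of v in a d-regular graph."""
    n = len(adj)
    order = [int(v) for v in np.argsort(x)]  # non-decreasing order of entries of x
    in_S = [False] * n
    crossing = 0                             # edges currently between S and V - S
    best_h, best_k = float("inf"), 0
    for k in range(n - 1):                   # |S| = k + 1 < n keeps the cut nontrivial
        v = order[k]
        for u in adj[v]:                     # edges at v flip status when v joins S
            crossing += -1 if in_S[u] else 1
        in_S[v] = True
        h = crossing / (d * min(k + 1, n - k - 1))
        if h < best_h:
            best_h, best_k = h, k
    return set(order[: best_k + 1]), best_h

# Tiny usage example: an 8-cycle (2-regular). Sweeping the eigenvector of the
# second eigenvalue of M = A/d finds an arc of the cycle, with h(S) = 2/(2*4) = 1/4.
n, d = 8, 2
adj = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}
A = np.zeros((n, n))
for v in range(n):
    A[v, adj[v]] = 1
x = np.linalg.eigh(A / d)[1][:, -2]          # eigenvector of the second eigenvalue
S, h = spectral_partitioning(adj, d, x)
print(S, h, expansion(adj, d, S))            # the two expansion values agree
```

Apart from the initial sort, each vertex's neighborhood is scanned exactly once during the sweep, which is where the $O(|V| \log |V| + |E|)$ running time comes from.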

Proof of Lemma 1

In the past lecture, we saw that $1 - \lambda_2$ can be seen as the optimum of a continuous relaxation of sparsest cut. Lemma 1 provides a rounding algorithm for the real vectors which are solutions of the relaxation. In this section we will think of it as a form of randomized rounding. Later, when we talk about the Leighton-Rao sparsest cut algorithm, we will revisit this proof and think of it in terms of metric embeddings.

To simplify notation, we will assume that $V = \{ 1, \ldots, n \}$ and that $x_1 \leq x_2 \leq \cdots \leq x_n$. Thus our goal is to prove that there is an $i$ such that

\[ h(\{ 1, \ldots, i \}) \leq \sqrt{2 \delta} \]

We will derive Lemma 1 by showing that there is a distribution $D$ over sets $S$ of the form $\{ 1, \ldots, i \}$ such that

\[ \mathop{\mathbb{E}}_{S \sim D} \frac{1}{d} Edges(S, V - S) \leq \sqrt{2 \delta} \cdot \mathop{\mathbb{E}}_{S \sim D} \min \{ |S|, |V - S| \} \tag{2} \]

We need to be a bit careful in deriving the Lemma from (2). In general, it is not true that a ratio of averages is equal to the average of the ratios, so (2) does not imply that $\mathbb{E} \, h(S) \leq \sqrt{2 \delta}$. We can, however, apply linearity of expectation and derive from (2) the inequality

\[ \mathop{\mathbb{E}}_{S \sim D} \left[ \frac{1}{d} Edges(S, V - S) - \sqrt{2 \delta} \cdot \min \{ |S|, |V - S| \} \right] \leq 0 \]

So there must exist a set $S$ in the sample space such that

\[ \frac{1}{d} Edges(S, V - S) - \sqrt{2 \delta} \cdot \min \{ |S|, |V - S| \} \leq 0 \]

meaning that, for that set $S$, we have $h(S) \leq \sqrt{2 \delta}$. (Basically we are using the fact that, for random variables $X, Y$ over the same sample space, although it might not be true that $\mathbb{E} \frac{X}{Y} = \frac{\mathbb{E} X}{\mathbb{E} Y}$, we always have $\mathbb{P} \left[ \frac{X}{Y} \leq \frac{\mathbb{E} X}{\mathbb{E} Y} \right] > 0$, provided that $Y > 0$ over the entire sample space.)

From now on, we will assume that

1. $x_{n/2} = 0$, that is, the median of the entries of $x$ is zero;
2. $x_1^2 + x_n^2 = 1$,

which can be done without loss of generality, because adding a fixed constant $c$ to all the entries of $x$, or multiplying all the entries by a fixed constant, does not change the value of $\delta$, nor does it change the property that $x_1 \leq \cdots \leq x_n$.
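This invariance is easy to check numerically. The sketch below is purely illustrative (a random symmetric matrix stands in for $M$, since the invariance uses only the fact that $\delta$ depends on the differences $x_i - x_j$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
W = rng.random((n, n))
M = (W + W.T) / 2          # any symmetric matrix works for this check

def delta(x):
    diff2 = (x[:, None] - x[None, :]) ** 2       # the matrix of (x_i - x_j)^2
    return (M * diff2).sum() / (diff2.sum() / n)

x = rng.normal(size=n)
print(delta(x), delta(5 * x + 3))  # identical: shifts and scalings cancel in the ratio
```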

The reason for these choices is that they allow us to define a distribution $D$ over sets such that

\[ \mathop{\mathbb{E}}_{S \sim D} \min \{ |S|, |V - S| \} = \sum_i x_i^2 \tag{3} \]

We define the distribution $D$ over sets of the form $\{ 1, \ldots, i \}$, $1 \leq i \leq n - 1$, as the outcome of the following probabilistic process:

- We pick a real value $t$ in the range $[x_1, x_n]$ with probability density function $f(t) = 2|t|$. That is, for $x_1 \leq a \leq b \leq x_n$, $\mathbb{P}[a \leq t \leq b] = \int_a^b 2|t| \, dt$. Doing the calculation, this means that $\mathbb{P}[a \leq t \leq b] = |a^2 - b^2|$ if $a, b$ have the same sign, and $\mathbb{P}[a \leq t \leq b] = a^2 + b^2$ if they have different signs. (Note that $f$ is a valid density function precisely because of the normalization $x_1^2 + x_n^2 = 1$.)
- We let $S := \{ i : x_i \leq t \}$

According to this definition, an element $i \leq n/2$ (so that $x_i \leq 0$) belongs to the smaller of the two sets $S, V - S$ precisely when it belongs to $S$ and $S$ is the smaller set, that is, when the threshold $t$ is in the range $[x_i, 0]$; that probability is $x_i^2$. Similarly, an element $i > n/2$ belongs to the smaller of $S, V - S$ precisely when it belongs to $V - S$, that is, when $t$ is in the range $[0, x_i]$, which again has probability $x_i^2$. So we have established (3).

We will now estimate the expected number of edges between $S$ and $V - S$:

\[ \mathbb{E} \, \frac{1}{d} Edges(S, V - S) = \frac{1}{2} \sum_{i,j} M_{i,j} \cdot \mathbb{P}[(i, j) \text{ is cut by } (S, V - S)] \]

The event that the edge $(i, j)$ is cut by the partition $(S, V - S)$ happens when the value $t$ falls in the range between $x_i$ and $x_j$. This means that:

- If $x_i, x_j$ have the same sign,
\[ \mathbb{P}[(i, j) \text{ is cut by } (S, V - S)] = |x_i^2 - x_j^2| \]
- If $x_i, x_j$ have different signs,
\[ \mathbb{P}[(i, j) \text{ is cut by } (S, V - S)] = x_i^2 + x_j^2 \]

Some attempts show that a good expression to upper bound both cases is

\[ \mathbb{P}[(i, j) \text{ is cut by } (S, V - S)] \leq |x_i - x_j| \cdot (|x_i| + |x_j|) \]
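Identity (3) can also be checked empirically. In this illustrative sketch (the five-entry vector is an arbitrary example satisfying the two normalizations above), the threshold $t$ is sampled by inverting the CDF of the density $f(t) = 2|t|$:

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary sorted example vector with median entry 0 and x_1^2 + x_n^2 = 1.
x = np.array([-0.8, -0.3, 0.0, 0.2, 0.6])
assert abs(x[0] ** 2 + x[-1] ** 2 - 1) < 1e-12
n = len(x)

def sample_threshold():
    """Invert the CDF: F(t) = x_1^2 - t^2 for t <= 0, and x_1^2 + t^2 for t >= 0."""
    u = rng.uniform()
    return -np.sqrt(x[0] ** 2 - u) if u <= x[0] ** 2 else np.sqrt(u - x[0] ** 2)

mins = []
for _ in range(200_000):
    size_S = int((x <= sample_threshold()).sum())   # S = { i : x_i <= t }
    mins.append(min(size_S, n - size_S))
print(np.mean(mins), (x ** 2).sum())  # both approximately 1.13 = sum of x_i^2 here
```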

Plugging into our expression for the expected number of cut edges, and applying Cauchy-Schwarz:

\[ \mathbb{E} \, \frac{1}{d} Edges(S, V - S) \leq \frac{1}{2} \sum_{i,j} M_{i,j} \cdot |x_i - x_j| \cdot (|x_i| + |x_j|) \leq \frac{1}{2} \sqrt{ \sum_{i,j} M_{i,j} (x_i - x_j)^2 } \cdot \sqrt{ \sum_{i,j} M_{i,j} (|x_i| + |x_j|)^2 } \]

The assumption of the Lemma tells us that

\[ \sum_{i,j} M_{i,j} (x_i - x_j)^2 = \delta \cdot \frac{1}{n} \sum_{i,j} (x_i - x_j)^2 \]

and we can rewrite

\[ \frac{1}{n} \sum_{i,j} (x_i - x_j)^2 = 2 \sum_i x_i^2 - \frac{2}{n} \sum_i x_i \sum_j x_j = 2 \sum_i x_i^2 - \frac{2}{n} \left( \sum_i x_i \right)^2 \leq 2 \sum_i x_i^2 \]

which gives us

\[ \sum_{i,j} M_{i,j} (x_i - x_j)^2 \leq 2 \delta \sum_i x_i^2 \]

Finally, it remains to study the expression $\sum_{i,j} M_{i,j} (|x_i| + |x_j|)^2$. By applying the inequality $(a + b)^2 \leq 2a^2 + 2b^2$ (which follows by noting that $2a^2 + 2b^2 - (a + b)^2 = (a - b)^2 \geq 0$), and then the fact that every row and every column of $M$ sums to one, we derive

\[ \sum_{i,j} M_{i,j} (|x_i| + |x_j|)^2 \leq \sum_{i,j} M_{i,j} (2 x_i^2 + 2 x_j^2) = 4 \sum_i x_i^2 \]

Putting all the pieces together we have

\[ \mathbb{E} \, \frac{1}{d} Edges(S, V - S) \leq \frac{1}{2} \sqrt{ 2 \delta \sum_i x_i^2 } \cdot \sqrt{ 4 \sum_i x_i^2 } = \sqrt{2 \delta} \cdot \sum_i x_i^2 \tag{4} \]

which, together with (3), gives (2), which, as we already discussed, implies Lemma 1.
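As a closing sanity check of Lemma 1 (an illustration added here; the 4-regular circulant graph is an arbitrary test case, and numpy's dense eigensolver is used for simplicity), the following computes $\lambda_2$ and its eigenvector, performs the sweep, and confirms that the best prefix cut satisfies $h(S) \leq \sqrt{2 \delta}$ with $\delta = 1 - \lambda_2$:

```python
import numpy as np

# A small 4-regular circulant graph: vertex i is adjacent to i +/- 1, i +/- 2 (mod n).
n, d = 40, 4
A = np.zeros((n, n))
for i in range(n):
    for s in (1, 2):
        A[i, (i + s) % n] = A[i, (i - s) % n] = 1
M = A / d                                 # normalized adjacency matrix

vals, vecs = np.linalg.eigh(M)            # eigenvalues in non-decreasing order
lam2, x = vals[-2], vecs[:, -2]           # lambda_2 and a corresponding eigenvector
delta = 1 - lam2

order = np.argsort(x)                     # sweep over prefixes of the sorted vector
best_h = float("inf")
for k in range(1, n):
    S = set(int(v) for v in order[:k])
    crossing = sum(1 for u in S for v in np.nonzero(A[u])[0] if int(v) not in S)
    best_h = min(best_h, crossing / (d * min(k, n - k)))

print(best_h, np.sqrt(2 * delta))         # Lemma 1 guarantees the first <= the second
```

On this graph the sweep of the second eigenvector recovers an arc of the cycle, and the cut it finds is in fact much better than the worst-case guarantee of the Lemma.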