Lecture 10: May 6, 2013


TTIC/CMSC 31150 Mathematical Toolkit                    Spring 2013
Madhur Tulsiani                        Lecture 10: May 6, 2013
Scribe: Wenjie Luo

In today's lecture, we mainly talked about random walks on graphs and introduced the concept of a graph expander, as well as an application of random walks that shows their effectiveness.

1 Cheeger's Inequality Recap

Given a d-regular graph with adjacency matrix A, we define the Laplacian of the graph as N = I - (1/d) A. If A has eigenvalues \mu_1, \mu_2, ..., \mu_n and N has eigenvalues \lambda_1, \lambda_2, ..., \lambda_n, we know:

    d = \mu_1 \geq \mu_2 \geq ... \geq \mu_n \geq -d
    0 = \lambda_1 \leq \lambda_2 \leq ... \leq \lambda_n \leq 2

Then Cheeger's inequality gives:

    \lambda_2 / 2 \leq \Phi_G \leq \sqrt{2 \lambda_2}

where \Phi_G is the expansion of the graph.

2 Random Walk on Graphs

2.1 Basic Idea

First of all, we define the random walk as follows:

- Start at some vertex.
- At every step, go to a uniformly random neighbor of the current vertex.

We use x \in R^n to represent the current state; e.g., if we are at the i-th vertex, then x_i = 1 and all other entries are 0. We then define the random walk matrix M so that x_{t+1} = M x_t. For a d-regular graph, it is easy to check that M = (1/d) A. This definition also applies when x is a probability distribution over the vertices.

For a general graph, the random walk matrix is M = A D^{-1}, where D_{ii} = deg(i) and D_{ij} = 0 for i \neq j (each column of M then sums to one, so M maps distributions to distributions). Notice that this matrix has the same eigenvalues as the symmetric matrix D^{-1/2} A D^{-1/2}, since the two are similar.

Returning to d-regular graphs, we can see that the uniform distribution is stationary:

    M (1/n) \mathbf{1} = (1/n) \mathbf{1},

since \mathbf{1} is an eigenvector of M with eigenvalue 1. So the question now is: if we start from an arbitrary distribution, how quickly do we converge to the uniform (stationary) distribution? Explicitly, the walk progresses as

    x, Mx, M^2 x, ..., M^t x, ...
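As a minimal numerical sketch (not from the notes), we can run x -> Mx on the 5-cycle C_5 (d = 2, non-bipartite, chosen here purely for illustration) and watch the distance to the uniform distribution shrink by roughly \mu/d per step, where \mu = max{|\mu_2|, |\mu_n|}:

```python
import numpy as np

# 5-cycle: 2-regular and non-bipartite, so mu < d and the walk converges.
n, d = 5, 2
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0
M = A / d                               # random walk matrix

w = np.sort(np.linalg.eigvalsh(A))      # mu_n <= ... <= mu_2 <= mu_1 = d
mu = max(abs(w[0]), abs(w[-2]))         # mu = max{|mu_n|, |mu_2|}

x = np.zeros(n); x[0] = 1.0             # start at vertex 0
u = np.ones(n) / n                      # stationary (uniform) distribution
dists = []
for _ in range(60):
    dists.append(np.linalg.norm(x - u))
    x = M @ x

rate = dists[-1] / dists[-2]            # empirical per-step shrink factor
```

Once the faster-decaying components die out, `rate` matches the bound \mu/d (about 0.809 for C_5) essentially exactly.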

We split x as x = x_\parallel + x_\perp, where x_\parallel is the component along the stationary distribution and x_\perp is orthogonal to it. Since \langle x, \mathbf{1} \rangle = 1 for a distribution x, we have:

    x_\parallel = (1/n) \langle x, \mathbf{1} \rangle \mathbf{1} = (1/n) \mathbf{1}
    M x = M (x_\parallel + x_\perp) = x_\parallel + M x_\perp

Thus, the quantity we care about is \| M^t x - x_\parallel \|: when it gets close to 0, we are converging to the stationary distribution. Considering one step, we have:

    \| M x - x_\parallel \| = \| M x_\perp \| \leq (\mu / d) \| x_\perp \| = (\mu / d) \| x - x_\parallel \|,

where \mu = max{ |\mu_2|, |\mu_n| }. Then after l steps:

    \| M^l x - x_\parallel \| \leq (\mu / d)^l \| x - x_\parallel \|.

If we want the upper bound \| M^l x - x_\parallel \| \leq \varepsilon, it suffices that (\mu/d)^l \leq \varepsilon, i.e.,

    l \geq log(1/\varepsilon) / log(d/\mu).

So within l = O( log(1/\varepsilon) / log(d/\mu) ) steps, the distribution gets very close to the stationary distribution.

2.2 Lazy Random Walk

The previous random walk moves to a new state at every step. Now let us look at a "lazy" variant, which stays at the current vertex with probability 1/2 and takes a random-walk step with probability 1/2. The lazy random walk matrix is thus

    M' = (1/2) I + (1/2) M,

so when M has eigenvalues

    1 = \mu_1 / d \geq \mu_2 / d \geq ... \geq \mu_n / d \geq -1,

the eigenvalues of M' are

    1 = 1/2 + \mu_1/(2d) \geq 1/2 + \mu_2/(2d) \geq ... \geq 1/2 + \mu_n/(2d) \geq 0.

One convenience of this is that we no longer need to consider both \mu_2 and \mu_n: all eigenvalues of M' are nonnegative, so this time

    max{ |1/2 + \mu_2/(2d)|, |1/2 + \mu_n/(2d)| } = 1/2 + \mu_2/(2d).
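The point of laziness shows up clearly on a bipartite graph. The sketch below (not from the notes; the 4-cycle is chosen as the smallest bipartite regular example) checks that the plain walk oscillates forever because \mu_n = -d, while the lazy walk M' = I/2 + M/2 has eigenvalues (1 + \mu_i/d)/2 in [0, 1] and converges:

```python
import numpy as np

# 4-cycle: 2-regular and bipartite, the worst case for the plain walk.
n, d = 4, 2
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0
M = A / d
M_lazy = np.eye(n) / 2 + M / 2

eigs_plain = np.sort(np.linalg.eigvalsh(M))      # contains -1
eigs_lazy = np.sort(np.linalg.eigvalsh(M_lazy))  # all in [0, 1]

u = np.ones(n) / n
x_plain = np.zeros(n); x_plain[0] = 1.0          # start at vertex 0
x_lazy = x_plain.copy()
for _ in range(100):
    x_plain = M @ x_plain
    x_lazy = M_lazy @ x_lazy

gap_plain = np.linalg.norm(x_plain - u)  # stays bounded away from 0
gap_lazy = np.linalg.norm(x_lazy - u)    # essentially 0
```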

2.3 Expanders

Now we introduce the concept of an expander graph: an expander graph is a sparse graph that has strong connectivity properties, quantified using vertex, edge or spectral expansion. Expander constructions have spawned research in pure and applied mathematics, with several applications to complexity theory, design of robust computer networks, and the theory of error-correcting codes [Wikipedia].

For example, suppose the expansion of a graph satisfies \Phi_G \geq 0.1. Then by Cheeger's inequality,

    0.1 \leq \Phi_G \leq \sqrt{2 \lambda_2}  ==>  \lambda_2 \geq 1/200,

and thus

    1 - \mu_2 / d \geq 1/200  ==>  \mu_2 \leq (199/200) d,

which is to say \mu_2 \leq c d for some constant c < 1. This motivates the spectral definition of an expander:

    \mu = max{ |\mu_2|, |\mu_n| } \leq c d,  with c < 1.

For more information about expander graphs and their applications, please refer to the survey [Hoory06].

3 Application of Random Walks

Here we will use random walks to design an algorithm that uses fewer random bits but has equivalent performance.

3.1 Problem Setup

First, suppose we have a randomized algorithm Algo which, given any input x, decides whether x \in L:

    with input (x, r):
        x \in L     ==>  Pr_r[ Algo(x, r) = YES ] \geq 1/2
        x \notin L  ==>  Pr_r[ Algo(x, r) = YES ] = 0

(The class of languages L for which such an algorithm exists is called RP, Randomized Polynomial Time.) Our objective is to use expander graphs to improve the success probability 1/2 above to something close to 1.

3.2 Basic Idea

A basic idea for doing this is:

- Run the algorithm with l independent random strings r_1, r_2, ..., r_l.

- Output YES if any run says YES; otherwise output NO.

This algorithm Algo' gives us the following conclusion:

    x \in L     ==>  Pr_{r_1, ..., r_l}[ Algo'(x, r) = YES ] \geq 1 - 1/2^l
    x \notin L  ==>  Pr_{r_1, ..., r_l}[ Algo'(x, r) = YES ] = 0

If |r| = R, then Algo' uses l R random bits. In fact, we can select the r_i more wisely and use fewer random bits to reach the same conclusion: the same guarantee is achievable with just O(l + R) random bits.

3.3 Applying a Random Walk on a Graph

The algorithm works as follows. First, assume we have access to an expander graph G with 2^R vertices (one per random string) and constant degree d = O(1); for example d = 10, so that \mu = max{ |\mu_2|, |\mu_n| } \leq (9/10) d. Then we sample the r_i as follows:

- r_1: a uniformly random vertex of G.
- r_2: a random neighbor of r_1.
- ...
- r_l: a random neighbor of r_{l-1}.

Thus the number of random bits we use is

    R + (log d)(l - 1) = R + O(l).

Now we need to prove that we achieve the same level of accuracy (equivalently, that a random walk on an expander graph is almost as good as uniform sampling).

Lemma 3.1 For an adjacency matrix A,

    (A^l)_{ij} = # walks of length l from i to j.

Proof: First of all, for l = 2 it is obvious that

    (A^2)_{ij} = \sum_k A_{ik} A_{kj}.

Then by induction, we find that

    (A^l)_{ij} = \sum_{k_1, k_2, ..., k_{l-1}} A_{i k_1} A_{k_1 k_2} \cdots A_{k_{l-1} j},

and the right-hand side is exactly the number of walks of length l from i to j.

Now, let S \subseteq V be a set such that |S| \geq n/2.

Lemma 3.2 Pr[ a random walk of length l never visits S ] = 2^{-\Omega(l)}.
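Lemma 3.1 is easy to check by brute force. The sketch below (not from the notes; the 4-vertex graph is an arbitrary small example) compares (A^l)_{ij} against a direct enumeration of the intermediate vertices (k_1, ..., k_{l-1}), exactly as in the inductive proof:

```python
import numpy as np
from itertools import product

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]])
n = A.shape[0]
l = 3

def count_walks(i, j, l):
    # Enumerate every choice of intermediate vertices k_1, ..., k_{l-1}
    # and count the sequences in which every consecutive pair is an edge.
    return sum(
        all(A[p, q] for p, q in zip((i, *mid), (*mid, j)))
        for mid in product(range(n), repeat=l - 1)
    )

Al = np.linalg.matrix_power(A, l)
brute = np.array([[count_walks(i, j, l) for j in range(n)]
                  for i in range(n)])
```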

Proof: Given x \in L, define

    S = { r : Algo(x, r) = YES },

so that |S| \geq n/2, where n = 2^R is the number of vertices. From Lemma 3.1, the total number of walks of length l is

    \sum_{i,j} (A^l)_{ij} = \mathbf{1}^T A^l \mathbf{1} = d^l \mathbf{1}^T \mathbf{1} = d^l n.

Then define a matrix \hat{A} by

    \hat{A}_{ij} = 0 if i \in S or j \in S, and \hat{A}_{ij} = A_{ij} otherwise,

so the number of walks that avoid S is \mathbf{1}^T \hat{A}^l \mathbf{1}. If we can show that the quadratic form of \hat{A} is bounded strictly below d \|x\|^2, we are done.

For any x, we have x^T \hat{A} x = z^T A z, where

    z_i = 0 if i \in S, and z_i = x_i otherwise,

since x^T \hat{A} x = \sum_{ij} \hat{A}_{ij} x_i x_j = \sum_{ij} A_{ij} z_i z_j = z^T A z.

As in Section 2.1, split z = z_\parallel + z_\perp, where z_\parallel = ( \langle z, \mathbf{1} \rangle / n ) \mathbf{1}. Now:

    z^T A z = (z_\parallel + z_\perp)^T A (z_\parallel + z_\perp)
            = (z_\parallel + z_\perp)^T ( d z_\parallel + A z_\perp )
            = d \|z_\parallel\|^2 + \langle z_\perp, A z_\perp \rangle
            \leq d \|z_\parallel\|^2 + \mu \|z_\perp\|^2
            = d \|z_\parallel\|^2 + \mu ( \|z\|^2 - \|z_\parallel\|^2 ).

At the same time, since z has at most n - |S| nonzero entries, Cauchy-Schwarz gives \langle z, \mathbf{1} \rangle^2 \leq (n - |S|) \|z\|^2, so

    \|z_\parallel\|^2 = \langle z, \mathbf{1} \rangle^2 / n \leq ( 1 - |S|/n ) \|z\|^2 \leq (1/2) \|z\|^2,

recalling that |S|/n \geq 1/2. Putting the two together:

    z^T A z \leq d \|z_\parallel\|^2 + \mu ( \|z\|^2 - \|z_\parallel\|^2 )
            = (d - \mu) \|z_\parallel\|^2 + \mu \|z\|^2
            \leq ( (1/2) d + (1/2) \mu ) \|z\|^2
            \leq ( (d + \mu)/2 ) \|x\|^2,
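The two key facts in this proof, that the spectral norm of \hat{A} is at most (d + \mu)/2 and that the fraction of length-l walks avoiding S decays like ((d + \mu)/(2d))^l, can be checked numerically. The sketch below (not from the notes) uses the complete graph K_6 as a stand-in expander, where d = 5 and \mu = 1 since the adjacency eigenvalues are {5, -1, ..., -1}:

```python
import numpy as np

n, d = 6, 5
A = np.ones((n, n)) - np.eye(n)   # adjacency matrix of K_6
mu = 1.0                          # max{|mu_2|, |mu_n|} for K_6
S = [0, 1, 2]                     # a "good" set with |S| = n/2

# Zero out the rows and columns of S to count walks avoiding S.
A_hat = A.copy()
A_hat[S, :] = 0.0
A_hat[:, S] = 0.0

# Largest eigenvalue of A_hat, to compare against (d + mu)/2.
top = np.linalg.eigvalsh(A_hat)[-1]

one = np.ones(n)
lengths = (1, 2, 4, 8)
ratios = []
for l in lengths:
    avoid = one @ np.linalg.matrix_power(A_hat, l) @ one
    total = one @ np.linalg.matrix_power(A, l) @ one
    ratios.append(avoid / total)  # Pr[length-l walk avoids S]
```

Here `top` is 2 (the walks avoiding S live on a copy of K_3), comfortably below (d + \mu)/2 = 3, and each ratio sits below the bound ((d + \mu)/(2d))^l = 0.6^l.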

which is to say x^T \hat{A} x \leq ( (d + \mu)/2 ) \|x\|^2. Recall that \mu = max{ |\mu_2|, |\mu_n| } \leq c d with c < 1. Then we have:

    Pr[ a random walk of length l never visits S ]
        = ( \mathbf{1}^T \hat{A}^l \mathbf{1} ) / ( \mathbf{1}^T A^l \mathbf{1} )
        \leq ( ( (d + \mu)/2 )^l n ) / ( d^l n )
        \leq ( (1 + c)/2 )^l
        = 2^{-\Omega(l)}.

Since Pr[ Algo' answers NO on x \in L ] = Pr[ the random walk never visits S ], we conclude from Lemma 3.2 that

    x \in L  ==>  Pr_{r_1, ..., r_l}[ Algo'(x, r) = YES ] \geq 1 - 2^{-\Omega(l)},

i.e., we achieve the same accuracy as before using only R + (log d)(l - 1) random bits.

References

[Hoory06] S. Hoory, N. Linial and A. Wigderson, Expander Graphs and Their Applications, Bulletin of the American Mathematical Society, Oct. 2006, Vol. 43, pp. 439-561.