Eigenvalues of Random Graphs


Spectral Graph Theory
Lecture 2: Eigenvalues of Random Graphs
Daniel A. Spielman, November 4, 2012

2.1 Introduction

In this lecture, we consider a random graph on $n$ vertices in which each edge is chosen to be in the graph with probability one half, independently of course. We will show that the eigenvalues of the adjacency matrix of such a graph are tightly concentrated. Curiously, the adjacency matrix eigenvalues are much more tightly concentrated than the Laplacian matrix eigenvalues.

The adjacency matrix of such a random graph may be described by choosing the values of $A(i,j)$ to be zero with probability $1/2$ and one with probability $1/2$, subject to $A(i,j) = A(j,i)$. Of course, we fix $A(i,i) = 0$ for all $i$.

The expectation of every off-diagonal entry of the matrix is $1/2$. Let $M$ denote this expected matrix, and observe that

$$M = \frac{1}{2} A_{K_n} = \frac{1}{2} (J_n - I_n),$$

where $A_{K_n}$ is the adjacency matrix of the complete graph on $n$ vertices, $J_n$ is the all-1s matrix, and $I_n$ is of course the identity. From this formula, we see that $M$ has one eigenvalue of $(n-1)/2$ and $n-1$ eigenvalues of $-1/2$.

We will show that the eigenvalues of $A$ are very close to this. In particular, we will prove that

$$\|A - M\| \le 1.34 \sqrt{n}$$

with exponentially high probability. So, we will really focus on bounding the norm of $A - M$. As $A - M$ is a symmetric matrix, we have

$$\|A - M\| = \max_i |\lambda_i(A - M)| = \max_x \frac{|x^T (A - M) x|}{x^T x}.$$

Our analysis will focus on this last term. Set $R = A - M$, and let $r_{i,j} = R(i,j)$ for $i < j$. Each $r_{i,j}$ is a random variable that is independently and uniformly distributed on $\pm 1/2$.
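We can sanity-check this claim numerically before proving it. The following is a minimal NumPy sketch (not part of the notes; the helper `sample_R` is named here just for illustration) that samples $R = A - M$ directly and compares its norm to $1.34\sqrt{n}$:

```python
import numpy as np

def sample_R(n, rng):
    """Sample R = A - M: symmetric, zero diagonal, off-diagonal
    entries independently uniform on {-1/2, +1/2}."""
    U = rng.choice([-0.5, 0.5], size=(n, n))
    R = np.triu(U, 1)   # keep the strict upper triangle
    return R + R.T      # symmetrize; the diagonal stays zero

rng = np.random.default_rng(0)
n = 500
norms = [np.linalg.norm(sample_R(n, rng), ord=2) for _ in range(20)]
print(max(norms), 1.34 * np.sqrt(n))  # largest observed ||R|| vs. the claimed bound
```

In such experiments the observed norm hovers near $\sqrt{n}$, well inside the claimed bound.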

2.2 One Rayleigh Quotient

To begin, we fix any unit vector $x$, and consider

$$x^T R x = \sum_{i < j} 2 r_{i,j} x(i) x(j).$$

This is a sum of independent random variables, and so may be proved to be tightly concentrated around its expectation, which in this case is zero. There are many types of concentration bounds, with the most popular being the Chernoff and Hoeffding bounds. In this case we will apply Hoeffding's inequality.

Theorem 2.2.1 (Hoeffding's Inequality). Let $a_1, \dots, a_m$ and $b_1, \dots, b_m$ be real numbers and let $X_1, \dots, X_m$ be independent random variables such that $X_i$ takes values between $a_i$ and $b_i$. Let $\mu = \mathbb{E}\left[\sum_i X_i\right]$. Then, for every $t > 0$,

$$\Pr\left[\sum_i X_i \ge \mu + t\right] \le \exp\left(\frac{-2 t^2}{\sum_i (b_i - a_i)^2}\right).$$

To apply this theorem, we view

$$X_{i,j} = 2 r_{i,j} x(i) x(j)$$

as our random variables. As $r_{i,j}$ takes values in $\pm 1/2$, we can set

$$a_{i,j} = -x(i) x(j) \quad \text{and} \quad b_{i,j} = x(i) x(j).$$

We then compute

$$\sum_{i<j} (b_{i,j} - a_{i,j})^2 = \sum_{i<j} 4 x(i)^2 x(j)^2 \le 2 \left( \sum_i x(i)^2 \right) \left( \sum_j x(j)^2 \right) = 2,$$

as $x$ is a unit vector. We thereby obtain the following bound on $x^T R x$.

Lemma 2.2.2. For every unit vector $x$,

$$\Pr_R\left[ |x^T R x| \ge t \right] \le 2 e^{-t^2}.$$

Proof. The expectation of $x^T R x$ is 0. The preceding argument tells us that

$$\Pr\left[ |x^T R x| \ge t \right] \le \Pr\left[ x^T R x \ge t \right] + \Pr\left[ -x^T R x \ge t \right] = \Pr\left[ x^T R x \ge t \right] + \Pr\left[ x^T (-R) x \ge t \right] \le 2 e^{-t^2},$$

where we have exploited the fact that $R$ and $-R$ are identically distributed.
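Lemma 2.2.2 is easy to test by simulation. Here is a minimal NumPy sketch (not from the notes): fix one unit vector $x$, sample many matrices $R$, and compare the empirical tail to $2 e^{-t^2}$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials, t = 100, 20000, 1.0

x = rng.standard_normal(n)
x /= np.linalg.norm(x)          # one fixed unit vector

hits = 0
for _ in range(trials):
    U = rng.choice([-0.5, 0.5], size=(n, n))
    R = np.triu(U, 1)
    R = R + R.T                 # symmetric, zero diagonal
    hits += abs(x @ R @ x) >= t

print(hits / trials, 2 * np.exp(-t**2))  # empirical tail vs. the Hoeffding bound
```

The empirical tail typically comes out well below the bound; Hoeffding's inequality pays for its generality with slack in the constant.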

2.3 Vectors near v

You might be wondering what good the previous argument will do us. We have shown that it is unlikely that the Rayleigh quotient of any given $x$ is large. But, we have to reason about all $x$ of unit norm.

Lemma 2.3.1. Let $R$ be a symmetric matrix and let $v$ be a unit eigenvector of $R$ whose eigenvalue has absolute value $\|R\|$. If $x$ is another unit vector such that $|v^T x| \ge \sqrt{3}/2$, then

$$|x^T R x| \ge \|R\|/2.$$

Proof. Let $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n$ be the eigenvalues of $R$ and let $v_1, \dots, v_n$ be a corresponding set of orthonormal eigenvectors. Assume without loss of generality that $\lambda_1 \ge |\lambda_n|$ (otherwise, replace $R$ by $-R$) and that $v = v_1$. Expand $x$ in the eigenbasis as

$$x = \sum_i c_i v_i.$$

We know that $|c_1| \ge \sqrt{3}/2$ and $\sum_i c_i^2 = 1$. This implies that

$$x^T R x = \sum_i c_i^2 \lambda_i \ge c_1^2 \lambda_1 - \sum_{i \ge 2} c_i^2 \lambda_1 = \lambda_1 \left( c_1^2 - (1 - c_1^2) \right) = \lambda_1 (2 c_1^2 - 1) \ge \lambda_1 / 2,$$

as $c_1^2 \ge 3/4$. Since $\lambda_1 = \|R\|$, the lemma follows.
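A quick numerical illustration of Lemma 2.3.1 (a sketch assuming NumPy, not part of the notes): build a unit vector whose inner product with the top eigenvector is exactly $\sqrt{3}/2$ and check the conclusion.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
S = rng.standard_normal((n, n))
R = (S + S.T) / 2                       # any symmetric matrix will do

w, V = np.linalg.eigh(R)
v = V[:, np.argmax(np.abs(w))]          # unit eigenvector with |eigenvalue| = ||R||

g = rng.standard_normal(n)
g -= (v @ g) * v                        # project away the component along v
g /= np.linalg.norm(g)
x = (np.sqrt(3) / 2) * v + 0.5 * g      # unit vector with v^T x = sqrt(3)/2

print(abs(x @ R @ x), np.linalg.norm(R, 2) / 2)  # lemma: first >= second
```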

We will bound the probability that $\|R\|$ is large by taking Rayleigh quotients with random unit vectors. Let's examine the probability that a random unit vector $x$ satisfies the conditions of Lemma 2.3.1.

Lemma 2.3.2. Let $v$ be an arbitrary unit vector, and let $x$ be a random unit vector. Then,

$$\Pr\left[ v^T x \ge \sqrt{3}/2 \right] \ge \frac{1}{\sqrt{\pi}\, n\, 2^n}.$$

Proof. Let $B_n$ denote the unit ball in $\mathbb{R}^n$, and let $C$ denote the cap on the surface of $B_n$ containing all vectors $x$ such that $v^T x \ge \sqrt{3}/2$. We need to lower bound the ratio of the surface area of the cap $C$ to the surface area of $B_n$. Recall that the surface area of $B_n$ is

$$\frac{n \pi^{n/2}}{\Gamma(\frac{n}{2} + 1)},$$

where I recall that for positive integers $n$,

$$\Gamma(n) = (n-1)!,$$

and that $\Gamma(x)$ is an increasing function for real $x$.

Now, consider the $(n-1)$-dimensional hypersphere whose boundary is the boundary of the cap $C$. As the cap $C$ lies above this hypersphere, the $(n-1)$-dimensional volume of this hypersphere is a lower bound on the surface area of the cap $C$. Recall that the volume of a sphere in $\mathbb{R}^n$ of radius $r$ is

$$\frac{r^n \pi^{n/2}}{\Gamma(\frac{n}{2} + 1)}.$$

In our case, the radius of the hypersphere is $r = \sin(\arccos(\sqrt{3}/2)) = 1/2$. So, the ratio of the $(n-1)$-dimensional volume of the hypersphere to the surface area of $B_n$ is at least

$$\frac{r^{n-1} \pi^{(n-1)/2} / \Gamma(\frac{n-1}{2} + 1)}{n \pi^{n/2} / \Gamma(\frac{n}{2} + 1)} = \frac{r^{n-1}\, \Gamma(\frac{n}{2} + 1)}{\sqrt{\pi}\, n\, \Gamma(\frac{n-1}{2} + 1)} \ge \frac{r^{n-1}}{\sqrt{\pi}\, n} \ge \frac{1}{\sqrt{\pi}\, n\, 2^n}.$$

2.4 The Probabilistic Argument

I'm going to do the following argument very slowly, because it is both very powerful and very subtle.

Theorem 2.4.1. Let $R$ be a symmetric matrix with zero diagonal and off-diagonal entries uniformly chosen from $\pm 1/2$. Then,

$$\Pr_R\left[ \|R\| \ge t \right] \le 2 \sqrt{\pi}\, n\, 2^n e^{-t^2/4}.$$

Proof. Let $R$ be a fixed symmetric matrix. By applying Lemma 2.3.2 to any eigenvector of $R$ whose eigenvalue has maximal absolute value, we find

$$\Pr_x\left[ |x^T R x| \ge \|R\|/2 \right] \ge \frac{1}{\sqrt{\pi}\, n\, 2^n}.$$

Thus, for a random $R$ we find

$$\Pr_{R,x}\left[ \|R\| \ge t \text{ and } |x^T R x| \ge \|R\|/2 \right] = \Pr_R\left[ \|R\| \ge t \right] \cdot \Pr_{R,x}\left[ |x^T R x| \ge \|R\|/2 \;\middle|\; \|R\| \ge t \right] \ge \Pr_R\left[ \|R\| \ge t \right] \cdot \frac{1}{\sqrt{\pi}\, n\, 2^n}.$$

On the other hand,

$$\Pr_{R,x}\left[ \|R\| \ge t \text{ and } |x^T R x| \ge \|R\|/2 \right] \le \Pr_{R,x}\left[ \|R\| \ge t \text{ and } |x^T R x| \ge t/2 \right] \le \Pr_{R,x}\left[ |x^T R x| \ge t/2 \right] = \mathbb{E}_x\left[ \Pr_R\left[ |x^T R x| \ge t/2 \right] \right] \le 2 e^{-(t/2)^2},$$

where the last inequality follows from Lemma 2.2.2. Combining these inequalities, we obtain

$$\Pr_R\left[ \|R\| \ge t \right] \le 2 \sqrt{\pi}\, n\, 2^n e^{-(t/2)^2},$$

which is the claimed result.

The probability in Theorem 2.4.1 becomes small once $e^{t^2/4}$ exceeds $2\sqrt{\pi}\, n\, 2^n$. As $n$ grows large, this happens for

$$t > 2 \sqrt{\ln 2}\, \sqrt{n} \approx (5/3) \sqrt{n}.$$

This is a little worse than the bound that I claimed at the beginning of the lecture. The reason is that I have optimized this proof so that all the numbers that appear are nice. To get the tightest bound possible, we should choose an inner product other than $\sqrt{3}/2$ in Lemma 2.3.1, and then propagate the change through the proof. If we replace $\sqrt{3}/2$ by $0.957$, we obtain an upper bound on the norm of $1.34 \sqrt{n}$.

It is known that the norm of $R$ is unlikely to be much more than $\sqrt{n}$. This is proved by Füredi and Komlós [FK81] and Vu [Vu07], using a very different technique. The idea behind these papers is to consider $\mathrm{Tr}(R^k)$ for a high power $k$. They show that the expectation of this variable is not too large, and exploit the fact that, for even $k$,

$$\|R\| \le \left( \mathrm{Tr}(R^k) \right)^{1/k}.$$
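To see that trace bound in action, here is a minimal NumPy sketch (not from [FK81] or [Vu07]) comparing $\|R\|$ with $(\mathrm{Tr}(R^k))^{1/k}$ for an even $k$:

```python
import numpy as np

rng = np.random.default_rng(3)
n, power = 300, 10                  # power must be even: Tr(R^k) = sum_i lambda_i^k >= ||R||^k

U = rng.choice([-0.5, 0.5], size=(n, n))
R = np.triu(U, 1)
R = R + R.T                         # symmetric, zero diagonal

norm = np.linalg.norm(R, ord=2)
trace_bound = np.trace(np.linalg.matrix_power(R, power)) ** (1 / power)

print(norm, trace_bound, np.sqrt(n))  # norm <= trace_bound, both on the order of sqrt(n)
```

Since $\mathrm{Tr}(R^k) \le n \|R\|^k$, the overshoot is at most a factor of $n^{1/k}$, which tends to 1 as $k$ grows; this is why taking a high power $k$ pins down the norm.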

2.5 Question

Is there a variant of this proof that yields good bounds when the entries of $A$ have a lower probability of being 1? For example, consider the case in which there is a number $p < 0.5$ such that each entry of $A$ is 1 with probability $p$. To get a good proof in this regime, one needs a concentration inequality that is stronger than Hoeffding's.

References

[FK81] Z. Füredi and J. Komlós. The eigenvalues of random symmetric matrices. Combinatorica, 1(3):233–241, 1981.

[Vu07] Van Vu. Spectral norm of random matrices. Combinatorica, 27(6):721–736, 2007.