Graph Sparsification III: Ramanujan Graphs, Lifts, and Interlacing Families


Graph Sparsification III: Ramanujan Graphs, Lifts, and Interlacing Families. Nikhil Srivastava, Microsoft Research India. Simons Institute, August 27, 2014.

The Last Two Lectures. Lecture 1: every weighted undirected graph G has a weighted subgraph H with O(n log n / ε²) edges which satisfies L_G ⪯ L_H ⪯ (1+ε) L_G (random sampling). Lecture 2: improved this to 4n/ε² edges (greedy). Suboptimal for K_n in two ways: the weights, and 2n/ε².

Good Sparsifiers of K_n. G = K_n; H = (random d-regular graph) × (n/d). |E_G| = O(n²), |E_H| = O(dn). (1−ε) L_{K_n} ⪯ L_H ⪯ (1+ε) L_{K_n}.

Good Sparsifiers of K_n. G = K_n; H = (random d-regular graph) × (n/d). |E_G| = O(n²), |E_H| = O(dn). Equivalently, n(1−ε) ⪯ L_H ⪯ n(1+ε) on the space orthogonal to the all-ones vector.

Regular Unweighted Sparsifiers of K_n. G = K_n. Rescale the weights back to 1: H = random d-regular graph. |E_G| = O(n²), |E_H| = dn/2. d(1−ε) ⪯ L_H ⪯ d(1+ε).

Regular Unweighted Sparsifiers of K_n. G = K_n; H = random d-regular graph. |E_G| = O(n²), |E_H| = dn/2. [Friedman 08]: d − 2√(d−1) ⪯ L_H ⪯ d + 2√(d−1) + o(1). We will try to match this.

Why do we care so much about K_n? Unweighted d-regular approximations of K_n are called expanders. They behave like random graphs: the right number of edges across cuts, fast mixing of random walks. The prototypical pseudorandom object, with many uses in CS and math (routing, coding, complexity, ...).

Switch to the Adjacency Matrix. Let G be a graph and A be its adjacency matrix, with eigenvalues λ_1 ≥ λ_2 ≥ … ≥ λ_n. For the example graph on vertices {a, b, c, d, e}:

A =
0 1 0 0 1
1 0 1 0 1
0 1 0 1 0
0 0 1 0 1
1 1 0 1 0

For a d-regular graph, L = dI − A.

If G is d-regular, then A·1 = d·1, so λ_1 = d. If G is bipartite, the eigenvalues are symmetric about zero, so λ_n = −d. These are the trivial eigenvalues.

Spectral Expanders. Definition: G is a good expander if all non-trivial eigenvalues are small (clustered near 0 inside [−d, d]). E.g. K_{d+1} has all nontrivial eigenvalues −1, and K_{d,d} has all nontrivial eigenvalues 0. Challenge: construct infinite families. Alon-Boppana 86: can't beat [−2√(d−1), 2√(d−1)].
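These extreme examples are easy to check numerically. A minimal sketch with numpy (the value of d and the matrix constructions are my own, using K_{d+1} as the d-regular complete graph):

```python
import numpy as np

d = 5

# Adjacency matrix of the complete graph K_{d+1} (d-regular on d+1 vertices).
A_complete = np.ones((d + 1, d + 1)) - np.eye(d + 1)

# Adjacency matrix of the complete bipartite graph K_{d,d} (d-regular on 2d vertices).
A_bipartite = np.block([[np.zeros((d, d)), np.ones((d, d))],
                        [np.ones((d, d)), np.zeros((d, d))]])

eigs_complete = np.sort(np.linalg.eigvalsh(A_complete))
eigs_bipartite = np.sort(np.linalg.eigvalsh(A_bipartite))

# K_{d+1}: trivial eigenvalue d, all nontrivial eigenvalues -1.
assert np.isclose(eigs_complete[-1], d)
assert np.allclose(eigs_complete[:-1], -1)

# K_{d,d}: trivial eigenvalues d and -d, all nontrivial eigenvalues 0.
assert np.isclose(eigs_bipartite[-1], d) and np.isclose(eigs_bipartite[0], -d)
assert np.allclose(eigs_bipartite[1:-1], 0)
```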

The meaning of 2√(d−1): the infinite d-ary tree T has spectrum λ(A_T) = [−2√(d−1), 2√(d−1)]. Alon-Boppana 86: this is the best possible spectral expander.

Ramanujan Graphs. Definition: G is Ramanujan if all non-trivial eigenvalues have absolute value at most 2√(d−1), i.e. lie in [−2√(d−1), 2√(d−1)] ⊂ [−d, d]. Friedman 08: a random d-regular graph is almost Ramanujan: 2√(d−1) + o(1). Margulis 88, Lubotzky-Phillips-Sarnak 88: infinite sequences of Ramanujan graphs exist for d = p + 1, p prime. What about d ≠ p + 1?

[Marcus-Spielman-S 13] Theorem: infinite families of bipartite Ramanujan graphs exist for every d ≥ 3. The proof is elementary and doesn't use number theory, but is not explicit. It is based on a new existence argument: the method of interlacing families of polynomials.

Bilu-Linial 06 Approach: find an operation which doubles the size of a graph without blowing up its eigenvalues (new eigenvalues should land in [−2√(d−1), 2√(d−1)] ⊂ [−d, d]).

2-lifts of graphs. Start with a graph on vertices {a, b, c, d, e}. Duplicate every vertex: a becomes a_0 and a_1, and so on. Then, for every edge of the original graph, either leave both copies on their own side (parallel edges), or make both copies cross between the sides. With m edges there are 2^m possible 2-lifts.

Eigenvalues of 2-lifts (Bilu-Linial). Given a 2-lift of G, create a signed adjacency matrix A_s with a −1 for crossing edges and a +1 for parallel edges:

A_s =
 0 -1  0  0  1
-1  0  1  0  1
 0  1  0 -1  0
 0  0 -1  0  1
 1  1  0  1  0

Eigenvalues of 2-lifts (Bilu-Linial). Theorem: the eigenvalues of the 2-lift are the union of the eigenvalues of A (the old eigenvalues λ_1, …, λ_n) and the eigenvalues of A_s (the new ones). Conjecture: every d-regular adjacency matrix A has a signing A_s with ‖A_s‖ ≤ 2√(d−1). Bilu-Linial 06: this is true with O(√(d log³ d)).
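The eigenvalue-union theorem is easy to verify numerically. A minimal sketch assuming numpy (the 5-cycle and the block construction of the lift are my own illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Adjacency matrix of a small graph (5-cycle).
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

# Random signing: +1 for parallel edges, -1 for crossing edges.
S = np.triu(rng.choice([-1, 1], size=(n, n)), 1)
S = S + S.T
A_s = A * S                  # signed adjacency matrix
A_plus = A * (A_s > 0)       # edges kept parallel
A_minus = A * (A_s < 0)      # edges made to cross

# Adjacency matrix of the 2-lift on 2n vertices.
lift = np.block([[A_plus, A_minus],
                 [A_minus, A_plus]])

old = np.linalg.eigvalsh(A)
new = np.linalg.eigvalsh(A_s)
lifted = np.linalg.eigvalsh(lift)

# Bilu-Linial: spectrum of the lift = spectrum of A together with spectrum of A_s.
assert np.allclose(np.sort(lifted), np.sort(np.concatenate([old, new])))
```

The check works because (x, x) is an eigenvector of the lift whenever x is an eigenvector of A_plus + A_minus = A, and (x, −x) whenever x is an eigenvector of A_plus − A_minus = A_s.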

We prove the conjecture in the bipartite case. Theorem: every d-regular bipartite adjacency matrix A has a signing A_s with ‖A_s‖ ≤ 2√(d−1). Trick: the eigenvalues of bipartite graphs are symmetric about 0, so we only need to bound the largest eigenvalue λ_1(A_s).

Random Signings. Idea 1: choose s ∈ {±1}^m randomly. Unfortunately, this fails: a random signing need not be Ramanujan (though Bilu-Linial showed that a random A_s is nearly Ramanujan).

Random Signings. Idea 2: consider the expected characteristic polynomial E_s χ_{A_s}(x), the coefficient-wise average of the χ_{A_s}. Averaging polynomials is usually useless, but not here: {χ_{A_s}} is an interlacing family, so there is an s such that λ_max(χ_{A_s}) ≤ λ_max(E_s χ_{A_s}).

3-Step Proof Strategy.
1. Show that some polynomial does as well as the average: there is an s such that λ_max(χ_{A_s}) ≤ λ_max(E_s χ_{A_s}). (Note the comparison is with the largest root of the expected polynomial, NOT with E λ_max(χ_{A_s}).)
2. Calculate the expected polynomial.
3. Bound the largest root of the expected polynomial.

Step 2: The expected polynomial. Theorem [Godsil-Gutman 81]: for any graph G, E_s χ_{A_s}(x) = μ_G(x), the matching polynomial of G.

The matching polynomial (Heilmann-Lieb 72): μ_G(x) = Σ_{i≥0} (−1)^i m_i x^{n−2i}, where m_i = the number of matchings with i edges. In the running example: one matching with 0 edges (the empty matching), 7 matchings with 1 edge, and so on.

Proof that E_s χ_{A_s}(x) = μ_G(x): expand the determinant of

xI − A_s =
 x ±1  0  0 ±1 ±1
±1  x ±1  0  0  0
 0 ±1  x ±1  0  0
 0  0 ±1  x ±1  0
±1  0  0 ±1  x ±1
±1  0  0  0 ±1  x

using permutations. The two entries coming from the same edge carry the same value. Taking expectations: a permutation contributes 0 if it hits any 0s, and contributes 0 in expectation if it takes just one entry from any edge (the signs average out). So the only permutations that count are involutions, and these correspond to matchings.
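The Godsil-Gutman identity can be verified by brute force on a small graph, averaging the characteristic polynomial over all 2^m signings. A sketch assuming numpy (the 5-cycle example and the helper num_matchings are mine):

```python
import itertools
import numpy as np

# Small graph: the 5-cycle.
n = 5
edges = [(i, (i + 1) % n) for i in range(n)]
m = len(edges)

# Average the characteristic polynomials det(xI - A_s) over all 2^m signings.
avg = np.zeros(n + 1)
for signs in itertools.product([1, -1], repeat=m):
    A_s = np.zeros((n, n))
    for (u, v), s in zip(edges, signs):
        A_s[u, v] = A_s[v, u] = s
    avg += np.poly(A_s)          # coefficients of det(xI - A_s), highest degree first
avg /= 2 ** m

# Matching polynomial mu_G(x) = sum_i (-1)^i m_i x^(n-2i), by brute force.
def num_matchings(edges, i):
    return sum(1 for subset in itertools.combinations(edges, i)
               if len({v for e in subset for v in e}) == 2 * i)

mu = np.zeros(n + 1)
for i in range(n // 2 + 1):
    mu[2 * i] += (-1) ** i * num_matchings(edges, i)  # coefficient of x^(n-2i)

# Godsil-Gutman: the average equals the matching polynomial.
assert np.allclose(avg, mu)
```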

3-Step Proof Strategy. 1. Show that some polynomial does as well as the average. 2. Calculate the expected polynomial: E_s χ_{A_s} = μ_G [Godsil-Gutman 81]. 3. Bound the largest root of the expected polynomial.

The matching polynomial (Heilmann-Lieb 72). Theorem (Heilmann-Lieb): all the roots of μ_G are real, and have absolute value at most 2√(d−1). The number 2√(d−1) comes from comparison with the infinite d-ary tree [Godsil].
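A quick numeric illustration of the Heilmann-Lieb bound on K_4 (3-regular), with the matching numbers counted by brute force (a sketch assuming numpy; the names are mine):

```python
import itertools
import numpy as np

# K_4: 3-regular graph on 4 vertices.
n, d = 4, 3
edges = list(itertools.combinations(range(n), 2))

def num_matchings(edges, i):
    return sum(1 for subset in itertools.combinations(edges, i)
               if len({v for e in subset for v in e}) == 2 * i)

# mu_G(x) = sum_i (-1)^i m_i x^(n-2i); for K_4: x^4 - 6x^2 + 3.
mu = np.zeros(n + 1)
for i in range(n // 2 + 1):
    mu[2 * i] += (-1) ** i * num_matchings(edges, i)

roots = np.roots(mu)
# Heilmann-Lieb: all roots real, with absolute value at most 2*sqrt(d-1).
assert np.allclose(roots.imag, 0)
assert np.max(np.abs(roots)) <= 2 * np.sqrt(d - 1) + 1e-9
```

Here the largest root is √(3+√6) ≈ 2.33, comfortably below 2√2 ≈ 2.83.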

3-Step Proof Strategy. 1. Show that some polynomial does as well as the average. 2. Calculate the expected polynomial [Godsil-Gutman 81]. 3. Bound the largest root of the expected polynomial [Heilmann-Lieb 72].

3-Step Proof Strategy. 1. Show that some polynomial does as well as the average: there is an s such that λ_max(χ_{A_s}) ≤ λ_max(E_s χ_{A_s}). Implied by: {χ_{A_s}} is an interlacing family.

Averaging Polynomials. Basic question: given real-rooted polynomials p_1, …, p_k, when are the roots of the average (1/k) Σ_i p_i related to the roots of the p_i? Answer: certainly not always. For example,

½ p(x) + ½ q(x), with p(x) = (x−1)(x−2) = x² − 3x + 2 and q(x) = (x−3)(x−4) = x² − 7x + 12,

equals (x − (2.5 + (√3/2)i))(x − (2.5 − (√3/2)i)) = x² − 5x + 7, which is not even real-rooted. But sometimes averaging does work.

A Sufficient Condition. Basic question: given p_1, …, p_k, when are the roots of the average related to the roots of the p_i? Answer: when they have a common interlacing. Definition: q(x) = Π_{i<n} (x − α_i) interlaces p(x) = Π_{i≤n} (x − β_i) if β_1 ≤ α_1 ≤ β_2 ≤ α_2 ≤ … ≤ α_{n−1} ≤ β_n. The p_j have a common interlacing if a single q interlaces all of them.

Theorem: if p_1, …, p_k have a common interlacing, then some p_i satisfies λ_max(p_i) ≤ λ_max(Σ_j p_j). Proof sketch: let α be the largest root of the common interlacer. Each monic p_j has exactly one root at or above α, so p_j(α) ≤ 0; hence Σ_j p_j(α) ≤ 0, and the largest root r of Σ_j p_j satisfies r ≥ α. Since Σ_j p_j(r) = 0, some p_i has p_i(r) ≥ 0, and because p_i has only one root beyond α, its largest root is at most r.

Interlacing Family of Polynomials. Definition: {p_s} is an interlacing family if the p_s can be placed on the leaves of a tree so that every internal node is the sum of the leaves below it, and every set of siblings has a common interlacing.

Interlacing Family of Polynomials. Theorem: there is an s so that λ_max(p_s) ≤ λ_max(Σ_s p_s). Proof: by the common interlacing among the children of the root, one of them, q, has λ_max(q) ≤ λ_max(Σ_s p_s); recurse from q down to a leaf.

An interlacing family. Theorem: let p_s = χ_{A_s} for s ∈ {±1}^m; then {p_s} is an interlacing family.

To prove the interlacing family: let p_{s_1,…,s_k}(x) be the sum of χ_{A_s}(x) over all signings s extending (s_1, …, s_k). Leaves of the tree = complete signings (s_1, …, s_m); internal nodes = partial signings (s_1, …, s_k). We need to find a common interlacing for the children of every internal node.

How to Prove a Common Interlacing. Lemma (Fisk 08, folklore): suppose p(x) and q(x) are monic and real-rooted. Then p and q have a common interlacing ⟺ every convex combination αp + (1−α)q, α ∈ [0,1], has real roots.
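Both directions of the lemma can be illustrated numerically: (x−1)(x−4) and (x−2)(x−3) have a common interlacing (any point of [2,3] lies between the roots of both), while the earlier pair (x−1)(x−2), (x−3)(x−4) does not. A sketch assuming numpy:

```python
import numpy as np

# A pair WITH a common interlacing (roots 1,4 and 2,3).
p = np.array([1, -5, 4])    # (x-1)(x-4)
q = np.array([1, -5, 6])    # (x-2)(x-3)

# A pair WITHOUT a common interlacing (roots 1,2 vs 3,4).
p2 = np.array([1, -3, 2])   # (x-1)(x-2)
q2 = np.array([1, -7, 12])  # (x-3)(x-4)

def all_combos_real_rooted(p, q, num=101):
    """Check real-rootedness of a*p + (1-a)*q on a grid of a in [0,1]."""
    for a in np.linspace(0, 1, num):
        roots = np.roots(a * p + (1 - a) * q)
        if not np.allclose(roots.imag, 0):
            return False
    return True

assert all_combos_real_rooted(p, q)        # common interlacing => real roots
assert not all_combos_real_rooted(p2, q2)  # e.g. the average x^2-5x+7 has complex roots
```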

To prove the interlacing family, we need to prove that for every partial signing, every convex combination of the two children is real-rooted. Here s_1, …, s_k are fixed, s_{k+1} is +1 with probability λ and −1 with probability 1 − λ, and s_{k+2}, …, s_m are uniformly random. Generalization of Heilmann-Lieb: it suffices to prove that E_s χ_{A_s}(x) is real-rooted for every product distribution on the entries of s.

Transformation to PSD Matrices. It suffices to show real-rootedness of E_s det(xI − (dI − A_s)); shifting by dI does not affect real-rootedness. Why is this useful? Because the signed Laplacian dI − A_s is a sum of independent rank-one PSD terms:

E_s det(xI − (dI − A_s)) = E det(xI − Σ_{(i,j)∈E} v_ij v_ij^T),

where v_ij = δ_i − δ_j with probability λ_ij, and v_ij = δ_i + δ_j with probability 1 − λ_ij.

Master Real-Rootedness Theorem: given any independent random vectors v_1, …, v_m ∈ R^d, their expected characteristic polynomial E det(xI − Σ_i v_i v_i^T) has real roots. How do we prove this?
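The theorem can be checked numerically for small cases by enumerating all outcomes of the v_i. A sketch assuming numpy (the two-point distributions below are my own choice):

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
d, m = 3, 4

# Each v_i is drawn uniformly from its own pair of fixed random vectors.
supports = [[rng.standard_normal(d), rng.standard_normal(d)] for _ in range(m)]

# E det(xI - sum_i v_i v_i^T), averaging over all 2^m outcomes.
avg = np.zeros(d + 1)
for choice in itertools.product([0, 1], repeat=m):
    M = sum(np.outer(supports[i][c], supports[i][c]) for i, c in enumerate(choice))
    avg += np.poly(M)            # characteristic polynomial coefficients
avg /= 2 ** m

roots = np.roots(avg)
# Master theorem: the expected characteristic polynomial is real-rooted.
assert np.allclose(roots.imag, 0, atol=1e-6)
```

Note that the individual polynomials det(xI − Σ_i v_i v_i^T) are of course real-rooted; the nontrivial content is that their average still is.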

The Multivariate Method. A. Sokal, '90s-2005: "it is often useful to consider the multivariate polynomial, even if one is ultimately interested in a particular one-variable specialization." Borcea-Brändén 2007+: prove that univariate polynomials are real-rooted by showing that they are nice transformations of real stable multivariate polynomials.

Real Stable Polynomials. Definition: p ∈ R[x_1, …, x_n] is real stable if every univariate restriction along a direction in the strictly positive orthant,

q(t) = p(x + t·y) with x ∈ R^n and y > 0 entrywise,

is real-rooted. (For p ∈ C[x_1, …, x_n] this condition defines stability; if p also has real coefficients, it is called real stable.) A restriction along a direction y that is not positive need not be real-rooted.

A Useful Real Stable Polynomial. Borcea-Brändén 08: for PSD matrices A_1, …, A_m, det(Σ_i x_i A_i) is real stable. Proof: every positive univariate restriction is, up to a positive constant, the characteristic polynomial of a symmetric matrix:

det(Σ_i x_i A_i + t Σ_i y_i A_i) = c · det(tI + S).
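A numeric spot-check of this stability statement: sample random PSD matrices A_i, restrict det(Σ_i x_i A_i) to a random line in a strictly positive direction, and confirm that the restriction is real-rooted. A sketch assuming numpy (the sizes, the interpolation trick, and the tolerances are mine):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 3, 4   # n variables, d x d matrices

# Random positive definite matrices A_1, ..., A_n (a fortiori PSD).
As = []
for _ in range(n):
    B = rng.standard_normal((d, d))
    As.append(B @ B.T + 0.2 * np.eye(d))

# Restrict p(x) = det(sum_i x_i A_i) to a line x + t*y with y > 0.
x = rng.standard_normal(n)
y = rng.random(n) + 0.5          # strictly positive direction

# The restriction is a degree-d polynomial in t; recover it by interpolation.
ts = np.linspace(-3, 3, d + 1)
vals = [np.linalg.det(sum((x[i] + t * y[i]) * As[i] for i in range(n)))
        for t in ts]
coeffs = np.polyfit(ts, vals, d)

roots = np.roots(coeffs)
# Real stability: every such univariate restriction is real-rooted.
assert np.allclose(roots.imag, 0, atol=1e-4)
```

This matches the proof: the roots in t are minus the eigenvalues of Y^{-1}X with X = Σ x_i A_i, Y = Σ y_i A_i ≻ 0, which are real because Y^{-1}X is similar to a symmetric matrix.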

Excellent Closure Properties. Equivalent definition: p is real stable if it has real coefficients and p(z_1, …, z_n) ≠ 0 whenever Im(z_i) > 0 for all i (this implies the restriction definition). If p is real stable, then so are: 1. p(α, z_2, …, z_n) for any α ∈ R; 2. (1 − ∂_{z_i}) p(z_1, …, z_n) [Lieb-Sokal 81].

Plan: apply these closure properties to the real stable polynomial det(Σ_i x_i A_i) to show that E det(xI − Σ_i v_i v_i^T) is real-rooted.

Central Identity. Suppose v_1, …, v_m are independent random vectors with A_i = E v_i v_i^T. Then

E det(xI − Σ_i v_i v_i^T) = Π_{i=1}^m (1 − ∂_{z_i}) det(xI + Σ_i z_i A_i) |_{z_1 = … = z_m = 0}.

Key principle: random rank-one updates become (1 − ∂_z) operators.
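The identity can be verified symbolically on a tiny case. A sketch assuming sympy (the two-point distributions for v_1, v_2 are my own, chosen so both sides come out to x² − 3x + 1):

```python
import sympy as sp

x, z1, z2 = sp.symbols('x z1 z2')
I2 = sp.eye(2)

# Two independent random vectors in R^2, each uniform over two outcomes.
V1 = [sp.Matrix([1, 0]), sp.Matrix([0, 1])]
V2 = [sp.Matrix([1, 1]), sp.Matrix([1, -1])]

# Left side: E det(xI - v1 v1^T - v2 v2^T), averaging over the 4 outcomes.
lhs = 0
for v1 in V1:
    for v2 in V2:
        lhs += (x * I2 - v1 * v1.T - v2 * v2.T).det()
lhs = lhs / 4

# Right side: prod_i (1 - d/dz_i) det(xI + z1 A1 + z2 A2) at z = 0,
# where A_i = E v_i v_i^T.
A1 = (V1[0] * V1[0].T + V1[1] * V1[1].T) / 2
A2 = (V2[0] * V2[0].T + V2[1] * V2[1].T) / 2
f = (x * I2 + z1 * A1 + z2 * A2).det()
g = f - sp.diff(f, z1)          # apply (1 - d/dz1)
g = g - sp.diff(g, z2)          # apply (1 - d/dz2)
rhs = g.subs({z1: 0, z2: 0})

assert sp.expand(lhs - rhs) == 0
assert sp.expand(lhs - (x**2 - 3*x + 1)) == 0
```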

Proof of Master Real-Rootedness Theorem. det(xI + Σ_i z_i A_i) is real stable, since the A_i = E v_i v_i^T are PSD. Each operator (1 − ∂_{z_i}) preserves real stability [Lieb-Sokal 81], and so does substituting the real values z_i = 0. A real stable polynomial in the single variable x is real-rooted, so by the central identity E det(xI − Σ_i v_i v_i^T) has real roots.

The Whole Proof.
E det(xI − Σ_i v_i v_i^T) is real-rooted for all independent v_i.
⇒ E χ_{A_s}(x) is real-rooted for all product distributions on signings.
⇒ {χ_{A_s}(x) : s ∈ {±1}^m} is an interlacing family (the rank-one structure naturally reveals interlacing).
⇒ there is an s such that λ_max(χ_{A_s}) ≤ λ_max(E_s χ_{A_s}).

3-Step Proof Strategy, completed. 1. Some polynomial does as well as the average (interlacing families). 2. The expected polynomial is the matching polynomial [Godsil-Gutman 81]. 3. Its largest root is at most 2√(d−1) [Heilmann-Lieb 72].

Infinite Sequences of Bipartite Ramanujan Graphs. We have found an operation, a 2-lift with a good signing, which doubles the size of a bipartite graph while keeping all new eigenvalues in [−2√(d−1), 2√(d−1)]; repeatedly applying it yields an infinite family.

Main Theme. We reduced the existence of a good matrix to: 1. proving real-rootedness of an expected polynomial (rank-1 structure + real stability); 2. bounding the roots of the expected polynomial (matching polynomial + combinatorics).

Beyond complete graphs: unweighted sparsifiers of general graphs?

Beyond complete graphs (figure: a graph G and a sparsifier H with O(n) edges).

Weights are Required in General (figure: G, in which one edge has high effective resistance, next to a candidate sparsifier H with O(n) edges).

What if all edges are equally important?

Unweighted Decomposition Theorem. Theorem [MSS 13]: if all edges have resistance O(n/m), there is a partition of G into unweighted (1+ε)-sparsifiers H_1, H_2, …, each with O(n/ε²) edges.

Unweighted Decomposition Theorem. Theorem [MSS 13]: if all edges have resistance α, there is a partition of G into unweighted O(1)-sparsifiers, each with O(mα) edges.

Unweighted Decomposition Theorem. Theorem [MSS 13]: if all edges have resistance α, there is a partition of G into two unweighted (1 + √α)-approximations, each with half as many edges.

Unweighted Decomposition Theorem. (Picture: Σ_{i∈S_1} v_i v_i^T ≈ I/2, Σ_i v_i v_i^T = I, Σ_{i∈S_2} v_i v_i^T ≈ I/2.) Theorem [MSS 13]: given any vectors v_i with Σ_i v_i v_i^T = I and ‖v_i‖² ≤ ε, there is a partition into two approximately 1/2-spherical quadratic forms, each = I/2 ± O(√ε).

Unweighted Decomposition Theorem. Proof: analyze the expected characteristic polynomial of a random partition (T, T^c):

E det(xI − Σ_{i∈T} v_i v_i^T) · det(xI − Σ_{i∉T} v_i v_i^T).

Other applications: the Kadison-Singer Problem; uncertainty principles.
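For tiny instances the partition can be found by brute force. A sketch assuming numpy, with 8 vectors along evenly spaced directions in R², scaled so the quadratic forms sum to I, and restricting the search to equal-size halves for simplicity (all of these choices are mine):

```python
import itertools
import numpy as np

# 8 vectors in R^2 along evenly spread directions, scaled so sum_i v_i v_i^T = I.
angles = [k * np.pi / 8 for k in range(8)]
vs = [0.5 * np.array([np.cos(a), np.sin(a)]) for a in angles]
total = sum(np.outer(v, v) for v in vs)
assert np.allclose(total, np.eye(2))

# Brute-force search over equal halves: minimize the operator-norm deviation
# of one half's quadratic form from I/2.
best = np.inf
for S in itertools.combinations(range(8), 4):
    half = sum(np.outer(vs[i], vs[i]) for i in S)
    best = min(best, np.linalg.norm(half - np.eye(2) / 2, 2))

# An essentially perfect split exists here: alternating directions give exactly I/2.
assert best < 1e-9
```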

Summary of Algorithms.

Result | Edges | Weights | Time
Spielman-S 08 (random sampling with effective resistances) | O(n log n) | Yes | Õ(m)
Batson-Spielman-S 09 | O(n) | Yes | O(n⁴)
Marcus-Spielman-S 13 | O(n) | No | O(2ⁿ)

(The last is exponential-time because it computes the expected characteristic polynomial E det(xI − Σ_i v_i v_i^T) by brute force.)

Open Questions. Non-bipartite graphs. Algorithmic construction (computing the generalized μ_G is hard). More general uses of interlacing families.

Open Questions. A nearly-linear-time algorithm for 4n/ε²-size sparsifiers. Improving to 2n/ε² for graphs. A fast combinatorial algorithm for approximating effective resistances.