Algorithms, Graph Theory, and Linear Equations in Laplacians. Daniel A. Spielman, Yale University


Algorithms, Graph Theory, and Linear Equations in Laplacians. Daniel A. Spielman, Yale University

Shang-Hua Teng

Carter Fussell (TPS), David Harbater (UPenn), Pavel Curtis, Todd Knoblock (CTY)

Serge Lang 1927-2005, Gian-Carlo Rota 1932-1999


Solving Linear Equations Ax = b, Quickly. Goal: in time linear in the number of non-zero entries of A. Special case: A is the Laplacian matrix of a graph. Outline: 1. Why 2. Classic Approaches 3. Recent Developments 4. Connections

Graphs. Set of vertices V. Set of edges E of pairs {u,v} ⊆ V. [Figure: two example graphs, one a collaboration graph with vertices labeled Erdős, Alon, Lovász, Karp, and one with vertices labeled 1 through 10.]

Laplacian Quadratic Form of G = (V,E). For x : V → ℝ,

x^T L_G x = Σ_{(u,v)∈E} (x(u) − x(v))²

Example: for the pictured graph with x taking values 1, 3, 0, 1, 3 at the vertices, x^T L_G x = 2² + 1² + 1² + 3² = 15.

Laplacian Quadratic Form for a Weighted Graph G = (V,E,w), where w : E → ℝ⁺ assigns a positive weight to every edge:

x^T L_G x = Σ_{(u,v)∈E} w(u,v) (x(u) − x(v))²

The matrix L_G is positive semi-definite; its nullspace is spanned by the constant vector, if G is connected.

Laplacian Matrix of a Weighted Graph.

L_G(u,v) = −w(u,v) if (u,v) ∈ E; d(u) if u = v; 0 otherwise,

where d(u) = Σ_{(v,u)∈E} w(u,v) is the weighted degree of u. (The combinatorial degree is the number of attached edges.) Example, for a 5-vertex graph with edges (1,2), (1,4), (1,5), (2,3), (3,4) of weights 1, 1, 2, 3, 1:

L_G =
[ 4 −1  0 −1 −2
 −1  4 −3  0  0
  0 −3  4 −1  0
 −1  0 −1  2  0
 −2  0  0  0  2 ]
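The definition above is easy to check in code. A minimal sketch with a hypothetical 3-vertex example of my own (not the slide's graph), assuming NumPy:

```python
import numpy as np

def laplacian(n, weighted_edges):
    """L[u,v] = -w(u,v) for an edge, L[u,u] = weighted degree d(u)."""
    L = np.zeros((n, n))
    for u, v, w in weighted_edges:
        L[u, v] -= w
        L[v, u] -= w
        L[u, u] += w
        L[v, v] += w
    return L

# A 3-vertex path with unit weights.
L = laplacian(3, [(0, 1, 1.0), (1, 2, 1.0)])

# The quadratic form x^T L x equals the sum of w(u,v)*(x(u)-x(v))^2.
x = np.array([1.0, 3.0, 0.0])
assert abs(x @ L @ x - ((1 - 3) ** 2 + (3 - 0) ** 2)) < 1e-12
```

Note that each row of L sums to zero, which is another way of saying the constant vector is in the nullspace.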

Networks of Resistors. Ohm's law gives i = v/r. In general, i = L_G v, with w(u,v) = 1/r(u,v). Minimize the dissipated energy v^T L_G v, by solving a Laplacian. In the pictured example, terminals held at 1V and 0V induce voltages 0.375V, 0.5V, 0.5V, and 0.625V at the other vertices.
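The "solve a Laplacian" step can be sketched directly: fix the boundary voltages, then solve the interior rows of L v = 0 (the harmonic condition). The 4-vertex path below is my own illustration, not the slide's network; assumes NumPy.

```python
import numpy as np

# Unit-resistance path 0-1-2-3; hold vertex 0 at 1V and vertex 3 at 0V.
n = 4
L = np.zeros((n, n))
for u, v in [(0, 1), (1, 2), (2, 3)]:
    L[u, v] -= 1; L[v, u] -= 1
    L[u, u] += 1; L[v, v] += 1

boundary = {0: 1.0, 3: 0.0}
interior = [i for i in range(n) if i not in boundary]
b_idx, b_val = list(boundary), np.array(list(boundary.values()))

# Interior rows of L v = 0  =>  L_II v_I = -L_IB v_B
v_int = np.linalg.solve(L[np.ix_(interior, interior)],
                        -L[np.ix_(interior, b_idx)] @ b_val)

# Current flows uniformly, so voltages interpolate linearly: 2/3 and 1/3.
assert np.allclose(v_int, [2 / 3, 1 / 3])
```

The same reduced system reappears on the "Learning on Graphs" slide: there the fixed boundary values are labels rather than voltages.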

Learning on Graphs. Infer the values of a function at all vertices from known values at a few vertices. Minimize x^T L_G x = Σ_{(u,v)∈E} w(u,v) (x(u) − x(v))², subject to the known values, by solving a Laplacian. In the pictured example, fixing 0 and 1 at two vertices yields 0.375, 0.5, 0.5, and 0.625 at the others.

Spectral Graph Theory. Combinatorial properties of G are revealed by the eigenvalues and eigenvectors of L_G. Compute the most important ones by solving equations in the Laplacian.

Solving Linear Programs in Optimization. Interior point methods for linear programming reduce network flow problems to Laplacian systems. Numerical solution of elliptic PDEs: the finite element method.

How to Solve Linear Equations Quickly. Fast when the graph is simple, by elimination. Fast when the graph is complicated*, by Conjugate Gradient (Hestenes '51, Stiefel '52).

Cholesky Factorization of Laplacians. When we eliminate a vertex, we connect its neighbors. Also known as Y-Δ. Example, for the unit-weight graph with edges (1,2), (1,4), (1,5), (2,3), (3,4), eliminating vertex 1 and then vertex 2:

L_G =
[ 3 −1  0 −1 −1
 −1  2 −1  0  0
  0 −1  2 −1  0
 −1  0 −1  2  0
 −1  0  0  0  1 ]

After eliminating vertex 1:
[ 3  0     0     0     0
  0  1.67 −1.00 −0.33 −0.33
  0 −1.00  2.00 −1.00  0
  0 −0.33 −1.00  1.67 −0.33
  0 −0.33  0    −0.33  0.67 ]

After eliminating vertex 2:
[ 3  0     0     0     0
  0  1.67  0     0     0
  0  0     1.4  −1.2  −0.2
  0  0    −1.2   1.6  −0.4
  0  0    −0.2  −0.4   0.6 ]
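Each elimination step above is a Schur complement, which we can verify numerically (assuming NumPy; the matrix is the 5-vertex Laplacian from the slide):

```python
import numpy as np

L = np.array([[ 3, -1,  0, -1, -1],
              [-1,  2, -1,  0,  0],
              [ 0, -1,  2, -1,  0],
              [-1,  0, -1,  2,  0],
              [-1,  0,  0,  0,  1]], dtype=float)

# Eliminating vertex 1 leaves the Schur complement
# S = L_22 - L_21 L_11^{-1} L_12 on the remaining vertices.
S = L[1:, 1:] - np.outer(L[1:, 0], L[0, 1:]) / L[0, 0]

expected = np.array([[ 5/3, -1, -1/3, -1/3],
                     [-1,    2, -1,    0  ],
                     [-1/3, -1,  5/3, -1/3],
                     [-1/3,  0, -1/3,  2/3]])
assert np.allclose(S, expected)       # the 1.67 / -0.33 / 0.67 entries
assert np.allclose(S.sum(axis=1), 0)  # still a Laplacian
```

The second assertion is the point of the slide: the Schur complement of a Laplacian is again a Laplacian, of a graph in which the eliminated vertex's neighbors have been joined.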

The order matters. Eliminating the vertices of a 5-vertex unit-weight graph in the order 1, 2:

[ 1 −1  0  0  0      [ 1  0  0  0  0      [ 1  0  0    0  0
 −1  3 −1  0 −1        0  2 −1  0 −1        0  2  0    0  0
  0 −1  2 −1  0        0 −1  2 −1  0        0  0  1.5 −1 −0.5
  0  0 −1  2 −1        0  0 −1  2 −1        0  0 −1    2 −1
  0 −1  0 −1  2 ]      0 −1  0 −1  2 ]      0  0 −0.5 −1  1.5 ]
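A tiny pure-Python simulation makes the order-dependence of the op count Σ_v (degree when eliminated)² concrete. The star graph is my own example, not from the slides:

```python
def elimination_ops(n, edges, order):
    """Sum of (degree at elimination)^2; eliminating a vertex
    connects its remaining neighbors into a clique."""
    nbrs = {v: set() for v in range(n)}
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    ops = 0
    for v in order:
        live = nbrs[v]
        ops += len(live) ** 2
        for a in live:
            nbrs[a].remove(v)
            nbrs[a] |= live - {a}   # clique on the surviving neighbors
        nbrs[v] = set()
    return ops

star = [(0, v) for v in range(1, 5)]          # center 0, four leaves
assert elimination_ops(5, star, [1, 2, 3, 4, 0]) == 4   # leaves first
assert elimination_ops(5, star, [0, 1, 2, 3, 4]) == 30  # center first
```

Eliminating the leaves first costs 1 op each; eliminating the center first costs 16 and fills in a clique that must then be eliminated too.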

Complexity of Cholesky Factorization. #ops ~ Σ_v (degree of v when eliminated)².
Tree (connected, no cycles): #ops ~ O(|V|).
Planar: #ops ~ O(|V|^{3/2}) (Lipton-Rose-Tarjan '79).
Expander (like random, but O(|V|) edges): #ops ≥ Ω(|V|³) (Lipton-Rose-Tarjan '79).

Conductance and Cholesky Factorization. Cholesky is slow when conductance is high, and fast when conductance is low for G and all its subgraphs. For S ⊆ V,

Φ(S) = (# edges leaving S) / (sum of degrees on the smaller side, S or V − S),
Φ_G = min over S ⊂ V of Φ(S).

In the pictured example, the set S has Φ(S) = 3/5.

Cheeger's Inequality and the Conjugate Gradient. Cheeger's inequality (degree-d unweighted case):

(1/2) λ₂/d ≤ Φ_G ≤ √(2 λ₂/d),

where λ₂ is the second-smallest eigenvalue of L_G, roughly d/(mixing time of a random walk). Conjugate Gradient finds an ε-approximate solution to L_G x = b in O(√(d/λ₂) log(1/ε)) multiplications by L_G, that is, in O(|E| √(d/λ₂) log(1/ε)) ops.
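Both sides of Cheeger's inequality can be checked numerically on a small example. Here a 6-cycle (my own choice, d = 2), with Φ_G found by brute force over all cuts; assumes NumPy:

```python
import numpy as np
from itertools import combinations

n, d = 6, 2                        # the 6-cycle is 2-regular
L = np.zeros((n, n))
for u in range(n):
    v = (u + 1) % n
    L[u, v] -= 1; L[v, u] -= 1
    L[u, u] += 1; L[v, v] += 1

lam2 = np.sort(np.linalg.eigvalsh(L))[1]   # second-smallest eigenvalue

def phi(S):
    S = set(S)
    cut = sum(1 for u in S for v in set(range(n)) - S if L[u, v] != 0)
    return cut / (d * min(len(S), n - len(S)))  # smaller-side degrees

phi_G = min(phi(S) for k in range(1, n // 2 + 1)
            for S in combinations(range(n), k))

# (1/2) lam2 / d  <=  Phi_G  <=  sqrt(2 lam2 / d)
assert 0.5 * lam2 / d <= phi_G <= np.sqrt(2 * lam2 / d)
```

For the 6-cycle, λ₂ = 1 and Φ_G = 1/3 (cut the cycle in half: 2 edges leave, smaller side has degree sum 6), so the bounds read 0.25 ≤ 1/3 ≤ 1.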

Fast solution of linear equations. CG is fast when conductance is high. Elimination is fast when conductance is low for G and all subgraphs (e.g. planar graphs). We want the speed of the extremes in the middle, but not all graphs fit into these categories!

Preconditioned Conjugate Gradient. Solve L_G x = b by approximating L_G by L_H (the preconditioner). In each iteration, solve a system in L_H and multiply a vector by L_G. This gives an ε-approximate solution after O(√κ(L_G, L_H) log(1/ε)) iterations, where κ(L_G, L_H) is the relative condition number.
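A bare-bones PCG loop makes the two roles explicit: one solve in L_H and one multiply by L_G per iteration. This is an illustrative dense sketch of my own (with pinv standing in for a fast tree solver); the cycle/spanning-path example is not from the slides; assumes NumPy.

```python
import numpy as np

def pcg(L_G, solve_H, b, iters=100, tol=1e-10):
    """Preconditioned CG: each iteration does one multiply by L_G
    and one preconditioner solve."""
    x = np.zeros_like(b)
    r = b - L_G @ x
    z = solve_H(r)
    p = z.copy()
    for _ in range(iters):
        Ap = L_G @ p
        alpha = (r @ z) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        z_new = solve_H(r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

def lap(n, edges):
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, v] -= 1; L[v, u] -= 1
        L[u, u] += 1; L[v, v] += 1
    return L

n = 4
L_G = lap(n, [(0, 1), (1, 2), (2, 3), (3, 0)])  # a cycle
L_H = lap(n, [(0, 1), (1, 2), (2, 3)])          # spanning-tree preconditioner
solve_H = lambda r: np.linalg.pinv(L_H) @ r     # stand-in for a tree solve

b = np.array([1.0, -1.0, 0.0, 0.0])             # b orthogonal to the all-ones
x = pcg(L_G, solve_H, b)
assert np.allclose(L_G @ x, b, atol=1e-8)
```

Because both L_G and the preconditioned residuals live in the space orthogonal to the constant vector, the singularity of the Laplacian causes no trouble as long as b sums to zero.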

Inequalities and Approximation. Write L_H ≼ L_G if L_G − L_H is positive semi-definite, i.e. for all x, x^T L_H x ≤ x^T L_G x. Example: this holds if H is a subgraph of G, since x^T L_G x = Σ_{(u,v)∈E} w(u,v) (x(u) − x(v))² only gains terms. Then κ(L_G, L_H) ≤ t if L_H ≼ L_G ≼ t L_H; more generally, κ(L_G, L_H) ≤ t iff c L_H ≼ L_G ≼ ct L_H for some c. Call H a t-approximation of G if κ(L_G, L_H) ≤ t.

Other definitions of the condition number (Goldstine, von Neumann '47):

κ(L_G, L_H) = (max over x ∈ Span(L_H) of x^T L_G x / x^T L_H x) · (max over x ∈ Span(L_G) of x^T L_H x / x^T L_G x)

κ(L_G, L_H) = λ_max(L_G L_H⁺) / λ_min(L_G L_H⁺),

where L_H⁺ is the pseudo-inverse and λ_min is the minimum non-zero eigenvalue.
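The eigenvalue form can be evaluated directly with a pseudo-inverse. For G = K₄ and H a spanning star (my own example, not from the slides), κ(L_G, L_H) works out to exactly 4; assumes NumPy.

```python
import numpy as np

def lap(n, edges):
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, v] -= 1; L[v, u] -= 1
        L[u, u] += 1; L[v, v] += 1
    return L

n = 4
L_G = lap(n, [(u, v) for u in range(n) for v in range(u + 1, n)])  # K_4
L_H = lap(n, [(0, v) for v in range(1, n)])                        # star

ev = np.linalg.eigvals(L_G @ np.linalg.pinv(L_H)).real
nonzero = ev[np.abs(ev) > 1e-8]
kappa = nonzero.max() / nonzero.min()  # lambda_max / lambda_min (nonzero)
assert abs(kappa - 4.0) < 1e-8
```

Consistency check with the first definition: the star satisfies L_H ≼ L_G ≼ 4 L_H, and both inequalities are tight.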

Vaidya's Subgraph Preconditioners. Precondition G by a subgraph H. Then L_H ≼ L_G, so we just need a t for which L_G ≼ t L_H. It is easy to bound t if H is a spanning tree, and easy to solve equations in L_H by elimination.

The Stretch of Spanning Trees. Boman-Hendrickson '01: L_G ≼ st_T(G) · L_T, where

st_T(G) = Σ_{(u,v)∈E} path-length_T(u, v).

In the pictured examples, individual edges have tree-path lengths 3, 5, and 1. In the weighted case, measure the resistances of the paths.
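Under this definition, the stretch of a tree is computed edge by edge. A pure-Python sketch on a 4-cycle with a path spanning tree (my own example, not from the slides):

```python
from collections import deque

def tree_dist(tree_adj, s, t):
    """BFS distance between s and t in the tree."""
    seen, q = {s: 0}, deque([s])
    while q:
        u = q.popleft()
        if u == t:
            return seen[u]
        for v in tree_adj[u]:
            if v not in seen:
                seen[v] = seen[u] + 1
                q.append(v)

def stretch(n, graph_edges, tree_edges):
    adj = {v: [] for v in range(n)}
    for u, v in tree_edges:
        adj[u].append(v)
        adj[v].append(u)
    return sum(tree_dist(adj, u, v) for u, v in graph_edges)

G = [(0, 1), (1, 2), (2, 3), (3, 0)]   # 4-cycle
T = [(0, 1), (1, 2), (2, 3)]           # spanning path
# Tree edges stretch 1 each; the off-tree edge (3,0) stretches 3.
assert stretch(4, G, T) == 1 + 1 + 1 + 3
```

Tree edges always contribute 1, so a low-stretch tree is one whose off-tree edges have short tree paths on average.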

Low-Stretch Spanning Trees. For every G there is a T with st_T(G) ≤ m^{1+o(1)}, where m = |E| (Alon-Karp-Peleg-West '91), and even st_T(G) ≤ O(m log m log² log m) (Elkin-Emek-S-Teng '04, Abraham-Bartal-Neiman '08). These solve linear systems in time O(m^{3/2} log m). If G is an expander, st_T(G) ≥ Ω(m log m).

Expander Graphs. An infinite family of d-regular graphs (all degrees d) satisfying λ₂ ≥ const > 0. Spectrally best are Ramanujan Graphs (Margulis '88, Lubotzky-Phillips-Sarnak '88): all non-trivial eigenvalues inside d ± 2√(d−1). Fundamental examples with amazing properties.

Expanders Approximate Complete Graphs. Let G be the complete graph on n vertices (having all possible edges). All non-zero eigenvalues of L_G are n, so x^T L_G x = n for all x ⊥ 1 with ‖x‖ = 1. Let H be a d-regular Ramanujan expander. Then

d − 2√(d−1) ≤ x^T L_H x ≤ d + 2√(d−1),

so, after scaling, κ(L_G, L_H) ≤ (d + 2√(d−1)) / (d − 2√(d−1)) ≈ 1 for large d.

Sparsification. Goal: find a sparse approximation of every G. S-Teng '04: for every G there is an H with O(n log⁷ n / ε²) edges and κ(L_G, L_H) ≤ 1 + ε. Idea: if conductance is high, then λ₂ is high (Cheeger) and a random sample is good (Füredi-Komlós '81); if conductance is not high, we can split the graph while removing few edges.

Fast Graph Decomposition by local graph clustering. Given a vertex of interest, find a nearby cluster S with small Φ(S), in time roughly proportional to the size of the cluster found. S-Teng '04: Lovász-Simonovits. Andersen-Chung-Lang '06: PageRank. Andersen-Peres '09: evolving-set Markov chain.

Sparsification. Recall S-Teng '04: for every G there is an H with O(n log⁷ n / ε²) edges and κ(L_G, L_H) ≤ 1 + ε. S-Srivastava '08: O(n log n / ε²) edges; proof by modern random matrix theory (Rudelson's concentration for random sums). Batson-S-Srivastava '09: dn edges and κ(L_G, L_H) ≤ (d + 1 + 2√d) / (d + 1 − 2√d).

Sparsification by Linear Algebra (edges → vectors). Given vectors v₁, ..., v_m ∈ ℝⁿ such that Σ_e v_e v_e^T = I, find a small subset S ⊆ {1, ..., m} and coefficients c_e such that Σ_{e∈S} c_e v_e v_e^T ≈ I. Rudelson '99 says we can find |S| ≤ O(n log n / ε²) if we choose S at random with Pr[e] ∝ ‖v_e‖². In graphs, the ‖v_e‖² are effective resistances. Batson-S-Srivastava: can find |S| ≤ 4n/ε² by a greedy algorithm with a Stieltjes potential function.
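The edge-to-vector correspondence behind these statements: taking v_e = L^{+1/2}(χ_u − χ_v), the outer products v_e v_e^T sum to the identity on span(L), and ‖v_e‖² is the effective resistance of edge e. A numeric check on K₄, where every effective resistance is 2/n = 1/2 (my own worked example, assuming NumPy):

```python
import numpy as np

n = 4
edges = [(u, v) for u in range(n) for v in range(u + 1, n)]  # K_4
B = np.zeros((len(edges), n))                                # incidence matrix
for i, (u, v) in enumerate(edges):
    B[i, u], B[i, v] = 1, -1
L = B.T @ B                                                  # unit-weight Laplacian

# L^{+1/2} via the eigendecomposition, zeroing the nullspace direction.
w, U = np.linalg.eigh(L)
inv_sqrt = U @ np.diag([0 if x < 1e-9 else x ** -0.5 for x in w]) @ U.T
V = B @ inv_sqrt                     # row e is v_e = L^{+1/2} (chi_u - chi_v)

# Sum of v_e v_e^T is the identity on span(L), i.e. on the 1-perp space:
assert np.allclose(V.T @ V, np.eye(n) - np.ones((n, n)) / n)
# ||v_e||^2 = effective resistance; in K_4 every edge has R_eff = 1/2:
assert np.allclose((V ** 2).sum(axis=1), 0.5)
```

Since all ‖v_e‖² are equal here, effective-resistance sampling on the complete graph is just uniform sampling, which is why random regular graphs make good sparsifiers of K_n.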

Relation to Kadison-Singer, Paving Conjecture. These would be implied by the following strong version of Weaver's conjecture KS₂: there exists a constant α such that for all v₁, ..., v_m ∈ ℝⁿ with Σ_e v_e v_e^T = I and ‖v_e‖² = n/m ≤ α, there exists S ⊆ {1, ..., m}, |S| = m/2, such that

‖Σ_{e∈S} v_e v_e^T − (1/2) I‖ ≤ 1/4.

Rudelson '99: α = const/(log n). Batson-S-Srivastava '09: true, but with coefficients in the sum. S-Srivastava '10: Bourgain-Tzafriri Restricted Invertibility.

Sparsification and Solving Linear Equations. Can reduce any Laplacian to one with O(|V|) non-zero entries/edges. S-Teng '04: combine low-stretch trees with sparsification to solve Laplacian systems in time O(m log^c n log(1/ε)), where m = |E| and n = |V|. Koutis-Miller-Peng '10: time O(m log² n log(1/ε)). Kolla-Makarychev-Saberi-Teng '09: O(m log n log(1/ε)) after preprocessing.

What's next? Other families of linear equations: from directed graphs, from physical problems, from optimization problems. Solving linear equations as a primitive. Decompositions of the identity: understanding the vectors we get from graphs, the Ramanujan bound, Kadison-Singer?

Conclusion. It is all connected.