Graph Sparsifiers: A Survey
1 Graph Sparsifiers: A Survey Nick Harvey UBC Based on work by: Batson, Benczur, de Carli Silva, Fung, Hariharan, Harvey, Karger, Panigrahi, Sato, Spielman, Srivastava and Teng
2 Approximating Dense Objects by Sparse Ones Examples: floor joists, image compression
3 Approximating Dense Graphs by Sparse Ones Spanners: approximate all distances to within a factor α using only |E| = O(n^{1+2/α}) edges (n = # vertices). Low-stretch trees: approximate most distances to within O(log n) using only n−1 edges.
4 Overview Definitions of cut & spectral sparsifiers and their applications; cut sparsifiers; spectral sparsifiers; a random sampling construction; derandomization.
5 Cut Sparsifiers Input: an undirected graph G=(V,E) with weights u : E → ℝ+. Output: a subgraph H=(V,F) of G with weights w : F → ℝ+ such that |F| is small and w(δ_H(U)) = (1±ε) u(δ_G(U)) for all U ⊆ V (Karger 94), where w(δ_H(U)) is the weight of edges between U and V\U in H, and u(δ_G(U)) is the weight of edges between U and V\U in G.
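As a concrete illustration of this definition (not part of the talk), the sketch below brute-forces every cut of a 4-vertex toy graph and checks that a hand-picked reweighted subgraph H preserves all cut weights to within 1±ε; the graphs and ε = 0.5 are made up for the example.

```python
import itertools

def cut_weight(weighted_edges, U):
    """Total weight of edges with exactly one endpoint in U."""
    U = set(U)
    return sum(w for (s, t, w) in weighted_edges if (s in U) != (t in U))

# Toy graph G: a unit-weight triangle plus a pendant edge of weight 2.
G = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0), (2, 3, 2.0)]
# A hand-picked "sparsifier" H: drop edge (0, 2) and reweight the rest.
H = [(0, 1, 1.5), (1, 2, 1.5), (2, 3, 2.0)]

eps = 0.5
for r in range(1, 4):                       # all nonempty proper subsets U
    for U in itertools.combinations(range(4), r):
        g, h = cut_weight(G, U), cut_weight(H, U)
        assert (1 - eps) * g <= h <= (1 + eps) * g, (U, g, h)
print("every cut of G is preserved to within a factor 1 +/- 0.5")
```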
7 Generic Application of Cut Sparsifiers Instead of running a (slow) algorithm A for some problem P (e.g. min s-t cut, sparsest cut, max cut) on a dense input graph G to get an exact/approximate output, first run an (efficient) sparsification algorithm S to obtain a sparse graph H that approximately preserves the solution of P, then run A on H (now faster) to get an approximate output.
8 Relation to Expander Graphs A graph H on V is an expander if, for some constant c, |δ_H(U)| ≥ c|U| for all U ⊆ V with |U| ≤ n/2. Let G be the complete graph on V. If we give all edges of H weight w = n, then w(δ_H(U)) ≥ c·n·|U| ≈ c·|δ_G(U)| for all U ⊆ V with |U| ≤ n/2. So expanders are similar to sparsifiers of the complete graph.
9 Relation to Expander Graphs Simple random construction: the Erdős–Rényi graph G_{n,p} is an expander if p = Ω(log(n)/n), with high probability. This gives an expander with Θ(n log n) edges with high probability. But aren't there much better expanders?
10 Spectral Sparsifiers Input: an undirected graph G=(V,E) with weights u : E → ℝ+. Definition: the Laplacian is the matrix L_G such that xᵀ L_G x = Σ_{st∈E} u_st (x_s − x_t)² for all x ∈ ℝ^V. L_G is positive semidefinite since this quantity is ≥ 0. (Spielman–Teng 04) Example: electrical networks. View edge st as a resistor of resistance 1/u_st and impose voltage x_v at every vertex v. By Ohm's power law P = V²/R, the power consumed on edge st is u_st (x_s − x_t)², so the total power consumed is xᵀ L_G x.
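The Laplacian and its electrical interpretation are easy to check numerically. The sketch below (illustrative, not from the talk) builds L_G from a weighted edge list and verifies that xᵀ L_G x equals the total power Σ u_st (x_s − x_t)² for a random voltage vector x; the example graph is arbitrary.

```python
import numpy as np

def laplacian(n, weighted_edges):
    """Build L_G so that x^T L_G x = sum_{st in E} u_st (x_s - x_t)^2."""
    L = np.zeros((n, n))
    for s, t, u in weighted_edges:
        L[s, s] += u
        L[t, t] += u
        L[s, t] -= u
        L[t, s] -= u
    return L

edges = [(0, 1, 2.0), (1, 2, 1.0), (0, 2, 3.0), (2, 3, 0.5)]
L = laplacian(4, edges)

rng = np.random.default_rng(0)
x = rng.standard_normal(4)                              # voltages at the vertices
quad = x @ L @ x                                        # x^T L_G x
power = sum(u * (x[s] - x[t]) ** 2 for s, t, u in edges)  # total power on the edges
assert np.isclose(quad, power)
assert quad >= 0                                        # L_G is positive semidefinite
```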
11 Spectral Sparsifiers Input: an undirected graph G=(V,E) with weights u : E → ℝ+, with Laplacian L_G satisfying xᵀ L_G x = Σ_{st∈E} u_st (x_s − x_t)² for all x ∈ ℝ^V. Output: a subgraph H=(V,F) of G with weights w : F → ℝ+ such that |F| is small and xᵀ L_H x = (1±ε) xᵀ L_G x for all x ∈ ℝ^V. (Spielman–Teng 04) Spectral sparsifier ⇒ cut sparsifier: restricting to {0,1}-vectors gives w(δ_H(U)) = (1±ε) u(δ_G(U)) for all U ⊆ V.
12 Cut vs Spectral Sparsifiers Number of constraints: cut: w(δ_H(U)) = (1±ε) u(δ_G(U)) for all U ⊆ V (2ⁿ constraints); spectral: xᵀ L_H x = (1±ε) xᵀ L_G x for all x ∈ ℝ^V (∞ constraints). The spectral constraints are SDP feasibility constraints: (1−ε) xᵀ L_G x ≤ xᵀ L_H x ≤ (1+ε) xᵀ L_G x for all x ∈ ℝ^V ⟺ (1−ε) L_G ≼ L_H ≼ (1+ε) L_G. Here X ≼ Y means Y−X is positive semidefinite. The spectral constraints are actually easier to handle: checking "Is H a spectral sparsifier of G?" is in P, whereas checking "Is H a cut sparsifier of G?" amounts to non-uniform sparsest cut, so it is NP-hard.
13 Application of Spectral Sparsifiers Consider the linear system L_G x = b. The actual solution is x := L_G⁻¹ b. Instead, compute y := L_H⁻¹ b, where H is a spectral sparsifier of G. Since (1−ε) L_G ≼ L_H ≼ (1+ε) L_G, y has low multiplicative error: ‖y−x‖_{L_G} ≤ 2ε ‖x‖_{L_G}. Computing y is fast since H is sparse: the conjugate gradient method takes O(n|F|) time (where |F| = # nonzero entries of L_H).
14 Application of Spectral Sparsifiers (continued) Theorem [Spielman–Teng 04, Koutis–Miller–Peng 10]: a vector y with low multiplicative error for L_G x = b can be computed in O(m log n (log log n)²) time (m = # edges of G).
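A minimal sketch of the "conjugate gradient on a sparse Laplacian" idea (this is textbook CG, not the nearly-linear-time Spielman–Teng solver): since a connected graph's Laplacian is singular, we ground one vertex, i.e. delete its row and column, to obtain a positive-definite system.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Plain CG for a symmetric positive-definite system Ax = b."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Laplacian of the unit-weight path 0-1-2-3; ground vertex 0 to make it PD.
L = np.array([[ 1., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])
A = L[1:, 1:]                      # grounded Laplacian: strictly positive definite
b = np.array([1.0, 0.0, -1.0])
x = conjugate_gradient(A, b)
assert np.allclose(A @ x, b)
```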
15 Results on Sparsifiers Combinatorial techniques: for cut sparsifiers, Karger 94, Benczúr–Karger 96, Fung–Hariharan–Harvey–Panigrahi 11; for spectral sparsifiers, Spielman–Teng 04. These construct sparsifiers with n log^{O(1)} n / ε² edges, in nearly linear time. Linear-algebraic techniques: Spielman–Srivastava 08, Batson–Spielman–Srivastava 09, de Carli Silva–Harvey–Sato 11. These construct sparsifiers with O(n/ε²) edges, in poly(n) time.
16 Sparsifiers by Random Sampling The complete graph is easy! Random sampling gives an expander (i.e., a sparsifier) with O(n log n) edges.
17 Sparsifiers by Random Sampling For general graphs we can't sample all edges with the same probability: edges inside highly connected parts can mostly be eliminated, but the edges of a small cut must be kept. Idea [BK 96]: sample low-connectivity edges with high probability, and high-connectivity edges with low probability.
18 Non-uniform sampling algorithm [BK 96] Input: graph G=(V,E), weights u : E → ℝ+. Output: a subgraph H=(V,F) with weights w : F → ℝ+. Choose a parameter ρ and compute probabilities { p_e : e ∈ E }. For i = 1 to ρ: for each edge e ∈ E, with probability p_e, add e to F and increase w_e by u_e/(ρ p_e). Note: E[|F|] ≤ ρ Σ_e p_e, and E[w_e] = u_e for every e ∈ E, so for every U ⊆ V, E[w(δ_H(U))] = u(δ_G(U)). Can we do this so that the cut values are tightly concentrated and E[|F|] = n log^{O(1)} n?
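The sampling loop above can be sketched directly; the probabilities below are arbitrary placeholders rather than strengths or connectivities, and the empirical check only confirms the unbiasedness property E[w_e] = u_e.

```python
import random

def sample_sparsifier(edges, u, p, rho, rng):
    """Generic sampling loop: rho rounds; in each round keep edge e with
    probability p[e], adding u[e]/(rho*p[e]) to its weight, so E[w_e] = u[e]."""
    w = {}
    for _ in range(rho):
        for e in edges:
            if rng.random() < p[e]:
                w[e] = w.get(e, 0.0) + u[e] / (rho * p[e])
    return w

rng = random.Random(42)
edges = [("a", "b"), ("b", "c"), ("a", "c")]
u = {e: 1.0 for e in edges}
p = {("a", "b"): 0.9, ("b", "c"): 0.5, ("a", "c"): 0.1}  # placeholder probabilities

# Empirically check unbiasedness of each w_e by averaging over many runs.
trials = 2000
avg = {e: 0.0 for e in edges}
for _ in range(trials):
    w = sample_sparsifier(edges, u, p, rho=10, rng=rng)
    for e in edges:
        avg[e] += w.get(e, 0.0) / trials
for e in edges:
    assert abs(avg[e] - u[e]) < 0.1, (e, avg[e])
```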
19 Benczúr–Karger 96 Same algorithm, with ρ = O(log n/ε²) and p_e = 1/(strength of edge e). All edge strengths can be approximated in m log^{O(1)} n time. Then cuts are preserved to within (1±ε) and E[|F|] = O(n log n/ε²). But what is strength? Can't we use connectivity instead?
20 Fung–Hariharan–Harvey–Panigrahi 11 Same algorithm, with ρ = O(log² n/ε²) and p_st = 1/(min cut separating s and t). These connectivities can all be approximated in O(m + n log n) time. Then cuts are preserved to within (1±ε) and E[|F|] = O(n log² n/ε²).
21 Overview of Analysis Most cuts hit a huge number of edges ⇒ extremely concentrated ⇒ whp, most cuts are close to their mean.
22 Overview of Analysis High-connectivity edges have low sampling probability: a cut that hits only one such edge is poorly concentrated, but this doesn't happen often; it requires a bound on the number of small cuts (Karger 94). A cut that hits many high-connectivity edges is reasonably concentrated. Low-connectivity edges have high sampling probability: the same cut also hits many of these, so it is highly concentrated; the bad cases require a bound on the number of small Steiner cuts (Fung–Harvey–Hariharan–Panigrahi 11).
23 Summary for Cut Sparsifiers Do non-uniform sampling of edges, with probabilities based on connectivity (BK 96 used strength, not connectivity). Decompose the graph into connectivity classes and argue concentration of all cuts; this needs bounds on the number of small cuts. One can get sparsifiers with O(n log n / ε²) edges, which is optimal for any independent sampling algorithm.
24 Spectral Sparsification Input: graph G=(V,E), weights u : E → ℝ+. Recall xᵀ L_G x = Σ_{st∈E} u_st (x_s − x_t)²; call the term u_st (x_s − x_t)² =: xᵀ L_st x, so L_G = Σ_e L_e. Goal: find weights w : E → ℝ+ such that most w_e are zero, and (1−ε) xᵀ L_G x ≤ Σ_e w_e xᵀ L_e x ≤ (1+ε) xᵀ L_G x for all x ∈ ℝ^V, i.e., (1−ε) L_G ≼ Σ_e w_e L_e ≼ (1+ε) L_G. General problem: given matrices L_e satisfying Σ_e L_e = L_G, find coefficients w_e, mostly zero, such that (1−ε) L_G ≼ Σ_e w_e L_e ≼ (1+ε) L_G.
25 The General Problem: Sparsifying Sums of PSD Matrices General problem: given PSD matrices L_e s.t. Σ_e L_e = L, find coefficients w_e, mostly zero, such that (1−ε) L ≼ Σ_e w_e L_e ≼ (1+ε) L. Theorem [Ahlswede–Winter 02]: random sampling gives w with O(n log n/ε²) non-zeros. Theorem [de Carli Silva–Harvey–Sato 11], building on [Batson–Spielman–Srivastava 09]: a deterministic algorithm gives w with O(n/ε²) non-zeros. Consequences: cut & spectral sparsifiers with O(n/ε²) edges [BSS 09]; sparsifiers with more properties and O(n/ε²) edges [dCHS 11].
26 Vector Case Vector problem: given vectors v_1,…,v_m ∈ [0,1]ⁿ, let v = Σ_i v_i / m. Find coefficients w_e, mostly zero, such that ‖Σ_e w_e v_e − v‖_∞ ≤ ε. Theorem [Althöfer 94, Lipton–Young 94]: there is such a w with O(log n/ε²) non-zeros. Proof: random sampling & the Hoeffding inequality. ⇒ ε-approximate equilibria with O(log n/ε²) support in zero-sum games. Multiplicative version: there is a w with O(n log n/ε²) non-zeros such that (1−ε) v ≤ Σ_e w_e v_e ≤ (1+ε) v coordinate-wise.
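A sketch of the random-sampling proof for the vector problem: sample k = O(log n/ε²) of the vectors uniformly and average them, so the support has at most k indices and Hoeffding's inequality bounds the error in every coordinate. The constant 4 in the choice of k is an assumed illustrative value, not from the talk.

```python
import math
import random

def sparse_average(vectors, eps, rng):
    """Approximate v = average(vectors) in l_infinity norm by uniformly
    sampling k = O(log n / eps^2) of them (Althofer / Lipton-Young)."""
    m, n = len(vectors), len(vectors[0])
    k = math.ceil(4 * math.log(2 * n) / eps ** 2)   # assumed constant 4
    picks = [rng.randrange(m) for _ in range(k)]    # support size <= k
    return [sum(vectors[i][j] for i in picks) / k for j in range(n)]

rng = random.Random(1)
m, n, eps = 500, 20, 0.25
vectors = [[rng.random() for _ in range(n)] for _ in range(m)]
v = [sum(vec[j] for vec in vectors) / m for j in range(n)]

approx = sparse_average(vectors, eps, rng)
assert max(abs(approx[j] - v[j]) for j in range(n)) <= eps
```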
27 Concentration Inequalities Theorem [Chernoff 52, Hoeffding 63]: let Y_1,…,Y_k be i.i.d. random non-negative real numbers s.t. E[Y_i] = Z and Y_i ≤ uZ. Then (1−ε)Z ≤ (1/k) Σ_i Y_i ≤ (1+ε)Z, except with probability at most 2 exp(−Ω(kε²/u)). Theorem [Ahlswede–Winter 02]: let Y_1,…,Y_k be i.i.d. random PSD n×n matrices s.t. E[Y_i] = Z and Y_i ≼ uZ. Then (1−ε)Z ≼ (1/k) Σ_i Y_i ≼ (1+ε)Z, except with probability at most 2n exp(−Ω(kε²/u)). The factor of n is the only difference.
28 Balls & Bins Example Problem: throw k balls into n bins; we want (max load) / (min load) ≤ 1+ε. How big should k be? AW theorem: if Y_1,…,Y_k are i.i.d. random PSD matrices such that E[Y_i] = Z and Y_i ≼ uZ, then (1/k) Σ_i Y_i is within (1±ε) of Z whp. Solution: let Y_i be all zeros, except for a single n in a random diagonal entry. Then E[Y_i] = I, and Y_i ≼ nI. Set k = Θ(n log n /ε²). Whp, every diagonal entry of Σ_i Y_i / k is in [1−ε, 1+ε].
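The balls-and-bins claim is easy to simulate; the constant 8 in the choice of k is an assumed illustrative value, not from the talk.

```python
import math
import random

def balls_and_bins(n, eps, rng):
    """Throw k = Theta(n log n / eps^2) balls uniformly into n bins and
    return (max load) / (min load)."""
    k = math.ceil(8 * n * math.log(n) / eps ** 2)   # assumed constant 8
    loads = [0] * n
    for _ in range(k):
        loads[rng.randrange(n)] += 1
    return max(loads) / min(loads)

rng = random.Random(7)
eps = 0.2
ratio = balls_and_bins(n=50, eps=eps, rng=rng)
# If every load is within (1 +/- eps) of k/n, the ratio is at most (1+eps)/(1-eps).
assert ratio <= (1 + eps) / (1 - eps)
print("max/min load ratio:", round(ratio, 3))
```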
29 Solving the General Problem General problem: given PSD matrices L_e s.t. Σ_e L_e = L, find coefficients w_e, mostly zero, such that (1−ε) L ≼ Σ_e w_e L_e ≼ (1+ε) L. AW theorem: if Y_1,…,Y_k are i.i.d. random PSD matrices such that E[Y_i] = Z and Y_i ≼ uZ, then (1/k) Σ_i Y_i is within (1±ε) of Z whp. To solve the general problem with O(n log n/ε²) non-zeros: repeat k := Θ(n log n /ε²) times: pick an edge e with probability p_e := Tr(L_e L⁻¹) / n and increment w_e by 1/(k p_e).
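A sketch of this sampling scheme for a graph (illustrative; constants assumed): since a graph Laplacian is singular, L⁻¹ is replaced below by the pseudoinverse L⁺, and Tr(L_e L⁺) = u_e · (effective resistance of e), so for a connected graph the scores sum to n−1 rather than n. The final loop spot-checks the sandwich (1−ε) L ≼ L_H ≼ (1+ε) L on random vectors orthogonal to the all-ones nullspace.

```python
import math
import numpy as np

def edge_laplacian(n, s, t, u):
    L = np.zeros((n, n))
    L[s, s] = L[t, t] = u
    L[s, t] = L[t, s] = -u
    return L

rng = np.random.default_rng(0)
n = 12
edges = [(s, t, 1.0) for s in range(n) for t in range(s + 1, n)]  # complete graph
L = sum(edge_laplacian(n, s, t, u) for s, t, u in edges)
Lpinv = np.linalg.pinv(L)

# Sampling probabilities proportional to Tr(L_e L^+), i.e. effective resistances.
scores = np.array([u * (Lpinv[s, s] + Lpinv[t, t] - 2 * Lpinv[s, t])
                   for s, t, u in edges])
p = scores / scores.sum()          # scores sum to n - 1 for a connected graph

eps = 0.5
k = math.ceil(8 * n * math.log(n) / eps ** 2)   # assumed constant 8
counts = rng.multinomial(k, p)     # k independent draws, edge e with prob p[e]
L_H = sum(c / (k * p[i]) * edge_laplacian(n, *edges[i])
          for i, c in enumerate(counts) if c > 0)

# Spot-check (1-eps) L <= L_H <= (1+eps) L on vectors orthogonal to all-ones.
for _ in range(100):
    x = rng.standard_normal(n)
    x -= x.mean()                  # project out the all-ones nullspace
    ratio = (x @ L_H @ x) / (x @ L @ x)
    assert 1 - eps <= ratio <= 1 + eps, ratio
print("sparsifier uses", int((counts > 0).sum()), "of", len(edges), "edges")
```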
30 Derandomization Vector problem: given vectors v_e ∈ [0,1]ⁿ s.t. Σ_e v_e = v, find coefficients w_e, mostly zero, such that ‖Σ_e w_e v_e − v‖_∞ ≤ ε. Theorem [Young 94]: the multiplicative weights method deterministically gives w with O(log n/ε²) non-zeros. (Alternatively, use pessimistic estimators on the Hoeffding proof.) General problem: given PSD matrices L_e s.t. Σ_e L_e = L, find coefficients w_e, mostly zero, such that (1−ε) L ≼ Σ_e w_e L_e ≼ (1+ε) L. Theorem [de Carli Silva–Harvey–Sato 11]: the matrix multiplicative weights method (Arora–Kale 07) deterministically gives w with O(n log n/ε²) non-zeros. (Alternatively, use matrix pessimistic estimators (Wigderson–Xiao 06).)
31 MWUM for Balls & Bins Let λ_i = load in bin i; initially λ = 0. Want: l ≤ λ_i and λ_i ≤ u for all i. Introduce penalty functions Σ_i exp(l − λ_i) and Σ_i exp(λ_i − u). Find a bin i to throw a ball into such that, increasing l by δ_l and u by δ_u, the penalties don't grow: Σ_i exp((l+δ_l) − λ′_i) ≤ Σ_i exp(l − λ_i) and Σ_i exp(λ′_i − (u+δ_u)) ≤ Σ_i exp(λ_i − u), where λ′ is the load vector after the throw. Careful analysis shows O(n log n/ε²) balls is enough.
32 MMWUM for the General Problem Let A = 0, with eigenvalues λ_i. Want: l ≤ λ_i and λ_i ≤ u for all i. Use penalty functions Tr exp(lI − A) and Tr exp(A − uI). Find a matrix L_e such that, adding L_e to A and increasing l by δ_l and u by δ_u, the penalties don't grow: Tr exp((l+δ_l)I − (A + L_e)) ≤ Tr exp(lI − A) and Tr exp((A + L_e) − (u+δ_u)I) ≤ Tr exp(A − uI). Careful analysis shows O(n log n/ε²) matrices is enough.
33 Beating Sampling & MMWUM To get a better bound, try changing the penalty functions to be steeper! Use penalty functions Tr (A − lI)⁻¹ and Tr (uI − A)⁻¹. Find a matrix L_e such that, adding L_e to A and increasing l by δ_l and u by δ_u, the penalties don't grow: Tr ((A + L_e) − (l+δ_l)I)⁻¹ ≤ Tr (A − lI)⁻¹ and Tr ((u+δ_u)I − (A + L_e))⁻¹ ≤ Tr (uI − A)⁻¹. All eigenvalues stay within [l, u].
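The steeper barrier potentials can be computed directly. The sketch below (illustrative, not the full BSS selection rule) checks the identity Tr (A − lI)⁻¹ = Σ_i 1/(λ_i − l) and shows that the upper potential blows up as the barrier u approaches the top eigenvalue, which is what confines the spectrum to [l, u].

```python
import numpy as np

def lower_potential(A, l):
    """Phi_l(A) = Tr (A - l I)^{-1}; finite iff every eigenvalue exceeds l."""
    return np.trace(np.linalg.inv(A - l * np.eye(A.shape[0])))

def upper_potential(A, u):
    """Phi^u(A) = Tr (u I - A)^{-1}; finite iff every eigenvalue is below u."""
    return np.trace(np.linalg.inv(u * np.eye(A.shape[0]) - A))

rng = np.random.default_rng(3)
M = rng.standard_normal((5, 5))
A = M @ M.T / 5 + np.eye(5)        # some PSD matrix with eigenvalues inside (l, u)
l, u = 0.5, 20.0

lam = np.linalg.eigvalsh(A)
assert np.isclose(lower_potential(A, l), np.sum(1.0 / (lam - l)))
assert np.isclose(upper_potential(A, u), np.sum(1.0 / (u - lam)))

# The potential blows up as the barrier approaches an eigenvalue:
assert upper_potential(A, lam.max() + 1e-3) > upper_potential(A, u)
```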
34 Beating Sampling & MMWUM With these steeper penalty functions, the general problem (given PSD matrices L_e s.t. Σ_e L_e = L, find coefficients w_e, mostly zero, such that (1−ε) L ≼ Σ_e w_e L_e ≼ (1+ε) L) can be solved with far fewer terms. Theorem [Batson–Spielman–Srivastava 09] in the rank-1 case, [de Carli Silva–Harvey–Sato 11] for the general case: this gives a solution w with O(n/ε²) non-zeros.
35 Applications Theorem [de Carli Silva–Harvey–Sato 11]: given PSD matrices L_e s.t. Σ_e L_e = L, there is an algorithm to find w with O(n/ε²) non-zeros such that (1−ε) L ≼ Σ_e w_e L_e ≼ (1+ε) L. Application 1: spectral sparsifiers with costs. Given costs on the edges of G, one can find a sparsifier H whose cost is at most (1+ε) times the cost of G. Application 2: sparse SDP solutions. min { cᵀy : Σ_i y_i A_i ≽ B, y ≥ 0 }, where the A_i's and B are PSD, has a nearly optimal solution with O(n/ε²) non-zeros.
36 Open Questions Sparsifiers for directed graphs. More constructions of sparsifiers with O(n/ε²) edges, perhaps randomized? Iterative construction of expander graphs. More control of the weights w_e. A combinatorial proof of spectral sparsifiers. More applications of our general theorem.
More informationSVD, Power method, and Planted Graph problems (+ eigenvalues of random matrices)
Chapter 14 SVD, Power method, and Planted Graph problems (+ eigenvalues of random matrices) Today we continue the topic of low-dimensional approximation to datasets and matrices. Last time we saw the singular
More information8.1 Concentration inequality for Gaussian random matrix (cont d)
MGMT 69: Topics in High-dimensional Data Analysis Falll 26 Lecture 8: Spectral clustering and Laplacian matrices Lecturer: Jiaming Xu Scribe: Hyun-Ju Oh and Taotao He, October 4, 26 Outline Concentration
More informationPhysical Metaphors for Graphs
Graphs and Networks Lecture 3 Physical Metaphors for Graphs Daniel A. Spielman October 9, 203 3. Overview We will examine physical metaphors for graphs. We begin with a spring model and then discuss resistor
More informationx 1 + x 2 2 x 1 x 2 1 x 2 2 min 3x 1 + 2x 2
Lecture 1 LPs: Algebraic View 1.1 Introduction to Linear Programming Linear programs began to get a lot of attention in 1940 s, when people were interested in minimizing costs of various systems while
More informationOnline Semidefinite Programming
Online Semidefinite Programming Noa Elad 1, Satyen Kale 2, and Joseph Seffi Naor 3 1 Computer Science Dept, Technion, Haifa, Israel noako@cstechnionacil 2 Yahoo Research, New York, NY, USA satyen@yahoo-inccom
More informationEquivalent relaxations of optimal power flow
Equivalent relaxations of optimal power flow 1 Subhonmesh Bose 1, Steven H. Low 2,1, Thanchanok Teeraratkul 1, Babak Hassibi 1 1 Electrical Engineering, 2 Computational and Mathematical Sciences California
More informationCS6999 Probabilistic Methods in Integer Programming Randomized Rounding Andrew D. Smith April 2003
CS6999 Probabilistic Methods in Integer Programming Randomized Rounding April 2003 Overview 2 Background Randomized Rounding Handling Feasibility Derandomization Advanced Techniques Integer Programming
More informationLecture 13 March 7, 2017
CS 224: Advanced Algorithms Spring 2017 Prof. Jelani Nelson Lecture 13 March 7, 2017 Scribe: Hongyao Ma Today PTAS/FPTAS/FPRAS examples PTAS: knapsack FPTAS: knapsack FPRAS: DNF counting Approximation
More informationSubsampling Semidefinite Programs and Max-Cut on the Sphere
Electronic Colloquium on Computational Complexity, Report No. 129 (2009) Subsampling Semidefinite Programs and Max-Cut on the Sphere Boaz Barak Moritz Hardt Thomas Holenstein David Steurer November 29,
More informationDissertation Defense
Clustering Algorithms for Random and Pseudo-random Structures Dissertation Defense Pradipta Mitra 1 1 Department of Computer Science Yale University April 23, 2008 Mitra (Yale University) Dissertation
More informationConvex Optimization of Graph Laplacian Eigenvalues
Convex Optimization of Graph Laplacian Eigenvalues Stephen Boyd Abstract. We consider the problem of choosing the edge weights of an undirected graph so as to maximize or minimize some function of the
More informationApproximating Submodular Functions. Nick Harvey University of British Columbia
Approximating Submodular Functions Nick Harvey University of British Columbia Approximating Submodular Functions Part 1 Nick Harvey University of British Columbia Department of Computer Science July 11th,
More informationSparse Matrix Theory and Semidefinite Optimization
Sparse Matrix Theory and Semidefinite Optimization Lieven Vandenberghe Department of Electrical Engineering University of California, Los Angeles Joint work with Martin S. Andersen and Yifan Sun Third
More informationApplication: Bucket Sort
5.2.2. Application: Bucket Sort Bucket sort breaks the log) lower bound for standard comparison-based sorting, under certain assumptions on the input We want to sort a set of =2 integers chosen I+U@R from
More informationLecture 2: Linear Algebra Review
EE 227A: Convex Optimization and Applications January 19 Lecture 2: Linear Algebra Review Lecturer: Mert Pilanci Reading assignment: Appendix C of BV. Sections 2-6 of the web textbook 1 2.1 Vectors 2.1.1
More informationCSC Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming
CSC2411 - Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming Notes taken by Mike Jamieson March 28, 2005 Summary: In this lecture, we introduce semidefinite programming
More informationMatrix Rank Minimization with Applications
Matrix Rank Minimization with Applications Maryam Fazel Haitham Hindi Stephen Boyd Information Systems Lab Electrical Engineering Department Stanford University 8/2001 ACC 01 Outline Rank Minimization
More informationCS60021: Scalable Data Mining. Dimensionality Reduction
J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets, http://www.mmds.org 1 CS60021: Scalable Data Mining Dimensionality Reduction Sourangshu Bhattacharya Assumption: Data lies on or near a
More informationAlgorithm Design Using Spectral Graph Theory
Algorithm Design Using Spectral Graph Theory Richard Peng CMU-CS-13-121 August 2013 School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 Thesis Committee: Gary L. Miller, Chair Guy
More informationQuantum walk algorithms
Quantum walk algorithms Andrew Childs Institute for Quantum Computing University of Waterloo 28 September 2011 Randomized algorithms Randomness is an important tool in computer science Black-box problems
More informationSpectral Graph Theory Lecture 2. The Laplacian. Daniel A. Spielman September 4, x T M x. ψ i = arg min
Spectral Graph Theory Lecture 2 The Laplacian Daniel A. Spielman September 4, 2015 Disclaimer These notes are not necessarily an accurate representation of what happened in class. The notes written before
More informationOutline. Martingales. Piotr Wojciechowski 1. 1 Lane Department of Computer Science and Electrical Engineering West Virginia University.
Outline Piotr 1 1 Lane Department of Computer Science and Electrical Engineering West Virginia University 8 April, 01 Outline Outline 1 Tail Inequalities Outline Outline 1 Tail Inequalities General Outline
More informationMAA507, Power method, QR-method and sparse matrix representation.
,, and representation. February 11, 2014 Lecture 7: Overview, Today we will look at:.. If time: A look at representation and fill in. Why do we need numerical s? I think everyone have seen how time consuming
More information1 Review: symmetric matrices, their eigenvalues and eigenvectors
Cornell University, Fall 2012 Lecture notes on spectral methods in algorithm design CS 6820: Algorithms Studying the eigenvalues and eigenvectors of matrices has powerful consequences for at least three
More informationGraph Sparsification by Edge-Connectivity and Random Spanning Trees
Graph Sparsification by Edge-Connectivity and Random Spanning Trees Wai Shing Fung Nicholas J. A. Harvey Abstract We present new approaches to constructing graph sparsifiers weighted subgraphs for which
More informationExpanders via Random Spanning Trees
Expanders via Random Spanning Trees Navin Goyal Luis Rademacher Santosh Vempala Abstract Motivated by the problem of routing reliably and scalably in a graph, we introduce the notion of a splicer, the
More informationLecture Approximate Potentials from Approximate Flow
ORIE 6334 Spectral Graph Theory October 20, 2016 Lecturer: David P. Williamson Lecture 17 Scribe: Yingjie Bi 1 Approximate Potentials from Approximate Flow In the last lecture, we presented a combinatorial
More information18.312: Algebraic Combinatorics Lionel Levine. Lecture 19
832: Algebraic Combinatorics Lionel Levine Lecture date: April 2, 20 Lecture 9 Notes by: David Witmer Matrix-Tree Theorem Undirected Graphs Let G = (V, E) be a connected, undirected graph with n vertices,
More information