ORIE 6334 Spectral Graph Theory                                November 8, 2016

Lecture 22

Lecturer: David P. Williamson                        Scribe: Shijin Rajakrishnan

(This lecture is derived from Spielman 2015, Lecture 18, http://www.cs.yale.edu/homes/spielman/561/lect18-15.pdf.)

1 Recap

In the previous lectures, we explored the problem of finding spectral sparsifiers and saw both deterministic and randomized algorithms that, given a graph $G$ with $n$ vertices, find a spectral sparsifier with $O(n \log n / \epsilon^2)$ edges. In this lecture, we improve upon this result to find linear-sized spectral sparsifiers, or more concretely, spectral sparsifiers with $O(n/\epsilon^2)$ edges. This is an interesting result since we noted in the past that a spectral sparsifier is a generalization of the cut sparsifier of a graph, and the best results known previous to this one for cut sparsifiers were that a cut sparsifier with $n \log^{O(1)} n / \epsilon^2$ edges can be found in nearly linear time [1].

Recall that given a graph $G = (V, E)$, a weighted graph $H = (V, E')$ with weights $w(i,j)$ is an $\epsilon$-spectral sparsifier of $G$ if
$$(1 - \epsilon) L_G \preceq L_H \preceq (1 + \epsilon) L_G.$$
In the last lecture, we showed that
$$(1 - \epsilon) L_G \preceq \sum_{(i,j) \in E'} w(i,j)\,(e_i - e_j)(e_i - e_j)^T \preceq (1 + \epsilon) L_G$$
if and only if
$$(1 - \epsilon) I \preceq \sum_{(i,j) \in E'} w(i,j)\, x_{ij} x_{ij}^T \preceq (1 + \epsilon) I,$$
where the vectors $x_{ij} = L_G^{+/2}(e_i - e_j)$, and that $\sum_{(i,j) \in E} x_{ij} x_{ij}^T = I$ (recall that $I$ is our continuing fudge of an identity matrix, which is actually $L_G L_G^+$; for any vector $v$ such that $v^T e = 0$, $Iv = v$).

Our goal is, given vectors $v_1, \ldots, v_m$ such that $\sum_{i=1}^m v_i v_i^T = I$, to show that there exist weights $w_i \geq 0$ and a subset $S \subseteq [m]$ with $|S| \leq n/\epsilon^2$ such that
$$(1 - \epsilon)^2 I \preceq \sum_{i \in S} w_i v_i v_i^T \preceq (1 + \epsilon)^2 I.$$
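As a quick sanity check of the identity $\sum_{(i,j) \in E} x_{ij} x_{ij}^T = I$, the following short numerical sketch (illustrative only; the small example graph and variable names are not from the lecture) builds the vectors $x_{ij}$ for a four-vertex graph and verifies that their outer products sum to the fudged identity $L_G L_G^+$:

```python
import numpy as np

# A small connected unweighted graph: a 4-cycle plus one chord.
n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]

# Laplacian L_G = sum over edges of (e_i - e_j)(e_i - e_j)^T.
L = np.zeros((n, n))
for i, j in edges:
    b = np.zeros(n)
    b[i], b[j] = 1.0, -1.0
    L += np.outer(b, b)

# L_G^{+/2}: square root of the Moore-Penrose pseudoinverse, via the
# eigendecomposition (the zero eigenvalue on the all-ones vector is skipped).
lam, Q = np.linalg.eigh(L)
half = np.where(lam > 1e-9, 1.0 / np.sqrt(np.maximum(lam, 1e-12)), 0.0)
L_half_pinv = Q @ np.diag(half) @ Q.T

# Form x_ij = L_G^{+/2}(e_i - e_j) and sum the outer products x_ij x_ij^T.
S = np.zeros((n, n))
for i, j in edges:
    b = np.zeros(n)
    b[i], b[j] = 1.0, -1.0
    x = L_half_pinv @ b
    S += np.outer(x, x)

# For a connected graph, the fudged identity L_G L_G^+ is the projection
# orthogonal to the all-ones vector.
I_fudge = np.eye(n) - np.ones((n, n)) / n
print(np.allclose(S, I_fudge))  # True
```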
Definition 1 The vectors $v_i$ are said to be in isotropic position if $\sum_i v_i v_i^T = I$.

Note that for vectors $v_i$ in isotropic position, for any matrix $A$,
$$\sum_i v_i^T A v_i = \sum_i (v_i v_i^T) \bullet A = \Big(\sum_i v_i v_i^T\Big) \bullet A = I \bullet A = \operatorname{tr}(A).$$

In this lecture, we prove a weaker version of the theorem first and then mention how to extend it to the general case. The version we prove is as follows.

Theorem 1 Given vectors $v_1, \ldots, v_m$ that are in isotropic position, we can find a subset $S \subseteq [m]$ and weights $w_i \geq 0$ such that $|S| \leq 6n$ and
$$\frac{1}{3} I \preceq \frac{1}{3n} \sum_{i \in S} w_i v_i v_i^T \preceq \frac{13}{3} I.$$

2 The Algorithm

The basic idea behind the algorithm is that we greedily pick vectors and weights such that we control how the maximum and minimum eigenvalues change.

Algorithm 1: Linear-sized spectral sparsifier
    l <- -n; u <- n
    Delta l <- 1/3; Delta u <- 2
    w_i <- 0 for all i
    A <- 0
    for k <- 1 to 6n do
        Pick v_i, c s.t. lambda_min(A + c v_i v_i^T) >= l + Delta l and
                         lambda_max(A + c v_i v_i^T) <= u + Delta u
        w_i <- w_i + c; A <- A + c v_i v_i^T
        l <- l + Delta l; u <- u + Delta u
    return A

Note that initially $\lambda_{\min}(A) = \lambda_{\max}(A) = 0$, which satisfies $l = -n$, $u = n$ as bounds. At the end of the algorithm, $\lambda_{\min}(A) \geq -n + (6n)\frac{1}{3} = n$ and $\lambda_{\max}(A) \leq n + (6n)(2) = 13n$. So we divide the matrix by $3n$ to get the bounds as given in the theorem.
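In code, the outer loop of Algorithm 1 is pure bookkeeping; all the work hides in the pick step. Below is a minimal Python sketch of the loop (my own rendering, not the lecture's; `pick_vector_and_weight` is a placeholder whose implementation is developed in Sections 3 and 4, and a complete version appears at the end of these notes):

```python
import numpy as np

def pick_vector_and_weight(A, V, l, u, dl, du):
    """Placeholder: return (i, c) with lambda_min(A + c v_i v_i^T) >= l + dl
    and lambda_max(A + c v_i v_i^T) <= u + du.  Sections 3-4 derive the rule;
    a complete sketch appears at the end of these notes."""
    raise NotImplementedError

def sparsify(V):
    """V: m x n array whose rows v_i satisfy sum_i v_i v_i^T = I."""
    m, n = V.shape
    l, u = -float(n), float(n)   # initial eigenvalue bounds l = -n, u = n
    dl, du = 1.0 / 3.0, 2.0      # per-iteration shifts Delta l, Delta u
    w = np.zeros(m)              # accumulated weights w_i
    A = np.zeros((n, n))
    for _ in range(6 * n):
        i, c = pick_vector_and_weight(A, V, l, u, dl, du)
        w[i] += c
        A += c * np.outer(V[i], V[i])
        l, u = l + dl, u + du
    # Now n*I <= A <= 13n*I, so dividing by 3n gives Theorem 1's bounds.
    return w, A / (3.0 * n)
```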
3 Barrier Functions

The genius of the result is that rather than working with $\lambda_{\min}$ and $\lambda_{\max}$ directly, we use barrier functions, $L(l, A)$ and $U(u, A)$, which we now define. We let
$$L(l, A) = \sum_{i=1}^n \frac{1}{\lambda_i - l} = \operatorname{tr}((A - lI)^{-1}), \qquad U(u, A) = \sum_{i=1}^n \frac{1}{u - \lambda_i} = \operatorname{tr}((uI - A)^{-1}),$$
for $l < \lambda_1 \leq \cdots \leq \lambda_n < u$, where the $\lambda_i$ are the eigenvalues of $A$. (In the original paper due to Batson, Spielman, and Srivastava, as well as in other notes on their result, the notation $\Phi^u(A)$ is used for $U(u, A)$ and $\Phi_l(A)$ is used for $L(l, A)$.)

Initially, $l_0 < \lambda_{\min}(A) \leq \lambda_{\max}(A) < u_0$. This implies that $U(u, A)$ and $L(l, A)$ are both positive and bounded. During the course of the algorithm, we pick vectors and increase $l$ and $u$ so that the barrier functions don't increase in value, and so we can ensure that the eigenvalues lie in the range $[l, u]$. Initially,
$$U(n, 0) = \frac{n}{n} = 1 = L(-n, 0).$$
What we want in each iteration is to find a vector $v$ and a weight $c$ such that
$$U(u + \Delta u, A + cvv^T) \leq U(u, A) \quad\text{and}\quad L(l + \Delta l, A + cvv^T) \leq L(l, A),$$
so that the barrier functions do not increase as we update $A$, $u$, and $l$ through the run of the algorithm.
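The barrier functions are one line each in code; the sketch below (mine, not the lecture's) also checks the initial values just computed:

```python
import numpy as np

def U(u, A):
    """Upper barrier U(u, A) = tr((uI - A)^{-1}); requires u > lambda_max(A)."""
    return np.trace(np.linalg.inv(u * np.eye(A.shape[0]) - A))

def L(l, A):
    """Lower barrier L(l, A) = tr((A - lI)^{-1}); requires l < lambda_min(A)."""
    return np.trace(np.linalg.inv(A - l * np.eye(A.shape[0])))

# At the start of the algorithm A = 0, l = -n, u = n, and each barrier is a
# sum of n terms equal to 1/n.
n = 5
A0 = np.zeros((n, n))
print(U(float(n), A0), L(float(-n), A0))  # 1.0 1.0
```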
4 The Analysis

We now turn towards proving that such a selection of $v$ and $c$ can be made in each iteration of the loop. We will show that there exist matrices $U_A$ and $L_A$ such that the following two lemmas hold.

Lemma 2 The barrier functions do not increase in an iteration; that is,
$$U(u + \Delta u, A + cvv^T) \leq U(u, A) \quad\text{and}\quad L(l + \Delta l, A + cvv^T) \leq L(l, A)$$
hold for $c$, $v$, if
$$v^T U_A v \leq \frac{1}{c} \leq v^T L_A v.$$

Lemma 3
$$\sum_{i=1}^m v_i^T U_A v_i \leq \frac{1}{\Delta u} + U(u, A) \qquad\text{and}\qquad \frac{1}{\Delta l} - \frac{1}{1/L(l, A) - \Delta l} \leq \sum_{i=1}^m v_i^T L_A v_i.$$

From the choice of our parameters, note that we get
$$\frac{1}{\Delta u} + U(u, A) \leq \frac{1}{2} + 1 = \frac{3}{2} \qquad\text{and}\qquad \frac{1}{\Delta l} - \frac{1}{1/L(l, A) - \Delta l} \geq 3 - \frac{1}{1 - 1/3} = \frac{3}{2}.$$
From Lemma 3, we see that this implies
$$\sum_{i=1}^m v_i^T U_A v_i \leq \frac{3}{2} \leq \sum_{i=1}^m v_i^T L_A v_i,$$
and therefore there exist some $c$, $v_i$ such that
$$v_i^T U_A v_i \leq \frac{1}{c} \leq v_i^T L_A v_i.$$
Then Lemma 2 ensures that there exists a vector $v_i$ and weight $c$ so that we can add $c v_i v_i^T$ to $A$ without increasing the barrier functions.

The only remaining part is to prove the two lemmata. To prove Lemma 2, we first analyze what the addition of the vector $cvv^T$ does to the matrix $A$, by using the following formula.

Theorem 4 (Sherman-Morrison formula) For a nonsingular symmetric matrix $X$ and a vector $v$,
$$(X - vv^T)^{-1} = X^{-1} + \frac{X^{-1} vv^T X^{-1}}{1 - v^T X^{-1} v}.$$
The formula expresses a rank-one update to the inverse of a matrix.
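Theorem 4 is easy to confirm numerically; here is a quick illustrative check (mine) on a random well-conditioned symmetric matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
B = rng.standard_normal((n, n))
X = B @ B.T + n * np.eye(n)        # symmetric and well-conditioned
v = 0.3 * rng.standard_normal(n)   # small enough that 1 - v^T X^{-1} v > 0

lhs = np.linalg.inv(X - np.outer(v, v))
Xinv = np.linalg.inv(X)
rhs = Xinv + (Xinv @ np.outer(v, v) @ Xinv) / (1.0 - v @ Xinv @ v)
print(np.allclose(lhs, rhs))  # True
```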
Proof of Lemma 2: On adding the matrix $cvv^T$ to $A$, the barrier function changes to
$$\begin{aligned}
U(u, A + cvv^T) &= \operatorname{tr}((uI - (A + cvv^T))^{-1}) \\
&= \operatorname{tr}((uI - A - cvv^T)^{-1}) \\
&= \operatorname{tr}((uI - A)^{-1}) + \frac{c \operatorname{tr}((uI - A)^{-1} vv^T (uI - A)^{-1})}{1 - c\, v^T (uI - A)^{-1} v} \\
&= U(u, A) + \frac{c \operatorname{tr}((uI - A)^{-1} vv^T (uI - A)^{-1})}{1 - c\, v^T (uI - A)^{-1} v} \\
&= U(u, A) + \frac{c \operatorname{tr}(v^T (uI - A)^{-2} v)}{1 - c\, v^T (uI - A)^{-1} v} \\
&= U(u, A) + \frac{c\, v^T (uI - A)^{-2} v}{1 - c\, v^T (uI - A)^{-1} v}.
\end{aligned}$$
The third equality follows from the Sherman-Morrison formula, the fourth by the definition of $U(u, A)$, and the fifth by the cyclic property of the trace. So as we add $cvv^T$, the barrier function increases. To counteract this, we increase the value of $u$ so as to keep the barrier function constant. Let $\hat{u} = u + \Delta u$. Then we want to see under what values of $c$ and $v$ the barrier function doesn't increase. So then
$$\begin{aligned}
U(\hat{u}, A + cvv^T) &= U(\hat{u}, A) + \frac{c\, v^T (\hat{u}I - A)^{-2} v}{1 - c\, v^T (\hat{u}I - A)^{-1} v} \\
&= U(u, A) - (U(u, A) - U(\hat{u}, A)) + \frac{c\, v^T (\hat{u}I - A)^{-2} v}{1 - c\, v^T (\hat{u}I - A)^{-1} v}.
\end{aligned}$$
We want $U(\hat{u}, A + cvv^T) \leq U(u, A)$, which will be true if
$$\frac{c\, v^T (\hat{u}I - A)^{-2} v}{1 - c\, v^T (\hat{u}I - A)^{-1} v} \leq U(u, A) - U(\hat{u}, A),$$
which holds if
$$\frac{1}{c} \geq \frac{v^T (\hat{u}I - A)^{-2} v}{U(u, A) - U(\hat{u}, A)} + v^T (\hat{u}I - A)^{-1} v = v^T U_A v,$$
where we define
$$U_A \equiv \frac{(\hat{u}I - A)^{-2}}{U(u, A) - U(\hat{u}, A)} + (\hat{u}I - A)^{-1}.$$
This proves one half of the lemma. The other half can be proved similarly, but with $\hat{l} = l + \Delta l$, and
$$L_A \equiv \frac{(A - \hat{l}I)^{-2}}{L(\hat{l}, A) - L(l, A)} - (A - \hat{l}I)^{-1}.$$
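The derivation pins down the largest weight that may be added for a given $v$: taking $1/c$ exactly equal to $v^T U_A v$ keeps the upper barrier from increasing (indeed, it holds it constant). A small numerical sketch of this boundary case (mine, illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
B = rng.standard_normal((n, n))
A = B @ B.T
u, du = np.linalg.eigvalsh(A).max() + 1.0, 2.0   # any u > lambda_max(A)
uhat = u + du
I = np.eye(n)

def U(u_, A_):
    return np.trace(np.linalg.inv(u_ * I - A_))

v = rng.standard_normal(n)
M = np.linalg.inv(uhat * I - A)
U_A = M @ M / (U(u, A) - U(uhat, A)) + M   # the matrix U_A defined above
c = 1.0 / (v @ U_A @ v)                    # the boundary choice 1/c = v^T U_A v
print(U(uhat, A + c * np.outer(v, v)) <= U(u, A) + 1e-9)  # True (equality)
```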
We now turn to the proof of the second lemma.

Proof of Lemma 3, upper bound: Notice that if $u > \lambda_{\max}(A)$ and $\delta > 0$, then the barrier function is decreasing in $u$, i.e., $U(u + \delta, A) < U(u, A)$. Now, since the vectors $v_i$ are in isotropic position,
$$\sum_{i=1}^m v_i^T U_A v_i = \operatorname{tr}(U_A) = \frac{\operatorname{tr}((\hat{u}I - A)^{-2})}{U(u, A) - U(\hat{u}, A)} + \operatorname{tr}((\hat{u}I - A)^{-1}) \leq \frac{\operatorname{tr}((\hat{u}I - A)^{-2})}{U(u, A) - U(\hat{u}, A)} + U(u, A).$$
To bound the first term, we note that
$$\frac{d}{du} U(u, A) = \frac{d}{du} \sum_{i=1}^n \frac{1}{u - \lambda_i} = -\sum_{i=1}^n \frac{1}{(u - \lambda_i)^2} = -\operatorname{tr}((uI - A)^{-2}),$$
$$\frac{d^2}{du^2} U(u, A) = 2 \sum_{i=1}^n \frac{1}{(u - \lambda_i)^3} > 0,$$
thus $U(u, A)$ is decreasing and convex in $u$. In particular, convexity implies that
$$U(u, A) - U(\hat{u}, A) \geq (-\Delta u)\, \frac{d}{du} U(\hat{u}, A) = \Delta u \cdot \operatorname{tr}((\hat{u}I - A)^{-2}),$$
which yields
$$\frac{\operatorname{tr}((\hat{u}I - A)^{-2})}{U(u, A) - U(\hat{u}, A)} \leq \frac{1}{\Delta u},$$
leading to
$$\sum_{i=1}^m v_i^T U_A v_i \leq \frac{1}{\Delta u} + U(u, A).$$
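The convexity bound just derived can again be checked numerically; an illustrative sketch (mine), using a random positive semidefinite $A$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
B = rng.standard_normal((n, n))
A = B @ B.T
u, du = np.linalg.eigvalsh(A).max() + 1.0, 2.0
uhat = u + du
I = np.eye(n)

def tr_inv(M, p=1):
    """tr(M^{-p}) for a symmetric positive definite matrix M."""
    return np.trace(np.linalg.matrix_power(np.linalg.inv(M), p))

# tr((uhat*I - A)^{-2}) / (U(u, A) - U(uhat, A)) <= 1/Delta u
ratio = tr_inv(uhat * I - A, 2) / (tr_inv(u * I - A) - tr_inv(uhat * I - A))
print(ratio <= 1.0 / du)  # True
```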
We can prove the lower bound for Lemma 3 in a similar way.

Proof of Lemma 3, lower bound: In this case, we have
$$\sum_{i=1}^m v_i^T L_A v_i = \operatorname{tr}(L_A) = \frac{\operatorname{tr}((A - \hat{l}I)^{-2})}{L(\hat{l}, A) - L(l, A)} - \operatorname{tr}((A - \hat{l}I)^{-1}) = \frac{\operatorname{tr}((A - \hat{l}I)^{-2})}{L(\hat{l}, A) - L(l, A)} - L(\hat{l}, A).$$
Now,
$$\frac{d}{dl} L(l, A) = \frac{d}{dl} \sum_{i=1}^n \frac{1}{\lambda_i - l} = \sum_{i=1}^n \frac{1}{(\lambda_i - l)^2} = \operatorname{tr}((A - lI)^{-2}),$$
$$\frac{d^2}{dl^2} L(l, A) = 2 \sum_{i=1}^n \frac{1}{(\lambda_i - l)^3} > 0,$$
thus $L(l, A)$ is increasing and convex in $l$. Then by convexity,
$$L(l, A) \geq L(\hat{l}, A) - \Delta l \cdot \frac{d}{dl} L(\hat{l}, A) = L(\hat{l}, A) - \Delta l \cdot \operatorname{tr}((A - \hat{l}I)^{-2}).$$
Rearranging terms, we get
$$\frac{\operatorname{tr}((A - \hat{l}I)^{-2})}{L(\hat{l}, A) - L(l, A)} \geq \frac{1}{\Delta l}.$$
To bound the second term, we claim that
$$L(\hat{l}, A) - L(l, A) \leq \Delta l \cdot L(l, A) L(\hat{l}, A).$$
If the claim is true, then by rearranging terms we get
$$L(\hat{l}, A) \leq \frac{1}{1/L(l, A) - \Delta l},$$
as desired. To prove the claim we observe that
$$\begin{aligned}
L(\hat{l}, A) - L(l, A) &= \sum_{i=1}^n \left[ \frac{1}{\lambda_i - \hat{l}} - \frac{1}{\lambda_i - l} \right] \\
&= \sum_{i=1}^n \left[ \frac{(\lambda_i - l) - (\lambda_i - \hat{l})}{(\lambda_i - l)(\lambda_i - \hat{l})} \right] \\
&= \sum_{i=1}^n \left[ \frac{\Delta l}{(\lambda_i - l)(\lambda_i - \hat{l})} \right] \\
&= \Delta l \sum_{i=1}^n \left[ \frac{1}{(\lambda_i - l)(\lambda_i - \hat{l})} \right] \\
&\leq \Delta l \left[ \sum_{i=1}^n \frac{1}{\lambda_i - l} \right] \left[ \sum_{i=1}^n \frac{1}{\lambda_i - \hat{l}} \right] = \Delta l \cdot L(l, A) L(\hat{l}, A),
\end{aligned}$$
as long as $\hat{l} < \lambda_{\min}$, as we have been guaranteeing.

To get a stronger (more general) result, we change the parameters $\Delta u$, $\Delta l$, and then prove
$$\operatorname{tr}(L_A) \geq \frac{1}{\Delta l} - L(l, A)$$
instead of
$$\operatorname{tr}(L_A) \geq \frac{1}{\Delta l} - \frac{1}{1/L(l, A) - \Delta l}.$$
The proof of this is messier and involves more algebra, but is not really bad.

Having seen the correctness of the algorithm, we turn towards the question of its runtime. The general use of a spectral sparsifier is to reduce the dependence of an algorithm's runtime on the number of edges by creating a sparse approximation. The algorithm described here runs far too slowly to realize gains from using a sparsifier for cut algorithms (for instance). The faster algorithm due to Lee and Sun (2015) constructs a sparsifier with $O(qn/\epsilon^2)$ edges in $\tilde{O}(qm \cdot n^{O(1/q)}/\epsilon^{O(1)})$ time.
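To close the loop, here is a self-contained end-to-end sketch (mine, illustrative; it mirrors Algorithm 1 with the barrier-based pick step derived above) that runs the construction on random vectors in isotropic position and checks the conclusion of Theorem 1:

```python
import numpy as np

def tr_inv(M):
    return np.trace(np.linalg.inv(M))

def pick_vector_and_weight(A, V, l, u, dl, du):
    """Return (i, c) with v_i^T U_A v_i <= 1/c <= v_i^T L_A v_i (Lemma 2)."""
    n = A.shape[0]
    I = np.eye(n)
    lhat, uhat = l + dl, u + du
    Mu = np.linalg.inv(uhat * I - A)
    Ml = np.linalg.inv(A - lhat * I)
    dU = tr_inv(u * I - A) - np.trace(Mu)   # U(u, A) - U(uhat, A) > 0
    dL = np.trace(Ml) - tr_inv(A - l * I)   # L(lhat, A) - L(l, A) > 0
    U_A = Mu @ Mu / dU + Mu
    L_A = Ml @ Ml / dL - Ml
    for i, v in enumerate(V):
        lo, hi = v @ U_A @ v, v @ L_A @ v
        if lo <= hi and hi > 0:
            return i, 2.0 / (lo + hi)       # then lo <= 1/c <= hi
    raise RuntimeError("no valid pair; contradicts Lemma 3")

rng = np.random.default_rng(2)
n, m = 6, 120

# Random vectors put in isotropic position: the rows of M (M^T M)^{-1/2}.
M = rng.standard_normal((m, n))
lam, Q = np.linalg.eigh(M.T @ M)
V = M @ (Q @ np.diag(lam ** -0.5) @ Q.T)

l, u, dl, du = -float(n), float(n), 1.0 / 3.0, 2.0
A, w = np.zeros((n, n)), np.zeros(m)
for _ in range(6 * n):
    i, c = pick_vector_and_weight(A, V, l, u, dl, du)
    w[i] += c
    A += c * np.outer(V[i], V[i])
    l, u = l + dl, u + du

# Theorem 1: at most 6n vectors used and (1/3) I <= A/(3n) <= (13/3) I.
eigs = np.linalg.eigvalsh(A / (3.0 * n))
print(np.count_nonzero(w) <= 6 * n)             # True
print(eigs.min() >= 1/3, eigs.max() <= 13/3)    # True True
```

Any choice of $c$ with $v_i^T U_A v_i \leq 1/c \leq v_i^T L_A v_i$ works; the sketch takes the midpoint of the allowed interval for $1/c$.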
References

[1] Wai Shing Fung, Ramesh Hariharan, Nicholas J. A. Harvey, and Debmalya Panigrahi. A general framework for graph sparsification. In Proceedings of the Forty-Third Annual ACM Symposium on the Theory of Computing, pages 71-80, 2011.