Graph Sparsification II: Rank one updates, Interlacing, and Barriers


1 Graph Sparsification II: Rank one updates, Interlacing, and Barriers Nikhil Srivastava Simons Institute August 26, 2014

2 Previous Lecture. Definition. H = (V, F, u) is a κ-approximation of G = (V, E, w) if L_H ≼ L_G ≼ κ·L_H. Theorem. Every G has a (1 + ε)-approximation H with O(n log n / ε²) edges. There is a nearly linear time algorithm which finds it.
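
The approximation condition above can be checked numerically via generalized eigenvalues. A minimal numpy/scipy sketch (not from the lecture; the helper names `laplacian` and `approx_ratio` are illustrative):

```python
import numpy as np
from scipy.linalg import eigh

def laplacian(n, edges, weights):
    """Weighted graph Laplacian L = sum_e w_e (e_a - e_b)(e_a - e_b)^T."""
    L = np.zeros((n, n))
    for (a, b), w in zip(edges, weights):
        L[a, a] += w; L[b, b] += w
        L[a, b] -= w; L[b, a] -= w
    return L

def approx_ratio(LG, LH):
    """Smallest kappa with L_H <= L_G <= kappa * L_H in the PSD order,
    for connected G and H on the same vertex set. Adding J/n removes the
    shared all-ones nullspace without changing the comparison."""
    n = LG.shape[0]
    J = np.ones((n, n)) / n
    w = eigh(LG + J, LH + J, eigvals_only=True)  # generalized eigenvalues
    assert w.min() >= 1 - 1e-9, "L_H <= L_G fails"
    return w.max()

LG = laplacian(4, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)], [1] * 6)  # K_4
LH = laplacian(4, [(0, 1), (0, 2), (0, 3)], [1] * 3)                          # star
print(approx_ratio(LG, LH))  # 4.0: the star is a 4-approximation of K_4
```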

3 There is no log(n) here: G = K_n versus H = a random d-regular graph with edge weights scaled by n/d. |E_G| = O(n²) while |E_H| = O(dn).
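
A quick empirical version of this slide (my own sketch, assuming networkx is available; n and d are illustrative):

```python
import networkx as nx
import numpy as np

n, d = 200, 16
H = nx.random_regular_graph(d, n, seed=0)
LH = nx.laplacian_matrix(H).toarray() * (n / d)  # reweight each edge by n/d
lam = np.sort(np.linalg.eigvalsh(LH))[1:]        # drop the zero eigenvalue
# All nonzero eigenvalues of L_{K_n} equal n; the scaled random regular
# graph keeps them within roughly n * (1 +/- 2/sqrt(d)) -- no log n needed.
print(lam.min() / n, lam.max() / n)
```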

4 Proof: Approximating the Identity

5 Tool: The Matrix Chernoff Bound. Suppose X_1, ..., X_k are i.i.d. random n×n matrices with 0 ≼ X_i ≼ M·I and E[X_i] = I. Then P[ ||(1/k)·Σ_i X_i - I|| ≥ ε ] ≤ 2n·exp(-kε²/4M). Shows O(n log n / ε²) samples suffice in R^n.

6 Tool: The Matrix Chernoff Bound. Suppose X_1, ..., X_k are i.i.d. random n×n matrices with 0 ≼ X_i ≼ n·I and E[X_i] = I. Then O(n log n / ε²) samples suffice in R^n.

7 Tool: The Matrix Chernoff Bound. Tight example: balls in bins, i.e. X_i = n·e_j e_j^T for a uniformly random coordinate j; coupon collecting forces the log n.

8 Tool: The Matrix Chernoff Bound. A simple greedy algorithm gets O(n): place the next ball in the emptiest bin.
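
A minimal simulation of the bound (my sketch, not from the slides), with X_i = n·uu^T for u uniform on the sphere, so E[X_i] = I and 0 ≼ X_i ≼ n·I; the helper `deviation` is an illustrative name:

```python
import numpy as np

rng = np.random.default_rng(0)

def deviation(n, k):
    """Spectral norm of (1/k) sum_i X_i - I for X_i = n * u u^T with u
    uniform on the unit sphere (so E[X_i] = I and 0 <= X_i <= n * I)."""
    S = np.zeros((n, n))
    for _ in range(k):
        u = rng.standard_normal(n)
        u /= np.linalg.norm(u)
        S += n * np.outer(u, u)
    return np.linalg.norm(S / k - np.eye(n), 2)

n = 50
for k in [n, 4 * n, int(4 * n * np.log(n))]:
    # the stated bound guarantees deviation ~ sqrt(4 n log(2n) / k) w.h.p.
    print(k, round(deviation(n, k), 3))
```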

9 This Lecture [Batson-Spielman-S'09] Spectral Sparsification Theorem: every G has a (1 + ε)-approximation with O(n/ε²) edges; the proof uses a greedy approach.

10 This is the goal

11-15 Plan: Choose vectors one at a time. [figure builds: vectors added one by one]

16 Plan: Choose vectors one at a time, keeping every eigenvalue λ ∈ [1, κ].

17 Plan: Choose vectors one at a time, keeping every eigenvalue λ ∈ [1, κ]. Basic Question: What does a rank-one update do to the eigenvalues?

18 What happens when you add a vector?

19 Interlacing (Cauchy, 1800s)
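Interlacing here says: after adding vv^T, the new eigenvalues μ satisfy λ_i ≤ μ_i ≤ λ_{i+1}. A minimal numerical check I added (numpy, not lecture code):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n)); A = (A + A.T) / 2    # random symmetric matrix
v = rng.standard_normal(n)
lam = np.sort(np.linalg.eigvalsh(A))                  # old eigenvalues
mu = np.sort(np.linalg.eigvalsh(A + np.outer(v, v)))  # after the rank-one update
# Interlacing for a positive rank-one update: lam_i <= mu_i <= lam_{i+1}.
print(all(lam[i] <= mu[i] + 1e-10 for i in range(n)))
print(all(mu[i] <= lam[i + 1] + 1e-10 for i in range(n - 1)))
```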

20 The Characteristic Polynomial: p_A(x) = Π_i (x - λ_i), where λ_1, ..., λ_n = eigs(A).

21 Proof of Interlacing I

22 Proof of Interlacing II

23 Proof of Interlacing III

24-25 The Characteristic Polynomial. Matrix-Determinant Lemma: p_{A+vv^T}(x) = p_A(x) · (1 - v^T (xI - A)^{-1} v) = p_A(x) - Σ_i ⟨v, u_i⟩² Π_{j≠i} (x - λ_j). The new eigenvalues are the zeros of this.
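
A one-point numerical check of the lemma (a sketch I added, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
v = rng.standard_normal(n)
x = 100.0  # any point that is not an eigenvalue of A
M = x * np.eye(n) - A
lhs = np.linalg.det(x * np.eye(n) - (A + np.outer(v, v)))
rhs = np.linalg.det(M) * (1 - v @ np.linalg.solve(M, v))
print(np.isclose(lhs, rhs))  # True: the matrix-determinant lemma
```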

26 Physical model of interlacing: the λ_i are positive unit charges resting at barriers on a slope.

27 Physical model of interlacing: ⟨v, u_i⟩² = the charges added to the barriers.

28-30 Physical model of interlacing. Barriers repel eigenvalues: an inverse law of repulsion, opposed by gravity.

31 Examples

32-33 Ex 1: All weight on u_1.

34 Ex 1: All weight on u_1. The eigenvalue is pushed up against the next barrier and rests there due to gravity.

35-37 Ex 2: Equal weight on u_1, u_2.

38-39 Ex 3: Equal weight on all of u_1, u_2, ..., u_n.

40 Adding a balanced vector

41 Consider a random vector: if v is one of v_1, ..., v_m chosen uniformly at random, with Σ_e v_e v_e^T = I, then E[vv^T] = (1/m)·I; thus a random vector has the same expected projection E⟨v, u_i⟩² = 1/m in every direction u_i.

42-43 The Expected Characteristic Polynomial. A^(0) = 0, so p^(0)(x) = x^n.

44 The Expected Characteristic Polynomial. A^(1) = vv^T: E p^(1) = (1 - (1/m)·∂_x) p^(0).

45 The Expected Characteristic Polynomial. A^(1) = vv^T: E p^(1) = (1 - (1/m)·∂_x) p^(0) = x^n - (n/m)·x^{n-1}.

46 The Expected Characteristic Polynomial. A^(2) = A^(1) + vv^T: E p^(2) = (1 - (1/m)·∂_x) E p^(1) = (1 - (1/m)·∂_x)² x^n.

47 The Expected Characteristic Polynomial. A^(3) = A^(2) + vv^T: E p^(3) = (1 - (1/m)·∂_x)³ x^n.
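
The operator (1 - (1/m)·∂_x) can be iterated symbolically; a small sympy sketch I added (the sizes n, m, k are illustrative):

```python
import sympy as sp

x = sp.symbols('x')
n, m, k = 8, 20, 5   # illustrative sizes
p = x**n
for _ in range(k):
    p = sp.expand(p - sp.diff(p, x) / m)   # apply (1 - (1/m) d/dx)
print(sp.nroots(p))  # real, nonnegative roots (Laguerre, per the next slides)
```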

48-54 Ideal proof: track roots(E p^(k)), where E p^(k) = (1 - (1/m)·∂_x)^k x^n. [figure builds: the evolution of the roots]

55 Ideal proof: p^(k) = Laguerre polynomial L_k; E p^(k) = (1 - (1/m)·∂_x)^k x^n.

56 Ideal proof: done after 4n/ε² steps; p^(k) = Laguerre polynomial L_k, E p^(k) = (1 - (1/m)·∂_x)^k x^n.

57 This is not real. Problem: roots(E p^(k)) ≠ E[roots(p^(k))]. E p^(k) = (1 - (1/m)·∂_x)^k x^n.

58 E p^(k) = (1 - (1/m)·∂_x)^k x^n; 4n/ε² steps.

59 Theorem. If we allow arbitrary scalings of the vectors, then there is a greedy algorithm which matches what happens in the above dream. [E p^(k) = (1 - (1/m)·∂_x)^k x^n; 4n/ε² steps]

60 Keep track of total repulsion. Theorem. If we allow arbitrary scalings of the vectors, then there is a greedy algorithm which matches what happens in the above dream.

61 End Result [Batson-Spielman-S'09] Spectral Sparsification Theorem: every G has a (1 + ε)-approximation H with O(n/ε²) edges.

62 Actual Proof (for 6n vectors, 13-approx)

63 Steady progress by moving barriers. [number line: barriers at -n and n, eigenvalues at 0]

64-65 Step 1: all eigenvalues at 0, barriers at -n and n.

66 Step 1: shift the lower barrier by +1/3 (to -n + 1/3) and the upper barrier by +2 (to n + 2).

67 Step 1: the lower barrier moves only +1/3 (the tighter constraint), the upper moves +2 (the looser constraint).

68-77 Step i+1: shift the lower barrier by +1/3 and the upper by +2, then add a scaled vector. [figure builds]

78 Step 6n: the lower barrier has reached -n + 6n·(1/3) = n and the upper barrier n + 6n·2 = 13n.

79 Step 6n: barriers at n and 13n, so all eigenvalues lie between n and 13n: a 13-approximation with 6n vectors.

80 Problem: need to show that an appropriate vector (and scaling) always exists.

81 Problem: need to show that an appropriate vector (and scaling) always exists. Hope: the vectors are well-spread, so there must be one which is well-behaved.

82-85 Bad: Accumulation of Eigenvalues. [figure builds: eigenvalues crowding against a barrier]

86 Bad: Accumulation of Eigenvalues. Tracking λ_max alone is not strong enough to do the induction.

87 Bad: Accumulation of Eigenvalues. Tracking λ_max alone is not strong enough to do the induction; we need a better way to measure the quality of the eigenvalues.

88-89 The Upper Barrier: Φ^u(A) := Tr (uI - A)^{-1} = Σ_i 1/(u - λ_i).

90 The Upper Barrier: if Φ^u(A) ≤ 1, then no λ_i lies within distance 1 of u, no 2 λ_i within distance 2, no 3 λ_i within distance 3, ..., no k λ_i within distance k.

91 The Upper Barrier: Φ^u(A) is the total repulsion in the physical model (no λ_i within distance 1, no 2 λ_i within distance 2, ..., no k λ_i within distance k).

92 The Lower Barrier: Φ_ℓ(A) := Tr (A - ℓI)^{-1} = Σ_i 1/(λ_i - ℓ), with the barrier ℓ below all eigenvalues.
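
Both potentials are one-liners in code; a minimal sketch of the definitions above (helper names are mine, not from the lecture):

```python
import numpy as np

def upper_potential(A, u):
    """Phi^u(A) = Tr (uI - A)^{-1} = sum_i 1/(u - lambda_i)."""
    lam = np.linalg.eigvalsh(A)
    assert u > lam.max(), "upper barrier must sit above the spectrum"
    return float(np.sum(1.0 / (u - lam)))

def lower_potential(A, l):
    """Phi_l(A) = Tr (A - lI)^{-1} = sum_i 1/(lambda_i - l)."""
    lam = np.linalg.eigvalsh(A)
    assert l < lam.min(), "lower barrier must sit below the spectrum"
    return float(np.sum(1.0 / (lam - l)))

# If Phi^u(A) <= 1, no k eigenvalues can sit within distance k of u:
# they would already contribute k * (1/k) = 1 to the sum by themselves.
```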

93-94 The Beginning: A = 0 with barriers at ℓ = -n and u = n, so Φ^u(A) = Φ_ℓ(A) = n·(1/n) = 1.

95 Step i+1: shift the barriers.

96 Step i+1: shift by +1/3 and +2. Lemma: we can always choose a vector and scaling so that the potentials do not increase.

97-100 Step i+1: add the scaled vector and advance the barriers. [figure builds]

101 Step 6n: barriers at n and 13n.

102 Step 6n: barriers at n and 13n; a 13-approximation with 6n vectors.

103 Goal. Lemma: we can always choose a vector and scaling so that both potentials do not increase under the shifts +1/3 and +2.

104 The Right Question: Which vector should we add?

105 The Right Question: not "which vector should we add?" but "given a vector, how much of it can we add?"

106-107 Upper Barrier Update: shift u → u' = u + 2; add s·vv^T and set A' = A + s·vv^T.

108 Upper Barrier Update: by Sherman-Morrison, Φ^{u'}(A + s·vv^T) = Φ^{u'}(A) + s·v^T (u'I - A)^{-2} v / (1 - s·v^T (u'I - A)^{-1} v).

109-110 Upper Barrier Update: the shift decreases the potential, while the added vector increases it by the Sherman-Morrison term. [figure builds]
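
A numerical check of this update formula (my sketch; the matrix A and the gap of the barrier u' above the spectrum are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n, s = 6, 0.2
A = rng.standard_normal((n, n)); A = A @ A.T / n          # a PSD matrix
v = rng.standard_normal(n); v /= np.linalg.norm(v)
up = np.linalg.eigvalsh(A).max() + 2.0                    # shifted barrier u'
M = np.linalg.inv(up * np.eye(n) - A)
direct = np.trace(np.linalg.inv(up * np.eye(n) - A - s * np.outer(v, v)))
sm = np.trace(M) + s * (v @ M @ M @ v) / (1 - s * (v @ M @ v))
print(np.isclose(direct, sm))  # True: Sherman-Morrison potential update
```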

111-112 Upper feasibility condition: Φ^{u'}(A + s·vv^T) ≤ Φ^u(A). Rearranging: 1/s ≥ U_A(v) := v^T (u'I - A)^{-2} v / (Φ^u(A) - Φ^{u'}(A)) + v^T (u'I - A)^{-1} v.

113 Upper feasibility condition: s = 0 is always feasible, since shifting the barrier up only decreases the potential.

114 Similarly, Lower Feasibility: Φ_{ℓ'}(A + s·vv^T) ≤ Φ_ℓ(A) provided 1/s ≤ L_A(v) := v^T (A - ℓ'I)^{-2} v / (Φ_{ℓ'}(A) - Φ_ℓ(A)) - v^T (A - ℓ'I)^{-1} v.

115 Goal: show that we can always add some vector while respecting both barriers (shifts +1/3 and +2).

116 Both Barriers. Goal: there is always a vector v with U_A(v) ≤ L_A(v).

117 Both Barriers: the upper condition 1/s ≥ U_A(v) says how much we can add; the lower condition 1/s ≤ L_A(v) says how much we must add. There is always a vector with U_A(v) ≤ L_A(v).

118 Both Barriers: then we can squeeze the scaling factor in between: U_A(v) ≤ 1/s ≤ L_A(v).
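
Assembling the pieces, here is a compact numpy transcription of the whole greedy argument: shift the barriers by δ_L = 1/3 and δ_U = 2, compute U_A(v_e) and L_A(v_e) for every vector, pick one with U ≤ L, and squeeze 1/s in between. This is my own sketch in the spirit of the proof, not code from the lecture; `bss_sparsify` is an illustrative name:

```python
import numpy as np

def bss_sparsify(V, d_l=1/3, d_u=2.0):
    """Barrier-function greedy in the spirit of [BSS'09]. Assumes the
    columns v_e of V (n x m) satisfy sum_e v_e v_e^T = I. Runs 6n steps
    and returns weights s (at most 6n nonzero) with the spectrum of
    A = sum_e s_e v_e v_e^T squeezed between the final barriers."""
    n, m = V.shape
    A = np.zeros((n, n))
    s = np.zeros(m)
    l, u = -float(n), float(n)           # Phi^u(0) = Phi_l(0) = 1
    for _ in range(6 * n):
        lp, up = l + d_l, u + d_u        # shifted barriers l', u'
        lam = np.linalg.eigvalsh(A)
        dU = np.sum(1/(u - lam)) - np.sum(1/(up - lam))   # Phi^u - Phi^{u'} > 0
        dL = np.sum(1/(lam - lp)) - np.sum(1/(lam - l))   # Phi_{l'} - Phi_l > 0
        Mu = np.linalg.inv(up * np.eye(n) - A)
        Ml = np.linalg.inv(A - lp * np.eye(n))
        # U_A(v_e) and L_A(v_e) for every column at once
        U = np.einsum('ei,ij,je->e', V.T, Mu @ Mu, V) / dU \
            + np.einsum('ei,ij,je->e', V.T, Mu, V)
        L = np.einsum('ei,ij,je->e', V.T, Ml @ Ml, V) / dL \
            - np.einsum('ei,ij,je->e', V.T, Ml, V)
        e = int(np.argmax(L - U))        # averaging: some e has U <= L
        t = 2.0 / (U[e] + L[e])          # squeeze: U <= 1/t <= L
        s[e] += t
        A += t * np.outer(V[:, e], V[:, e])
        l, u = lp, up
    return s, A
```

A usage sketch on random vectors whitened into isotropic position:

```python
rng = np.random.default_rng(0)
n, m = 10, 400
R = rng.standard_normal((n, m))
W = np.linalg.inv(np.linalg.cholesky(R @ R.T))  # whiten: sum_e v_e v_e^T = I
s, A = bss_sparsify(W @ R)
lam = np.linalg.eigvalsh(A)
print(np.count_nonzero(s), lam.max() / lam.min())  # <= 6n vectors, ratio < 13
```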

119-122 Goal: average U_A and L_A over all the vectors v_e; since Σ_e v_e v_e^T = I, the sums become traces. [figure builds]

123-125 Bounding Tr(U_A): summing over e and using Σ_e v_e v_e^T = I, Σ_e U_A(v_e) = Tr (u'I - A)^{-2} / (Φ^u(A) - Φ^{u'}(A)) + Φ^{u'}(A).

126-127 Bounding Tr(U_A): Φ^{u'}(A) ≤ Φ^u(A) ≤ 1 by induction.

128-129 Bounding Tr(U_A): Φ^u(A) - Φ^{u'}(A) ≥ δ_U · Tr (u'I - A)^{-2} by convexity, so the first term is at most 1/δ_U.

130-131 Taking Averages.

132 Taking Averages: Σ_e U_A(v_e) ≤ 1/δ_U + Φ^u(A) ≤ 1/2 + 1 = 3/2.

133-134 Taking Averages: with δ_L = 1/3 and δ_U = 2, Σ_e L_A(v_e) ≥ 1/δ_L - Φ_ℓ(A) ≥ 3 - 1 = 2 > 3/2, so some v_e has U_A(v_e) ≤ L_A(v_e).

135 Step i+1: shift by +1/3 and +2. Lemma: we can always choose a vector and scaling so that the potentials do not increase.

136-139 Step i+1: [figure builds]

140 Step 6n: barriers at n and 13n.

141 Step 6n: barriers at n and 13n; a 13-approximation with 6n vectors.

142 Done! Spectral Sparsification Theorem: every G has a (1 + ε)-approximation with O(n/ε²) edges.

143 Nearly Optimal: fixing the number of steps and tightening the parameters gives the bound coming from the zeros of Laguerre polynomials. This is within a factor of 2 of the optimal Ramanujan bound [LPS, Alon-Boppana].

144-145 Why does this work? The barrier potential efficiently bounds the extreme eigenvalues.

146-147 Why does this work? The barrier potential efficiently bounds the extreme eigenvalues and guarantees that a good vector exists.

148 Major Themes:
- The electrical model of interlacing is useful.
- Barrier potentials can be used to iteratively construct matrices with desired spectra.
- The analysis of progress is greedy / local.
- Requires fractional weights on the vectors.
- Instead of directly reasoning about λ_i(A), reason about Tr (zI - A)^{-1}.

149 Open Questions: a fast algorithm (currently O(n^4)); an optimization proof?; more applications.

150 There are no weights here: G = K_n versus H = a random d-regular graph scaled by n/d. |E_G| = O(n²), |E_H| = O(dn).

151 And off by a factor of 2: G = K_n versus H = a random d-regular graph scaled by n/d; we get 4n/ε² edges. |E_G| = O(n²), |E_H| = O(dn).

152 Tomorrow: 2/ε²-degree unweighted approximations of K_n; Ramanujan Graphs.

153 Tomorrow: 2/ε²-degree unweighted approximations of K_n; Ramanujan Graphs. E p^(k) = (1 - (1/m)·∂_x)^k x^n after 4n/ε² steps: this is not a dream.
