Sparsification of Graphs and Matrices


1 Sparsification of Graphs and Matrices. Daniel A. Spielman, Yale University. Joint work with Joshua Batson (MIT), Nikhil Srivastava (MSR), and Shang-Hua Teng (USC). HUJI, May 21, 2014

2 Objective of Sparsification: Approximate any (weighted) graph by a sparse weighted graph.

3 Objective of Sparsification: Approximate any (weighted) graph by a sparse weighted graph. Spanners preserve distances [Chew 89]. Cut-sparsifiers preserve the weight of edges leaving every set $S \subseteq V$ [Benczur-Karger 96].

4 Spectral Sparsification [S-Teng]. Approximate any (weighted) graph by a sparse weighted graph. Graph $G = (V, E, w)$; Laplacian quadratic form $x^T L_G x = \sum_{(u,v) \in E} w_{u,v}\,(x(u) - x(v))^2$.

5–7 Laplacian Quadratic Form, examples. All edge-weights are 1. A vector x assigns a real number to each vertex, and $x^T L_G x = \sum_{(u,v) \in E} (x(u) - x(v))^2$: the sum of the squares of the differences across edges. In the pictured example the form evaluates to 1.

8 Laplacian Quadratic Form, examples. When x is the characteristic vector of a set S, the form sums the weights of the edges on the boundary of S: $x^T L_G x = \sum_{(u,v) \in E,\; u \in S,\; v \notin S} w_{u,v}$.
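
The rest of the talk leans on this quadratic form repeatedly, so here is a minimal numerical sketch of it (the helper name and the 4-cycle example are illustrative, not from the talk):

```python
import numpy as np

def laplacian(n, edges):
    """Build L = sum over edges (u,v) of w * (e_u - e_v)(e_u - e_v)^T."""
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u, u] += w
        L[v, v] += w
        L[u, v] -= w
        L[v, u] -= w
    return L

# A unit-weight 4-cycle.
L = laplacian(4, [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)])

# Characteristic vector of S = {0, 1}: x^T L x = weight of edges leaving S.
x = np.array([1.0, 1.0, 0.0, 0.0])
print(x @ L @ x)  # 2.0 -- edges (1,2) and (3,0) cross the boundary
```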

9–10 Learning on Graphs (Zhu-Ghahramani-Lafferty 03). Infer the values of a function at all vertices from known values at a few vertices: minimize $\sum_{(a,b) \in E} (x(a) - x(b))^2$ subject to the known values.
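
The minimizer can be computed with one linear solve on the unlabeled block of the Laplacian. A sketch (the function name and the example labeling are mine):

```python
import numpy as np

def harmonic_extension(L, known):
    """Minimize x^T L x subject to x[i] = known[i] at the labeled vertices:
    the minimizer solves L[U,U] x_U = -L[U,K] x_K over the unlabeled set U."""
    n = L.shape[0]
    K = sorted(known)
    U = [i for i in range(n) if i not in known]
    xK = np.array([known[i] for i in K])
    x = np.empty(n)
    x[K] = xK
    x[U] = np.linalg.solve(L[np.ix_(U, U)], -L[np.ix_(U, K)] @ xK)
    return x

# Unit-weight 4-cycle; label vertex 0 with 1 and vertex 2 with 0.
L = np.array([[ 2., -1.,  0., -1.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [-1.,  0., -1.,  2.]])
print(harmonic_extension(L, {0: 1.0, 2: 0.0}))  # [1.0, 0.5, 0.0, 0.5]
```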

11–13 Graphs as Resistor Networks. View edges as resistors connecting vertices. Apply voltages at some vertices; measure the induced voltages and the current flow. (In the pictured example, fixing 1V and 0V at two vertices induces voltages of 0.375V, 0.5V, and 0.625V at the remaining vertices.) The induced voltages minimize $\sum_{(a,b) \in E} (v(a) - v(b))^2$ subject to the voltages fixed by the battery.

14 Graphs as Resistor Networks. The effective resistance $R_{\mathrm{eff}}(s,t)$ between s and t is the potential difference induced between them by a unit current flow from s to t. [Figure: two example networks with their effective resistances.]
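
Effective resistance has a closed form in the Laplacian pseudoinverse, $R_{\mathrm{eff}}(s,t) = (e_s - e_t)^T L^{+} (e_s - e_t)$; a small check of that identity (the 4-cycle example is mine, not the talk's):

```python
import numpy as np

def effective_resistance(L, s, t):
    """(e_s - e_t)^T L^+ (e_s - e_t): the potential difference between s and t
    when one unit of current enters at s and leaves at t."""
    b = np.zeros(L.shape[0]); b[s], b[t] = 1.0, -1.0
    return b @ np.linalg.pinv(L) @ b

# Unit-weight 4-cycle: between adjacent vertices, the two parallel paths have
# resistances 1 and 3, so R_eff = (1 * 3) / (1 + 3) = 0.75.
L = np.array([[ 2., -1.,  0., -1.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [-1.,  0., -1.,  2.]])
print(effective_resistance(L, 0, 1))  # 0.75
```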

15–17 Laplacian Matrices. $x^T L_G x = \sum_{(u,v) \in E} w_{u,v} (x(u) - x(v))^2$, and $L_G = \sum_{(u,v) \in E} w_{u,v} L_{u,v}$, e.g. $L_{1,2} = \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}$. Equivalently, $L_G = \sum_{(u,v) \in E} w_{u,v}\, b_{u,v} b_{u,v}^T$ where $b_{u,v} = \delta_u - \delta_v$: a sum of outer products. $L_G$ is positive semidefinite, and if G is connected its nullspace is $\mathrm{Span}(\mathbf{1})$.
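
The outer-product form is the one the matrix-sparsification argument later builds on. A quick sketch verifying the two stated properties (example graph mine):

```python
import numpy as np

def laplacian_outer(n, edges):
    """L = sum_e w_e * b_e b_e^T with b_{u,v} = delta_u - delta_v."""
    L = np.zeros((n, n))
    for u, v, w in edges:
        b = np.zeros(n); b[u], b[v] = 1.0, -1.0
        L += w * np.outer(b, b)
    return L

L = laplacian_outer(4, [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)])
print(np.all(np.linalg.eigvalsh(L) >= -1e-12))  # positive semidefinite
print(L @ np.ones(4))                           # all-ones vector is in the nullspace
```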

18–19 Inequalities on Graphs and Matrices. For matrices M and $\tilde M$: $M \preceq \tilde M$ if $x^T M x \le x^T \tilde M x$ for all x. For graphs $G = (V, E, w)$ and $H = (V, F, z)$: $G \preceq H$ if $L_G \preceq L_H$, and $G \preceq kH$ if $L_G \preceq k L_H$.

20–21 Approximations of Graphs and Matrices. $M \approx_\epsilon \tilde M$ if $\frac{1}{1+\epsilon} M \preceq \tilde M \preceq (1+\epsilon) M$. For graphs $G = (V, E, w)$ and $H = (V, F, z)$: $G \approx_\epsilon H$ if $L_G \approx_\epsilon L_H$. That is, for all x, $\frac{1}{1+\epsilon} \le \frac{\sum_{(u,v) \in F} z_{u,v} (x(u) - x(v))^2}{\sum_{(u,v) \in E} w_{u,v} (x(u) - x(v))^2} \le 1 + \epsilon$.
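
For small matrices one can measure the best ε directly from the nonzero relative eigenvalues of the pair; a sketch (function name mine, assuming $L_G$ and $L_H$ share a nullspace):

```python
import numpy as np

def approximation_eps(LG, LH, tol=1e-9):
    """Smallest eps with LG/(1+eps) <= LH <= (1+eps) LG, via the nonzero
    eigenvalues of LG^{+/2} LH LG^{+/2}."""
    w, Q = np.linalg.eigh(LG)
    keep = w > tol
    S = Q[:, keep] / np.sqrt(w[keep])        # LG^{+/2}, restricted to range(LG)
    rel = np.linalg.eigvalsh(S.T @ LH @ S)   # relative eigenvalues
    return max(rel.max() - 1.0, 1.0 / rel.min() - 1.0)

# Scaling a Laplacian by 1.1 is exactly a 0.1-approximation.
LG = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])
print(approximation_eps(LG, 1.1 * LG))  # ~0.1
```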

22 Implications of Approximation. If $G \approx_\epsilon H$: boundaries of sets are similar; effective resistances are similar; $L_H$ and $L_G$ have similar eigenvalues; $L_G^{+} \approx_\epsilon L_H^{+}$; solutions to systems of linear equations are similar.

23 Spectral Sparsification [S-Teng]. For an input graph G with n vertices, find a sparse graph H having Õ(n) edges so that $G \approx_\epsilon H$.

24 Why? Solving linear equations in Laplacian matrices: a key part of nearly-linear time algorithms, used for learning on graphs, maxflow, PDEs, and more. Sparsifiers preserve eigenvectors, eigenvalues, and electrical properties. They generalize expanders, and give certifiable cut-sparsifiers.

25–26 Approximations of Complete Graphs are Expanders. Expanders: d-regular graphs on n vertices (n grows, d stays fixed) in which every set of vertices has a large boundary; random walks on them mix quickly; they are incredibly useful. Weak expanders: eigenvalues bounded away from 0. Strong expanders: all eigenvalues near d.

27–29 Example: Approximating a Complete Graph. For G the complete graph on n vertices, all non-zero eigenvalues of $L_G$ equal n: for $x \perp \mathbf{1}$ with $\|x\| = 1$, $x^T L_G x = n$. For H a d-regular strong expander, all non-zero eigenvalues of $L_H$ are close to d: for $x \perp \mathbf{1}$ with $\|x\| = 1$, $x^T L_H x \approx d$. So $\frac{n}{d} H$ is a good approximation of G.

30–32 Best Approximations of Complete Graphs. Ramanujan expanders [Margulis, Lubotzky-Phillips-Sarnak]: $d - 2\sqrt{d-1} \le \lambda(L_H) \le d + 2\sqrt{d-1}$ for every non-zero eigenvalue. Cannot do better if n grows while d is fixed [Alon-Boppana]. Can we approximate every graph this well?

33–35 Example: Dumbbell. G: two complete graphs $K_n$ joined by a single edge of weight 1. H: two d-regular Ramanujan graphs, each scaled by $n/d$, joined by the same weight-1 edge. Decomposing $G = G_1 + G_2 + G_3$ and $H = H_1 + H_2 + H_3$ (the two cliques and the middle edge): $G_1 \preceq (1+\epsilon) H_1$, $G_2 = H_2$, and $G_3 \preceq (1+\epsilon) H_3$, so $G \preceq (1+\epsilon) H$.

36 Cut-Sparsifiers [Benczur-Karger 96]. For every G there is an H with $O(n \log n / \epsilon^2)$ edges such that for all $x \in \{0,1\}^n$, $\frac{1}{1+\epsilon} \le \frac{x^T L_H x}{x^T L_G x} \le 1 + \epsilon$.

37–39 Cut-approximation is different. [Figure: k parallel paths of length m between two endpoint vertices, plus one long edge of weight 1 joining the endpoints directly.] For the test vector that increases by 1 along each path, $x^T L_G x = km + m^2$: the long edge alone contributes the $m^2$ term, so a spectral sparsifier needs the long edge if $k < m$.

40–42 Main Theorems for Graphs. For every $G = (V, E, w)$ there is an $H = (V, F, z)$ such that $G \approx_\epsilon H$ and $|F| \le |V| (2+\epsilon)^2 / \epsilon^2$: within a factor of 2 of the Ramanujan bound. By careful random sampling one gets $H$ with $O(n \log n / \epsilon^2)$ edges (S-Srivastava 08), in time $O(|E| \log^2 |V| \log(1/\epsilon))$ (Koutis-Levin-Peng 12).

43 Sparsification by Random Sampling. Assign a probability $p_{u,v}$ to each edge (u,v). Include edge (u,v) in H with probability $p_{u,v}$, and if it is included, give it weight $w_{u,v} / p_{u,v}$. Then $\mathbb{E}[L_H] = \sum_{(u,v) \in E} p_{u,v} (w_{u,v} / p_{u,v}) L_{u,v} = L_G$.

44 Sparsification by Random Sampling. Choose $p_{u,v}$ to be $w_{u,v}$ times the effective resistance between u and v. Low resistance between u and v means there are many alternate routes for current to flow, so the edge is not critical. The proof uses random matrix concentration bounds (Rudelson, Ahlswede-Winter, Tropp, etc.).
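
A one-shot sketch of this sampling scheme; the oversampling constant C is illustrative (it controls the accuracy/size tradeoff), not the constant from the analysis:

```python
import numpy as np

def sample_sparsifier(n, edges, C=1.0, seed=0):
    """Keep edge e with probability p_e ~ C log(n) w_e R_eff(e), reweighted."""
    L = np.zeros((n, n))
    incidence = []
    for u, v, w in edges:
        b = np.zeros(n); b[u], b[v] = 1.0, -1.0
        L += w * np.outer(b, b)
        incidence.append(b)
    Lp = np.linalg.pinv(L)
    rng = np.random.default_rng(seed)
    H = []
    for (u, v, w), b in zip(edges, incidence):
        p = min(1.0, C * np.log(n) * w * (b @ Lp @ b))  # b^T L^+ b = R_eff(u,v)
        if rng.random() < p:
            H.append((u, v, w / p))  # reweight so that E[L_H] = L_G
    return H

# Example: sparsify a complete graph on 50 vertices.
n = 50
edges = [(i, j, 1.0) for i in range(n) for j in range(i + 1, n)]
print(len(edges), len(sample_sparsifier(n, edges)))  # far fewer edges survive
```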

45–46 Matrix Sparsification. $M = B B^T = \sum_e b_e b_e^T$, and $\tilde M = \sum_e s_e b_e b_e^T$ with most $s_e = 0$. Want $\frac{1}{1+\epsilon} M \preceq \tilde M \preceq (1+\epsilon) M$.

47 Main Theorem (Batson-S-Srivastava). For $M = \sum_e b_e b_e^T$, there exist $s_e$ so that $M \approx_\epsilon \tilde M$ for $\tilde M = \sum_e s_e b_e b_e^T$, with at most $n (2+\epsilon)^2 / \epsilon^2$ of the $s_e$ non-zero.

48–51 Simplification of Matrix Sparsification. $\frac{1}{1+\epsilon} M \preceq \tilde M \preceq (1+\epsilon) M$ is equivalent to $\frac{1}{1+\epsilon} I \preceq M^{-1/2} \tilde M M^{-1/2} \preceq (1+\epsilon) I$. Set $v_e = M^{-1/2} b_e$; then $\sum_e v_e v_e^T = I$, a decomposition of the identity: $\sum_e \langle u, v_e \rangle^2 = \|u\|^2$ for every u. So we have $\sum_e v_e v_e^T = I$ and need $\sum_e s_e v_e v_e^T \approx I$. Random sampling sets $p_e = \|v_e\|^2$.
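
A sketch of this reduction to isotropic position, using the pseudoinverse square root since graph Laplacians are singular (helper name and random example mine):

```python
import numpy as np

def isotropic_vectors(B, tol=1e-9):
    """Map each column b_e of B to v_e = M^{+/2} b_e for M = B B^T, restricted
    to range(M), so that sum_e v_e v_e^T = I on that subspace."""
    M = B @ B.T
    w, Q = np.linalg.eigh(M)
    keep = w > tol
    S = Q[:, keep] / np.sqrt(w[keep])   # M^{+/2} as a map onto range(M)
    return S.T @ B                      # column e is v_e

B = np.random.default_rng(0).standard_normal((4, 20))
V = isotropic_vectors(B)
print(np.allclose(V @ V.T, np.eye(V.shape[0])))  # True: a decomposition of I
```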

52 What happens when we add a vector?

53 Interlacing. [Figure: the eigenvalues of $A + vv^T$ interlace those of A: $\lambda_i(A) \le \lambda_i(A + vv^T) \le \lambda_{i+1}(A)$ for $i < n$.]

54–56 More precisely. Characteristic polynomial: $p_A(x) = \det(xI - A)$. Rank-one update: $p_{A + vv^T}(x) = \left(1 + \sum_i \frac{\langle v, u_i \rangle^2}{\lambda_i - x}\right) p_A(x)$, where $A u_i = \lambda_i u_i$. The new eigenvalues of $A + vv^T$ are the zeros of $1 + \sum_i \frac{\langle v, u_i \rangle^2}{\lambda_i - x}$.
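
A numerical check of the rank-one update formula and of interlacing, on random data of my choosing:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5)); A = (A + A.T) / 2
lam, U = np.linalg.eigh(A)
v = rng.standard_normal(5)

mu = np.linalg.eigvalsh(A + np.outer(v, v))
c = (U.T @ v) ** 2                          # the weights <v, u_i>^2

# Each new eigenvalue is a zero of f(x) = 1 + sum_i <v,u_i>^2 / (lambda_i - x).
f = lambda x: 1 + np.sum(c / (lam - x))
print(max(abs(f(m)) for m in mu))           # ~0

# Interlacing: lambda_i <= mu_i, and mu_i <= lambda_{i+1} for i < n.
print(np.all(mu >= lam - 1e-9), np.all(mu[:-1] <= lam[1:] + 1e-9))
```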

57–60 Physical model of interlacing. Think of each $\lambda_i$ as a positive unit charge resting at a barrier on a slope; $u_i$ is the corresponding eigenvector, and $\langle v, u_i \rangle^2$ is the charge the added vector v places on barrier i. Barriers repel the eigenvalues with an inverse-law repulsion, while gravity pulls them back down the slope.

61–65 Examples. Ex1: all weight on $u_1$, i.e. $v = u_1$. Gravity keeps the uncharged eigenvalues resting on their barriers, while the charged eigenvalue is pushed up against the next barrier.

66–68 Ex2: Equal weight on $u_1$ and $u_2$. [Animation: the two charged eigenvalues each move part of the way toward the next barrier.]

69–70 Ex3: Equal weight $1/n$ on all of $u_1, u_2, \dots, u_n$: every eigenvalue advances by about $+1/n$.

71 Adding a random $v_e$. Because the $v_e$ form a decomposition of the identity, $\mathbb{E}_e\left[\langle v_e, u_i \rangle^2\right] = 1/m$. Hence $\mathbb{E}_e\left[p_{A + v_e v_e^T}\right] = \left(1 + \frac{1}{m} \sum_i \frac{1}{\lambda_i - x}\right) p_A = p_A - \frac{1}{m} \frac{d}{dx} p_A$.

72–73 Many random $v_e$. $\mathbb{E}_e\left[p_{A + v_e v_e^T}\right](x) = \left(1 - \frac{1}{m} \frac{d}{dx}\right) p_A(x)$, so $\mathbb{E}_{e_1, \dots, e_k}\left[p_{v_{e_1} v_{e_1}^T + \cdots + v_{e_k} v_{e_k}^T}\right](x) = \left(1 - \frac{1}{m} \frac{d}{dx}\right)^k x^n$. This is an associated Laguerre polynomial! For $k = n/\epsilon^2$, its roots lie between $(1-\epsilon)^2 \frac{n}{\epsilon^2 m}$ and $(1+\epsilon)^2 \frac{n}{\epsilon^2 m}$.

74 Matrix Sparsification Proof Sketch. Have $\sum_e v_e v_e^T = I$; want $\sum_e s_e v_e v_e^T \approx I$. Will do this with $|\{e : s_e \ne 0\}| \le 6n$ and all eigenvalues between 1 and 13: a 2.6-approximation.

75 Broad outline: moving barriers. [Figure: the eigenvalues sit at 0 on the real line, between a lower barrier at −n and an upper barrier at n.]

76–79 Step 1. Add one vector and advance both barriers: the lower barrier moves from −n to −n + 1/3 (a tighter constraint) and the upper barrier moves from n to n + 2 (a looser constraint).

80–89 Step i+1. [Animation: at each step a vector is added and the barriers advance again, the lower by +1/3 and the upper by +2, with all eigenvalues staying strictly between them.]

90–91 Step 6n. After 6n steps the lower barrier is at $-n + 6n/3 = n$ and the upper barrier is at $n + 2 \cdot 6n = 13n$, so all eigenvalues lie between n and 13n: a 2.6-approximation with 6n vectors.

92–93 Problem: we need to show that an appropriate vector and scale always exist. Knowing only that the eigenvalues lie between the barriers is not strong enough for induction.

94 Problems. If there are many small eigenvalues, adding one vector can only move one of them. Bunched large eigenvalues repel the highest one.

95–97 The Lower Barrier Potential Function. $\Phi_\ell(A) = \sum_i \frac{1}{\lambda_i - \ell} = \mathrm{Tr}\left[(A - \ell I)^{-1}\right]$. If $\Phi_\ell(A) \le 1$ then $\lambda_{\min}(A) \ge \ell + 1$; more generally, no eigenvalue lies within distance 1 of $\ell$, no 2 eigenvalues within distance 2, no 3 within distance 3, ..., no k within distance k.

98 The Upper Barrier Potential Function. $\Phi^u(A) = \sum_i \frac{1}{u - \lambda_i} = \mathrm{Tr}\left[(uI - A)^{-1}\right]$.
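
Both potentials are one-line trace computations; a sketch, including the starting configuration used on the next slide:

```python
import numpy as np

def lower_potential(A, l):
    """Phi_l(A) = Tr[(A - l I)^{-1}] = sum_i 1/(lambda_i - l); needs l < lambda_min."""
    return np.trace(np.linalg.inv(A - l * np.eye(A.shape[0])))

def upper_potential(A, u):
    """Phi^u(A) = Tr[(u I - A)^{-1}] = sum_i 1/(u - lambda_i); needs u > lambda_max."""
    return np.trace(np.linalg.inv(u * np.eye(A.shape[0]) - A))

# Starting point of the construction: A = 0 with barriers at -n and n,
# so each potential is n * (1/n) = 1.
n = 6
A = np.zeros((n, n))
print(lower_potential(A, -n), upper_potential(A, n))  # 1.0 1.0
```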

99–100 The Beginning. Start from A = 0 with the barriers at −n and n: every eigenvalue is 0, so both potentials equal $n \cdot \frac{1}{n} = 1$.

101–106 Step i+1. Lemma: we can always choose an e and an $s_e$ so that after adding $s_e v_e v_e^T$ and advancing the barriers (+1/3 below, +2 above), neither potential increases.

107–108 Step 6n. The barriers end at n and 13n: a 2.6-approximation with 6n vectors.

109 Goal. Lemma: we can always choose an e and an $s_e$ so that adding $s_e v_e v_e^T$ and advancing the barriers (+1/3, +2) does not increase either potential.

110–113 Upper Barrier Update. Add $s vv^T$ and set $u' = u + 2$. Then $\Phi^{u'}(A + s vv^T) = \mathrm{Tr}\left[(u' I - A - s vv^T)^{-1}\right] = \Phi^{u'}(A) + \frac{s\, v^T (u' I - A)^{-2} v}{1 - s\, v^T (u' I - A)^{-1} v}$ by the Sherman-Morrison formula. We need this to be $\le \Phi^u(A)$.
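
A quick numerical confirmation of that Sherman-Morrison trace identity (random data of my choosing):

```python
import numpy as np

rng = np.random.default_rng(2)
A = np.diag(rng.uniform(0.0, 1.0, 5))      # eigenvalues in (0, 1)
v = 0.1 * rng.standard_normal(5)
s, u2 = 0.5, 3.0                           # scale s and shifted barrier u'

direct = np.trace(np.linalg.inv(u2 * np.eye(5) - A - s * np.outer(v, v)))
M1 = np.linalg.inv(u2 * np.eye(5) - A)     # (u'I - A)^{-1}
update = np.trace(M1) + s * (v @ M1 @ M1 @ v) / (1 - s * (v @ M1 @ v))
print(np.isclose(direct, update))          # True
```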

114–115 How much of $vv^T$ can we add? Rearranging: $\Phi^{u'}(A + s vv^T) \le \Phi^u(A)$ iff $s\, v^T \left( \frac{(u' I - A)^{-2}}{\Phi^u(A) - \Phi^{u'}(A)} + (u' I - A)^{-1} \right) v \le 1$. Write this as $s\, v^T U_A v \le 1$.

116 Lower Barrier. Similarly: $\Phi_{\ell'}(A + s vv^T) \le \Phi_\ell(A)$ iff $1 \le s\, v^T \left( \frac{(A - \ell' I)^{-2}}{\Phi_{\ell'}(A) - \Phi_\ell(A)} - (A - \ell' I)^{-1} \right) v$. Write this as $1 \le s\, v^T L_A v$.

117 Goal. Show that we can always add some vector $s vv^T$ while respecting both barriers (+1/3, +2). Need: $s\, v^T U_A v \le 1 \le s\, v^T L_A v$.

118–121 Two expectations. Need $s\, v^T U_A v \le 1 \le s\, v^T L_A v$. Can show: $\mathbb{E}_e\left[v_e^T U_A v_e\right] \le \frac{3}{2m}$ and $\mathbb{E}_e\left[v_e^T L_A v_e\right] \ge \frac{2}{m}$. So $\mathbb{E}_e\left[v_e^T U_A v_e\right] \le \mathbb{E}_e\left[v_e^T L_A v_e\right]$, hence there exists an e with $v_e^T U_A v_e \le v_e^T L_A v_e$, and an s for which 1 lies between $s\, v_e^T U_A v_e$ and $s\, v_e^T L_A v_e$.

122 Bounding expectations. $v^T U_A v = \mathrm{Tr}\left(U_A\, vv^T\right)$, so $\mathbb{E}_e\, \mathrm{Tr}\left(U_A\, v_e v_e^T\right) = \mathrm{Tr}\left(U_A\, \mathbb{E}_e\left[v_e v_e^T\right]\right) = \mathrm{Tr}\left(U_A \cdot I/m\right) = \mathrm{Tr}(U_A)/m$.

123–124 Bounding expectations. $\mathrm{Tr}(U_A) = \frac{\mathrm{Tr}\left[(u' I - A)^{-2}\right]}{\Phi^u(A) - \Phi^{u'}(A)} + \mathrm{Tr}\left[(u' I - A)^{-1}\right]$. The second term equals $\Phi^{u'}(A) \le \Phi^u(A) \le 1$, as the barrier function is monotone decreasing in u. The numerator of the first term is the derivative of the barrier function in u; as the barrier function is convex, the first term is at most $\frac{1}{u' - u} = \frac{1}{2}$. So $\mathrm{Tr}(U_A) \le 1/2 + 1 = 3/2$.

125 Bounding expectations. Similarly, $\mathrm{Tr}(L_A) \ge \frac{1}{\ell' - \ell} - 1 = \frac{1}{1/3} - 1 = 2$.

126 Step i+1. Lemma: we can always choose e and $s_e$ (with barrier shifts +1/3 and +2) so that the potentials do not increase.
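
Putting the pieces together, here is a compact, unoptimized sketch of the greedy barrier method; the constants (+1/3, +2, 6n steps) follow the talk, but the function name, the argmax selection rule, and the choice of s are illustrative:

```python
import numpy as np

def bss_sparsify(V):
    """Greedy barrier-method sketch after Batson-Spielman-Srivastava.

    V: n x m whose columns v_e satisfy sum_e v_e v_e^T = I. Runs 6n steps
    with barrier shifts +1/3 and +2, so the final eigenvalues land in
    roughly [n, 13n]. The lemma only needs some e with v^T U_A v <= v^T L_A v;
    taking the argmax of the gap is one way to find such an e."""
    n, m = V.shape
    A = np.zeros((n, n))
    l, u = -float(n), float(n)
    s = np.zeros(m)
    for _ in range(6 * n):
        l2, u2 = l + 1.0 / 3.0, u + 2.0
        I = np.eye(n)
        Linv = np.linalg.inv(A - l2 * I)            # (A - l'I)^{-1}
        Uinv = np.linalg.inv(u2 * I - A)            # (u'I - A)^{-1}
        phi_l = np.trace(np.linalg.inv(A - l * I))  # lower potential at l
        phi_u = np.trace(np.linalg.inv(u * I - A))  # upper potential at u
        UA = Uinv @ Uinv / (phi_u - np.trace(Uinv)) + Uinv
        LA = Linv @ Linv / (np.trace(Linv) - phi_l) - Linv
        uq = np.einsum('ei,ij,ej->e', V.T, UA, V.T)  # v_e^T U_A v_e
        lq = np.einsum('ei,ij,ej->e', V.T, LA, V.T)  # v_e^T L_A v_e
        e = int(np.argmax(lq - uq))       # the expectations guarantee uq <= lq
        t = 2.0 / (uq[e] + lq[e])         # then t*uq <= 1 <= t*lq
        A += t * np.outer(V[:, e], V[:, e])
        s[e] += t
        l, u = l2, u2
    return A, s

# Example: random isotropic vectors (orthonormal rows give V V^T = I).
n, m = 8, 200
Q, _ = np.linalg.qr(np.random.default_rng(3).standard_normal((m, m)))
A, s = bss_sparsify(Q[:, :n].T)
print(np.count_nonzero(s), np.linalg.eigvalsh(A)[[0, -1]])  # <= 6n vectors, eigs in ~[n, 13n]
```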

127 Twice-Ramanujan Sparsifiers. Fixing the steps and tightening the parameters gives the ratio $\frac{\lambda_{\max}(A)}{\lambda_{\min}(A)} \le \frac{d + 1 + 2\sqrt{d}}{d + 1 - 2\sqrt{d}}$, using less than twice as many edges as a Ramanujan expander of the same quality.

128 Open Questions. The Ramanujan bound. Properties of vectors that come from graphs? A faster algorithm: a union of random Hamiltonian cycles?
