Chapter 11. Matrix Algorithms and Graph Partitioning. M. E. J. Newman. June 10, 2016.
1 Chapter 11: Matrix Algorithms and Graph Partitioning. M. E. J. Newman. June 10, 2016.
2 Table of Contents
1 Eigenvalues and Eigenvectors: Eigenvector Centrality; The Second Largest Eigenvalue and Eigenvector
2 Dividing Networks into Clusters: Partitioning and Community Detection
3 Graph Partitioning: Why Partitioning is Hard; The Kernighan-Lin Algorithm
4 Community Detection: Simple Modularity Maximization; Spectral Modularity Maximization; Division into More than Two Groups; Other Modularity Maximization Methods
3 Eigenvalue and Eigenvector: Eigenvector Centrality
Leading Eigenvectors and Eigenvector Centrality
Eigenvector centrality: A x = κ_1 x, (1)
where κ_1 is the largest eigenvalue of the adjacency matrix A and x is the corresponding eigenvector. The eigenvector centrality of a node i is defined to be the ith element of the leading eigenvector of the adjacency matrix. There are many ways to compute eigenvalues and eigenvectors numerically; see, for example, Numerical Recipes in C.
4 Eigenvalue and Eigenvector: Eigenvector Centrality
Power method: one of the simplest and fastest methods for calculating the eigenvector centrality.
1. Start with any initial vector x(0).
2. Multiply it repeatedly by the adjacency matrix A (see Sec. 7.2): x(t) = A^t x(0). (2)
As t -> infinity, x(t) converges to the leading eigenvector of A.
5 Eigenvalue and Eigenvector: Eigenvector Centrality
Caveats of the power method:
1. The method fails if the initial vector x(0) happens to be orthogonal to the leading eigenvector. Solution: choose an initial vector with all elements positive.
2. The elements of the vector tend to grow on each iteration: each multiplication scales the vector by approximately the leading eigenvalue, which is larger than 1. Solution: periodically renormalize the vector.
3. How long do we need to go on multiplying by A before the result converges to the leading eigenvector? Solution: perform the calculation in parallel for two different initial vectors and watch for when they agree to within some prescribed tolerance.
Finding the largest eigenvalue κ_1: once the algorithm has found the leading eigenvector, one more multiplication by the adjacency matrix scales that vector by exactly a factor of κ_1.
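The power method with these fixes can be sketched in a few lines of NumPy (an illustrative sketch, not from the slides; the function name and the triangle test graph are my own):

```python
import numpy as np

def power_method(A, tol=1e-10, max_iter=10_000):
    """Leading eigenvalue/eigenvector of A by repeated multiplication."""
    n = A.shape[0]
    x = np.ones(n) / np.sqrt(n)          # all-positive start: cannot be
    for _ in range(max_iter):            # orthogonal to the leading eigvec
        x_new = A @ x
        x_new /= np.linalg.norm(x_new)   # renormalize to stop the growth
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    # One more multiplication scales x by exactly kappa_1 (Rayleigh quotient)
    kappa_1 = x @ (A @ x)
    return kappa_1, x

# Small test graph: the triangle K3, whose leading eigenvalue is 2
A = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
kappa_1, x = power_method(A)
```

Here the two-starting-vector convergence test of caveat 3 is replaced by the simpler check that successive normalized iterates agree.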
6 Eigenvalue and Eigenvector: Eigenvector Centrality
Calculating Other Eigenvalues and Eigenvectors
The power method finds the largest eigenvalue of a matrix and the corresponding eigenvector, but sometimes we need other eigenvalues and eigenvectors. E.g., the algebraic connectivity, the second-smallest eigenvalue λ_2 of the graph Laplacian: λ_2 is non-zero iff the network is connected, and it is related to graph bisection.
Finding the smallest eigenvalue and eigenvector: let λ_1 <= λ_2 <= ... <= λ_N be the eigenvalues of the matrix L and v_1, v_2, ..., v_N the corresponding eigenvectors. Then
(λ_N I - L) v_i = (λ_N - λ_i) v_i. (3)
Eq. (3) means that λ_N - λ_i is an eigenvalue of λ_N I - L with corresponding eigenvector v_i, and for i = 1 it is the largest such eigenvalue.
1. Find λ_N using the power method.
2. Find λ_1 from Eq. (3) using the power method again.
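The two-step shifting trick can be sketched as follows (my own illustration: the helper `leading_eig` and the path-graph example are assumptions, and for a Laplacian the recovered smallest eigenvalue is of course 0):

```python
import numpy as np

def leading_eig(M, x0, tol=1e-12, max_iter=100_000):
    """Leading eigenpair of a symmetric positive-semidefinite matrix
    by power iteration with per-step renormalization."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(max_iter):
        y = M @ x
        y /= np.linalg.norm(y)
        if np.linalg.norm(y - x) < tol:
            break
        x = y
    return x @ (M @ x), x

# Laplacian of the path graph 0-1-2: eigenvalues 0, 1, 3
L = np.array([[ 1., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])
rng = np.random.default_rng(0)
x0 = rng.standard_normal(3)              # generic start vector

lam_N, _ = leading_eig(L, x0)            # step 1: largest eigenvalue of L
# Step 2: eigenvalues of lam_N*I - L are lam_N - lam_i, so its largest
# eigenvalue corresponds to the smallest eigenvalue of L, as in Eq. (3).
mu, v1 = leading_eig(lam_N * np.eye(3) - L, x0)
lam_1 = lam_N - mu                       # smallest eigenvalue of L
```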
7 Eigenvalue and Eigenvector: The Second Largest Eigenvalue and Eigenvector
Finding the second largest eigenvalue and eigenvector: use Gram-Schmidt orthogonalization. Let v_1 be the normalized eigenvector corresponding to the largest eigenvalue of a matrix A. Choose an arbitrary vector y orthogonal to v_1: pick any starting vector x and define
y = x - (v_1^T x) v_1. (4)
Then
v_i^T y = v_i^T x - (v_1^T x)(v_i^T v_1) = v_i^T x - (v_1^T x) δ_{i1} = 0 if i = 1, and v_i^T x otherwise. (5)
Therefore the expansion of y in terms of the eigenvectors of A has no term in v_1:
y = Σ_{i=2}^N c_i v_i, where c_i = v_i^T y. (6)
Use the vector y as the starting vector for repeated multiplication by A.
8 Eigenvalue and Eigenvector: The Second Largest Eigenvalue and Eigenvector
Validity of the Gram-Schmidt orthogonalization with the power method: after multiplying y by A a total of t times, we have
y(t) = A^t y(0) = κ_2^t Σ_{i=2}^N c_i (κ_i / κ_2)^t v_i. (7)
The ratio κ_i/κ_2 < 1 for all i > 2, so (κ_i/κ_2)^t -> 0 as t -> infinity.
Finding further eigenvectors and eigenvalues is not trivial; see, for example, Numerical Recipes in C.
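A sketch of this deflation procedure (the helper name is hypothetical, and the diagonal test matrix is chosen so the expected answer is obvious by construction):

```python
import numpy as np

def second_eigenpair(A, v1, x0, tol=1e-12, max_iter=100_000):
    """Second-largest eigenpair of a symmetric matrix A by power
    iteration, with the normalized leading eigenvector v1 projected
    out of the start vector as in Eq. (4)."""
    y = x0 - (v1 @ x0) * v1          # Gram-Schmidt: remove the v1 component
    y /= np.linalg.norm(y)
    for _ in range(max_iter):
        z = A @ y
        z -= (v1 @ z) * v1           # re-project to fight round-off drift
        z /= np.linalg.norm(z)
        # allow for a sign flip each step when kappa_2 < 0
        if min(np.linalg.norm(z - y), np.linalg.norm(z + y)) < tol:
            break
        y = z
    return y @ (A @ y), y

A = np.diag([3., 2., 1.])            # eigenvalues known by construction
v1 = np.array([1., 0., 0.])          # normalized leading eigenvector
kappa_2, v2 = second_eigenpair(A, v1, np.ones(3))
```

The extra projection inside the loop is a standard numerical safeguard: in exact arithmetic Eq. (5) makes it unnecessary, but floating-point error slowly reintroduces a v_1 component.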
9 Dividing Networks into Clusters
Graph Partitioning and Community Detection
Divide the nodes of a network into groups, clusters, or communities based on the pattern of edges in the network, so that the groups formed are tightly knit, with many edges inside groups and only a few edges between groups.
10 Dividing Networks into Clusters: Partitioning and Community Detection
Graph partitioning: the number and size of the groups are fixed by the experimenter. E.g., parallelizing a network computation: minimize the interprocessor communication as much as possible, and balance the workload by assigning (roughly) the same number of nodes to each processor.
Community detection: the number and size of the groups are unspecified; they are determined by the network itself. Community detection can provide clues about the nature of the interactions within each community, and can be used as a tool for understanding the structure of a network, shedding light on large-scale patterns of connection that may not be easily visible in the raw network topology.
11 Graph Partitioning
What will we learn here?
The Kernighan-Lin algorithm (not based on matrix methods).
The spectral partitioning method (based on the spectral properties of the graph Laplacian).
12 Graph Partitioning: Why Partitioning is Hard
Graph bisection: divide a network into two parts; the simplest graph partitioning problem. Repeated bisection is the commonest approach to partitioning a network into an arbitrary number of parts.
Cut size: the number of edges running between vertices in different groups. The goal is to minimize the cut size.
13 Graph Partitioning: Why Partitioning is Hard
Exhaustive search: bisect a network by looking through all possible divisions of the network into two parts of the required sizes and choosing the one with the smallest cut size. The number of ways of dividing a network of N vertices into two groups of sizes N_1 and N_2 is N! / (N_1! N_2!). Using Stirling's formula N! ≈ sqrt(2πN) (N/e)^N and the relation N_1 + N_2 = N, we obtain (dropping constant factors)
N! / (N_1! N_2!) ≈ sqrt(2πN)(N/e)^N / [sqrt(2πN_1)(N_1/e)^{N_1} sqrt(2πN_2)(N_2/e)^{N_2}] ≈ N^{N+1/2} / (N_1^{N_1+1/2} N_2^{N_2+1/2}). (8)
14 Graph Partitioning: Why Partitioning is Hard
If we divide the network into two parts of equal size N/2, then
N! / (N_1! N_2!) ≈ N^{N+1/2} / (N/2)^{N+1} = 2^{N+1} / sqrt(N). (9)
The amount of time (complexity) therefore goes up roughly exponentially with N. The goal of a partitioning algorithm is to find a pretty good division, not the perfect one: a heuristic algorithm.
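A quick numerical check of this growth (my own addition; the constant factor 1/sqrt(2π) dropped in Eq. (8) is restored here so the estimate can be compared with the exact count):

```python
import math

# Number of ways to split N nodes into two equal halves, versus the
# Stirling estimate 2^(N+1)/sqrt(N) of Eq. (9) with the dropped
# 1/sqrt(2*pi) constant restored.
for N in (10, 20, 30):
    exact = math.comb(N, N // 2)
    estimate = 2 ** (N + 1) / math.sqrt(2 * math.pi * N)
    print(f"N={N}: exact={exact}, Stirling estimate={estimate:.0f}")
```

Even at N = 30 there are already over 10^8 equal bisections, which is why exhaustive search is hopeless for real networks.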
15 Graph Partitioning: The Kernighan-Lin (KL) Algorithm
1. Start by dividing the vertices of the network into two groups of the required sizes in any way we like, e.g., randomly.
2. For each pair (i, j) of vertices such that i is in one group and j is in the other, calculate how much the cut size between the groups would change if we were to interchange i and j.
3. Among all pairs (i, j), find the pair that reduces the cut size by the largest amount.
4. If no pair reduces the cut size, find the pair that increases it by the smallest amount.
5. Swap that pair of vertices.
6. Repeat the process, with the restriction that each vertex can be moved only once.
7. When there are no more pairs of vertices to swap, stop the swapping.
16 Graph Partitioning: The Kernighan-Lin Algorithm, cont'd
8. When all swaps have been completed, go back through every state that the network passed through during the swapping procedure and choose among them the state in which the cut size takes its smallest value.
9. Perform this entire process repeatedly, starting each time from the best division of the network found on the previous round.
10. Continue until no improvement in the cut size occurs.
A slow algorithm: O(N^3) for a sparse network.
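One round of steps 1-8 can be sketched directly, though very inefficiently (an illustrative sketch only: a real KL implementation keeps incremental gain tables instead of recomputing the cut size for every candidate swap, and the two-triangle test graph is my own):

```python
import numpy as np

def kl_round(A, group):
    """One round of Kernighan-Lin swapping on adjacency matrix A.
    `group` is a boolean mask: True = group 1, False = group 2."""
    def cut_size(g):
        return A[np.ix_(g, ~g)].sum()     # each crossing edge counted once

    g = group.copy()
    locked = np.zeros(len(g), dtype=bool)
    best_g, best_cut = g.copy(), cut_size(g)
    for _ in range(int(min(g.sum(), (~g).sum()))):
        best_pair, best_delta = None, np.inf
        for i in np.flatnonzero(g & ~locked):     # i in group 1
            for j in np.flatnonzero(~g & ~locked):  # j in group 2
                trial = g.copy()
                trial[i], trial[j] = False, True
                delta = cut_size(trial) - cut_size(g)
                if delta < best_delta:
                    best_pair, best_delta = (i, j), delta
        i, j = best_pair                  # swap the best pair, even if it
        g[i], g[j] = False, True          # increases the cut size
        locked[i] = locked[j] = True      # each vertex moves only once
        if cut_size(g) < best_cut:        # remember the best state seen
            best_g, best_cut = g.copy(), cut_size(g)
    return best_g, best_cut

# Two triangles {0,1,2} and {3,4,5} joined by the single edge 2-3
A = np.zeros((6, 6))
for i, j in [(0,1),(0,2),(1,2),(3,4),(3,5),(4,5),(2,3)]:
    A[i, j] = A[j, i] = 1.0
start = np.array([True, False, True, False, True, False])  # cut size 5
best_g, best_cut = kl_round(A, start)
```

Step 9 of the algorithm would feed `best_g` back in as the starting division for another round until the cut size stops improving.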
17 Graph Partitioning: Spectral Partitioning
Introduced by Fiedler; uses the matrix properties of the graph Laplacian. The cut size is
R = (1/2) Σ_{i,j in different groups} A_ij. (10)
Let us define a quantity s_i for each node i as
s_i = +1 if vertex i belongs to group 1, -1 if vertex i belongs to group 2. (11)
18 Graph Partitioning: Spectral Partitioning
Then (1/2)(1 - s_i s_j) = 1 if i and j are in different groups, and 0 if they are in the same group. Also, for all i,
Σ_j A_ij = k_i. (12)
Thus Eq. (10) becomes
R = (1/4) Σ_ij A_ij (1 - s_i s_j). (13)
Since s_i = ±1, s_i^2 = 1, and
Σ_ij A_ij = Σ_i k_i = Σ_i k_i s_i^2 = Σ_ij k_i δ_ij s_i s_j. (14)
Eq. (13) then becomes
R = (1/4) Σ_ij (k_i δ_ij - A_ij) s_i s_j = (1/4) Σ_ij L_ij s_i s_j, (15)
where L_ij = k_i δ_ij - A_ij is the ij-th element of the graph Laplacian.
19 Graph Partitioning: Spectral Partitioning
In matrix form,
R = (1/4) s^T L s, (16)
where s is the vector with elements s_i. Eq. (16) gives us a matrix formulation of the graph partitioning problem: L specifies the network structure, and s defines a division of the network into groups.
Goal: find the vector s that minimizes Eq. (16). The restriction that s_i takes only the special values ±1 makes the minimization hard, so we use the relaxation method.
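Eq. (16) is easy to verify numerically on a small example (my own check; the two-triangle graph is an arbitrary choice):

```python
import numpy as np

# Check Eq. (16): the cut size R equals (1/4) s^T L s.
A = np.zeros((6, 6))
for i, j in [(0,1),(0,2),(1,2),(3,4),(3,5),(4,5),(2,3)]:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A            # graph Laplacian, L = D - A
s = np.array([1, 1, 1, -1, -1, -1])       # group 1 = {0,1,2}, group 2 = {3,4,5}
R_matrix = s @ L @ s / 4                  # Eq. (16)
g = s > 0
R_direct = A[np.ix_(g, ~g)].sum()         # count the edges between groups
```

Only the single bridge edge 2-3 crosses the division, so both quantities equal 1.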
20 Graph Partitioning: Spectral Partitioning
Constraints I:
1. s_i = ±1: the vector s always points to one of the 2^N corners of an N-dimensional hypercube.
2. The length of s is sqrt(N):
|s| = sqrt(N), or Σ_i s_i^2 = N. (17)
Relax constraint 1 on the direction of s: s can point in any direction in its N-dimensional space, i.e., to any location on the surface of a hypersphere of radius sqrt(N).
21 Graph Partitioning: Spectral Partitioning
Constraint II: the numbers of s_i equal to +1 and -1 must respectively equal the desired sizes of the two groups. Let N_1 and N_2 be the sizes of groups 1 and 2. Then
Σ_i s_i = N_1 - N_2, (18)
or in vector notation
1^T s = N_1 - N_2, (19)
where 1 = (1, 1, ..., 1).
The problem is now to minimize Eq. (16) subject to the two constraints, Eqs. (17) and (18).
22 Graph Partitioning: Spectral Partitioning
Two constraints require two Lagrange multipliers, λ and 2µ:
∂/∂s_i [ Σ_jk L_jk s_j s_k + λ(N - Σ_j s_j^2) + 2µ((N_1 - N_2) - Σ_j s_j) ] = 0. (20)
Performing the derivatives, we find that
Σ_j L_ij s_j = λ s_i + µ, (21)
or in matrix form
L s = λ s + µ 1. (22)
23 Graph Partitioning: Spectral Partitioning
Finding µ: since 1 is an eigenvector of the Laplacian with eigenvalue 0, i.e., L1 = 0, multiplying Eq. (22) on the left by 1^T and using Eq. (19) gives
λ(N_1 - N_2) + µN = 0, or µ = -((N_1 - N_2)/N) λ. (23)
24 Graph Partitioning: Spectral Partitioning
Eigenvalue equation: let us define a new vector x = s + (µ/λ) 1. Then
L x = L(s + (µ/λ) 1) = L s = λ s + µ 1 = λ x, (24)
so x is an eigenvector of the Laplacian with eigenvalue λ.
Properties of x:
1^T x = 1^T s + (µ/λ) 1^T 1 = (N_1 - N_2) - (N_1 - N_2) = 0. (25)
Thus x is orthogonal to 1.
25 Graph Partitioning: Spectral Partitioning
The remaining task is to determine the value of λ that gives the smallest value of R.
From Eq. (24),
R = (1/4) s^T L s = (1/4) x^T L x = (1/4) λ x^T x. (26)
Also,
x^T x = s^T s + 2(µ/λ) 1^T s + (µ^2/λ^2) 1^T 1 = N - 2(N_1 - N_2)^2/N + (N_1 - N_2)^2/N = N - (N_1 - N_2)^2/N = 4 N_1 N_2 / N. (27)
Therefore
R = (N_1 N_2 / N) λ. (28)
26 Graph Partitioning: Spectral Partitioning
Choosing λ and the corresponding eigenvector: R ∝ λ, so choose x proportional to the eigenvector v_2 corresponding to the second-lowest eigenvalue λ_2, with its normalization fixed by Eq. (27). (The lowest eigenvalue λ_1 = 0 is excluded because its eigenvector 1 is not orthogonal to 1, violating Eq. (25).)
Finding the optimal s: from the definition of x,
s = x + ((N_1 - N_2)/N) 1, (29)
or
s_i = x_i + (N_1 - N_2)/N. (30)
27 Graph Partitioning: Spectral Partitioning
Finding the real s: each s_i should be ±1, with exactly N_1 of them +1 and N_2 of them -1. Choose s to maximize the product
s^T (x + ((N_1 - N_2)/N) 1) = Σ_i s_i (x_i + (N_1 - N_2)/N). (31)
The maximum is achieved by assigning s_i = +1 to the N_1 vertices with the largest (most positive) values of x_i + (N_1 - N_2)/N, and s_i = -1 to the remainder. Since (N_1 - N_2)/N is a constant, the most positive values of x_i + (N_1 - N_2)/N are just the most positive values of x_i.
1. Calculate v_2, which has N elements, one for each vertex in the network.
2. Place the N_1 vertices with the most positive elements in group 1.
3. Place the rest in group 2.
If N_1 ≠ N_2, it is arbitrary which group we call group 1 and which group 2, giving two different ways of making the split: either the larger or the smaller group corresponds to the more positive values. Compute R for both splits and choose the one with the smaller value.
28 Graph Partitioning: Spectral Partitioning
Final algorithm:
1. Calculate the eigenvector v_2 corresponding to the second-smallest eigenvalue λ_2 of the graph Laplacian.
2. Sort the elements of the eigenvector in order from largest to smallest.
3. Put the vertices corresponding to the N_1 largest elements in group 1, the rest in group 2, and calculate R.
4. Put the vertices corresponding to the N_1 smallest elements in group 1, the rest in group 2, and calculate R.
5. Between these two divisions of the network, choose the one that gives the smaller value of R.
Complexity: O(Nm), or O(N^2) for a sparse network. For this reason the second eigenvalue is called the algebraic connectivity: it is a measure of how easily a network can be divided.
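The final algorithm fits in a short NumPy sketch (function name and test graph are my own; a dense eigensolver is used for brevity where a large sparse network would call for an iterative method such as the power method of slide 4):

```python
import numpy as np

def spectral_bisection(A, n1):
    """Split a network into groups of size n1 and N - n1 using the
    eigenvector v2 of the second-smallest Laplacian eigenvalue."""
    n = A.shape[0]
    L = np.diag(A.sum(axis=1)) - A
    eigvals, eigvecs = np.linalg.eigh(L)      # eigenvalues in ascending order
    v2 = eigvecs[:, 1]                        # the Fiedler vector

    def cut(g):                               # g: boolean mask of group 1
        return A[np.ix_(g, ~g)].sum()

    order = np.argsort(-v2)                   # most positive elements first
    g_a = np.zeros(n, dtype=bool); g_a[order[:n1]] = True    # n1 largest
    g_b = np.zeros(n, dtype=bool); g_b[order[-n1:]] = True   # n1 smallest
    return min((g_a, g_b), key=cut)           # keep the smaller cut

# Two triangles joined by the edge 2-3; the optimal bisection cuts one edge
A = np.zeros((6, 6))
for i, j in [(0,1),(0,2),(1,2),(3,4),(3,5),(4,5),(2,3)]:
    A[i, j] = A[j, i] = 1.0
g = spectral_bisection(A, 3)                  # boolean mask of group 1
```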
29 Community Detection
Community: a naturally occurring group in a network, regardless of its size or number. The basic goal is similar to that of graph partitioning: separate the network into groups of vertices with only a few connections between them. However, the number and size of the groups are not fixed.
30 Community Detection
Start from the simplest example: divide a network into just two non-overlapping groups, without any constraint on the sizes of the groups other than that they sum to N.
First guess: can minimizing the cut size, without any constraint on the group sizes, find the communities? No: the trivial division, with all nodes in one group and none in the other, has cut size 0. Cut size alone is not a good measure for community detection.
Instead, consider the number of edges within groups, not just the number of edges between groups: find a measure that quantifies how many edges lie within groups in our network relative to the number of such edges expected on the basis of random chance. This is the modularity of Chap. 7. Modularity maximization is a hard problem, so heuristic algorithms are used.
31 Community Detection: Simple Modularity Maximization
One straightforward algorithm, similar to the KL algorithm, divides the network into two communities:
1. Start from some initial division, for example, a random division into equally sized groups.
2. Consider each vertex in the network in turn and calculate how much the modularity would change if that vertex were moved to the other group.
3. Choose the vertex whose movement would most increase, or least decrease, the modularity.
4. Move the chosen vertex.
5. Repeat the process from step 2.
An important constraint: a vertex, once moved, cannot be moved again, at least during this round of the algorithm. When all vertices have been moved exactly once, go back over the states through which the network passed and select the one with the highest modularity. Use that state as the starting condition for another round of the same algorithm, and repeat the whole process until the modularity no longer improves.
32 Community Detection: Simple Modularity Maximization
Example: the karate club network. Complexity: O(Nm).
33 Community Detection: Spectral Modularity Maximization
Modularity:
Q = (1/2m) Σ_ij [A_ij - k_i k_j / 2m] δ(c_i, c_j) = (1/2m) Σ_ij B_ij δ(c_i, c_j), (32)
where c_i is the group or community to which vertex i belongs, δ(m, n) is the Kronecker delta, and
B_ij = A_ij - k_i k_j / 2m. (33)
Note that B_ij has the property
Σ_j B_ij = Σ_j A_ij - (k_i / 2m) Σ_j k_j = k_i - k_i (2m)/(2m) = 0, (34)
and also
Σ_i B_ij = 0. (35)
34 Community Detection: Spectral Modularity Maximization
Let us define a quantity s_i for each node i as
s_i = +1 if vertex i belongs to group 1, -1 if vertex i belongs to group 2. (36)
Note that (1/2)(s_i s_j + 1) = 1 if i and j belong to the same group and 0 otherwise, i.e.,
δ(c_i, c_j) = (1/2)(s_i s_j + 1). (37)
Substituting Eq. (37) into Eq. (32) and using Eq. (34):
Q = (1/4m) Σ_ij B_ij (s_i s_j + 1) = (1/4m) Σ_ij B_ij s_i s_j. (38)
35 Community Detection: Spectral Modularity Maximization
In matrix form,
Q = (1/4m) s^T B s, (39)
where B is called the modularity matrix. Eq. (39) is similar to Eq. (16).
Goal: find the value of s that maximizes Eq. (39). The number of elements with value +1 or -1 is not fixed: the sizes of the groups are unconstrained. Use the relaxation method as before: instead of pointing to one of the corners of an N-dimensional hypercube, s can point in any direction with a fixed length,
s^T s = Σ_i s_i^2 = N. (40)
36 Community Detection: Spectral Modularity Maximization
Using a single Lagrange multiplier β:
∂/∂s_i [ Σ_jk B_jk s_j s_k + β(N - Σ_j s_j^2) ] = 0. (41)
Then we obtain
Σ_j B_ij s_j = β s_i, (42)
or in matrix form
B s = β s. (43)
Eq. (43) implies that s is one of the eigenvectors of B.
37 Community Detection: Spectral Modularity Maximization
Substituting Eq. (43) into Eq. (39) and using Eq. (40), we obtain
Q = (1/4m) β s^T s = (N/4m) β. (44)
Maximum modularity corresponds to the largest eigenvalue β, so we should choose s to be the eigenvector u_1 corresponding to the largest eigenvalue of B. As before, we cannot typically choose s = u_1 exactly, because the elements of s must be ±1. Instead, maximize the product
s^T u_1 = Σ_i s_i [u_1]_i. (45)
38 Community Detection: Spectral Modularity Maximization
The maximum of Eq. (45) is achieved when each term in the sum is non-negative, i.e., when
s_i = +1 if [u_1]_i > 0, and -1 if [u_1]_i < 0. (46)
If [u_1]_i = 0, either choice works.
Since B is not a sparse matrix, the complexity is O(N^3); O(N^2) is possible, see [M. E. J. Newman, PNAS 103, 8577 (2006)].
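The whole spectral method of Eqs. (32)-(46) can be sketched as follows (function name and test graph are my own; a dense eigensolver stands in for the faster sparse method cited above):

```python
import numpy as np

def spectral_modularity_split(A):
    """Two-way community division from the leading eigenvector of the
    modularity matrix B, splitting vertices by the sign of its elements."""
    k = A.sum(axis=1)
    two_m = k.sum()                       # 2m = sum of all degrees
    B = A - np.outer(k, k) / two_m        # modularity matrix, Eq. (33)
    eigvals, eigvecs = np.linalg.eigh(B)  # eigenvalues in ascending order
    u1 = eigvecs[:, -1]                   # leading eigenvector
    s = np.where(u1 >= 0, 1, -1)          # Eq. (46): split by sign
    Q = s @ B @ s / (2 * two_m)           # Eq. (39): Q = s^T B s / 4m
    return s, Q

# Two triangles joined by the edge 2-3: two obvious communities
A = np.zeros((6, 6))
for i, j in [(0,1),(0,2),(1,2),(3,4),(3,5),(4,5),(2,3)]:
    A[i, j] = A[j, i] = 1.0
s, Q = spectral_modularity_split(A)
```

Unlike the bisection of Eq. (16), no group sizes are imposed here; the signs of u_1 alone determine the division.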
39 Community Detection: Division into More than Two Groups
Communities are natural groupings of vertices in networks, and there is no reason to suppose that a network will have just two of them. The number of communities should be fixed by the structure of the network, not by the experimenter: maximize the modularity over divisions into any number of groups.
There are a number of such community detection algorithms. The simplest is an extension of the method of the previous sections: repeated bisection of the network. We start by dividing the network into two parts, then further subdivide those parts into smaller ones, and so on.
Caution: unlike in graph partitioning, the modularity of the complete network does not break up into independent contributions from the separate communities, and the individual maximization of the modularity of those communities treated as separate networks will not, in general, produce the maximum modularity for the network as a whole.
40 Community Detection: Division into More than Two Groups
Consider explicitly the change ΔQ in the modularity of the entire network upon further bisecting a community c of size N_c:
ΔQ = (1/2m) [ (1/2) Σ_{i,j∈c} B_ij (s_i s_j + 1) - Σ_{i,j∈c} B_ij ]
= (1/4m) [ Σ_{i,j∈c} B_ij s_i s_j - Σ_{i,j∈c} B_ij ]
= (1/4m) [ Σ_{i,j∈c} B_ij s_i s_j - Σ_{i,j∈c} B_ij s_i^2 ]
= (1/4m) Σ_{i,j∈c} [ B_ij - δ_ij Σ_{k∈c} B_ik ] s_i s_j. (47)
41 Community Detection: Division into More than Two Groups
In matrix form,
ΔQ = (1/4m) s^T B^(c) s, (48)
where B^(c) is the N_c × N_c matrix with elements
B^(c)_ij = B_ij - δ_ij Σ_{k∈c} B_ik. (49)
Eq. (48) has the same form as Eq. (39), so maximizing ΔQ again amounts to finding the leading eigenvector and dividing the community according to the signs of its elements. Repeat the procedure until no proposed bisection increases the modularity; at that point the bisection algorithm puts all vertices in one of its two groups and none in the other, and the community is indivisible. This works well, but it is not perfect!
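Constructing B^(c) and checking that the trivial "split" that keeps all of c together gives ΔQ = 0 can be sketched as (helper name and test graph are my own):

```python
import numpy as np

def generalized_modularity_matrix(B, c):
    """B^(c) of Eq. (49) for community c (a list of node indices):
    subtract the within-c row sums on the diagonal."""
    Bc = B[np.ix_(c, c)]
    return Bc - np.diag(Bc.sum(axis=1))

# Modularity matrix of the two-triangle graph
A = np.zeros((6, 6))
for i, j in [(0,1),(0,2),(1,2),(3,4),(3,5),(4,5),(2,3)]:
    A[i, j] = A[j, i] = 1.0
k = A.sum(axis=1)
B = A - np.outer(k, k) / k.sum()

Bc = generalized_modularity_matrix(B, [0, 1, 2])
s = np.ones(3)                       # leave the community undivided
dQ = s @ Bc @ s / (2 * k.sum())      # Eq. (48): delta Q = s^T B^(c) s / 4m
```

Note that when c is the whole network, the row sums of B vanish by Eq. (34), so B^(c) reduces to B itself, as it should.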
42 Community Detection: Division into More than Two Groups
Example: failure of the bisection method (figure in the original slides).
43 Community Detection: Other Modularity Maximization Methods
General minimization/maximization algorithms; see the references in the textbook for details.
Simulated annealing: treat minus the modularity as an energy and minimize it. A very slow algorithm.
Genetic algorithm: assign each division a fitness proportional to its modularity; over a series of generations, simulate the preferential reproduction of high-modularity divisions while those of low modularity die out. Very slow, like simulated annealing.
Greedy algorithm: start with each vertex of the network in a one-vertex group of its own. Successively amalgamate groups in pairs, choosing at each step the pair whose amalgamation gives the biggest increase in modularity, or the smallest decrease if no choice gives an increase, until all vertices have merged into a single large community. Then go back over the states through which the network passed during the course of the algorithm and select the one with the highest value of Q.
1 Linear algebra In this section we will discuss vectors and matrices. We denote the (i, j)th entry of a matrix A as A ij, and the ith entry of a vector as v i. 1.1 Vectors and vector operations A vector
More informationClustering. SVD and NMF
Clustering with the SVD and NMF Amy Langville Mathematics Department College of Charleston Dagstuhl 2/14/2007 Outline Fielder Method Extended Fielder Method and SVD Clustering with SVD vs. NMF Demos with
More informationLinear algebra for MATH2601: Theory
Linear algebra for MATH2601: Theory László Erdős August 12, 2000 Contents 1 Introduction 4 1.1 List of crucial problems............................... 5 1.2 Importance of linear algebra............................
More informationAn ADMM Algorithm for Clustering Partially Observed Networks
An ADMM Algorithm for Clustering Partially Observed Networks Necdet Serhat Aybat Industrial Engineering Penn State University 2015 SIAM International Conference on Data Mining Vancouver, Canada Problem
More informationModularity and Graph Algorithms
Modularity and Graph Algorithms David Bader Georgia Institute of Technology Joe McCloskey National Security Agency 12 July 2010 1 Outline Modularity Optimization and the Clauset, Newman, and Moore Algorithm
More informationFinding normalized and modularity cuts by spectral clustering. Ljubjana 2010, October
Finding normalized and modularity cuts by spectral clustering Marianna Bolla Institute of Mathematics Budapest University of Technology and Economics marib@math.bme.hu Ljubjana 2010, October Outline Find
More informationMa/CS 6b Class 20: Spectral Graph Theory
Ma/CS 6b Class 20: Spectral Graph Theory By Adam Sheffer Eigenvalues and Eigenvectors A an n n matrix of real numbers. The eigenvalues of A are the numbers λ such that Ax = λx for some nonzero vector x
More informationCOMP 558 lecture 18 Nov. 15, 2010
Least squares We have seen several least squares problems thus far, and we will see more in the upcoming lectures. For this reason it is good to have a more general picture of these problems and how to
More informationLECTURE NOTE #11 PROF. ALAN YUILLE
LECTURE NOTE #11 PROF. ALAN YUILLE 1. NonLinear Dimension Reduction Spectral Methods. The basic idea is to assume that the data lies on a manifold/surface in D-dimensional space, see figure (1) Perform
More informationRelations Between Adjacency And Modularity Graph Partitioning: Principal Component Analysis vs. Modularity Component Analysis
Relations Between Adjacency And Modularity Graph Partitioning: Principal Component Analysis vs. Modularity Component Analysis Hansi Jiang Carl Meyer North Carolina State University October 27, 2015 1 /
More informationPower Grid Partitioning: Static and Dynamic Approaches
Power Grid Partitioning: Static and Dynamic Approaches Miao Zhang, Zhixin Miao, Lingling Fan Department of Electrical Engineering University of South Florida Tampa FL 3320 miaozhang@mail.usf.edu zmiao,
More informationThe Matrix-Tree Theorem
The Matrix-Tree Theorem Christopher Eur March 22, 2015 Abstract: We give a brief introduction to graph theory in light of linear algebra. Our results culminates in the proof of Matrix-Tree Theorem. 1 Preliminaries
More informationLecture 14: Random Walks, Local Graph Clustering, Linear Programming
CSE 521: Design and Analysis of Algorithms I Winter 2017 Lecture 14: Random Walks, Local Graph Clustering, Linear Programming Lecturer: Shayan Oveis Gharan 3/01/17 Scribe: Laura Vonessen Disclaimer: These
More informationΩ R n is called the constraint set or feasible set. x 1
1 Chapter 5 Linear Programming (LP) General constrained optimization problem: minimize subject to f(x) x Ω Ω R n is called the constraint set or feasible set. any point x Ω is called a feasible point We
More informationProblem Set 4. General Instructions
CS224W: Analysis of Networks Fall 2017 Problem Set 4 General Instructions Due 11:59pm PDT November 30, 2017 These questions require thought, but do not require long answers. Please be as concise as possible.
More informationSpectral Graph Theory
Spectral Graph Theory Aaron Mishtal April 27, 2016 1 / 36 Outline Overview Linear Algebra Primer History Theory Applications Open Problems Homework Problems References 2 / 36 Outline Overview Linear Algebra
More informationData Analysis and Manifold Learning Lecture 3: Graphs, Graph Matrices, and Graph Embeddings
Data Analysis and Manifold Learning Lecture 3: Graphs, Graph Matrices, and Graph Embeddings Radu Horaud INRIA Grenoble Rhone-Alpes, France Radu.Horaud@inrialpes.fr http://perception.inrialpes.fr/ Outline
More informationClustering using Mixture Models
Clustering using Mixture Models The full posterior of the Gaussian Mixture Model is p(x, Z, µ,, ) =p(x Z, µ, )p(z )p( )p(µ, ) data likelihood (Gaussian) correspondence prob. (Multinomial) mixture prior
More informationLinear Programming Redux
Linear Programming Redux Jim Bremer May 12, 2008 The purpose of these notes is to review the basics of linear programming and the simplex method in a clear, concise, and comprehensive way. The book contains
More informationQuick Tour of Linear Algebra and Graph Theory
Quick Tour of Linear Algebra and Graph Theory CS224W: Social and Information Network Analysis Fall 2014 David Hallac Based on Peter Lofgren, Yu Wayne Wu, and Borja Pelato s previous versions Matrices and
More informationMining of Massive Datasets Jure Leskovec, AnandRajaraman, Jeff Ullman Stanford University
Note to other teachers and users of these slides: We would be delighted if you found this our material useful in giving your own lectures. Feel free to use these slides verbatim, or to modify them to fit
More informationPrinciple Components Analysis (PCA) Relationship Between a Linear Combination of Variables and Axes Rotation for PCA
Principle Components Analysis (PCA) Relationship Between a Linear Combination of Variables and Axes Rotation for PCA Principle Components Analysis: Uses one group of variables (we will call this X) In
More informationCommunity Detection. Data Analytics - Community Detection Module
Community Detection Data Analytics - Community Detection Module Zachary s karate club Members of a karate club (observed for 3 years). Edges represent interactions outside the activities of the club. Community
More informationNonlinear Dimensionality Reduction
Outline Hong Chang Institute of Computing Technology, Chinese Academy of Sciences Machine Learning Methods (Fall 2012) Outline Outline I 1 Kernel PCA 2 Isomap 3 Locally Linear Embedding 4 Laplacian Eigenmap
More informationLAPLACIAN MATRIX AND APPLICATIONS
LAPLACIAN MATRIX AND APPLICATIONS Alice Nanyanzi Supervisors: Dr. Franck Kalala Mutombo & Dr. Simukai Utete alicenanyanzi@aims.ac.za August 24, 2017 1 Complex systems & Complex Networks 2 Networks Overview
More informationQuantum Annealing Approaches to Graph Partitioning on the D-Wave System
Quantum Annealing Approaches to Graph Partitioning on the D-Wave System 2017 D-Wave QUBITS Users Conference Applications 1: Optimization S. M. Mniszewski, smm@lanl.gov H. Ushijima-Mwesigwa, hayato@lanl.gov
More informationJim Lambers MAT 610 Summer Session Lecture 2 Notes
Jim Lambers MAT 610 Summer Session 2009-10 Lecture 2 Notes These notes correspond to Sections 2.2-2.4 in the text. Vector Norms Given vectors x and y of length one, which are simply scalars x and y, the
More informationLinear Algebra March 16, 2019
Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented
More informationIterative solvers for linear equations
Spectral Graph Theory Lecture 23 Iterative solvers for linear equations Daniel A. Spielman November 26, 2018 23.1 Overview In this and the next lecture, I will discuss iterative algorithms for solving
More informationIntroduction to Techniques for Counting
Introduction to Techniques for Counting A generating function is a device somewhat similar to a bag. Instead of carrying many little objects detachedly, which could be embarrassing, we put them all in
More informationMarkov Chains, Random Walks on Graphs, and the Laplacian
Markov Chains, Random Walks on Graphs, and the Laplacian CMPSCI 791BB: Advanced ML Sridhar Mahadevan Random Walks! There is significant interest in the problem of random walks! Markov chain analysis! Computer
More informationLecture 2: Linear Algebra Review
EE 227A: Convex Optimization and Applications January 19 Lecture 2: Linear Algebra Review Lecturer: Mert Pilanci Reading assignment: Appendix C of BV. Sections 2-6 of the web textbook 1 2.1 Vectors 2.1.1
More informationOptimization of Quadratic Forms: NP Hard Problems : Neural Networks
1 Optimization of Quadratic Forms: NP Hard Problems : Neural Networks Garimella Rama Murthy, Associate Professor, International Institute of Information Technology, Gachibowli, HYDERABAD, AP, INDIA ABSTRACT
More informationChapter 7. Canonical Forms. 7.1 Eigenvalues and Eigenvectors
Chapter 7 Canonical Forms 7.1 Eigenvalues and Eigenvectors Definition 7.1.1. Let V be a vector space over the field F and let T be a linear operator on V. An eigenvalue of T is a scalar λ F such that there
More informationDRAGAN STEVANOVIĆ. Key words. Modularity matrix; Community structure; Largest eigenvalue; Complete multipartite
A NOTE ON GRAPHS WHOSE LARGEST EIGENVALUE OF THE MODULARITY MATRIX EQUALS ZERO SNJEŽANA MAJSTOROVIĆ AND DRAGAN STEVANOVIĆ Abstract. Informally, a community within a graph is a subgraph whose vertices are
More informationStatistical Pattern Recognition
Statistical Pattern Recognition Feature Extraction Hamid R. Rabiee Jafar Muhammadi, Alireza Ghasemi, Payam Siyari Spring 2014 http://ce.sharif.edu/courses/92-93/2/ce725-2/ Agenda Dimensionality Reduction
More informationNotes on basis changes and matrix diagonalization
Notes on basis changes and matrix diagonalization Howard E Haber Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, CA 95064 April 17, 2017 1 Coordinates of vectors and matrix
More informationComputer Vision Group Prof. Daniel Cremers. 14. Clustering
Group Prof. Daniel Cremers 14. Clustering Motivation Supervised learning is good for interaction with humans, but labels from a supervisor are hard to obtain Clustering is unsupervised learning, i.e. it
More informationCommunications II Lecture 9: Error Correction Coding. Professor Kin K. Leung EEE and Computing Departments Imperial College London Copyright reserved
Communications II Lecture 9: Error Correction Coding Professor Kin K. Leung EEE and Computing Departments Imperial College London Copyright reserved Outline Introduction Linear block codes Decoding Hamming
More informationMathematical foundations - linear algebra
Mathematical foundations - linear algebra Andrea Passerini passerini@disi.unitn.it Machine Learning Vector space Definition (over reals) A set X is called a vector space over IR if addition and scalar
More informationSolution of Linear Equations
Solution of Linear Equations (Com S 477/577 Notes) Yan-Bin Jia Sep 7, 07 We have discussed general methods for solving arbitrary equations, and looked at the special class of polynomial equations A subclass
More informationMa/CS 6b Class 20: Spectral Graph Theory
Ma/CS 6b Class 20: Spectral Graph Theory By Adam Sheffer Recall: Parity of a Permutation S n the set of permutations of 1,2,, n. A permutation σ S n is even if it can be written as a composition of an
More informationPredicting the future with Newton s Second Law
Predicting the future with Newton s Second Law To represent the motion of an object (ignoring rotations for now), we need three functions x(t), y(t), and z(t), which describe the spatial coordinates of
More information8. Diagonalization.
8. Diagonalization 8.1. Matrix Representations of Linear Transformations Matrix of A Linear Operator with Respect to A Basis We know that every linear transformation T: R n R m has an associated standard
More informationElementary Linear Algebra
Matrices J MUSCAT Elementary Linear Algebra Matrices Definition Dr J Muscat 2002 A matrix is a rectangular array of numbers, arranged in rows and columns a a 2 a 3 a n a 2 a 22 a 23 a 2n A = a m a mn We
More informationLinear Algebra Massoud Malek
CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product
More informationWeb Structure Mining Nodes, Links and Influence
Web Structure Mining Nodes, Links and Influence 1 Outline 1. Importance of nodes 1. Centrality 2. Prestige 3. Page Rank 4. Hubs and Authority 5. Metrics comparison 2. Link analysis 3. Influence model 1.
More informationGraph Detection and Estimation Theory
Introduction Detection Estimation Graph Detection and Estimation Theory (and algorithms, and applications) Patrick J. Wolfe Statistics and Information Sciences Laboratory (SISL) School of Engineering and
More information1 T 1 = where 1 is the all-ones vector. For the upper bound, let v 1 be the eigenvector corresponding. u:(u,v) E v 1(u)
CME 305: Discrete Mathematics and Algorithms Instructor: Reza Zadeh (rezab@stanford.edu) Final Review Session 03/20/17 1. Let G = (V, E) be an unweighted, undirected graph. Let λ 1 be the maximum eigenvalue
More informationNotes for CS542G (Iterative Solvers for Linear Systems)
Notes for CS542G (Iterative Solvers for Linear Systems) Robert Bridson November 20, 2007 1 The Basics We re now looking at efficient ways to solve the linear system of equations Ax = b where in this course,
More informationCS 6820 Fall 2014 Lectures, October 3-20, 2014
Analysis of Algorithms Linear Programming Notes CS 6820 Fall 2014 Lectures, October 3-20, 2014 1 Linear programming The linear programming (LP) problem is the following optimization problem. We are given
More informationEigenvalue problems and optimization
Notes for 2016-04-27 Seeking structure For the past three weeks, we have discussed rather general-purpose optimization methods for nonlinear equation solving and optimization. In practice, of course, we
More informationMATH 20F: LINEAR ALGEBRA LECTURE B00 (T. KEMP)
MATH 20F: LINEAR ALGEBRA LECTURE B00 (T KEMP) Definition 01 If T (x) = Ax is a linear transformation from R n to R m then Nul (T ) = {x R n : T (x) = 0} = Nul (A) Ran (T ) = {Ax R m : x R n } = {b R m
More informationLearning latent structure in complex networks
Learning latent structure in complex networks Lars Kai Hansen www.imm.dtu.dk/~lkh Current network research issues: Social Media Neuroinformatics Machine learning Joint work with Morten Mørup, Sune Lehmann
More informationSpectral Clustering. Zitao Liu
Spectral Clustering Zitao Liu Agenda Brief Clustering Review Similarity Graph Graph Laplacian Spectral Clustering Algorithm Graph Cut Point of View Random Walk Point of View Perturbation Theory Point of
More informationGraphs, Vectors, and Matrices Daniel A. Spielman Yale University. AMS Josiah Willard Gibbs Lecture January 6, 2016
Graphs, Vectors, and Matrices Daniel A. Spielman Yale University AMS Josiah Willard Gibbs Lecture January 6, 2016 From Applied to Pure Mathematics Algebraic and Spectral Graph Theory Sparsification: approximating
More informationTHE EIGENVALUE PROBLEM
THE EIGENVALUE PROBLEM Let A be an n n square matrix. If there is a number λ and a column vector v 0 for which Av = λv then we say λ is an eigenvalue of A and v is an associated eigenvector. Note that
More informationCME342 Parallel Methods in Numerical Analysis. Matrix Computation: Iterative Methods II. Sparse Matrix-vector Multiplication.
CME342 Parallel Methods in Numerical Analysis Matrix Computation: Iterative Methods II Outline: CG & its parallelization. Sparse Matrix-vector Multiplication. 1 Basic iterative methods: Ax = b r = b Ax
More information12. LOCAL SEARCH. gradient descent Metropolis algorithm Hopfield neural networks maximum cut Nash equilibria
12. LOCAL SEARCH gradient descent Metropolis algorithm Hopfield neural networks maximum cut Nash equilibria Lecture slides by Kevin Wayne Copyright 2005 Pearson-Addison Wesley h ttp://www.cs.princeton.edu/~wayne/kleinberg-tardos
More informationLecture 10: Dimension Reduction Techniques
Lecture 10: Dimension Reduction Techniques Radu Balan Department of Mathematics, AMSC, CSCAMM and NWC University of Maryland, College Park, MD April 17, 2018 Input Data It is assumed that there is a set
More informationNonlinear Optimization
Nonlinear Optimization (Com S 477/577 Notes) Yan-Bin Jia Nov 7, 2017 1 Introduction Given a single function f that depends on one or more independent variable, we want to find the values of those variables
More information