An ADMM Algorithm for Clustering Partially Observed Networks


1 An ADMM Algorithm for Clustering Partially Observed Networks
Necdet Serhat Aybat, Industrial Engineering, Penn State University
2015 SIAM International Conference on Data Mining, Vancouver, Canada

2 Problem setting

Goal: Detect "communities" from a partially observed "relation" graph.

Details:
- $N = \{1, \dots, n\}$: nodes of an undirected graph $G = (N, E)$
- $(i, j) = (j, i) \in E$ if nodes $i$ and $j$ are "related"
- $N_l$: nodes in community $l$; $\{N_l\}_{l=1}^r$ is a partition of $N$, i.e., $\bigcup_{l=1}^r N_l = N$ and $N_{l_1} \cap N_{l_2} = \emptyset$ if $l_1 \neq l_2$
- $p, q \in [0, 1]$ with $0 \le q < p \le 1$, and $P\big((i,j) \in E\big) = p$ if $\exists\, l$ s.t. $i, j \in N_l$, and $= q$ otherwise
- The status of each pair, i.e., whether $(i,j) \in E$ or $(i,j) \notin E$, is observable with probability $p_0$
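To make the generative model concrete, here is a minimal NumPy sketch of it; the function name `planted_partition` and its interface are illustrative choices, not from the talk:

```python
import numpy as np

def planted_partition(sizes, p, q, p0, seed=0):
    """Sample a planted-partition graph and a partial-observation mask.

    sizes : community sizes |N_l|
    p, q  : intra-/inter-community edge probabilities, 0 <= q < p <= 1
    p0    : probability that the status of a node pair is observed
    Returns the 0/1 adjacency matrix A and a boolean mask Omega of observed pairs.
    """
    rng = np.random.default_rng(seed)
    n = int(sum(sizes))
    labels = np.repeat(np.arange(len(sizes)), sizes)
    same = labels[:, None] == labels[None, :]      # same-community indicator
    prob = np.where(same, p, q)
    U = np.triu(rng.random((n, n)) < prob, k=1)    # upper-triangular coin flips
    A = (U | U.T).astype(int)                      # symmetrize, no self-loops
    O = np.triu(rng.random((n, n)) < p0, k=1)
    Omega = O | O.T                                # observed off-diagonal pairs
    return A, Omega
```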

3-4 A popular quality function: Modularity

Measures the density of edges within clusters relative to a random graph with the same node degrees. For a partition $\{N_l\}_{l=1}^r$ of the nodes in $G = (N, E)$,

  modularity$(\{N_l\}_{l=1}^r) := \frac{1}{2|E|} \sum_{i,j=1}^n \left( D_{ij} - \frac{d(i)\, d(j)}{2|E|} \right) \delta(i, j; \{N_l\}_{l=1}^r)$,

where $D$ is the adjacency matrix, $d(i)$ is the degree of node $i$, and $\delta = 1$ if $i$ and $j$ belong to the same community, $0$ otherwise.

- Maximizing modularity is NP-hard.
- The Louvain method is a popular greedy algorithm, outperforming many others in both runtime and modularity.

Shortcoming: the resolution limit
- Implicit assumption: there is a chance that any two nodes can be connected in the random model (not realistic for large networks).
- Cliques connected by a single edge get merged as $|E|$ grows.
- Small communities become undetectable.
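The modularity of a given partition can be evaluated directly from this definition; a short sketch, assuming a 0/1 adjacency matrix `A` without self-loops and an integer label vector `labels`:

```python
import numpy as np

def modularity(A, labels):
    """Q = (1 / 2|E|) * sum_{ij} (A_ij - d_i d_j / 2|E|) * [c_i == c_j]."""
    labels = np.asarray(labels)
    d = A.sum(axis=1)                  # node degrees
    two_m = d.sum()                    # 2|E|
    same = labels[:, None] == labels[None, :]
    B = A - np.outer(d, d) / two_m     # modularity matrix
    return B[same].sum() / two_m
```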

5 Preliminary: Robust PCA

Given $D \in \mathbb{R}^{n \times n}$ with $D = \bar L + \bar S$ such that $\operatorname{rank}(\bar L) = r \ll n$ and $\|\bar S\|_0 = s \ll n^2$:
- $\bar L$: low-rank and incoherent (not sparse)
- $\bar S$: sparse, with nonzero locations uniformly at random (not low-rank)
- $\Omega$: observable locations, uniformly at random, with $|\Omega| = p_0 n^2$

Candès et al. '09 proposed solving

  $(L^*_\rho, S^*_\rho) \in \operatorname{argmin}\{\|L\|_* + \rho \|S\|_1 : \pi_\Omega(L + S) = \pi_\Omega(D)\}$.

For $\rho = \frac{1}{\sqrt{p_0 n}}$, $\exists\, c > 0$ such that $(L^*_\rho, S^*_\rho) = (\bar L, \bar S)$ (exact recovery) w.p. at least $1 - c n^{-10}$.

Notation: $\|L\|_* := \sum_{i=1}^{\operatorname{rank}(L)} \sigma_i(L)$, $\|S\|_1 := \sum_{i,j} |S_{ij}|$, and $\pi_\Omega : \mathbb{R}^{n\times n} \to \mathbb{R}^{n\times n}$ with $(\pi_\Omega(D))_{ij} = D_{ij}$ if $(i,j) \in \Omega$, and $0$ otherwise.
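The notation above maps directly to code; a small sketch of the projection operator and the two norms (the function names are mine):

```python
import numpy as np

def pi_omega(M, Omega):
    """pi_Omega(M): keep entries indexed by the boolean mask Omega, zero the rest."""
    return np.where(Omega, M, 0.0)

def nuclear_norm(M):
    """||M||_* : sum of singular values."""
    return np.linalg.svd(M, compute_uv=False).sum()

def l1_norm(M):
    """||M||_1 : entrywise sum of absolute values."""
    return np.abs(M).sum()
```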

6-8 Robust PCA for clustering

$D \in S^n$: adjacency matrix of $G$ with $\operatorname{diag}(D) = \mathbf{1}$, i.e., $D_{ij} = 1$ if $(i,j) \in E$ or $i = j$, and $D_{ij} = 0$ otherwise. Define

  $\bar S_{ij} := 1$ if $(i,j) \in E$ and $\nexists\, l$ s.t. $i, j \in N_l$;
  $\bar S_{ij} := -1$ if $(i,j) \notin E$ and $\exists\, l$ s.t. $i, j \in N_l$;
  $\bar S_{ij} := 0$ otherwise;   and   $\bar L := D - \bar S$.

Example: $N_1 = \{1, 2, 3\}$, $N_2 = \{4, 5\}$, $E = \{(1,3), (2,3), (2,4), (3,5), (4,5)\}$:

      1 0 1 0 0        1 1 1 0 0         0 -1  0  0  0
      0 1 1 1 0        1 1 1 0 0        -1  0  0  1  0
  D = 1 1 1 0 1  ,  L̄ = 1 1 1 0 0  ,  S̄ =  0  0  0  0  1
      0 1 0 1 1        0 0 0 1 1         0  1  0  0  0
      0 0 1 1 1        0 0 0 1 1         0  0  1  0  0

so D = Low-Rank + Sparse.

- $\operatorname{rank}(\bar L) = r$, the number of clusters (communities)
- The nonzero eigenvalues of $\bar L$ are the cluster sizes, $\lambda_l = |N_l|$ for $l = 1, \dots, r$; e.g., $\lambda_1 = 3$, $\lambda_2 = 2$
- $\bar S$ is sparse with high probability
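This toy decomposition is easy to verify numerically; a sketch:

```python
import numpy as np

# Toy example from the slides: N1 = {1,2,3}, N2 = {4,5}.
edges = [(1, 3), (2, 3), (2, 4), (3, 5), (4, 5)]
n = 5
D = np.eye(n, dtype=int)                 # diag(D) = 1 by convention
for i, j in edges:
    D[i - 1, j - 1] = D[j - 1, i - 1] = 1

Lbar = np.zeros((n, n), dtype=int)       # block-diagonal all-ones blocks
Lbar[:3, :3] = 1
Lbar[3:, 3:] = 1

Sbar = D - Lbar                          # disagreements with the ideal graph
assert (D == Lbar + Sbar).all()

# Nonzero eigenvalues of Lbar are the cluster sizes (3 and 2).
print(np.round(np.linalg.eigvalsh(Lbar), 6))   # -> [0. 0. 0. 2. 3.]
```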

9 Related work

Chen et al. '14 established statistical guarantees and proposed RPCA-B.

Theorem: Let $\gamma = \max\{1 - p,\ q\}$, $\rho = \frac{1}{32\sqrt{p_0 n}}$, and $K_{\min} := \min_l |N_l|$. Then $\exists\, C, c > 0$ such that $(L^*_\rho, S^*_\rho) = (\bar L, \bar S)$ w.p. at least $1 - c n^{-10}$, provided that $n \log^2(n) \le C\, K_{\min}^2\, p_0 (1 - 2\gamma)^2$.

Algorithm RPCA-B$(\rho)$
1: STOP ← false
2: while STOP == false do
3:   $(L^*_\rho, S^*_\rho) \in \operatorname{argmin}\{\|L\|_* + \rho\|S\|_1 : \pi_\Omega(L + S) = \pi_\Omega(D)\}$
4:   if $\operatorname{Tr}(L^*_\rho) > n(1 + \text{tol})$ then
5:     $\rho \leftarrow \rho / 2$
6:   else if $\operatorname{Tr}(L^*_\rho) < n(1 - \text{tol})$ then
7:     $\rho \leftarrow 2\rho$
8:   else
9:     STOP ← true
10:  end if
11: end while
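The bisection loop translates almost line-for-line into code. A sketch, where `solve_rpca` stands in for any solver of the convex problem in line 3 (the stub and the `max_iter` safeguard are my additions):

```python
import numpy as np

def rpca_b(D, Omega, rho0, solve_rpca, tol=0.05, max_iter=20):
    """Bisection-on-rho wrapper following the RPCA-B pseudocode above.

    `solve_rpca(D, Omega, rho)` is assumed to return (L, S) solving
    min ||L||_* + rho ||S||_1  s.t.  pi_Omega(L + S) = pi_Omega(D).
    """
    n = D.shape[0]
    rho = rho0
    for _ in range(max_iter):
        L, S = solve_rpca(D, Omega, rho)
        t = np.trace(L)
        if t > n * (1 + tol):        # trace too large: halve rho (lines 4-5)
            rho /= 2
        elif t < n * (1 - tol):      # trace too small: double rho (lines 6-7)
            rho *= 2
        else:                        # Tr(L) is within tolerance of n
            break
    return L, S
```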

10 RPCA-C by Aybat et al.: Robust PCA for Clustering (a tighter formulation)

  $(L^*, S^*) \in \operatorname*{argmin}_{L, S \in S^n}\ \operatorname{Tr}(L) + \rho\|S\|_1$
  s.t. $\pi_\Omega(L + S) = \pi_\Omega(D)$, $\operatorname{diag}(S) = 0$, $|S_{ij}| \le 1\ \forall\, i \ne j$, $L \succeq 0_n$, $L \ge 0_n$.

Let $\chi \subseteq S^n \times S^n$ denote the feasible region. Clearly, $(\bar L, \bar S) \in \chi$ and $\chi \subseteq \{(L, S) : \pi_\Omega(L + S) = \pi_\Omega(D)\}$, so the same guarantees as in Chen et al. '14 hold: $(L^*, S^*) = (\bar L, \bar S)$ w.p. at least $1 - c n^{-10}$.
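For small instances, the RPCA-C model can be written down almost verbatim with a generic conic-modeling tool. A CVXPY sketch, for illustration only (the talk's contribution is the first-order method on the following slides, not an interior-point solve, and this scales poorly in $n$):

```python
import cvxpy as cp
import numpy as np

def rpca_c(D, Omega, rho):
    """RPCA-C as stated above, modeled with CVXPY.

    Omega: boolean mask, True on observed entries (including the diagonal).
    """
    n = D.shape[0]
    L = cp.Variable((n, n), symmetric=True)
    S = cp.Variable((n, n), symmetric=True)
    mask = Omega.astype(float)
    cons = [
        cp.multiply(mask, L + S) == cp.multiply(mask, D),  # pi_Omega(L+S) = pi_Omega(D)
        cp.diag(S) == 0,
        cp.abs(S) <= 1,
        L >> 0,    # positive semidefinite
        L >= 0,    # entrywise nonnegative
    ]
    prob = cp.Problem(cp.Minimize(cp.trace(L) + rho * cp.sum(cp.abs(S))), cons)
    prob.solve()   # requires an SDP-capable solver, e.g. SCS
    return L.value, S.value
```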

11 ADMIP-C: ADMM for RPCA-C

Equivalent problem using partial variable splitting:

  $\min_{L, X, S \in S^n}\ \operatorname{Tr}(L) + \rho\|S\|_1$  s.t.  $X = L$, $L \succeq 0_n$, $(X, S) \in \phi$,

for $\rho > 0$ and $\phi$ defined as

  $\phi := \{(X, S) \in S^n \times S^n : \pi_\Omega(X + S) = \pi_\Omega(D),\ X \ge 0_n,\ \operatorname{diag}(S) = 0,\ |S_{ij}| \le 1\ \forall\, i \ne j\}$.

Objective: develop an ADMM with increasing penalty.

12-14 ADMIP-C: ADMM for RPCA-C

Augmented Lagrangian:

  $\mathcal{L}_\mu(L, X, S; Y) := \operatorname{Tr}(L) + \rho\|S\|_1 + \langle Y, X - L \rangle + \frac{\mu}{2}\|X - L\|_F^2$

Method of multipliers iterate sequence (hard to compute):
  $(L^{k+1}, X^{k+1}, S^{k+1}) \in \operatorname{argmin}\{\mathcal{L}_\mu(L, X, S; Y^k) : L \succeq 0_n,\ (X, S) \in \phi\}$
  $Y^{k+1} = Y^k + \mu(X^{k+1} - L^{k+1})$, where $\mu > 0$.

ADMIP-C iterate sequence (easy to compute):
  $(X^{k+1}, S^{k+1}) = \operatorname{argmin}\{\mathcal{L}_{\mu_k}(L^k, X, S; Y^k) : (X, S) \in \phi\}$
  $L^{k+1} = \operatorname{argmin}\{\mathcal{L}_{\mu_k}(L, X^{k+1}, S^{k+1}; Y^k) : L \succeq 0_n\}$
  $Y^{k+1} = Y^k + \mu_k(X^{k+1} - L^{k+1})$

Note: ADMM is sensitive to $\mu$ if $\mu_k = \mu$ for all $k$. Instead, choose $\{\mu_k\}$ such that $\mu_{k+1} \ge \mu_k$ and $\sup_k \mu_k < \infty$: there is no need to search for an optimal $\mu$.

15 ADMIP-C: L-subproblem

  $L^{k+1} = \operatorname{argmin}\{\mathcal{L}_{\mu_k}(L, X^{k+1}, S^{k+1}; Y^k) : L \succeq 0_n\} = W \operatorname{diag}\!\left(\max\left\{\lambda^k - \tfrac{1}{\mu_k}\mathbf{1},\ 0\right\}\right) W^T$,

where $W \operatorname{diag}(\lambda^k) W^T$ is the eigenvalue decomposition of $Q_X^k := X^{k+1} + \frac{1}{\mu_k} Y^k$.

Notes:
- No need to compute eigenvalues smaller than $\mu_k^{-1}$; use PROPACK to compute a partial SVD of $Q_X^k$.
- $X^k \to L^*$ and $\{Y^k\}$ bounded $\Rightarrow$ $\{Q_X^k\}$ is eventually numerically low-rank.
- Partial SVDs are cheap in the transient phase, while $\mu_k$ is small.
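In code, the L-update is a single eigendecomposition followed by entrywise thresholding of the spectrum. A NumPy sketch using a full decomposition (the talk uses PROPACK-style partial decompositions instead):

```python
import numpy as np

def update_L(X_new, Y, mu):
    """L-subproblem: eigenvalue soft-thresholding of Q = X^{k+1} + Y^k / mu_k.

    Solves min_{L psd} Tr(L) + <Y, X - L> + (mu/2) ||X - L||_F^2 in closed form:
    L = W diag(max(lam - 1/mu, 0)) W^T, where Q = W diag(lam) W^T.
    """
    Q = X_new + Y / mu
    lam, W = np.linalg.eigh(Q)
    lam = np.maximum(lam - 1.0 / mu, 0.0)   # threshold the spectrum at 1/mu
    return (W * lam) @ W.T                  # W diag(lam) W^T
```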

16 ADMIP-C: (X, S)-subproblem

  $(X^{k+1}, S^{k+1}) = \operatorname{argmin}\{\mathcal{L}_{\mu_k}(L^k, X, S; Y^k) : (X, S) \in \phi\}$

Closed-form solution: with $Q_L^k := L^k - \frac{1}{\mu_k} Y^k$, and $\bar\Omega := \Omega \cup \mathcal{D}$ for $\Omega \subseteq (N \times N) \setminus \mathcal{D}$ and the diagonal index set $\mathcal{D} := \{(i, i) : i \in N\}$,

  $C^k = \operatorname{sgn}\!\left(\pi_{\bar\Omega}(D - Q_L^k)\right) \odot \max\!\left\{\left|\pi_{\bar\Omega}(D - Q_L^k)\right| - \rho \mu_k^{-1} E_n,\ 0_n\right\}$,
  $S^{k+1} = \min\!\left\{\pi_{\bar\Omega}(D),\ \max\{-E_n,\ C^k\}\right\}$,
  $X^{k+1} = \pi_{\bar\Omega}(D - S^{k+1}) + \max\!\left\{\pi_{\bar\Omega^c}(Q_L^k),\ 0_n\right\}$,

where $E_n$ is the $n \times n$ all-ones matrix and the sgn, $|\cdot|$, max, and min operations are entrywise. Recall

  $\phi := \{(X, S) \in S^n \times S^n : \pi_\Omega(X + S) = \pi_\Omega(D),\ X \ge 0_n,\ \operatorname{diag}(S) = 0,\ |S_{ij}| \le 1\ \forall\, i \ne j\}$.
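A NumPy sketch of this closed form, with `Omega_bar` the boolean mask of $\bar\Omega$; the explicit zeroing of diag(S) is my addition (it is part of $\phi$ but not spelled out in the displayed formulas):

```python
import numpy as np

def update_XS(L, Y, mu, D, Omega_bar, rho):
    """(X,S)-subproblem in closed form, following the slide.

    On Omega_bar: S is a soft-thresholding of D - Q clipped to [-1, D_ij],
    and X = D - S.  Off Omega_bar: S = 0 and X = max(Q, 0) entrywise.
    """
    Q = L - Y / mu
    R = np.where(Omega_bar, D - Q, 0.0)                    # pi_{Omega_bar}(D - Q)
    C = np.sign(R) * np.maximum(np.abs(R) - rho / mu, 0.0)  # soft-threshold
    S = np.minimum(np.where(Omega_bar, D, 0.0), np.maximum(-1.0, C))
    S[np.diag_indices_from(S)] = 0.0                       # diag(S) = 0
    X = np.where(Omega_bar, D - S, np.maximum(Q, 0.0))
    return X, S
```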

17 ADMIP-C: Convergence

Algorithm ADMIP-C$(\rho, Y^0, \{\mu_k\})$
1: $k \leftarrow 0$, $L^0 \leftarrow 0_n$
2: while not converged do
3:   $(X^{k+1}, S^{k+1}) \leftarrow \operatorname{argmin}\{\mathcal{L}_{\mu_k}(L^k, X, S; Y^k) : (X, S) \in \phi\}$
4:   $L^{k+1} \leftarrow \operatorname{argmin}\{\mathcal{L}_{\mu_k}(L, X^{k+1}, S^{k+1}; Y^k) : L \succeq 0_n\}$
5:   $Y^{k+1} \leftarrow Y^k + \mu_k(X^{k+1} - L^{k+1})$
6: end while

Let $Z^k = (L^k, X^k, S^k, Y^k)$ denote the primal-dual ADMIP-C iterates, and let $Z^*$ be the set of primal-dual optimal pairs of RPCA-C:

  $\min_{L, X, S \in S^n}\ \operatorname{Tr}(L) + \rho\|S\|_1$  s.t.  $X = L$, $L \succeq 0_n$, $(X, S) \in \phi$.

Then $\min\{\|Z - Z^k\|_F : Z \in Z^*\} \to 0$.
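Wiring together the two subproblem solvers sketched after slides 15 and 16 gives the full method. In this sketch the stopping test, the penalty schedule, and all parameter values are illustrative rather than the talk's:

```python
import numpy as np

def admip_c(D, Omega_bar, rho, mu0=1.0, mu_growth=1.25, mu_max=1e4,
            max_iter=500, eps=1e-6):
    """ADMIP-C outer loop, using update_XS and update_L from the previous sketches."""
    n = D.shape[0]
    L = np.zeros((n, n))
    Y = np.zeros((n, n))
    mu = mu0
    for _ in range(max_iter):
        X, S = update_XS(L, Y, mu, D, Omega_bar, rho)   # (X,S)-subproblem
        L = update_L(X, Y, mu)                          # L-subproblem
        Y = Y + mu * (X - L)                            # dual update
        if np.linalg.norm(X - L) <= eps * max(1.0, np.linalg.norm(L)):
            break                                       # primal residual small
        mu = min(mu * mu_growth, mu_max)                # nondecreasing, bounded penalty
    return L, S
```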

18 $G = (N, E)$: Random Network Generation

$G$: undirected network with $n := |N|$ nodes and $r$ non-overlapping communities. $\alpha \in (0, 1]$ is a parameter controlling the cluster sizes $N_l$ for $l = 1, \dots, r$:

  $|N_l| = n_l := \left[\frac{1 - \alpha}{1 - \alpha^r}\, \alpha^{l-1}\, n\right]$ for $l = 1, \dots, r$, and $N_l := \left\{\sum_{i=1}^{l-1} n_i + 1, \dots, \sum_{i=1}^{l} n_i\right\}$.

Example: $|N| = n = 100$ and $r = 5$. [The slide tabulates $n_1, \dots, n_5$ for several values of $\alpha$; the table values were not captured in this transcription.]
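A sketch of the cluster-size formula. How the rounding $[\cdot]$ resolves ties is not specified on the slide, so the sizes may sum to $n \pm 1$ in general; `np.rint` is my choice, and $\alpha = 1$ is handled as the equal-size limit $n_l = n / r$:

```python
import numpy as np

def cluster_sizes(n, r, alpha):
    """Geometric cluster sizes n_l = [ ((1-alpha)/(1-alpha^r)) * alpha^(l-1) * n ]."""
    if alpha == 1.0:
        w = np.full(r, 1.0 / r)                       # equal-size limit
    else:
        w = (1 - alpha) / (1 - alpha**r) * alpha ** np.arange(r)
    return np.rint(w * n).astype(int)

print(cluster_sizes(100, 5, 0.8))   # -> [30 24 19 15 12]
```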

19 $G = (N, E)$: Random Network Generation

After $\{N_l\}_{l=1}^r$ is generated, with $r = [0.05\, |N|]$:

Edge generation:
- $E_U \subseteq U := \{(i, j) \in N \times N : i < j\}$, chosen uniformly at random among all subsets with cardinality $|E_U| = [0.05\, |U|]$
- $E_L := \{(i, j) \in N \times N : (j, i) \in E_U\}$
- $\bar E := \{(i, j) \in N \times N : \exists\, l \text{ s.t. } i, j \in N_l\}$
- $E := \bar E \,\triangle\, (E_U \cup E_L)$ (symmetric difference)

Observable indices:
- $\Omega_U \subseteq U$, chosen uniformly at random among all subsets with cardinality $|\Omega_U| = [p_0\, |U|]$
- $\Omega_L := \{(i, j) \in N \times N : (j, i) \in \Omega_U\}$
- $\Omega := \Omega_U \cup \Omega_L$ and $\bar\Omega := \Omega \cup \mathcal{D}$
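A NumPy sketch of this generation procedure, combining the edge flips (the symmetric difference) with the observation mask; the fixed 5% flip fraction from the slide is exposed as a parameter, and the function name is mine:

```python
import numpy as np

def generate_graph(sizes, p0, noise=0.05, seed=0):
    """Start from the ideal intra-community edge set, flip a random fraction
    `noise` of node pairs (symmetric difference), then observe each pair
    with probability p0.  Returns D (adjacency with diag = 1) and Omega_bar.
    """
    rng = np.random.default_rng(seed)
    n = int(sum(sizes))
    labels = np.repeat(np.arange(len(sizes)), sizes)
    Ebar = labels[:, None] == labels[None, :]         # intra-community pairs
    iu = np.triu_indices(n, k=1)                      # the set U: pairs with i < j
    m = len(iu[0])
    flip = np.zeros(m, dtype=bool)
    flip[rng.choice(m, size=round(noise * m), replace=False)] = True
    obs = np.zeros(m, dtype=bool)
    obs[rng.choice(m, size=round(p0 * m), replace=False)] = True
    E = np.zeros((n, n), dtype=bool)
    E[iu] = Ebar[iu] ^ flip                           # symmetric difference
    E |= E.T
    Omega = np.zeros((n, n), dtype=bool)
    Omega[iu] = obs
    Omega |= Omega.T
    Omega_bar = Omega | np.eye(n, dtype=bool)         # diagonal is always known
    D = (E | np.eye(n, dtype=bool)).astype(int)       # adjacency with diag(D) = 1
    return D, Omega_bar
```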

20 Numerical tests: ADMIP-C vs RPCA-B

- ADMIP-C solves the RPCA-C formulation (tighter than RPCA)
- RPCA-B solves RPCA with bisection on $\rho$

Given the low-rank component $L^*$ output by a method, define for $1 \le l_1, l_2 \le r$:

  $B^{l_1, l_2} := \left[L^*_{ij}\right]_{i \in N_{l_1},\, j \in N_{l_2}}$, and $\bar B^{l_1, l_2} := E_{n_l}$ if $l_1 = l_2 = l$, $\bar B^{l_1, l_2} := 0_{n_{l_1} \times n_{l_2}}$ otherwise.

Define $R_{l_1, l_2} := \|B^{l_1, l_2} - \bar B^{l_1, l_2}\|_F$ for $1 \le l_1, l_2 \le r$. The quality of clustering is measured by 3 statistics:

- $s_{in} \in [0, 1]$: average of $\{R_{l,l} / n_l\}_{l=1}^r$
- $s_{out} \in [0, 1]$: $\left(\sum_{l_1 \ne l_2} n_{l_1} n_{l_2}\right)^{-1} \sum_{l_1 \ne l_2} R_{l_1, l_2}^2$
- $s_f \in [0, 1]$: fraction of correctly recovered clusters
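A sketch computing $s_{in}$ and $s_{out}$ from a recovered low-rank matrix and the true cluster sizes. Note that the normalization of $s_{out}$ is reconstructed from the slide; the paper's may differ:

```python
import numpy as np

def recovery_stats(Lhat, sizes):
    """s_in and s_out from the block residuals R_{l1,l2} = ||B - Bbar||_F."""
    bnd = np.cumsum([0] + list(sizes))                # block boundaries
    r = len(sizes)
    R = np.empty((r, r))
    for a in range(r):
        for b in range(r):
            B = Lhat[bnd[a]:bnd[a + 1], bnd[b]:bnd[b + 1]]
            Bbar = np.ones_like(B) if a == b else np.zeros_like(B)
            R[a, b] = np.linalg.norm(B - Bbar)        # Frobenius norm
    s_in = np.mean([R[l, l] / sizes[l] for l in range(r)])
    off = ~np.eye(r, dtype=bool)
    s_out = (R[off] ** 2).sum() / np.outer(sizes, sizes)[off].sum()
    return s_in, s_out
```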

21 Numerical tests: ADMIP-C vs RPCA-B

10 random $G = (N, E)$ for each $(n, \alpha)$, and $p_0 = 0.9$.

Table: mean values of $s_{in}$, $s_{out}$, and $s_f$ in %, for ADMIP-C, RPCA-B, and RPCA with $\rho = 1/\sqrt{n}$, over a grid of $(n, \alpha)$. [Table values not captured in this transcription.]

22 Numerical tests: ADMIP-C vs RPCA-B

5 random $G = (N, E)$ for $n \in \{500, 1000\}$, $\alpha = 0.95$, and $p_0 = 0.9$.

Table: recovery and runtime statistics for ADMIP-C and RPCA-B: $s_{in}$, $s_{out}$, $s_f$ in %, cpu in seconds, the number of partial SVDs, and, for RPCA-B, the number of bisection iterations. [Table values not captured in this transcription.]

23 Numerical tests: ADMIP-C vs RPCA-B

Summary:
- Increasing $n$ and decreasing $\alpha$ adversely affect all methods
- Failure is driven by $s_{out}$ for RPCA-B, and by $s_{in}$ for RPCA and ADMIP-C
- RPCA cannot detect small clusters, due to high $s_{in}$
- RPCA-B reduces $s_{in}$ at the cost of $s_{out}$: it merges small clusters
- ADMIP-C correctly identifies 92% of the clusters all the time

24 Numerical tests: ADMIP-C vs Louvain

Measures for comparing the similarity of partitions $C = \{N_i\}_{i=1}^r$ and $C' = \{N'_j\}_{j=1}^{r'}$: let $n_i := |N_i|$ and $n'_j := |N'_j|$, so that $n = \sum_{i=1}^r n_i = \sum_{j=1}^{r'} n'_j$, and let $m_{ij} := |N_i \cap N'_j|$.

- Jaccard Index: JI := (# node pairs in the same cluster in both $C$ and $C'$) / (# node pairs in the same cluster in either $C$ or $C'$)
- Normalized Mutual Information: NMI := $\frac{I(C, C')}{\sqrt{H(C)\, H(C')}}$, where the mutual information is
  $I(C, C') = \sum_{i=1}^{r} \sum_{j=1}^{r'} \frac{m_{ij}}{n} \log_2\!\left(\frac{m_{ij}/n}{n_i n'_j / n^2}\right)$,
  the entropy is $H(C) = -\sum_{i=1}^r \frac{n_i}{n} \log_2\!\left(\frac{n_i}{n}\right)$, and $H(C')$ is defined similarly.
- PERC := proportion of exactly recovered clusters
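Both measures are short computations over the contingency counts $m_{ij}$; a NumPy sketch, taking cluster labels as integer arrays:

```python
import numpy as np

def nmi(labels_a, labels_b):
    """NMI = I(C, C') / sqrt(H(C) H(C')) via the counts m_ij = |N_i ∩ N'_j|."""
    labels_a, labels_b = np.asarray(labels_a), np.asarray(labels_b)
    n = len(labels_a)
    _, a_inv = np.unique(labels_a, return_inverse=True)
    _, b_inv = np.unique(labels_b, return_inverse=True)
    m = np.zeros((a_inv.max() + 1, b_inv.max() + 1))
    np.add.at(m, (a_inv, b_inv), 1)                 # contingency counts m_ij
    pi, pj, pij = m.sum(1) / n, m.sum(0) / n, m / n
    nz = pij > 0
    I = (pij[nz] * np.log2(pij[nz] / np.outer(pi, pj)[nz])).sum()
    H = lambda p: -(p * np.log2(p)).sum()           # entropy of a partition
    return I / np.sqrt(H(pi) * H(pj))

def jaccard_index(labels_a, labels_b):
    """# pairs co-clustered in both partitions / # co-clustered in at least one."""
    labels_a, labels_b = np.asarray(labels_a), np.asarray(labels_b)
    a = labels_a[:, None] == labels_a[None, :]
    b = labels_b[:, None] == labels_b[None, :]
    iu = np.triu_indices(len(labels_a), k=1)
    return (a & b)[iu].sum() / (a | b)[iu].sum()
```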

25 Numerical tests: ADMIP-C vs Louvain

Experimental setup:
- $n \in \{100, 200, 300, 400, 500\}$ and $\alpha \in \{1, 0.9, 0.8\}$
- 20 random $G = (N, E)$ generated for each $(n, \alpha)$
- The output of Louvain depends on the ordering of the nodes
- For each $G$, 200 node orderings generated uniformly at random

26 Numerical tests: ADMIP-C vs Louvain

Table: mean values of JI, NMI, and PERC in % for Louvain and ADMIP-C over the $(n, \alpha)$ grid, with $p_0 = 0.9$. [Table values not captured in this transcription.]

- Increasing $n$ and decreasing $\alpha$ adversely affect both methods
- Louvain: small clusters are swallowed by large ones in both cases
- ADMIP-C: detects small clusters even for small $\alpha$
- ADMIP-C: creates isolated nodes for large $n$

27 Conclusions and Future Work

Conclusions:
- RPCA-C: a tighter model based on RPCA
- ADMIP-C: an ADMM with increasing penalty for RPCA-C
- Detection is robust to increases in $n$ and to variation among cluster sizes

Future work:
- Weighted networks
- Overlapping communities
- The partial SVD is the bottleneck

Code is available online. Thanks!
