Sparse Recovery over Graph Incidence Matrices


Mengnan Zhao, M. Devrim Kaba, René Vidal, Daniel P. Robinson, and Enrique Mallada

Abstract: Classical results in sparse recovery guarantee the exact reconstruction of s-sparse signals under assumptions on the dictionary that are either too strong or NP-hard to check. Moreover, such results may be pessimistic in practice since they are based on a worst-case analysis. In this paper, we consider the sparse recovery of signals defined over a graph, for which the dictionary takes the form of an incidence matrix. We derive necessary and sufficient conditions for sparse recovery, which depend on properties of the cycles of the graph that can be checked in polynomial time. We also derive support-dependent conditions for sparse recovery that depend only on the intersection of the cycles of the graph with the support of the signal. Finally, we exploit sparsity properties of the measurements and the structure of incidence matrices to propose a specialized sub-graph-based recovery algorithm that outperforms the standard l1-minimization approach.

I. INTRODUCTION

Recently, sparse recovery methods (e.g., see [1], [2]) have become very popular for compressing and processing high-dimensional data. In particular, they have found widespread applications in data acquisition [3], machine learning [4], [5], medical imaging [6]-[9], and networking [10]-[12]. The goal of sparse recovery is to reconstruct a signal x̄ ∈ R^n from m < n linear measurements y = Φx̄ ∈ R^m, where Φ ∈ R^{m×n} is the measurement matrix (also known as the dictionary or sparsifying basis). In general, the recovery problem is ill-posed unless x̄ is assumed to satisfy additional assumptions. For example, if we assume that x̄ is s-sparse (i.e., at most s of its entries are nonzero), where s ≪ n, then mild conditions on the measurement matrix Φ (see Lemma 1) allow for the recovery of x̄ from y using the l0-minimization problem

min_x ||x||_0  s.t.  y = Φx,    (1)

where ||x||_0 denotes the number of nonzero entries in x. However, problem (1) is known to be NP-hard [13]. To address this challenge, a common strategy is to solve a convex relaxation of (1) based on l1-minimization, given by

min_x ||x||_1  s.t.  y = Φx,    (2)

which can be written equivalently as a linear program. It is known that the sparse signal can be recovered from (2) if the measurement matrix Φ satisfies certain conditions. In general, these conditions are either computationally hard to verify, or too conservative, so that false negative certificates are often encountered. For example, the Nullspace Property (NUP) [14], which provides necessary and sufficient conditions for sparse recovery, and the Restricted Isometry Property (RIP) [15], [16], which is only a sufficient condition for sparse recovery, are both NP-hard [17] to check. On the other hand, the Mutual Coherence [18] property is a sufficient condition that can be efficiently verified [17], but is conservative in that it fails to certify that sparse recovery is possible for many measurement matrices seen in practice.

M. Zhao and E. Mallada are with the Department of Electrical and Computer Engineering, M. D. Kaba and D. P. Robinson are with the Department of Applied Mathematics and Statistics, and R. Vidal is with the Department of Biomedical Engineering, at The Johns Hopkins University, Baltimore, MD 21218, USA. E-mails: {mzhao2, mkaba, rvidal, daniel.p.robinson, mallada}@jhu.edu. This work was supported by NSF grants and AMPS.
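To make concrete the statement that (2) can be written equivalently as a linear program, here is a minimal sketch (an illustration added to this write-up, not code from the paper) that solves (2) with the standard split x = x⁺ − x⁻, x⁺, x⁻ ≥ 0, using NumPy and scipy.optimize.linprog:

```python
import numpy as np
from scipy.optimize import linprog

def l1_min(Phi, y):
    """Solve min ||x||_1 s.t. Phi x = y as an LP via the split x = xp - xm, xp, xm >= 0."""
    _, n = Phi.shape
    c = np.ones(2 * n)                                # objective: sum(xp) + sum(xm) = ||x||_1
    A_eq = np.hstack([Phi, -Phi])                     # equality constraint: Phi (xp - xm) = y
    res = linprog(c, A_eq=A_eq, b_eq=y,
                  bounds=[(0, None)] * (2 * n), method="highs")
    if not res.success:
        raise RuntimeError(res.message)
    return res.x[:n] - res.x[n:]                      # recombine the two halves into x
```

Any optimal pair (x⁺, x⁻) of this LP yields an l1 minimizer of (2) via their difference, since the LP value can never exceed, nor fall below, the optimal value of (2).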
Another major limitation of these recovery guarantees and their associated computational complexity arises because they must allow for the worst-case problem in their derivation. For example, the fact that the NUP and RIP are NP-hard to check does not prohibit efficient verification for particular subclasses of matrices. Similarly, even when the NUP is not satisfied for a sparsity level s, it may still be possible to recover certain subsets of s-sparse signals by exploiting additional knowledge about their support. These observations suggest studying the sparse recovery problem for subclasses of matrices and signals to obtain conditions that are easier to verify, as well as specialized algorithms with improved recovery performance.

The goal of this paper is to study sparse recovery of signals that are defined over graphs when the measurement matrix is the graph's incidence matrix. Our interest in incidence matrices stems from the fact that they are a fundamental representation of graphs, and thus a natural choice for the dictionary when analyzing network flows. In various application areas, such as communication networks, social networks, and transportation networks, incidence matrices naturally appear when modeling the flow of information, disease, and goods (e.g., the detection of sparse structural network changes via observations at the nodes can be modeled as (1), where the incidence matrix serves as the measurement matrix).

The main contributions of this paper are as follows:

1) We derive a topological characterization of the NUP for incidence matrices. Specifically, we show that the NUP for these matrices is equivalent to a condition on the simple cycles of the graph, which form a finite subset of the nullspace of the incidence matrix. As a consequence, we show that for incidence matrices the sparse recovery guarantee depends only on the girth (the length of the smallest cycle) of the underlying graph. This overcomes NP-hardness, as the girth of a graph can be calculated in polynomial time.

2) Using the above topological characterization, we further derive necessary and sufficient conditions on the support of the sparse signal that enable its recovery. Specifically, for incidence matrices we show that all signals with a given support can be recovered via (2) if and only if the support contains strictly fewer than half of the edges of every simple cycle of the graph. Since our conditions on x̄ depend on its support, we will refer to them as support-dependent conditions.

3) We propose a specialized algorithm that utilizes knowledge of the support of the measurements and the structure of incidence matrices to constrain the support of the signal x̄, and consequently can guarantee sparse recovery of x̄ under even weaker conditions.

The remainder of this paper is organized as follows. In Section II we review basic concepts from convex analysis, graph theory, and sparse recovery. In Section III we formulate the sparse recovery problem for incidence matrices, derive conditions for sparse and support-dependent recovery, and propose an efficient algorithm to solve the problem. In Section IV we present numerical experiments that illustrate our theoretical results, and in Section V we provide some conclusions.

II. PRELIMINARIES

A. Notation

Given x ∈ R^n, we let ||x||_p for p ≥ 1 denote the l_p-norm, and we denote the function that counts the number of nonzero entries in x by ||x||_0. Although the latter function is not a norm, we follow the common practice of calling it the l_0-norm. For p ≥ 1, we define the unit l_p-sphere in R^n as S^n_p := {x ∈ R^n : ||x||_p = 1}. Similarly, the unit l_p-ball in R^n is defined as B^n_p := {x ∈ R^n : ||x||_p ≤ 1}. The nullspace of a matrix Φ will be denoted by Null(Φ). As usual, |S| denotes the cardinality of a set S. For a vector x ∈ R^n and a nonempty index set S ⊆ {1, ..., n}, we denote the subvector of x that corresponds to S by x_S, and the associated mapping by Γ_S : R^n → R^{|S|}, i.e., x_S := Γ_S(x). The complement of S in {1, ..., n} is denoted by S^c.

B. Convex analysis

Critical to our analysis will be the notions of extreme points of a convex set and quasi-convex functions.

Definition 1: An extreme point of a convex set C ⊆ R^n is a point in C that cannot be written as a convex combination of two different points from C. The set of all extreme points of a convex set C ⊆ R^n is denoted by Ext(C).

Definition 2: Let C ⊆ R^n be a convex set. A function f : C → R is called quasi-convex if and only if for all x, y ∈ C and λ ∈ [0, 1] it holds that

f(λx + (1 − λ)y) ≤ max{f(x), f(y)}.    (3)

It is easy to show that a function f : C → R is quasi-convex if and only if every sublevel set S_α := {x ∈ C | f(x) ≤ α} is a convex set. Every convex function is quasi-convex. In particular, the l_p-norm functions for p ≥ 1 are quasi-convex. The next result on quasi-convex functions [19] is included for completeness.

Proposition 1: Let C ⊆ R^n be a compact convex set and f : C → R be a continuous quasi-convex function. Then f attains its maximum value at an extreme point of C.

C. Graph theory

A directed graph with vertex set V = {v_1, ..., v_m} and edge set E = {e_1, ..., e_n} ⊆ V × V will be denoted by G(V, E). When the edge and vertex sets are irrelevant to the discussion, we will drop them from the notation and denote the graph by G. Sometimes the vertex set and the edge set of G will be denoted by V(G) and E(G), respectively. A graph G is called simple if there is at most one edge connecting any two vertices and no edge starts and ends at the same vertex, i.e., G has no self-loops. Henceforth, G will always denote a simple directed graph with a finite number of edges and vertices.
Although we focus on directed graphs, our analysis only requires the undirected variants of the definitions of paths, cycles, and connectivity. We say that two vertices {a, b} ⊆ V are adjacent if either (a, b) ∈ E or (b, a) ∈ E. A sequence (u_1, ..., u_r) of distinct vertices of a graph G such that u_i and u_{i+1} are adjacent for all i ∈ {1, ..., r−1} is called a path. A connected component Ĝ of G is a subgraph of G in which any two distinct vertices are connected to each other by a path and no edge exists between V(Ĝ) and V(G) \ V(Ĝ). We will say that the graph G is connected (for directed graphs this is often referred to as weakly connected) if G has a unique connected component. A sequence (u_1, ..., u_r, u_1) of adjacent vertices of a graph G is called a (simple) cycle if r ≥ 3 and u_i ≠ u_j whenever i ≠ j; the length of such a cycle is r. The length of the shortest simple cycle of a graph G is called the girth of G. For an acyclic graph (i.e., a graph with no cycles), the girth is defined to be +∞. Since acyclic graphs are not interesting for our purposes, we will assume that the girth is finite, that is, the graph has at least one simple cycle.

Associated with a directed graph G = G(V, E), we can define the incidence matrix A = A(G) ∈ R^{m×n} as

a_ij = {  1,  if v_i is the initial vertex of edge e_j,
         −1,  if v_i is the terminal vertex of edge e_j,    (4)
          0,  otherwise.

For a nonempty index set S ⊆ {1, ..., n}, the subgraph of G consisting of the edges {e_j | j ∈ S} is denoted by G_S. The incidence matrix of G_S is denoted by A_S.

Let C = (u_1, ..., u_r, u_{r+1} = u_1) be a simple cycle of a simple directed graph G. Then C can be associated with a vector w(C) ∈ R^n, where each coordinate w_j of w(C) is defined as

w_j = {  1,  if e_j = (u_i, u_{i+1}) for some i ∈ {1, ..., r},
        −1,  if e_j = (u_{i+1}, u_i) for some i ∈ {1, ..., r},    (5)
         0,  otherwise.

We now define the cycle space of G (sometimes called the flow space [20]) as the subspace spanned by {w(C) | C is a simple cycle of G}.

Remark 1: The cycle space of G is exactly the nullspace of the incidence matrix A(G) [20].

Remark 2: The dimension of the nullspace of A(G) is n − m + k, where k is the number of connected components of G. Hence, the rank of A(G) is m − k [20].
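As a concrete illustration of (4), (5), and Remarks 1-2 (added here; the small example graph is hypothetical and does not appear in the paper), the following sketch builds A(G) from a directed edge list, builds w(C) for a cycle given as a vertex sequence, and checks that A w(C) = 0 and rank(A) = m − k:

```python
import numpy as np

def incidence_matrix(num_vertices, edges):
    """Build A(G) as in (4); edges are (initial, terminal) pairs of 1-based vertex labels."""
    A = np.zeros((num_vertices, len(edges)))
    for j, (u, v) in enumerate(edges):
        A[u - 1, j] = 1.0        # row of the initial vertex of e_j
        A[v - 1, j] = -1.0       # row of the terminal vertex of e_j
    return A

def cycle_vector(edges, cycle):
    """Build w(C) as in (5); `cycle` is the vertex sequence (u_1, ..., u_r), closed implicitly."""
    w = np.zeros(len(edges))
    steps = set(zip(cycle, cycle[1:] + cycle[:1]))    # consecutive vertex pairs around C
    for j, (u, v) in enumerate(edges):
        if (u, v) in steps:
            w[j] = 1.0           # e_j traversed along its orientation
        elif (v, u) in steps:
            w[j] = -1.0          # e_j traversed against its orientation
    return w

# Hypothetical example: a directed 4-cycle with one chord.
edges = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]
A = incidence_matrix(4, edges)
w = cycle_vector(edges, [1, 2, 3, 4])
assert np.allclose(A @ w, 0)                          # Remark 1: w(C) lies in Null(A)
assert np.linalg.matrix_rank(A) == 4 - 1              # Remark 2: rank(A) = m - k with k = 1
```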

Fig. 1: A connected directed simple graph.

Example 1: The incidence matrix A of the graph in Fig. 1 is constructed via (4), and the simple cycles of the graph,

C_1 = (1, 2, 3, 4, 1),  C_2 = (2, 3, 5, 6, 7, 8, 2),  C_3 = (1, 2, 8, 7, 6, 5, 3, 4, 1),

are associated via (5) with the vectors w_1, w_2, and w_3, respectively. Note that w_3 = w_1 − w_2, so that Null(A(G)) must be of dimension 2. The addition/subtraction operations on the w_j's correspond to addition/subtraction operations on edges.

D. Sparse Recovery

It is known [21] that every vector x̄ ∈ R^n with supp(x̄) ⊆ S is the unique solution to (2) if and only if

||η_S||_1 < ||η_{S^c}||_1  for all η ∈ Null(Φ) \ {0}.    (6)

This leads us to the definition of the Nullspace Property.

Definition 3 (Nullspace Property, NUP): A matrix Φ ∈ R^{m×n} is said to satisfy the nullspace property (NUP) of order s if, for any η ∈ Null(Φ) \ {0} and any nonempty index set S ⊆ {1, ..., n} with |S| ≤ s, we have ||η_S||_1 < ||η_{S^c}||_1.

Another needed concept is the spark of a matrix.

Definition 4 (Spark): The spark of a matrix Φ ∈ R^{m×n} is the smallest number of linearly dependent columns in Φ. Formally, we have

spark(Φ) := min_{η ≠ 0} ||η||_0  s.t.  Φη = 0.

We note that the rank of a matrix may be used to bound its spark. Specifically, it holds that

spark(Φ) ≤ rank(Φ) + 1.    (7)

The spark may be used to provide a necessary and sufficient condition for the uniqueness of sparse solutions [21].

Lemma 1: For any matrix Φ ∈ R^{m×n}, the s-sparse signal x̄ ∈ R^n is the unique solution to the optimization problem

min_x ||x||_0  s.t.  Φx = Φx̄    (8)

if and only if spark(Φ) > 2s.

III. MAIN RESULTS

It is not difficult to prove that the spark of an incidence matrix A is equal to the girth of the underlying graph G. By combining this fact with Lemma 1, we obtain the following.

Proposition 2: Let A = A(G) ∈ R^{m×n} be the incidence matrix of a simple, connected graph G with girth g. Then, for every s-sparse vector x̄ ∈ R^n, x̄ is the unique solution to

min_x ||x||_0  s.t.  Ax = Ax̄    (9)

if and only if s < g/2.
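Proposition 2 rests on the fact stated above that spark(A) equals the girth of G. For small graphs this is easy to sanity-check numerically; the sketch below is an added illustration, not from the paper, and uses a hypothetical 4-vertex graph whose shortest cycle has three edges:

```python
from itertools import combinations
import numpy as np

def spark(Phi, tol=1e-9):
    """Smallest number of linearly dependent columns of Phi (brute force, exponential in n)."""
    _, n = Phi.shape
    for size in range(1, n + 1):
        for cols in combinations(range(n), size):
            if np.linalg.matrix_rank(Phi[:, list(cols)], tol=tol) < size:
                return size                     # found a dependent column subset of this size
    return n + 1                                # every column subset is independent

# Incidence matrix of a hypothetical directed 4-cycle with one chord:
# edges e1=(1,2), e2=(2,3), e3=(3,4), e4=(4,1), e5=(1,3), so the girth is g = 3.
A = np.array([[ 1,  0,  0, -1,  1],
              [-1,  1,  0,  0,  0],
              [ 0, -1,  1,  0, -1],
              [ 0,  0, -1,  1,  0]], dtype=float)
assert spark(A) == 3                            # spark(A) equals the girth, as used above
```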

Even though Proposition 2 is useful as a uniqueness result, it does not come in handy when one would like to recover the original signal x̄ from the measurements Ax̄. For this purpose, an l1-relaxation of the optimization problem is preferable. Hence, one needs a theorem akin to Proposition 2 that addresses the solutions of the optimization problem

min_x ||x||_1  s.t.  Ax = Ax̄.    (10)

This leads us to study the NUP for incidence matrices. Specifically, in this section we answer the following questions about the incidence matrix A ∈ R^{m×n} of a simple connected graph G with n edges and m vertices, and an s-sparse vector x̄ ∈ R^n:

1) What are necessary and sufficient conditions for A to satisfy the NUP of order s? Such conditions would guarantee sparse recovery, i.e., that any s-sparse signal x̄ can be recovered as the unique solution to (10).
2) If traditional sparse recovery is not possible, can we characterize subclasses of sparse signals that are recoverable via (10) in terms of the support of the signal and the topology of the graph G(V, E)?
3) Can we use the support of the measurement Ax̄ and the structure of A to derive constraints on the support of x̄ that allow us to modify (10) and successfully recover the sparse signal x̄?

A. Topological Characterization of the Nullspace Property for the Class of Incidence Matrices

Before addressing the questions above in detail, we would like to build a simple framework to study them. We will start with a reformulation of the NUP.

Lemma 2: A matrix Φ ∈ R^{m×n} satisfies the NUP of order s if and only if

max_{|S| ≤ s}  max_{η ∈ Null(Φ) ∩ B^n_1}  ||η_S||_1 < 1/2.    (11)

Proof: For η ∈ Null(Φ) \ {0}, using ||η||_1 = ||η_S||_1 + ||η_{S^c}||_1, we get ||η_S||_1 < ||η_{S^c}||_1 if and only if ||η_S||_1 / ||η||_1 < 1/2. Using this inequality and Definition 3, it follows that Φ satisfies the NUP of order s if and only if

max_{|S| ≤ s}  sup_{η ∈ Null(Φ) \ {0}}  ||η_S||_1 / ||η||_1 < 1/2.    (12)

Since the objective function in (12) is independent of the scale of η, the condition in (12) is equivalent to

max_{|S| ≤ s}  max_{η ∈ Null(Φ) ∩ S^n_1}  ||η_S||_1 < 1/2.    (13)

Since we can replace S^n_1 with B^n_1 (this does not change the result of the maximization problem), we find that (13) is equivalent to (11), as claimed.

Remark 3: The value of the left-hand side of (11) is called the nullspace constant in the literature [17]. The calculation of the nullspace constant for an arbitrary matrix Φ and sparsity s is known to be NP-hard [17].

The reformulation of the NUP in Lemma 2 has certain benefits. For a fixed index set S ⊆ {1, ..., n}, it draws our attention to the optimization problem

max_{η ∈ Null(Φ) ∩ B^n_1}  ||η_S||_1,    (14)

which is the maximization of the continuous convex function ||Γ_S(·)||_1 over the compact convex set Null(Φ) ∩ B^n_1. Thus, it follows from Proposition 1 that the maximum is attained at an extreme point of Null(Φ) ∩ B^n_1. This leads us to want to understand the extreme points of this set, which can be a computationally involved task for arbitrary matrices Φ. Nonetheless, one can still set a bound on the sparsity of the extreme points of Null(Φ) ∩ B^n_1.

Lemma 3: If Φ ∈ R^{m×n} is an m × n matrix of rank r, then the extreme points of Null(Φ) ∩ B^n_1 are at most (r + 1)-sparse.

Proof: If dim(Null(Φ)) = n − r ≤ 1, then the result of the lemma holds trivially since r + 1 ≥ n. The unit l1-sphere in R^n, namely S^n_1, may be written as the union of (n−1)-dimensional simplices. Hence, if dim(Null(Φ)) = n − r ≥ 1, each extreme point is contained in an (n−1)-dimensional simplex. In particular, if dim(Null(Φ)) = n − r > 1, we argue that no extreme point of Null(Φ) ∩ B^n_1 lies in the interior of these (n−1)-dimensional simplices. This is because in this case the intersection of Null(Φ) and the interior of the (n−1)-dimensional simplex is either empty, or a non-singleton open convex subset of R^n, in which every point is a convex combination of two distinct points. Hence, no extreme point can live there.

So the extreme points must lie in the boundary of the (n−1)-dimensional simplices, which consists of (n−2)-dimensional simplices. Moreover, these (n−2)-dimensional simplices are where one coordinate becomes zero. Hence, at least one coordinate of an extreme point living in an (n−2)-dimensional simplex is zero. As long as the sum of the dimension of the simplex containing the extreme point and the dimension of Null(Φ) is strictly larger than n, we can repeat the argument in the previous paragraph. This argument stops when the dimension of the simplex containing the extreme point is r. Thus, at least n − r − 1 coordinates of the extreme point have to be zero, so that each extreme point is at most (r + 1)-sparse.

Although Lemma 3 is analogous to (7), it is stronger in the sense that it bounds the sparsity of all the extreme points of the convex set Null(Φ) ∩ B^n_1, not just the sparsest vectors in Null(Φ), as is implied by (7). Armed with Lemma 3, it turns out that for incidence matrices these extreme points have a nice characterization in terms of the properties of the underlying graph.

Proposition 3: Let A ∈ R^{m×n} be the incidence matrix of a simple connected graph G that has at least one simple cycle, and let W denote the set of normalized simple cycles of G, i.e.,

W := { w(C) / ||w(C)||_1 : C is a simple cycle of G }.    (15)

Then we have Ext(Null(A) ∩ B^n_1) ⊆ W.
Proof: Let z be an extreme point of Null(A) ∩ B^n_1 with support S. We necessarily have ||z||_1 = 1. Suppose that G_S has m_1 vertices and k_1 connected components. Note that z_S ∈ Null(A_S) ∩ B^{|S|}_1 has only nonzero entries. We claim that z_S is an extreme point of Null(A_S) ∩ B^{|S|}_1. For a proof by contradiction, suppose that z_S could be written as a convex combination of two distinct vectors in Null(A_S) ∩ B^{|S|}_1. In this case, one could pad those vectors with zeros at the coordinates in S^c to get two distinct vectors in Null(A) ∩ B^n_1 whose convex combination would give z. Since this would contradict the fact that z is an extreme point, we must conclude that z_S is an extreme point of Null(A_S) ∩ B^{|S|}_1.

Using the fact that z_S is an extreme point of Null(A_S) ∩ B^{|S|}_1, Lemma 3, and Remark 2, it follows that

|S| ≤ rank(A_S) + 1 = m_1 − k_1 + 1.    (16)

Combining (16) with the rank-nullity theorem yields dim(Null(A_S)) = |S| − m_1 + k_1 ≤ 1. This may be combined with 0 ≠ z_S ∈ Null(A_S) to conclude that dim(Null(A_S)) = 1; thus Null(A_S) is spanned by z_S. It now follows from Remark 1 that z_S corresponds to the unique simple cycle in G_S. In addition, since z_S has no zero entries, the corresponding simple cycle includes every edge of G_S. It follows that z corresponds to a simple cycle in G, which combined with ||z||_1 = 1 proves z ∈ W.

Next we show that Proposition 3 provides a way to study the maximization problem (14).

Theorem 1: Let A ∈ R^{m×n} be the incidence matrix of a simple, connected graph G, and let W denote the set of normalized simple cycles of G as in (15). Let S ⊆ {1, ..., n} be a nonempty index set. Then

max_{η ∈ Null(A) ∩ B^n_1}  ||η_S||_1 = max_{z ∈ W}  ||z_S||_1.    (17)

Proof: Since W ⊆ Null(A) ∩ B^n_1, we obviously have

max_{η ∈ Null(A) ∩ B^n_1}  ||η_S||_1 ≥ max_{z ∈ W}  ||z_S||_1.

For the converse inequality we argue as follows. Since ||Γ_S(·)||_1 is a continuous convex function (thus also quasi-convex), and Null(A) ∩ B^n_1 is a compact convex set, the maximum on the left-hand side of (17) will be attained at an extreme point of Null(A) ∩ B^n_1 (see Proposition 1). That is,

max_{η ∈ Null(A) ∩ B^n_1}  ||η_S||_1 = max_{ν ∈ Ext(Null(A) ∩ B^n_1)}  ||ν_S||_1.    (18)

But Ext(Null(A) ∩ B^n_1) ⊆ W by Proposition 3. Hence,

max_{ν ∈ Ext(Null(A) ∩ B^n_1)}  ||ν_S||_1 ≤ max_{z ∈ W}  ||z_S||_1.

Combining this with (18) we get the converse inequality, and hence the result.

Theorem 1 builds a connection between the algebraic condition NUP and a topological property of the graph, namely its simple cycles. This connection is what we will primarily exploit in the rest of the paper. Let us make a simple but important observation.

Lemma 4: Let G and W be as in Theorem 1. Then, for any index set S and z ∈ W, it holds that

||z_S||_1 = |S ∩ supp(z)| / ||z||_0.

Proof: For any simple cycle C, the entries of w(C) are in the set {0, 1, −1}. Hence, the entries of any z ∈ W are from the set {0, 1/||z||_0, −1/||z||_0}, from which the result follows.

B. Polynomial Time Guarantees for Sparse Recovery

An answer to the first question posed at the beginning of Section III is given by Theorem 2.

Theorem 2: Let A ∈ R^{m×n} be the incidence matrix of a simple, connected graph G with girth g. Then, every s-sparse vector x̄ ∈ R^n is the unique solution to

min_x ||x||_1  s.t.  Ax = Ax̄

if and only if s < g/2.

Proof: From Lemma 2 and Theorem 1 it follows that the NUP of order s is satisfied if and only if

max_{|S| ≤ s}  max_{z ∈ W}  ||z_S||_1 < 1/2.    (19)

From Lemma 4, the maximum on the left of (19) is equal to

max_{|S| ≤ s}  max_{z ∈ W}  |S ∩ supp(z)| / ||z||_0.    (20)

If s ≥ g, then (20) is equal to 1. If s < g, the maximum is attained at the smallest simple cycle when S picks edges only from this cycle, and the maximum is equal to s/g. Hence,

max_{|S| ≤ s}  max_{z ∈ W}  ||z_S||_1 = min{s/g, 1}.    (21)

The desired result now follows from (19) and (21).

Remark 4: As was mentioned in Remark 3, calculating the nullspace constant is NP-hard in general. However, there are algorithms that can calculate the girth of a graph exactly in O(mn), or sufficiently accurately for our purposes in O(m^2) (see [22]). Therefore, Theorem 2 reveals that the nullspace constant can be calculated, and hence the NUP can be verified, for graph incidence matrices in polynomial time.
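Remark 4's polynomial-time claim is easy to realize. The sketch below, an added illustration rather than the paper's code, computes the girth of an undirected simple graph by breadth-first search from every vertex (O(mn) overall) and then applies the Theorem 2 test s < g/2 to a hypothetical example graph:

```python
from collections import deque

def girth(num_vertices, edges):
    """Exact girth of a simple undirected graph via BFS from every vertex (O(mn))."""
    adj = {v: set() for v in range(1, num_vertices + 1)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    best = float("inf")
    for src in adj:
        dist, parent = {src: 0}, {src: None}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:                    # tree edge: extend the BFS tree
                    dist[w], parent[w] = dist[u] + 1, u
                    queue.append(w)
                elif w != parent[u]:                 # non-tree edge: it closes a cycle
                    best = min(best, dist[u] + dist[w] + 1)
    return best

def order_s_recoverable(s, g):
    """Theorem 2: every s-sparse signal is the unique solution of (10) iff s < g/2."""
    return s < g / 2

# Hypothetical example: a 4-cycle with a chord has girth 3, so only s = 1 is guaranteed.
edges = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]
g = girth(4, edges)
assert g == 3 and order_s_recoverable(1, g) and not order_s_recoverable(2, g)
```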
C. Guarantees for Support-dependent Sparse Recovery

The sparse recovery result given by Theorem 2 depends on the cardinality s of the support of the unknown signal. For a graph with small girth (e.g., g = 3), this result establishes sparse recovery guarantees only for signals with low sparsity levels (specifically, s < g/2). It is natural, therefore, to ask the following question: Given a graph with relatively small girth, can we identify a class of signals with sparsity s ≥ g/2 that can be recovered? This leads us to the study of support-dependent sparse recovery, where we not only focus on the cardinality of the support of the unknown signal x̄, but also take the location of the support into account. More precisely, we have the following result.

Theorem 3: Let A ∈ R^{m×n} be the incidence matrix of a simple, connected graph G, let W denote the set of normalized simple cycles of G as in (15), and let S ⊆ {1, ..., n}. Then, every vector x̄ ∈ R^n with supp(x̄) ⊆ S is the unique solution to the optimization problem

min_x ||x||_1  s.t.  Ax = Ax̄    (22)

if and only if

max_{z ∈ W}  |S ∩ supp(z)| / ||z||_0 < 1/2.    (23)

Proof: From the arguments in the proof of Lemma 2, we know that (6) is equivalent to

max_{η ∈ Null(A) ∩ B^n_1}  ||η_S||_1 < 1/2,

which with Theorem 1 and Lemma 4 gives the result.

Remark 5: In comparison to Theorem 2, Theorem 3 provides a deeper insight into the vectors x̄ that can be recovered from the observations Ax̄ via l1-minimization. Specifically, any vector whose support within each simple cycle has size strictly less than half the size of that cycle can be successfully recovered. As an example, consider the graph in Fig. 1, which has girth g = 4. It follows from Theorem 2 that any 1-sparse signal can be recovered. However, in reality, some signals that are supported on more than one edge can also be recovered. For instance, Theorem 3 guarantees that a signal x̄ supported on edges {2, 7} or {4, 6, 8} can be recovered as the unique solution to the l1 problem (22), since for each of the three simple cycles its intersection with the support of the unknown vector x̄ contains fewer than half of the edges of that cycle. This reveals that different sparsity patterns of the same sparsity level can have different recovery performance.

Remark 6: To use Theorem 3 as a way of providing a support-dependent recovery guarantee, one needs to compute the intersection of the support of x̄ with all simple cycles in the graph. A number of algorithms with a polynomial time complexity bound for cycle enumeration are known [23].
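The support-dependent test (23) can be checked directly once the simple cycles of G are enumerated, as noted in Remark 6. The sketch below is an added illustration (not the authors' code); it assumes networkx version 3.1 or later, where simple_cycles also accepts undirected graphs, and it uses the same hypothetical 4-vertex example graph as before with 1-based edge indices:

```python
import networkx as nx

def support_recoverable(num_vertices, edges, support):
    """Theorem 3: every signal supported on `support` is the unique solution of (22)
    iff the support meets every simple cycle in strictly fewer than half of its edges."""
    G = nx.Graph()
    G.add_nodes_from(range(1, num_vertices + 1))
    for j, (u, v) in enumerate(edges, start=1):      # remember the edge index j
        G.add_edge(u, v, index=j)
    S = set(support)
    for cycle in nx.simple_cycles(G):                # one vertex sequence per simple cycle
        cyc_edges = {G[u][v]["index"]
                     for u, v in zip(cycle, cycle[1:] + cycle[:1])}
        if 2 * len(S & cyc_edges) >= len(cyc_edges):
            return False                             # support covers >= half of this cycle
    return True

# Edges e1=(1,2), e2=(2,3), e3=(3,4), e4=(4,1), e5=(1,3) as in the earlier sketches.
edges = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]
print(support_recoverable(4, edges, {1}))            # True
print(support_recoverable(4, edges, {2, 3}))         # False: half of the 4-cycle {e1,...,e4}
```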

D. Using Measurement Sparsity to Aid Recovery

When there is additional information about the support of the unknown signal, Theorem 3 gives a necessary and sufficient condition for exact recovery. In practice, this information is usually missing. However, the special structure of the incidence matrix and its connection to the graph can help us circumvent this difficulty. Notice that the columns of any incidence matrix are always 2-sparse, which means that the measurement y := Ax̄ will be 2s-sparse for any s-sparse signal x̄. Therefore, one can seek to obtain a superset of supp(x̄) by observing supp(y), i.e., the vertices with nonzero measurements. Typically, this observation reduces the size of the problem, and gives rise to Algorithm 1.

Algorithm 1 Recovering the unknown signal by using sparse vertex measurements.
1: INPUT: y ∈ R^m, A ∈ R^{m×n}, G(V, E).
2: Set x̂ = 0.
3: Define T := supp(y).
4: Define S := {j | e_j = (v_i, v_k) ∈ E and v_i, v_k ∈ T}, the indices of the edges of G connecting vertices with nonzero measurements.
5: Construct G_S, the subgraph of G consisting of the edges indexed by S.
6: Construct the incidence matrix A_S of the subgraph G_S.
7: Solve the l1-minimization problem
   x* = arg min_x ||x||_1  s.t.  y_T = A_S x.
8: Set x̂_S = x*.
9: return x̂ ∈ R^n.

Remark 7: There are cases where Algorithm 1 may fail. For instance, when the unknown nonzero signal x̄ is in the nullspace of one of the rows of A, say a_k, and supp(a_k) ∩ supp(x̄) ≠ ∅ (i.e., when the nonzero values of the signal x̄ cancel each other at a vertex k), Algorithm 1 would simply ignore the corresponding vertex and, hence, the edges connected to it. This leads to a wrong subgraph G_S and possibly an incorrect outcome x̂. Fortunately, this case happens with zero probability when the signal has random support and random values.

Corollary 1: Let A ∈ R^{m×n} be the incidence matrix of a simple, connected graph G. Given a random signal x̄, let y := Ax̄ and let G_S be the subgraph associated with y in Algorithm 1. Then, Algorithm 1 will almost surely recover the signal x̄ if and only if supp(x̄) picks strictly less than half of the edges from each simple cycle of G_S.

The proof of Corollary 1 follows from Theorem 3 and the fact that x̄ is assumed to be randomly generated. In comparison to l1-minimization on the original graph, Algorithm 1 may have improved support-dependent recovery performance because the formation of the subgraph may eliminate cycles; this is especially true for large-scale networks. For concreteness, let us give a simple example.

Example 2: Consider the graph depicted in Fig. 1. Let x̄ be a random signal that is supported on the edges {3, 4, 5, 6, 7, 8, 10}, so that (with probability one) there are nonzero measurements on nodes {1, 3, 4, 5, 6, 7, 8, 9}. Then, x̄ cannot be recovered using the incidence matrix of the original graph. However, the subgraph G_S is acyclic, as shown in Fig. 2, which allows exact recovery via Algorithm 1.

Fig. 2: Acyclic subgraph G_S induced by an unknown signal with support S = {3, 4, 5, 6, 7, 8, 10}.
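To make Algorithm 1 concrete, the sketch below is an illustrative implementation under stated assumptions, not the authors' code: step 7 is solved with scipy.optimize.linprog (the paper's experiments use CVX), vertices are labeled 1, ..., m, and edges are given as (initial, terminal) pairs matching the columns of A.

```python
import numpy as np
from scipy.optimize import linprog

def algorithm1(y, A, edges, tol=1e-12):
    """Algorithm 1: restrict the l1 problem to the subgraph G_S induced by the
    vertices with nonzero measurements, then embed the solution into R^n."""
    n = A.shape[1]
    x_hat = np.zeros(n)                                       # step 2
    T = np.flatnonzero(np.abs(y) > tol)                       # step 3: T = supp(y) (0-based rows)
    T_set = set(T + 1)                                        # same vertices, 1-based labels
    S = [j for j, (u, v) in enumerate(edges)                  # step 4: edges with both ends in T
         if u in T_set and v in T_set]
    if not S:
        return x_hat
    A_S = A[np.ix_(T, S)]                                     # steps 5-6: incidence matrix of G_S
    k = len(S)
    # Step 7: min ||x||_1 s.t. y_T = A_S x, as an LP with x = xp - xm, xp, xm >= 0.
    res = linprog(np.ones(2 * k), A_eq=np.hstack([A_S, -A_S]), b_eq=y[T],
                  bounds=[(0, None)] * (2 * k), method="highs")
    if res.success:
        x_hat[S] = res.x[:k] - res.x[k:]                      # step 8: embed the solution
    return x_hat                                              # step 9
```

In line with Corollary 1, whenever the induced subgraph G_S is acyclic (as in Example 2), the restricted l1 problem recovers x̄ up to solver tolerance, even if l1-minimization on the full graph fails.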
IV. SIMULATIONS

In this section we provide numerical simulation results on the recovery performance associated with incidence matrices. In all experiments, the signal x̄ has random support, with each nonzero entry drawn i.i.d. from the standard normal distribution. We use the CVX software package [24] to solve the optimization problems. A vector is declared to be recovered if the 2-norm error is less than or equal to 10^-5.

Experiment 1: Fig. 4 shows the probability of exact recovery of signals via l1-relaxation over a sequence of graphs containing two cycles (see Fig. 3). Each graph has a cycle of length 3 and a larger one of varying length 3, 5, 7, 9, and 11, respectively. For each sparsity level, 1000 trials were performed. According to Theorem 2, for all graphs we can recover 1-sparse signals, since the girth is 3 for each graph in the sequence. As the sparsity level is increased, we expect the probability of exact recovery to be higher for graphs with larger cycles, because it becomes less likely that the support of the random signal will consist of more than half of the edges of one of the simple cycles in the graph. Note that this agrees with our observation in Fig. 4.

Fig. 3: Graph used in Experiment 1, which consists of a cycle of length 3 and a cycle of varying length l ∈ {3, 5, 7, 9, 11}.

Experiment 2: We now evaluate the performance of Algorithm 1 against the l1-minimization method in (10) on two graphs with 20 nodes (see Fig. 5). The first graph, G_B, is a simple cycle connecting node 1 to node 20 in order (blue edges in Fig. 5). The second graph, G_BR, consists of both blue and red edges. The red edges connect each node to its third neighbor, i.e., (1, 4), (2, 5), ..., (17, 20), ..., (20, 3). Fig. 6 and Fig. 7 show the performance of both algorithms on G_B and G_BR, respectively. For each sparsity level, the experiments are repeated 1000 times to compute recovery probabilities. For G_B, Algorithm 1 outperforms l1-minimization since the reduced graph in Algorithm 1 will be acyclic in some cases. For G_BR, Algorithm 1 will eliminate small cycles when forming the subgraph, leading to a higher recovery probability for fixed s due to a larger girth.

Fig. 4: Probability of exact recovery as a function of the sparsity level for the graphs of various loop sizes in Fig. 3.
Fig. 5: Graphs with 20 nodes for Experiment 2. G_B consists of blue edges only and G_BR consists of blue and red edges.
Fig. 6: Probability of exact recovery as a function of the sparsity level for Algorithm 1 and l1-minimization on G_B.
Fig. 7: Probability of exact recovery as a function of the sparsity level for Algorithm 1 and l1-minimization on G_BR.

V. CONCLUSION

In this paper we studied sparse recovery for the class of graph incidence matrices. For such matrices we provided a characterization of the NUP in terms of the topological properties of the underlying graph. This characterization allows one to verify sparse recovery guarantees in polynomial time for the class of incidence matrices. Moreover, we showed that support-dependent recovery performance can also be analyzed in terms of the simple cycles of the graph. Finally, by exploiting the structure of incidence matrices and the sparsity of the measurements, we proposed an efficient algorithm for recovery and evaluated its performance numerically.

REFERENCES

[1] D. L. Donoho, Compressed sensing, IEEE Transactions on Information Theory, vol. 52, no. 4, 2006.
[2] S. Mallat, A Wavelet Tour of Signal Processing: The Sparse Way. Academic Press, 2008.
[3] E. J. Candès and M. B. Wakin, An introduction to compressive sampling, IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 21-30, 2008.
[4] E. Elhamifar and R. Vidal, Sparse subspace clustering, in IEEE Conference on Computer Vision and Pattern Recognition, 2009.
[5] J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma, Robust face recognition via sparse representation, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 2, 2009.
[6] M. Lustig, D. Donoho, and J. Pauly, Rapid MR imaging with compressed sensing and randomly under-sampled 3DFT trajectories, in Proc. 14th Ann. Meeting ISMRM, 2006.
[7] M. Lustig, D. Donoho, and J. M. Pauly, Sparse MRI: The application of compressed sensing for rapid MR imaging, Magnetic Resonance in Medicine, vol. 58, no. 6, 2007.

[8] E. J. Candès, J. Romberg, and T. Tao, Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information, IEEE Transactions on Information Theory, vol. 52, no. 2, 2006.
[9] R. Liégeois, B. Mishra, M. Zorzi, and R. Sepulchre, Sparse plus low-rank autoregressive identification in neuroimaging time series, in IEEE Conference on Decision and Control, 2015.
[10] M. Coates, Y. Pointurier, and M. Rabbat, Compressed network monitoring, in IEEE/SP Workshop on Statistical Signal Processing, 2007.
[11] J. Haupt, W. U. Bajwa, M. Rabbat, and R. Nowak, Compressed sensing for networked data, IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 92-101, 2008.
[12] W. Xu, E. Mallada, and A. Tang, Compressive sensing over graphs, in IEEE INFOCOM, 2011.
[13] M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman and Co., San Francisco, 1979.
[14] A. Cohen, W. Dahmen, and R. DeVore, Compressed sensing and best k-term approximation, Journal of the American Mathematical Society, vol. 22, no. 1, 2009.
[15] E. J. Candès and T. Tao, Decoding by linear programming, IEEE Transactions on Information Theory, vol. 51, no. 12, 2005.
[16] R. Baraniuk, M. Davenport, R. DeVore, and M. Wakin, A simple proof of the restricted isometry property for random matrices, Constructive Approximation, vol. 28, no. 3, 2008.
[17] A. M. Tillmann and M. E. Pfetsch, The computational complexity of the restricted isometry property, the nullspace property, and related concepts in compressed sensing, IEEE Transactions on Information Theory, vol. 60, no. 2, 2014.
[18] D. L. Donoho and X. Huo, Uncertainty principles and ideal atomic decomposition, IEEE Transactions on Information Theory, vol. 47, no. 7, 2001.
[19] F. Flores-Bazán, F. Flores-Bazán, and C. Vera, Maximizing and minimizing quasiconvex functions: related properties, existence and optimality conditions via radial epiderivatives, Journal of Global Optimization, vol. 63, Sep. 2015.
[20] C. Godsil and G. F. Royle, Algebraic Graph Theory. Graduate Texts in Mathematics, Springer, New York, 2001.
[21] S. Foucart and H. Rauhut, A Mathematical Introduction to Compressive Sensing. Birkhäuser Basel, 2013.
[22] A. Itai and M. Rodeh, Finding a minimum circuit in a graph, SIAM Journal on Computing, vol. 7, no. 4, 1978.
[23] P. Mateti and N. Deo, On algorithms for enumerating all circuits of a graph, SIAM Journal on Computing, vol. 5, no. 1, 1976.
[24] M. Grant, S. Boyd, and Y. Ye, CVX: Matlab software for disciplined convex programming, 2008.


Solution Recovery via L1 minimization: What are possible and Why?

Solution Recovery via L1 minimization: What are possible and Why? Solution Recovery via L1 minimization: What are possible and Why? Yin Zhang Department of Computational and Applied Mathematics Rice University, Houston, Texas, U.S.A. Eighth US-Mexico Workshop on Optimization

More information

Inferring Rankings Using Constrained Sensing Srikanth Jagabathula and Devavrat Shah

Inferring Rankings Using Constrained Sensing Srikanth Jagabathula and Devavrat Shah 7288 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 11, NOVEMBER 2011 Inferring Rankings Using Constrained Sensing Srikanth Jagabathula Devavrat Shah Abstract We consider the problem of recovering

More information

A Characterization of Sampling Patterns for Union of Low-Rank Subspaces Retrieval Problem

A Characterization of Sampling Patterns for Union of Low-Rank Subspaces Retrieval Problem A Characterization of Sampling Patterns for Union of Low-Rank Subspaces Retrieval Problem Morteza Ashraphijuo Columbia University ashraphijuo@ee.columbia.edu Xiaodong Wang Columbia University wangx@ee.columbia.edu

More information

Structured matrix factorizations. Example: Eigenfaces

Structured matrix factorizations. Example: Eigenfaces Structured matrix factorizations Example: Eigenfaces An extremely large variety of interesting and important problems in machine learning can be formulated as: Given a matrix, find a matrix and a matrix

More information

The Strong Largeur d Arborescence

The Strong Largeur d Arborescence The Strong Largeur d Arborescence Rik Steenkamp (5887321) November 12, 2013 Master Thesis Supervisor: prof.dr. Monique Laurent Local Supervisor: prof.dr. Alexander Schrijver KdV Institute for Mathematics

More information

Compressed Sensing and Linear Codes over Real Numbers

Compressed Sensing and Linear Codes over Real Numbers Compressed Sensing and Linear Codes over Real Numbers Henry D. Pfister (joint with Fan Zhang) Texas A&M University College Station Information Theory and Applications Workshop UC San Diego January 31st,

More information

Linear Systems. Carlo Tomasi

Linear Systems. Carlo Tomasi Linear Systems Carlo Tomasi Section 1 characterizes the existence and multiplicity of the solutions of a linear system in terms of the four fundamental spaces associated with the system s matrix and of

More information

Compressed Sensing and Robust Recovery of Low Rank Matrices

Compressed Sensing and Robust Recovery of Low Rank Matrices Compressed Sensing and Robust Recovery of Low Rank Matrices M. Fazel, E. Candès, B. Recht, P. Parrilo Electrical Engineering, University of Washington Applied and Computational Mathematics Dept., Caltech

More information

Sparse representation classification and positive L1 minimization

Sparse representation classification and positive L1 minimization Sparse representation classification and positive L1 minimization Cencheng Shen Joint Work with Li Chen, Carey E. Priebe Applied Mathematics and Statistics Johns Hopkins University, August 5, 2014 Cencheng

More information

Conditions for a Unique Non-negative Solution to an Underdetermined System

Conditions for a Unique Non-negative Solution to an Underdetermined System Conditions for a Unique Non-negative Solution to an Underdetermined System Meng Wang and Ao Tang School of Electrical and Computer Engineering Cornell University Ithaca, NY 14853 Abstract This paper investigates

More information

Introduction to Compressed Sensing

Introduction to Compressed Sensing Introduction to Compressed Sensing Alejandro Parada, Gonzalo Arce University of Delaware August 25, 2016 Motivation: Classical Sampling 1 Motivation: Classical Sampling Issues Some applications Radar Spectral

More information

Compressive Sensing with Random Matrices

Compressive Sensing with Random Matrices Compressive Sensing with Random Matrices Lucas Connell University of Georgia 9 November 017 Lucas Connell (University of Georgia) Compressive Sensing with Random Matrices 9 November 017 1 / 18 Overview

More information

A New Approximation Algorithm for the Asymmetric TSP with Triangle Inequality By Markus Bläser

A New Approximation Algorithm for the Asymmetric TSP with Triangle Inequality By Markus Bläser A New Approximation Algorithm for the Asymmetric TSP with Triangle Inequality By Markus Bläser Presented By: Chris Standish chriss@cs.tamu.edu 23 November 2005 1 Outline Problem Definition Frieze s Generic

More information

Paths and cycles in extended and decomposable digraphs

Paths and cycles in extended and decomposable digraphs Paths and cycles in extended and decomposable digraphs Jørgen Bang-Jensen Gregory Gutin Department of Mathematics and Computer Science Odense University, Denmark Abstract We consider digraphs called extended

More information

Exact Topology Identification of Large-Scale Interconnected Dynamical Systems from Compressive Observations

Exact Topology Identification of Large-Scale Interconnected Dynamical Systems from Compressive Observations Exact Topology Identification of arge-scale Interconnected Dynamical Systems from Compressive Observations Borhan M Sanandaji, Tyrone Vincent, and Michael B Wakin Abstract In this paper, we consider the

More information

Bounds for the Zero Forcing Number of Graphs with Large Girth

Bounds for the Zero Forcing Number of Graphs with Large Girth Theory and Applications of Graphs Volume 2 Issue 2 Article 1 2015 Bounds for the Zero Forcing Number of Graphs with Large Girth Randy Davila Rice University, rrd32@txstate.edu Franklin Kenter Rice University,

More information

Sparsity Models. Tong Zhang. Rutgers University. T. Zhang (Rutgers) Sparsity Models 1 / 28

Sparsity Models. Tong Zhang. Rutgers University. T. Zhang (Rutgers) Sparsity Models 1 / 28 Sparsity Models Tong Zhang Rutgers University T. Zhang (Rutgers) Sparsity Models 1 / 28 Topics Standard sparse regression model algorithms: convex relaxation and greedy algorithm sparse recovery analysis:

More information

Jensen s inequality for multivariate medians

Jensen s inequality for multivariate medians Jensen s inequality for multivariate medians Milan Merkle University of Belgrade, Serbia emerkle@etf.rs Given a probability measure µ on Borel sigma-field of R d, and a function f : R d R, the main issue

More information

The Sparsest Solution of Underdetermined Linear System by l q minimization for 0 < q 1

The Sparsest Solution of Underdetermined Linear System by l q minimization for 0 < q 1 The Sparsest Solution of Underdetermined Linear System by l q minimization for 0 < q 1 Simon Foucart Department of Mathematics Vanderbilt University Nashville, TN 3784. Ming-Jun Lai Department of Mathematics,

More information

The Information-Theoretic Requirements of Subspace Clustering with Missing Data

The Information-Theoretic Requirements of Subspace Clustering with Missing Data The Information-Theoretic Requirements of Subspace Clustering with Missing Data Daniel L. Pimentel-Alarcón Robert D. Nowak University of Wisconsin - Madison, 53706 USA PIMENTELALAR@WISC.EDU NOWAK@ECE.WISC.EDU

More information

approximation algorithms I

approximation algorithms I SUM-OF-SQUARES method and approximation algorithms I David Steurer Cornell Cargese Workshop, 201 meta-task encoded as low-degree polynomial in R x example: f(x) = i,j n w ij x i x j 2 given: functions

More information

Rank Determination for Low-Rank Data Completion

Rank Determination for Low-Rank Data Completion Journal of Machine Learning Research 18 017) 1-9 Submitted 7/17; Revised 8/17; Published 9/17 Rank Determination for Low-Rank Data Completion Morteza Ashraphijuo Columbia University New York, NY 1007,

More information

Cambridge University Press The Mathematics of Signal Processing Steven B. Damelin and Willard Miller Excerpt More information

Cambridge University Press The Mathematics of Signal Processing Steven B. Damelin and Willard Miller Excerpt More information Introduction Consider a linear system y = Φx where Φ can be taken as an m n matrix acting on Euclidean space or more generally, a linear operator on a Hilbert space. We call the vector x a signal or input,

More information

Sum-of-Squares Method, Tensor Decomposition, Dictionary Learning

Sum-of-Squares Method, Tensor Decomposition, Dictionary Learning Sum-of-Squares Method, Tensor Decomposition, Dictionary Learning David Steurer Cornell Approximation Algorithms and Hardness, Banff, August 2014 for many problems (e.g., all UG-hard ones): better guarantees

More information

Randomness-in-Structured Ensembles for Compressed Sensing of Images

Randomness-in-Structured Ensembles for Compressed Sensing of Images Randomness-in-Structured Ensembles for Compressed Sensing of Images Abdolreza Abdolhosseini Moghadam Dep. of Electrical and Computer Engineering Michigan State University Email: abdolhos@msu.edu Hayder

More information

Compressed Sensing and Related Learning Problems

Compressed Sensing and Related Learning Problems Compressed Sensing and Related Learning Problems Yingzhen Li Dept. of Mathematics, Sun Yat-sen University Advisor: Prof. Haizhang Zhang Advisor: Prof. Haizhang Zhang 1 / Overview Overview Background Compressed

More information

IEEE SIGNAL PROCESSING LETTERS, VOL. 22, NO. 9, SEPTEMBER

IEEE SIGNAL PROCESSING LETTERS, VOL. 22, NO. 9, SEPTEMBER IEEE SIGNAL PROCESSING LETTERS, VOL. 22, NO. 9, SEPTEMBER 2015 1239 Preconditioning for Underdetermined Linear Systems with Sparse Solutions Evaggelia Tsiligianni, StudentMember,IEEE, Lisimachos P. Kondi,

More information

Matrix Completion for Structured Observations

Matrix Completion for Structured Observations Matrix Completion for Structured Observations Denali Molitor Department of Mathematics University of California, Los ngeles Los ngeles, C 90095, US Email: dmolitor@math.ucla.edu Deanna Needell Department

More information