Sparse signals recovered by non-convex penalty in quasi-linear systems


Cui et al. Journal of Inequalities and Applications (2018) 2018:59

RESEARCH Open Access

Sparse signals recovered by non-convex penalty in quasi-linear systems

Angang Cui1, Haiyang Li2, Meng Wen2 and Jigen Peng1*

*Correspondence: jgpengxjtu@126.com. 1School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, China. Full list of author information is available at the end of the article.

Abstract
The goal of compressed sensing is to reconstruct a sparse signal from a number of linear measurements far smaller than the dimension of the ambient space of the signal. However, many real-life applications in physics and the biomedical sciences carry strongly nonlinear structures, and the linear model is no longer suitable. Compared with compressed sensing in the linear case, this nonlinear compressed sensing is much more difficult; it is in fact an NP-hard combinatorial problem, because of the discrete and discontinuous nature of the l_0-norm and the nonlinearity. To make sparse signal recovery tractable, in this paper we assume that the nonlinear models have a smooth quasi-linear nature, and we study a non-convex fraction function ρ_a in this quasi-linear compressed sensing setting. We propose an iterative fraction thresholding algorithm to solve the regularization problem (QP_a^λ) for all a > 0. By tuning the parameter a > 0, our algorithm can obtain promising results, which is one of its advantages over some state-of-the-art algorithms. Numerical experiments show that our method performs much better than some state-of-the-art methods.
MSC: 34A34; 78M50; 93C10

Keywords: Compressed sensing; Quasi-linear; Non-convex fraction function; Iterative thresholding algorithm

1 Introduction

In compressed sensing (see, e.g., [1, 2]), the problem of reconstructing a sparse signal from a number of linear measurements far smaller than the dimension of the ambient space of the signal can be modeled as the following l_0-minimization:

(P_0)  min_{x ∈ R^n} ||x||_0  subject to  Ax = b,  (1)

where A ∈ R^{m×n} is an m×n real matrix of full row rank with m < n, b ∈ R^m is a nonzero real m-dimensional vector, and ||x||_0 is the l_0-norm of the real vector x, which counts the number of nonzero entries in x (see, e.g., [3-5]). In general, the problem (P_0) is combinatorial and NP-hard because of the discrete and discontinuous nature of the l_0-norm.

However, many real-life applications in physics and the biomedical sciences carry strongly nonlinear structures [6], so that the linear model in problem (P_0) is no longer suitable. In this nonlinear case, we consider a map A: R^n → R^m, which is no longer necessarily

© The Author(s) 2018. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

linear, and we reconstruct a sparse vector x ∈ R^n from the measurements b ∈ R^m given by

A(x) = b.  (2)

Compared with l_0-minimization in the linear case, this nonlinear minimization is much more difficult; it is in fact also an NP-hard combinatorial problem, because of the discrete and discontinuous nature of the l_0-norm and the nonlinearity. To make sparse signal recovery tractable, in this paper we assume that the nonlinear models have a smooth quasi-linear nature. By this we mean that there exists a Lipschitz map

F: R^n → R^{m×n}  (3)

such that

A(x) = F(x)x  (4)

for all x ∈ R^n. So the l_0-minimization in the quasi-linear case can be written mathematically in the following form:

(QP_0)  min_{x ∈ R^n} ||x||_0  subject to  F(x)x = b.  (5)

In fact, the minimization (QP_0) in the quasi-linear case is also combinatorial and NP-hard [6, 7]. To overcome this difficulty, the authors in [6, 7] proposed the l_1-minimization

(QP_1)  min_{x ∈ R^n} ||x||_1  subject to  F(x)x = b  (6)

for the constrained problem and

(QP_1^λ)  min_{x ∈ R^n} { ||F(x)x − b||_2^2 + λ||x||_1 }  (7)

for the regularization problem, where ||x||_1 = Σ_{i=1}^n |x_i| is the l_1-norm of the vector x. In [6, 7], the authors showed that l_1-norm minimization can achieve exact recovery under some specific conditions. In general, however, these conditions are hard to satisfy in practice. Moreover, the regularization problem (QP_1^λ) always leads to a biased estimate by shrinking all the components of the vector toward zero simultaneously, and sometimes results in over-penalization, just as the l_1-norm does in linear compressed sensing. In pursuit of better reconstruction results, in this paper we propose the following fraction minimization:

(QP_a^λ)  min_{x ∈ R^n} { ||F(x)x − b||_2^2 + λP_a(x) },  (8)

where

P_a(x) = Σ_{i=1}^n ρ_a(x_i),  a > 0,  (9)

Figure 1 Behavior of the fraction function ρ_a(t) for various values of a > 0

and

ρ_a(t) = a|t| / (a|t| + 1)  (10)

is the fraction function, which performs outstandingly in image restoration [8], linear compressed sensing [9], and the matrix rank minimization problem [10]. Clearly, with the change of the parameter a > 0, the non-convex function P_a(x) approximately interpolates the l_0-norm:

lim_{a→+∞} ρ_a(x_i) = 0 if x_i = 0;  lim_{a→+∞} ρ_a(x_i) = 1 if x_i ≠ 0.  (11)

Figure 1 shows the behavior of the fraction function ρ_a(t) for various values of a > 0.

The rest of this paper is organized as follows. Some preliminary results that are used in this paper are given in Sect. 2. In Sect. 3, we propose an iterative fraction thresholding algorithm to solve the regularization problem (QP_a^λ) for all a > 0. In Sect. 4, we present some numerical experiments to demonstrate the effectiveness of our algorithm. The concluding remarks are presented in Sect. 5.

2 Preliminaries

In this section, we give some preliminary results that are used in this paper. Define a function of β ∈ R as

f_λ(β) = (β − γ)^2 + λρ_a(β)  (12)

and let

prox_{a,λ}(γ) ∈ arg min_{β ∈ R} f_λ(β).  (13)

Lemma 1 (see [9-11]) The operator prox_{a,λ} defined in (13) can be expressed as

prox_{a,λ}(γ) = g_{a,λ}(γ) if |γ| > t*_{a,λ};  prox_{a,λ}(γ) = 0 if |γ| ≤ t*_{a,λ},  (14)

Figure 2 Plot of the threshold function g_{a,λ} for a = 1 and λ = 0.5

Figure 3 Plot of the threshold function g_{a,λ} for a = 2 and λ = 0.5

where g_{a,λ}(γ) is defined as

g_{a,λ}(γ) = sign(γ) ( ((1 + a|γ|)/3) (1 + 2 cos(φ(γ)/3 − π/3)) − 1 ) / a,  (15)

with

φ(γ) = arccos( 27λa^2 / (4(1 + a|γ|)^3) − 1 ),

and the threshold value satisfies

t*_{a,λ} = t^1_{a,λ} if λ ≤ 1/a^2;  t*_{a,λ} = t^2_{a,λ} if λ > 1/a^2,  (16)

where

t^1_{a,λ} = λa/2,  t^2_{a,λ} = √λ − 1/(2a).  (17)

Figures 2, 3, 4, and 5 show the plots of the threshold function g_{a,λ} for a = 1, 2, 3, 5 and λ = 0.5. Figures 6 and 7 show the plots of the hard/soft threshold functions with λ = 0.5.
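Lemma 1 lends itself to a quick numerical sanity check. The sketch below (Python with NumPy; the function names are our own, not the paper's) implements the fraction function ρ_a from (10), the threshold value (16)-(17), and the closed form (15), and compares the result against a brute-force grid minimization of f_λ:

```python
import numpy as np

def rho_a(t, a):
    """Fraction function rho_a(t) = a|t| / (a|t| + 1) from (10)."""
    at = a * np.abs(t)
    return at / (at + 1.0)

def frac_threshold(gamma, lam, a):
    """prox_{a,lam}(gamma): minimizer of f(beta) = (beta - gamma)^2
    + lam * rho_a(beta), computed via (14)-(17)."""
    # threshold value t*_{a,lam} from (16)-(17)
    t_star = lam * a / 2.0 if lam <= 1.0 / a**2 else np.sqrt(lam) - 1.0 / (2.0 * a)
    if abs(gamma) <= t_star:
        return 0.0
    # closed-form g_{a,lam}(gamma) from (15)
    phi = np.arccos(27.0 * lam * a**2 / (4.0 * (1.0 + a * abs(gamma)) ** 3) - 1.0)
    g = ((1.0 + a * abs(gamma)) / 3.0
         * (1.0 + 2.0 * np.cos(phi / 3.0 - np.pi / 3.0)) - 1.0) / a
    return float(np.sign(gamma) * g)

# brute-force check: the closed form should match a fine grid search
a, lam, gamma = 1.0, 0.5, 2.0
beta = np.linspace(-4.0, 4.0, 200001)
f = (beta - gamma) ** 2 + lam * rho_a(beta, a)
print(frac_threshold(gamma, lam, a), beta[np.argmin(f)])  # should agree closely
print(frac_threshold(0.2, lam, a))  # below the threshold t* = 0.25, so 0.0
```

For λ = 0.5 and a = 1 we are in the λ ≤ 1/a^2 regime, so the threshold is t^1 = λa/2 = 0.25: inputs of magnitude at most 0.25 are set to zero, and larger inputs are shrunk by g_{a,λ}.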

Figure 4 Plot of the threshold function g_{a,λ} for a = 3 and λ = 0.5

Figure 5 Plot of the threshold function g_{a,λ} for a = 5 and λ = 0.5

Figure 6 Plot of the hard threshold function with λ = 0.5

Definition 1 The iterative thresholding operator G_{a,λ} is defined by

G_{a,λ}(x) = ( prox_{a,λ}(x_1), ..., prox_{a,λ}(x_n) ),  (18)

where prox_{a,λ} is defined in Lemma 1.

3 Thresholding representation theory and algorithm for problem (QP_a^λ)

In this section, we establish a thresholding representation theory for the problem (QP_a^λ), which underlies the algorithm to be proposed. Then an iterative fraction thresholding algorithm (IFTA) is proposed to solve the problem (QP_a^λ) for all a > 0.

Figure 7 Plot of the soft threshold function with λ = 0.5

3.1 Thresholding representation theory

For any fixed positive parameters λ > 0, μ > 0, a > 0 and x ∈ R^n, let

C_1(x) = ||F(x)x − b||_2^2 + λP_a(x)  (19)

and

C_2(x, y) = μ||F(y)x − b||_2^2 + λμP_a(x) − μ||F(y)x − F(y)y||_2^2 + ||x − y||_2^2.  (20)

It is clear that C_2(x, x) = μC_1(x) for all μ > 0.

Theorem 1 Let λ > 0 and 0 < μ < L^{−1}, where L satisfies ||F(x)||_2^2 ≤ L for all x ∈ R^n. If x* is the optimal solution of min_{x∈R^n} C_1(x), then x* is also the optimal solution of min_{x∈R^n} C_2(x, x*), that is, C_2(x*, x*) ≤ C_2(x, x*) for any x ∈ R^n.

Proof By the definition of C_2(x, y), and since μ||F(x*)x − F(x*)x*||_2^2 ≤ μL||x − x*||_2^2 ≤ ||x − x*||_2^2, we have

C_2(x, x*) = μ||F(x*)x − b||_2^2 + λμP_a(x) + ( ||x − x*||_2^2 − μ||F(x*)x − F(x*)x*||_2^2 )
  ≥ μ||F(x*)x − b||_2^2 + λμP_a(x)
  ≥ μC_1(x*) = C_2(x*, x*). □

Theorem 2 For any λ > 0, μ > 0 and the optimal solution x* of min_{x∈R^n} C_1(x), min_{x∈R^n} C_2(x, x*) is equivalent to

min_{x∈R^n} { ||x − B_μ(x*)||_2^2 + λμP_a(x) },  (21)

where B_μ(x*) = x* + μF(x*)^T (b − F(x*)x*).
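The identity C_2(x, x) = μC_1(x) noted above is easy to check numerically. The following Python sketch (NumPy only; the toy quasi-linear map F and all names are illustrative choices, not the paper's) verifies it on random data:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 8
a, lam, mu = 2.0, 0.3, 0.1
b = rng.standard_normal(m)

# a toy Lipschitz map F: R^n -> R^{m x n} (illustrative choice)
F0 = rng.standard_normal((m, n))
F1 = rng.standard_normal((m, n))
F = lambda x: F0 + 0.01 * np.tanh(x.sum()) * F1

def P_a(x):  # fraction penalty P_a(x) = sum_i a|x_i| / (a|x_i| + 1)
    return np.sum(a * np.abs(x) / (a * np.abs(x) + 1.0))

def C1(x):  # objective (19)
    return np.linalg.norm(F(x) @ x - b) ** 2 + lam * P_a(x)

def C2(x, y):  # surrogate (20), with F frozen at the point y
    return (mu * np.linalg.norm(F(y) @ x - b) ** 2 + lam * mu * P_a(x)
            - mu * np.linalg.norm(F(y) @ (x - y)) ** 2
            + np.linalg.norm(x - y) ** 2)

x = rng.standard_normal(n)
print(np.isclose(C2(x, x), mu * C1(x)))  # True: C2(x, x) = mu * C1(x)
```

The design point is that C_2 freezes the map F at y, so the troublesome dependence of F on the optimization variable disappears and the surrogate becomes separable after completing the square, which is exactly what Theorem 2 exploits.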

Proof By definition, C_2(x, x*) can be rewritten as

C_2(x, x*) = ||x − (x* + μF(x*)^T (b − F(x*)x*))||_2^2 + λμP_a(x) + μ||b||_2^2 + ||x*||_2^2 − μ||F(x*)x*||_2^2 − ||x* + μF(x*)^T (b − F(x*)x*)||_2^2
  = ||x − B_μ(x*)||_2^2 + λμP_a(x) + μ||b||_2^2 + ||x*||_2^2 − μ||F(x*)x*||_2^2 − ||B_μ(x*)||_2^2,

and the last four terms do not depend on x, which implies that min_{x∈R^n} C_2(x, x*) for any λ > 0, μ > 0 is equivalent to

min_{x∈R^n} { ||x − B_μ(x*)||_2^2 + λμP_a(x) }. □

Combining Theorem 2, Theorem 1, and Lemma 1, the thresholding representation of (QP_a^λ) can be concluded:

x* = G_{a,λμ}(B_μ(x*)),  (22)

where the operator G_{a,λμ} is defined in Definition 1 and obtained by replacing λ with λμ. With the thresholding representation (22), the IFTA for solving the regularization problem (QP_a^λ) can be naturally defined as

x^{k+1} = G_{a,λμ}(B_μ(x^k)),  k = 0, 1, 2, ...,  (23)

where B_μ(x^k) = x^k + μF(x^k)^T (b − F(x^k)x^k).

3.2 Adjusting the value of the regularization parameter λ > 0

In this subsection, the cross-validation method (see [9, 10, 12]) is adopted to automatically adjust the value of the regularization parameter λ > 0. In other words, when some prior information is known for a regularization problem, this selection is more reasonable and intelligent. Suppose that the vector x* of sparsity r is the optimal solution of the regularization problem (QP_a^λ), and, without loss of generality, set

|B_μ(x*)|_1 ≥ |B_μ(x*)|_2 ≥ ... ≥ |B_μ(x*)|_r ≥ |B_μ(x*)|_{r+1} ≥ ... ≥ |B_μ(x*)|_n ≥ 0.

Then it follows from (14) that

|B_μ(x*)|_i > t*_{a,λμ} for i ∈ {1, 2, ..., r},  |B_μ(x*)|_i ≤ t*_{a,λμ} for i ∈ {r+1, r+2, ..., n},

where t*_{a,λμ} is obtained by replacing λ with λμ in t*_{a,λ}. By t^2_{a,λμ} ≤ t*_{a,λμ} ≤ t^1_{a,λμ}, we have

|B_μ(x*)|_r ≥ t*_{a,λμ} ≥ t^2_{a,λμ} = √(λμ) − 1/(2a);  |B_μ(x*)|_{r+1} < t*_{a,λμ} ≤ t^1_{a,λμ} = λμa/2.  (24)

It follows that

2|B_μ(x*)|_{r+1} / (aμ) ≤ λ ≤ (2a|B_μ(x*)|_r + 1)^2 / (4a^2 μ).  (25)
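The update step (23) above can be sketched as follows (Python with NumPy; a sketch under our own naming, not the paper's code). In the linear special case F(x) ≡ A, the map B_μ reduces to a plain gradient step on ||Ax − b||_2^2, and a fixed point of x → B_μ(x) satisfies the normal equations:

```python
import numpy as np

def B_mu(x, F, b, mu):
    """Gradient-type step B_mu(x) = x + mu * F(x)^T (b - F(x) x) from (23)."""
    Fx = F(x)
    return x + mu * Fx.T @ (b - Fx @ x)

# Linear sanity check: with F(x) = A constant, iterating x -> B_mu(x) is
# gradient descent on ||Ax - b||^2, and a fixed point satisfies
# A^T (b - Ax) = 0.
rng = np.random.default_rng(1)
A = np.eye(5) + 0.1 * rng.standard_normal((5, 5))  # well-conditioned toy matrix
b = rng.standard_normal(5)
mu = 1.0 / np.linalg.norm(A, 2) ** 2               # step below 1/||A||_2^2
x = np.zeros(5)
for _ in range(2000):
    x = B_mu(x, lambda _: A, b, mu)
print(np.allclose(A.T @ (b - A @ x), 0, atol=1e-8))  # True at convergence
```

In the quasi-linear case, F is re-evaluated at the current iterate x^k each step, so B_μ is no longer a gradient step of a fixed quadratic, which is precisely why the convergence of IFTA is left open in Remark 2.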

From (25), we obtain

λ ∈ [ 2|B_μ(x*)|_{r+1} / (aμ), (2a|B_μ(x*)|_r + 1)^2 / (4a^2 μ) ].

We denote by λ_1 and λ_2 the left and the right endpoints of the above interval, respectively:

λ_1 = 2|B_μ(x*)|_{r+1} / (aμ) and λ_2 = (2a|B_μ(x*)|_r + 1)^2 / (4a^2 μ).

A choice of λ is

λ = λ_1 if λ_1 ≤ 1/(a^2 μ);  λ = λ_2 if λ_1 > 1/(a^2 μ).

Since x* is unknown and x^k is the best available approximation to x*, we can take

λ = λ_{1,k} = 2|B_μ(x^k)|_{r+1} / (aμ) if λ_{1,k} ≤ 1/(a^2 μ);  λ = λ_{2,k} = (2a|B_μ(x^k)|_r + 1)^2 / (4a^2 μ) if λ_{1,k} > 1/(a^2 μ),  (26)

in the kth iteration. That is, (26) can be used to automatically adjust the value of the regularization parameter λ > 0 during the iteration.

Remark 1 Notice that (26) is valid for any μ > 0 satisfying 0 < μ ≤ 1/||F(x^k)||_2^2. In general, we can take μ = μ_k = (1 − ε)/||F(x^k)||_2^2 with any small ε ∈ (0, 1) below. In particular, the threshold value is t*_{a,λμ} = λμa/2 when λ = λ_{1,k}, and t*_{a,λμ} = √(λμ) − 1/(2a) when λ = λ_{2,k}.

3.3 Iterative fraction thresholding algorithm (IFTA)

Based on the thresholding representation (23) and the analysis given in Sect. 3.2, the proposed iterative fraction thresholding algorithm (IFTA) for the regularization problem (QP_a^λ) can be naturally described in Algorithm 1.

Remark 2 The convergence of IFTA is not proved theoretically in this paper; this is our future work.

4 Numerical experiments

In this section, we carry out a series of simulations to demonstrate the performance of IFTA and compare it with some state-of-the-art methods: the iterative soft thresholding algorithm (ISTA) [6, 7] and the iterative hard thresholding algorithm (IHTA) [6, 7]. In our numerical experiments, we set

F(x) = A_1 + η f(x^T x_0) A_2,  (27)

where A_1 ∈ R^{m×400} is a fixed Gaussian random matrix, x_0 ∈ R^400 is a reference vector, f: [0, ∞) → R is a positive, smooth, Lipschitz continuous function with f(t) = ln(t + 1), η is a sufficiently small scaling factor (we set η = 0.003), and A_2 ∈ R^{m×400} is a fixed matrix with every entry equal to 1. The form of nonlinearity considered in (27) is then quasi-linear, and more detailed accounts of the setting in (27) can be found in [6, 7].

Algorithm 1 Iterative fraction thresholding algorithm (IFTA)
Initialize: Given x^0 ∈ R^n, μ_0 = (1 − ε)/||F(x^0)||_2^2 (0 < ε < 1) and a > 0;
while not converged do
  z^k := B_{μ_k}(x^k) = x^k + μ_k F(x^k)^T (b − F(x^k)x^k), μ_k = (1 − ε)/||F(x^k)||_2^2;
  λ_{1,k} = 2|B_{μ_k}(x^k)|_{r+1} / (aμ_k), λ_{2,k} = (2a|B_{μ_k}(x^k)|_r + 1)^2 / (4a^2 μ_k);
  if λ_{1,k} ≤ 1/(a^2 μ_k) then
    λ_k = λ_{1,k}; t*_{a,λμ_k} = λ_k μ_k a/2;
    for i = 1 : length(x)
      1. if |z_i^k| > t*_{a,λμ_k}, then x_i^{k+1} = g_{a,λ_k μ_k}(z_i^k);
      2. if |z_i^k| ≤ t*_{a,λμ_k}, then x_i^{k+1} = 0;
  else
    λ_k = λ_{2,k}; t*_{a,λμ_k} = √(λ_k μ_k) − 1/(2a);
    for i = 1 : length(x)
      1. if |z_i^k| > t*_{a,λμ_k}, then x_i^{k+1} = g_{a,λ_k μ_k}(z_i^k);
      2. if |z_i^k| ≤ t*_{a,λμ_k}, then x_i^{k+1} = 0;
  end
  k ← k + 1;
end while
return: x^{k+1}.

By randomly generating such sufficiently sparse vectors x_0 (choosing the nonzero locations uniformly at random over the support, and their values from N(0, 1)), we generate vectors b. In this way, we know the sparsest solution of F(x_0)x_0 = b, and we are able to compare it with the algorithmic results. The stopping criterion is as follows:

||x^k − x^{k−1}||_2 / ||x^k||_2 ≤ Tol,

where x^k and x^{k−1} are the numerical results from two consecutive iterative steps and Tol is a given small number. Success is measured by computing the relative error

||x* − x_0||_2 / ||x_0||_2 ≤ Re,

where x* is the numerical result generated by IFTA and Re is also a given small number. In all of our experiments, we set Tol = 10^{−8} for the stopping criterion and Re = 10^{−4} to indicate a perfect recovery of the original sparse vector x_0.

Figure 8 shows the success rate of the three algorithms in the recovery of a sparse signal with different cardinalities. In this experiment, we repeatedly perform 30 tests, present the average results, and take a = 2.5. Figure 9 shows the relative error between the solution x* and the given signal x_0. In this experiment, we repeatedly perform 30 tests, present the average results, and take a = 2.5.

The graphs presented in Fig. 8 and Fig. 9 show the performance of ISTA, IHTA, and IFTA in recovering the true (sparsest) signals. From Fig. 8, we can see that IFTA performs best, with ISTA second. From Fig. 9, we see that IFTA has the smallest relative error as the sparsity grows.
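The pipeline above can be mirrored on a small synthetic instance. The Python sketch below (NumPy only) is an illustration under stated assumptions, not the paper's exact experiment: the dimensions, the sparsity level, the nonzero values, and the use of a fixed λ = 0.1 (instead of the adaptive rule (26)) are all our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(7)
m, n, r = 30, 60, 3            # illustrative sizes, not the paper's
a, lam, eta = 2.5, 0.1, 0.003  # lam fixed here instead of rule (26)

# quasi-linear measurement map in the spirit of (27):
# F(x) = A1 + eta * f(x^T x_ref) * A2 with f(t) = ln(1 + |t|)
A1 = rng.standard_normal((m, n)) / np.sqrt(m)
A2 = np.ones((m, n))
x_ref = rng.standard_normal(n)
F = lambda x: A1 + eta * np.log1p(abs(x @ x_ref)) * A2

# ground-truth r-sparse signal and consistent measurements b = F(x0) x0
x_true = np.zeros(n)
x_true[rng.choice(n, size=r, replace=False)] = np.array([2.0, -1.5, 1.0])
b = F(x_true) @ x_true

def prox(gamma, lam):
    """Componentwise fraction-function thresholding (Lemma 1). Only the
    lam <= 1/a^2 branch is needed here, since lam*mu stays small."""
    t = lam * a / 2.0  # threshold t^1_{a,lam}
    out = np.zeros_like(gamma)
    big = np.abs(gamma) > t
    g = np.abs(gamma[big])
    phi = np.arccos(27.0 * lam * a**2 / (4.0 * (1.0 + a * g) ** 3) - 1.0)
    out[big] = np.sign(gamma[big]) * (
        (1.0 + a * g) / 3.0 * (1.0 + 2.0 * np.cos(phi / 3.0 - np.pi / 3.0)) - 1.0) / a
    return out

def objective(x):  # regularized objective of (QP_a^lam)
    return (np.linalg.norm(F(x) @ x - b) ** 2
            + lam * np.sum(a * np.abs(x) / (a * np.abs(x) + 1.0)))

x = np.zeros(n)
obj = [objective(x)]
for _ in range(300):                        # IFTA iterations (23)
    Fx = F(x)
    mu = 0.99 / np.linalg.norm(Fx, 2) ** 2  # mu_k = (1 - eps)/||F(x^k)||_2^2
    x = prox(x + mu * Fx.T @ (b - Fx @ x), lam * mu)
    obj.append(objective(x))
print(obj[0], obj[-1], np.count_nonzero(x))  # objective should drop sharply
```

On such a small consistent instance the objective falls well below its initial value ||b||_2^2 and the iterate ends up sparse; sweeping the support size and averaging over random draws, as in Figs. 8 and 9, would reproduce a success-rate curve.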

Figure 8 Success rate of the three algorithms in the recovery of a sparse signal with different cardinalities. In this experiment, we repeatedly perform 30 tests, present the average results, and take a = 2.5

Figure 9 Relative error between the solution x* and the given signal x_0. In this experiment, we repeatedly perform 30 tests, present the average results, and take a = 2.5

5 Conclusion

In this paper, we take the fraction function as a substitute for the l_0-norm in quasi-linear compressed sensing. An iterative fraction thresholding algorithm is proposed to solve the regularization problem (QP_a^λ) for all a > 0. By tuning the parameter a > 0, our algorithm can obtain promising results, which is one of its advantages over some state-of-the-art algorithms. We also provide a series of experiments to assess the performance of our algorithm, and the experimental results illustrate that our algorithm is able to address sparse signal recovery problems in nonlinear systems. Compared with ISTA and IHTA, IFTA performs best in sparse signal recovery and has the smallest relative error as the sparsity grows. However, the convergence of our algorithm is not proved theoretically in this paper; it is our future work.

Acknowledgements
The work was supported by the National Natural Science Foundations of China and the Science Foundations of Shaanxi Province of China (2016JQ109, 2015JM101).

Competing interests
The authors declare that they have no competing interests.

Authors' contributions
All authors contributed equally to this work. All authors read and approved the final manuscript.

Author details
1 School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, China. 2 School of Science, Xi'an Polytechnic University, Xi'an, China.

Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Received: 11 December 2017  Accepted: 6 March 2018

References
1. Candès, E., Romberg, J., Tao, T.: Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. 59(8), 1207-1223 (2006)
2. Donoho, D.L.: Compressed sensing. IEEE Trans. Inf. Theory 52(4), 1289-1306 (2006)
3. Bruckstein, A.M., Donoho, D.L., Elad, M.: From sparse solutions of systems of equations to sparse modelling of signals and images. SIAM Rev. 51(1), 34-81 (2009)
4. Elad, M.: Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing. Springer, New York (2010)
5. Theodoridis, S., Kopsinis, Y., Slavakis, K.: Sparsity-aware learning and compressed sensing: an overview
6. Ehler, M., Fornasier, M., Sigl, J.: Quasi-linear compressed sensing. Multiscale Model. Simul. 12(2), 725-754 (2014)
7. Sigl, J.: Quasilinear compressed sensing. Master's thesis, Technische Universität München, Munich, Germany (2013)
8. Geman, D., Reynolds, G.: Constrained restoration and the recovery of discontinuities. IEEE Trans. Pattern Anal. Mach. Intell. 14(3), 367-383 (1992)
9. Li, H., Zhang, Q., Cui, A., Peng, J.: Minimization of fraction function penalty in compressed sensing
10. Cui, A., Peng, J., Li, H., Zhang, C., Yu, Y.: Affine matrix rank minimization problem via non-convex fraction function penalty. J. Comput. Appl. Math. 336, 353-374 (2018)
11. Xing, F.: Investigation on solutions of cubic equations with one unknown. J. Cent. Univ. Natl. (Nat. Sci. Ed.)
12. Xu, Z., Chang, X., Xu, F., Zhang, H.: L_{1/2} regularization: a thresholding representation theory and a fast solver. IEEE Trans. Neural Netw. Learn. Syst. 23(7), 1013-1027 (2012)


More information

Interpolation-Based Trust-Region Methods for DFO

Interpolation-Based Trust-Region Methods for DFO Interpolation-Based Trust-Region Methods for DFO Luis Nunes Vicente University of Coimbra (joint work with A. Bandeira, A. R. Conn, S. Gratton, and K. Scheinberg) July 27, 2010 ICCOPT, Santiago http//www.mat.uc.pt/~lnv

More information

Constructing Explicit RIP Matrices and the Square-Root Bottleneck

Constructing Explicit RIP Matrices and the Square-Root Bottleneck Constructing Explicit RIP Matrices and the Square-Root Bottleneck Ryan Cinoman July 18, 2018 Ryan Cinoman Constructing Explicit RIP Matrices July 18, 2018 1 / 36 Outline 1 Introduction 2 Restricted Isometry

More information

Uniform Uncertainty Principle and signal recovery via Regularized Orthogonal Matching Pursuit

Uniform Uncertainty Principle and signal recovery via Regularized Orthogonal Matching Pursuit Uniform Uncertainty Principle and signal recovery via Regularized Orthogonal Matching Pursuit arxiv:0707.4203v2 [math.na] 14 Aug 2007 Deanna Needell Department of Mathematics University of California,

More information

Motivation Sparse Signal Recovery is an interesting area with many potential applications. Methods developed for solving sparse signal recovery proble

Motivation Sparse Signal Recovery is an interesting area with many potential applications. Methods developed for solving sparse signal recovery proble Bayesian Methods for Sparse Signal Recovery Bhaskar D Rao 1 University of California, San Diego 1 Thanks to David Wipf, Zhilin Zhang and Ritwik Giri Motivation Sparse Signal Recovery is an interesting

More information

A high order adaptive finite element method for solving nonlinear hyperbolic conservation laws

A high order adaptive finite element method for solving nonlinear hyperbolic conservation laws A high order adaptive finite element method for solving nonlinear hyperbolic conservation laws Zhengfu Xu, Jinchao Xu and Chi-Wang Shu 0th April 010 Abstract In this note, we apply the h-adaptive streamline

More information

ECE G: Special Topics in Signal Processing: Sparsity, Structure, and Inference

ECE G: Special Topics in Signal Processing: Sparsity, Structure, and Inference ECE 18-898G: Special Topics in Signal Processing: Sparsity, Structure, and Inference Sparse Recovery using L1 minimization - algorithms Yuejie Chi Department of Electrical and Computer Engineering Spring

More information

Compressibility of Infinite Sequences and its Interplay with Compressed Sensing Recovery

Compressibility of Infinite Sequences and its Interplay with Compressed Sensing Recovery Compressibility of Infinite Sequences and its Interplay with Compressed Sensing Recovery Jorge F. Silva and Eduardo Pavez Department of Electrical Engineering Information and Decision Systems Group Universidad

More information

On the Weak Convergence of the Extragradient Method for Solving Pseudo-Monotone Variational Inequalities

On the Weak Convergence of the Extragradient Method for Solving Pseudo-Monotone Variational Inequalities J Optim Theory Appl 208) 76:399 409 https://doi.org/0.007/s0957-07-24-0 On the Weak Convergence of the Extragradient Method for Solving Pseudo-Monotone Variational Inequalities Phan Tu Vuong Received:

More information

A NEW ITERATIVE METHOD FOR THE SPLIT COMMON FIXED POINT PROBLEM IN HILBERT SPACES. Fenghui Wang

A NEW ITERATIVE METHOD FOR THE SPLIT COMMON FIXED POINT PROBLEM IN HILBERT SPACES. Fenghui Wang A NEW ITERATIVE METHOD FOR THE SPLIT COMMON FIXED POINT PROBLEM IN HILBERT SPACES Fenghui Wang Department of Mathematics, Luoyang Normal University, Luoyang 470, P.R. China E-mail: wfenghui@63.com ABSTRACT.

More information

LIMITATION OF LEARNING RANKINGS FROM PARTIAL INFORMATION. By Srikanth Jagabathula Devavrat Shah

LIMITATION OF LEARNING RANKINGS FROM PARTIAL INFORMATION. By Srikanth Jagabathula Devavrat Shah 00 AIM Workshop on Ranking LIMITATION OF LEARNING RANKINGS FROM PARTIAL INFORMATION By Srikanth Jagabathula Devavrat Shah Interest is in recovering distribution over the space of permutations over n elements

More information

Rui ZHANG Song LI. Department of Mathematics, Zhejiang University, Hangzhou , P. R. China

Rui ZHANG Song LI. Department of Mathematics, Zhejiang University, Hangzhou , P. R. China Acta Mathematica Sinica, English Series May, 015, Vol. 31, No. 5, pp. 755 766 Published online: April 15, 015 DOI: 10.1007/s10114-015-434-4 Http://www.ActaMath.com Acta Mathematica Sinica, English Series

More information

Adaptive Corrected Procedure for TVL1 Image Deblurring under Impulsive Noise

Adaptive Corrected Procedure for TVL1 Image Deblurring under Impulsive Noise Adaptive Corrected Procedure for TVL1 Image Deblurring under Impulsive Noise Minru Bai(x T) College of Mathematics and Econometrics Hunan University Joint work with Xiongjun Zhang, Qianqian Shao June 30,

More information

Introduction How it works Theory behind Compressed Sensing. Compressed Sensing. Huichao Xue. CS3750 Fall 2011

Introduction How it works Theory behind Compressed Sensing. Compressed Sensing. Huichao Xue. CS3750 Fall 2011 Compressed Sensing Huichao Xue CS3750 Fall 2011 Table of Contents Introduction From News Reports Abstract Definition How it works A review of L 1 norm The Algorithm Backgrounds for underdetermined linear

More information

A NEW FRAMEWORK FOR DESIGNING INCOHERENT SPARSIFYING DICTIONARIES

A NEW FRAMEWORK FOR DESIGNING INCOHERENT SPARSIFYING DICTIONARIES A NEW FRAMEWORK FOR DESIGNING INCOERENT SPARSIFYING DICTIONARIES Gang Li, Zhihui Zhu, 2 uang Bai, 3 and Aihua Yu 3 School of Automation & EE, Zhejiang Univ. of Sci. & Tech., angzhou, Zhejiang, P.R. China

More information

Package R1magic. April 9, 2013

Package R1magic. April 9, 2013 Package R1magic April 9, 2013 Type Package Title Compressive Sampling: Sparse signal recovery utilities Version 0.2 Date 2013-04-09 Author Maintainer Depends stats, utils Provides

More information

Fast Hard Thresholding with Nesterov s Gradient Method

Fast Hard Thresholding with Nesterov s Gradient Method Fast Hard Thresholding with Nesterov s Gradient Method Volkan Cevher Idiap Research Institute Ecole Polytechnique Federale de ausanne volkan.cevher@epfl.ch Sina Jafarpour Department of Computer Science

More information

Introduction to Alternating Direction Method of Multipliers

Introduction to Alternating Direction Method of Multipliers Introduction to Alternating Direction Method of Multipliers Yale Chang Machine Learning Group Meeting September 29, 2016 Yale Chang (Machine Learning Group Meeting) Introduction to Alternating Direction

More information

Reconstruction of Block-Sparse Signals by Using an l 2/p -Regularized Least-Squares Algorithm

Reconstruction of Block-Sparse Signals by Using an l 2/p -Regularized Least-Squares Algorithm Reconstruction of Block-Sparse Signals by Using an l 2/p -Regularized Least-Squares Algorithm Jeevan K. Pant, Wu-Sheng Lu, and Andreas Antoniou University of Victoria May 21, 2012 Compressive Sensing 1/23

More information

Package R1magic. August 29, 2016

Package R1magic. August 29, 2016 Type Package Package R1magic August 29, 2016 Title Compressive Sampling: Sparse Signal Recovery Utilities Version 0.3.2 Date 2015-04-19 Maintainer Depends stats, utils Utilities

More information

A Generalized Uncertainty Principle and Sparse Representation in Pairs of Bases

A Generalized Uncertainty Principle and Sparse Representation in Pairs of Bases 2558 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 48, NO 9, SEPTEMBER 2002 A Generalized Uncertainty Principle Sparse Representation in Pairs of Bases Michael Elad Alfred M Bruckstein Abstract An elementary

More information

Tractable Upper Bounds on the Restricted Isometry Constant

Tractable Upper Bounds on the Restricted Isometry Constant Tractable Upper Bounds on the Restricted Isometry Constant Alex d Aspremont, Francis Bach, Laurent El Ghaoui Princeton University, École Normale Supérieure, U.C. Berkeley. Support from NSF, DHS and Google.

More information

Error Correction via Linear Programming

Error Correction via Linear Programming Error Correction via Linear Programming Emmanuel Candes and Terence Tao Applied and Computational Mathematics, Caltech, Pasadena, CA 91125 Department of Mathematics, University of California, Los Angeles,

More information

Interpolation via weighted l 1 minimization

Interpolation via weighted l 1 minimization Interpolation via weighted l 1 minimization Rachel Ward University of Texas at Austin December 12, 2014 Joint work with Holger Rauhut (Aachen University) Function interpolation Given a function f : D C

More information

Channel Pruning and Other Methods for Compressing CNN

Channel Pruning and Other Methods for Compressing CNN Channel Pruning and Other Methods for Compressing CNN Yihui He Xi an Jiaotong Univ. October 12, 2017 Yihui He (Xi an Jiaotong Univ.) Channel Pruning and Other Methods for Compressing CNN October 12, 2017

More information

Sparse Solutions of Linear Complementarity Problems

Sparse Solutions of Linear Complementarity Problems Sparse Solutions of Linear Complementarity Problems Xiaojun Chen Shuhuang Xiang September 15, 2015 Abstract This paper considers the characterization and computation of sparse solutions and leastp-norm

More information

An Introduction to Sparse Approximation

An Introduction to Sparse Approximation An Introduction to Sparse Approximation Anna C. Gilbert Department of Mathematics University of Michigan Basic image/signal/data compression: transform coding Approximate signals sparsely Compress images,

More information

Recent Developments in Compressed Sensing

Recent Developments in Compressed Sensing Recent Developments in Compressed Sensing M. Vidyasagar Distinguished Professor, IIT Hyderabad m.vidyasagar@iith.ac.in, www.iith.ac.in/ m vidyasagar/ ISL Seminar, Stanford University, 19 April 2018 Outline

More information

Some new sequences that converge to the Ioachimescu constant

Some new sequences that converge to the Ioachimescu constant You et al. Journal of Inequalities and Applications (06) 06:48 DOI 0.86/s3660-06-089-x R E S E A R C H Open Access Some new sequences that converge to the Ioachimescu constant Xu You *, Di-Rong Chen,3

More information

Uniform Uncertainty Principle and Signal Recovery via Regularized Orthogonal Matching Pursuit

Uniform Uncertainty Principle and Signal Recovery via Regularized Orthogonal Matching Pursuit Claremont Colleges Scholarship @ Claremont CMC Faculty Publications and Research CMC Faculty Scholarship 6-5-2008 Uniform Uncertainty Principle and Signal Recovery via Regularized Orthogonal Matching Pursuit

More information

Stable Signal Recovery from Incomplete and Inaccurate Measurements

Stable Signal Recovery from Incomplete and Inaccurate Measurements Stable Signal Recovery from Incomplete and Inaccurate Measurements EMMANUEL J. CANDÈS California Institute of Technology JUSTIN K. ROMBERG California Institute of Technology AND TERENCE TAO University

More information

Does Compressed Sensing have applications in Robust Statistics?

Does Compressed Sensing have applications in Robust Statistics? Does Compressed Sensing have applications in Robust Statistics? Salvador Flores December 1, 2014 Abstract The connections between robust linear regression and sparse reconstruction are brought to light.

More information

On State Estimation with Bad Data Detection

On State Estimation with Bad Data Detection On State Estimation with Bad Data Detection Weiyu Xu, Meng Wang, and Ao Tang School of ECE, Cornell University, Ithaca, NY 4853 Abstract We consider the problem of state estimation through observations

More information

On the Projection Matrices Influence in the Classification of Compressed Sensed ECG Signals

On the Projection Matrices Influence in the Classification of Compressed Sensed ECG Signals On the Projection Matrices Influence in the Classification of Compressed Sensed ECG Signals Monica Fira, Liviu Goras Institute of Computer Science Romanian Academy Iasi, Romania Liviu Goras, Nicolae Cleju,

More information

A maximum principle for optimal control system with endpoint constraints

A maximum principle for optimal control system with endpoint constraints Wang and Liu Journal of Inequalities and Applications 212, 212: http://www.journalofinequalitiesandapplications.com/content/212/231/ R E S E A R C H Open Access A maimum principle for optimal control system

More information

Large-Scale L1-Related Minimization in Compressive Sensing and Beyond

Large-Scale L1-Related Minimization in Compressive Sensing and Beyond Large-Scale L1-Related Minimization in Compressive Sensing and Beyond Yin Zhang Department of Computational and Applied Mathematics Rice University, Houston, Texas, U.S.A. Arizona State University March

More information

of Orthogonal Matching Pursuit

of Orthogonal Matching Pursuit A Sharp Restricted Isometry Constant Bound of Orthogonal Matching Pursuit Qun Mo arxiv:50.0708v [cs.it] 8 Jan 205 Abstract We shall show that if the restricted isometry constant (RIC) δ s+ (A) of the measurement

More information

MLCC 2018 Variable Selection and Sparsity. Lorenzo Rosasco UNIGE-MIT-IIT

MLCC 2018 Variable Selection and Sparsity. Lorenzo Rosasco UNIGE-MIT-IIT MLCC 2018 Variable Selection and Sparsity Lorenzo Rosasco UNIGE-MIT-IIT Outline Variable Selection Subset Selection Greedy Methods: (Orthogonal) Matching Pursuit Convex Relaxation: LASSO & Elastic Net

More information

Compressed Sensing and Sparse Recovery

Compressed Sensing and Sparse Recovery ELE 538B: Sparsity, Structure and Inference Compressed Sensing and Sparse Recovery Yuxin Chen Princeton University, Spring 217 Outline Restricted isometry property (RIP) A RIPless theory Compressed sensing

More information

Signal Recovery From Incomplete and Inaccurate Measurements via Regularized Orthogonal Matching Pursuit

Signal Recovery From Incomplete and Inaccurate Measurements via Regularized Orthogonal Matching Pursuit Signal Recovery From Incomplete and Inaccurate Measurements via Regularized Orthogonal Matching Pursuit Deanna Needell and Roman Vershynin Abstract We demonstrate a simple greedy algorithm that can reliably

More information

Sparse Solutions of Linear Systems of Equations and Sparse Modeling of Signals and Images!

Sparse Solutions of Linear Systems of Equations and Sparse Modeling of Signals and Images! Sparse Solutions of Linear Systems of Equations and Sparse Modeling of Signals and Images! Alfredo Nava-Tudela John J. Benedetto, advisor 1 Happy birthday Lucía! 2 Outline - Problem: Find sparse solutions

More information

An improved generalized Newton method for absolute value equations

An improved generalized Newton method for absolute value equations DOI 10.1186/s40064-016-2720-5 RESEARCH Open Access An improved generalized Newton method for absolute value equations Jingmei Feng 1,2* and Sanyang Liu 1 *Correspondence: fengjingmeilq@hotmail.com 1 School

More information

Multipath Matching Pursuit

Multipath Matching Pursuit Multipath Matching Pursuit Submitted to IEEE trans. on Information theory Authors: S. Kwon, J. Wang, and B. Shim Presenter: Hwanchol Jang Multipath is investigated rather than a single path for a greedy

More information

On the l 1 -Norm Invariant Convex k-sparse Decomposition of Signals

On the l 1 -Norm Invariant Convex k-sparse Decomposition of Signals On the l 1 -Norm Invariant Convex -Sparse Decomposition of Signals arxiv:1305.6021v2 [cs.it] 11 Nov 2013 Guangwu Xu and Zhiqiang Xu Abstract Inspired by an interesting idea of Cai and Zhang, we formulate

More information

Iterative reweighted l 1 design of sparse FIR filters

Iterative reweighted l 1 design of sparse FIR filters Iterative reweighted l 1 design of sparse FIR filters Cristian Rusu, Bogdan Dumitrescu Abstract Designing sparse 1D and 2D filters has been the object of research in recent years due mainly to the developments

More information

Design of Projection Matrix for Compressive Sensing by Nonsmooth Optimization

Design of Projection Matrix for Compressive Sensing by Nonsmooth Optimization Design of Proection Matrix for Compressive Sensing by Nonsmooth Optimization W.-S. Lu T. Hinamoto Dept. of Electrical & Computer Engineering Graduate School of Engineering University of Victoria Hiroshima

More information

A New Combined Approach for Inference in High-Dimensional Regression Models with Correlated Variables

A New Combined Approach for Inference in High-Dimensional Regression Models with Correlated Variables A New Combined Approach for Inference in High-Dimensional Regression Models with Correlated Variables Niharika Gauraha and Swapan Parui Indian Statistical Institute Abstract. We consider the problem of

More information

Compressed Sensing and Linear Codes over Real Numbers

Compressed Sensing and Linear Codes over Real Numbers Compressed Sensing and Linear Codes over Real Numbers Henry D. Pfister (joint with Fan Zhang) Texas A&M University College Station Information Theory and Applications Workshop UC San Diego January 31st,

More information

Conditions for a Unique Non-negative Solution to an Underdetermined System

Conditions for a Unique Non-negative Solution to an Underdetermined System Conditions for a Unique Non-negative Solution to an Underdetermined System Meng Wang and Ao Tang School of Electrical and Computer Engineering Cornell University Ithaca, NY 14853 Abstract This paper investigates

More information

arxiv: v1 [math.na] 26 Nov 2009

arxiv: v1 [math.na] 26 Nov 2009 Non-convexly constrained linear inverse problems arxiv:0911.5098v1 [math.na] 26 Nov 2009 Thomas Blumensath Applied Mathematics, School of Mathematics, University of Southampton, University Road, Southampton,

More information

Abstract This paper is about the efficient solution of large-scale compressed sensing problems.

Abstract This paper is about the efficient solution of large-scale compressed sensing problems. Noname manuscript No. (will be inserted by the editor) Optimization for Compressed Sensing: New Insights and Alternatives Robert Vanderbei and Han Liu and Lie Wang Received: date / Accepted: date Abstract

More information

Optimization methods

Optimization methods Lecture notes 3 February 8, 016 1 Introduction Optimization methods In these notes we provide an overview of a selection of optimization methods. We focus on methods which rely on first-order information,

More information

Strong convergence theorems for total quasi-ϕasymptotically

Strong convergence theorems for total quasi-ϕasymptotically RESEARCH Open Access Strong convergence theorems for total quasi-ϕasymptotically nonexpansive multi-valued mappings in Banach spaces Jinfang Tang 1 and Shih-sen Chang 2* * Correspondence: changss@yahoo.

More information

SPARSE SIGNAL RESTORATION. 1. Introduction

SPARSE SIGNAL RESTORATION. 1. Introduction SPARSE SIGNAL RESTORATION IVAN W. SELESNICK 1. Introduction These notes describe an approach for the restoration of degraded signals using sparsity. This approach, which has become quite popular, is useful

More information