Supplementary Material of A Novel Sparsity Measure for Tensor Recovery
Qian Zhao 1,2, Deyu Meng 1,2, Xu Kong 3, Qi Xie 1,2, Wenfei Cao 1,2, Yao Wang 1,2, Zongben Xu 1,2
1 School of Mathematics and Statistics, Xi'an Jiaotong University
2 Ministry of Education Key Lab of Intelligent Networks and Network Security, Xi'an Jiaotong University
3 School of Mathematical Sciences, Liaocheng University
timmy.zhaoqian@gmail.com, dymeng@mail.xjtu.edu.cn, xq.liwu@stu.xjtu.edu.cn, {caowenf2015, yao.s.wang}@gmail.com, zbxu@mail.xjtu.edu.cn, xu.kong@hotmail.com

Abstract

In this supplementary material, we present details of the algorithms for solving the proposed tensor recovery models with folded-concave relaxations, and present more experimental results.

1. Solving tensor recovery models with folded-concave relaxations

In the main text, we have described how to solve the proposed tensor recovery models with the trace norm relaxation. Here, we present the strategy for solving the models with folded-concave relaxations.

1.1. Locally linear approximation for the folded-concave penalties

First, we introduce the idea of the locally linear approximation (LLA) for the folded-concave penalties, which is adopted from [10] and [6]. It is easy to verify that the following result holds:

Proposition 1 ([6]) Let $f(x)$ be a concave function on $(0, \infty)$. Then
$$f(x) \le f(x_0) + f'(x_0)(x - x_0), \quad (1)$$
with equality if $x = x_0$.

Applying this proposition to the folded-concave penalties, we can obtain their LLAs:
$$\psi_{mcp}(t) \le \psi_{mcp}(t_0) + \psi'_{mcp}(t_0)(t - t_0) \triangleq \hat{\psi}_{mcp}(t \mid t_0), \quad (2)$$
and
$$\psi_{scad}(t) \le \psi_{scad}(t_0) + \psi'_{scad}(t_0)(t - t_0) \triangleq \hat{\psi}_{scad}(t \mid t_0), \quad (3)$$
where
$$\psi'_{mcp}(t_0) = \left(\lambda - \frac{t_0}{a}\right)_+,$$
with $(x)_+ = \max\{x, 0\}$, and
$$\psi'_{scad}(t_0) = \lambda\, I\{t_0 \le \lambda\} + \frac{(a\lambda - t_0)_+}{a - 1}\, I\{t_0 > \lambda\},$$
with $I\{A\}$ being the indicator function, which equals 1 if $A$ is true and 0 otherwise.
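To make the LLA concrete, the penalty and weight formulas above can be sketched in a few lines of NumPy (an illustration of ours, not code from the paper; `lam` and `a` denote the parameters $\lambda$ and $a$):

```python
import numpy as np

def psi_mcp(t, lam, a):
    # MCP penalty: lam*t - t^2/(2a) for t <= a*lam, constant a*lam^2/2 beyond.
    t = np.asarray(t, dtype=float)
    return np.where(t <= a * lam, lam * t - t**2 / (2 * a), a * lam**2 / 2)

def dpsi_mcp(t, lam, a):
    # Derivative (lam - t/a)_+ , the LLA weight appearing in (2).
    return np.maximum(lam - np.asarray(t, dtype=float) / a, 0.0)

def dpsi_scad(t, lam, a):
    # Derivative lam*I{t <= lam} + (a*lam - t)_+/(a - 1)*I{t > lam}, cf. (3).
    t = np.asarray(t, dtype=float)
    return np.where(t <= lam, lam, np.maximum(a * lam - t, 0.0) / (a - 1))

def lla(psi, dpsi, t, t0, **kw):
    # Locally linear approximation around t0: a linear majorant of psi.
    return psi(t0, **kw) + dpsi(t0, **kw) * (t - t0)
```

By Proposition 1, the linearization touches the penalty at $t_0$ and lies above it elsewhere, which is what licenses replacing the penalty by its LLA in the surrogate models.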
Using these LLAs, we can solve the original models with a two-stage approach, as discussed in the next subsection.

1.2. Two-stage algorithms for solving the models with folded-concave relaxations

We mainly discuss the algorithm for RP-TC_mcp in the main text, and the idea can be easily generalized to the other models. Recall the RP-TC_mcp model:
$$\min_{\mathcal{X}} \ \frac{1}{C^2} S_{mcp}(\mathcal{X}), \quad \text{s.t.} \ \mathcal{X}_\Omega = \mathcal{T}_\Omega, \quad (4)$$
where
$$S_{mcp}(\mathcal{X}) = \prod_{n=1}^{d} P_{mcp}\left(X_{(n)}\right) = \prod_{n=1}^{d} \left( \sum_i \psi_{mcp}(\sigma_{(n)i}) \right),$$
with $\sigma_{(n)i}$ being the $i$th singular value of $X_{(n)}$. Now, assuming that we have an initial estimate of $\mathcal{X}$, denoted as $\mathcal{X}^0$, we can apply (2) to $\psi_{mcp}(\sigma_{(n)i})$ to obtain
$$\hat{\psi}_{mcp}(\sigma_{(n)i} \mid \sigma^0_{(n)i}) = \psi_{mcp}(\sigma^0_{(n)i}) + \psi'_{mcp}(\sigma^0_{(n)i})(\sigma_{(n)i} - \sigma^0_{(n)i}), \quad (5)$$
where $\sigma^0_{(n)i}$ is the $i$th singular value of the unfolding matrix $X^0_{(n)}$ along the $n$th mode of $\mathcal{X}^0$. Then we can replace $\psi_{mcp}$
with $\hat{\psi}_{mcp}$, and approximate $S_{mcp}(\mathcal{X})$ as follows:
$$\hat{S}_{mcp}(\mathcal{X} \mid \mathcal{X}^0) = \prod_{n=1}^{d} \hat{P}_{mcp}\left(X_{(n)} \mid X^0_{(n)}\right), \quad (6)$$
where
$$\hat{P}_{mcp}\left(X_{(n)} \mid X^0_{(n)}\right) = \sum_i \hat{\psi}_{mcp}(\sigma_{(n)i} \mid \sigma^0_{(n)i}).$$

From the above discussion, we can give a two-stage algorithm for solving (4). In the first stage, we solve RP-TC_trace to get $\mathcal{X}^0$, and then in the second stage, we solve the following problem:
$$\min_{\mathcal{X}} \ \frac{1}{C^2} \hat{S}_{mcp}(\mathcal{X} \mid \mathcal{X}^0), \quad \text{s.t.} \ \mathcal{X}_\Omega = \mathcal{T}_\Omega. \quad (7)$$
The result can then be used as $\mathcal{X}^0$ to solve (7) again in an iterative way. However, Zou and Li [10] showed that one-step re-estimation can already provide good results in sparse linear regression. Therefore, we also adopt this one-step strategy, which is empirically effective in our experiments.

Now we discuss how to solve (7). We use an ADMM scheme similar to the one for solving RP-TC_trace, which has been discussed in the main text. First, we introduce $d$ auxiliary tensors $\mathcal{M}_n$ ($1 \le n \le d$) and reformulate (7) as follows:
$$\min_{\mathcal{X}, \mathcal{M}_1, \dots, \mathcal{M}_d} \ \frac{1}{C^2} \prod_{n=1}^{d} \hat{P}_{mcp}\left(M_{n(n)} \mid X^0_{(n)}\right), \quad \text{s.t.} \ \mathcal{X}_\Omega = \mathcal{T}_\Omega, \ \mathcal{X} = \mathcal{M}_n, \ 1 \le n \le d. \quad (8)$$
The augmented Lagrangian function for (8) is
$$L_\rho(\mathcal{X}, \mathcal{M}_1, \dots, \mathcal{M}_d, \mathcal{Y}_1, \dots, \mathcal{Y}_d) = \frac{1}{C^2} \prod_{n=1}^{d} \hat{P}_{mcp}\left(M_{n(n)} \mid X^0_{(n)}\right) + \sum_{n=1}^{d} \langle \mathcal{X} - \mathcal{M}_n, \mathcal{Y}_n \rangle + \frac{\rho}{2} \sum_{n=1}^{d} \|\mathcal{X} - \mathcal{M}_n\|_F^2,$$
where the $\mathcal{Y}_n$ ($1 \le n \le d$) are the Lagrange multipliers and $\rho$ is a positive scalar. The problem can then be solved within the ADMM framework. With the $\mathcal{M}_l$ ($l \ne n$) fixed, $\mathcal{M}_n$ can be updated by solving
$$\min_{\mathcal{M}_n} \ L_\rho(\mathcal{X}, \mathcal{M}_1, \dots, \mathcal{M}_d, \mathcal{Y}_1, \dots, \mathcal{Y}_d), \quad (9)$$
which, with some algebra, is equivalent to
$$\min_{\mathcal{M}_n} \ \alpha_n \hat{P}_{mcp}\left(M_{n(n)} \mid X^0_{(n)}\right) + \langle \mathcal{X} - \mathcal{M}_n, \mathcal{Y}_n \rangle + \frac{\rho}{2} \|\mathcal{X} - \mathcal{M}_n\|_F^2, \quad (10)$$
where
$$\alpha_n = \frac{1}{C^2} \prod_{l=1, l \ne n}^{d} \hat{P}_{mcp}\left(M_{l(l)} \mid X^0_{(l)}\right), \quad 1 \le n \le d.$$
Note that
$$\hat{P}_{mcp}\left(M_{n(n)} \mid X^0_{(n)}\right) = \sum_k \hat{\psi}_{mcp}(\sigma_{(n)k} \mid \sigma^0_{(n)k}) = \sum_k \left[ \psi_{mcp}(\sigma^0_{(n)k}) + \psi'_{mcp}(\sigma^0_{(n)k})(\sigma_{(n)k} - \sigma^0_{(n)k}) \right],$$
where $\sigma_{(n)k}$ is the $k$th singular value of $M_{n(n)}$. By discarding the constants irrelevant to the $\sigma_{(n)k}$'s, Eq. (10) can then be further reduced to
$$\min_{\mathcal{M}_n} \ \alpha_n \sum_k \psi'_{mcp}(\sigma^0_{(n)k})\, \sigma_{(n)k} + \langle \mathcal{X} - \mathcal{M}_n, \mathcal{Y}_n \rangle + \frac{\rho}{2} \|\mathcal{X} - \mathcal{M}_n\|_F^2, \quad (11)$$
which is intrinsically a weighted nuclear (trace) norm minimization problem [2, 7].
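A weighted nuclear norm problem of this form is solved by shrinking each singular value by its own (scaled) weight. A minimal NumPy sketch of this generalized shrinkage step (the function name and example data are ours, not from the paper):

```python
import numpy as np

def weighted_svt(X, tau, w):
    # Generalized singular value shrinkage: replace each singular value
    # sigma_i of X by max(sigma_i - tau * w_i, 0).  For this to be the exact
    # minimizer of the weighted nuclear norm problem, the weights w_i should
    # be in non-decreasing order (cf. [7]).
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    w = np.asarray(w, dtype=float)[: len(s)]
    return (U * np.maximum(s - tau * w, 0.0)) @ Vt

# Hypothetical single M_n update in the spirit of Eq. (12): shrink the
# mode-n unfolding of X + Y_n / rho with threshold alpha_n / rho and the
# LLA weights v_n (names are placeholders):
# M_unf = weighted_svt(X_unf + Y_unf / rho, alpha_n / rho, v_n)
```

Because the penalty weight decreases with the magnitude of the initial singular value, large (signal-carrying) singular values are shrunk less than small (noise-carrying) ones, which is the practical advantage over uniform soft-thresholding.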
Since the $\sigma^0_{(n)k}$ ($k = 1, 2, \dots, r_n$) are in non-increasing order and $\psi'_{mcp}(\cdot)$ is non-increasing on $(0, \infty)$, the weights $\psi'_{mcp}(\sigma^0_{(n)k})$ ($k = 1, 2, \dots, r_n$) are in non-decreasing order. Therefore, according to [7], (11) can be analytically solved by
$$\mathcal{M}_n = \mathrm{fold}_n\left( D_{\frac{\alpha_n}{\rho},\, v_n}\left( \mathrm{unfold}_n\left( \mathcal{X} + \frac{1}{\rho} \mathcal{Y}_n \right) \right) \right), \quad (12)$$
where $v_n = \left( \psi'_{mcp}(\sigma^0_{(n)1}), \psi'_{mcp}(\sigma^0_{(n)2}), \dots, \psi'_{mcp}(\sigma^0_{(n)r_n}) \right)^T$, and $D_{\tau, w}(X) = U \Sigma_{\tau, w} V^T$ is the generalized shrinkage operator with $\Sigma_{\tau, w} = \mathrm{diag}(\max(\sigma_i - \tau w_i, 0))$ (the $\sigma_i$'s are the singular values of $X$, and the $w_i$'s are the elements of $w$).

Similarly, with the $\mathcal{M}_n$'s fixed, $\mathcal{X}$ can be updated by solving
$$\min_{\mathcal{X}} \ L_\rho(\mathcal{X}, \mathcal{M}_1, \dots, \mathcal{M}_d, \mathcal{Y}_1, \dots, \mathcal{Y}_d), \quad (13)$$
which also has a closed-form solution:
$$\mathcal{X} = \frac{1}{\rho d} \sum_{n=1}^{d} (\rho \mathcal{M}_n - \mathcal{Y}_n) \quad (14)$$
on the unobserved entries, while the observed entries are fixed as $\mathcal{X}_\Omega = \mathcal{T}_\Omega$. Then the Lagrange multipliers are updated by
$$\mathcal{Y}_n := \mathcal{Y}_n - \rho (\mathcal{M}_n - \mathcal{X}), \quad 1 \le n \le d, \quad (15)$$
and $\rho$ is increased to $\mu\rho$ with some constant $\mu > 1$. It is straightforward to generalize this two-stage algorithm to solving the proposed RP-TC_scad, RP-TRPCA_mcp and RP-TRPCA_scad models, and we omit the details here for conciseness.

2. Additional experimental results

2.1. Detailed results on TC experiments

In the main text, we show the averaged results of the multispectral image completion. Here, we list the results on the separate images in Tables 1-8 for a more comprehensive comparison. It can be seen that the proposed methods achieve the best or second-best results in all situations in terms of the 5 PQIs utilized.
2.2. Additional results on TRPCA experiments

In the main text, we show the averaged results of the simulated multispectral image restoration. Here, we list the results on the separate images in Tables 9-18, and also show the visual results on 4 sample images in Figs. 1-4. We can see that the proposed methods produce comparatively better restoration results than the other competing methods, both quantitatively and visually.

References

[1] D. Goldfarb and Z. T. Qin. Robust low-rank tensor recovery: Models and algorithms. SIAM Journal on Matrix Analysis and Applications, 35(1).
[2] S. Gu, L. Zhang, W. Zuo, and X. Feng. Weighted nuclear norm minimization with application to image denoising. In CVPR.
[3] Z. Lin, M. Chen, and Y. Ma. The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. arXiv preprint.
[4] J. Liu, P. Musialski, P. Wonka, and J. P. Ye. Tensor completion for estimating missing values in visual data. IEEE TPAMI, 34(1).
[5] Y. Peng, D. Y. Meng, Z. B. Xu, C. Q. Gao, Y. Yang, and B. Zhang. Decomposable nonlocal tensor dictionary learning for multispectral image denoising. In CVPR.
[6] S. Wang, D. Liu, and Z. Zhang. Nonconvex relaxation approaches to robust matrix recovery. In IJCAI.
[7] Q. Xie, D. Meng, S. Gu, L. Zhang, W. Zuo, X. Feng, and Z. Xu. On the optimal solution of weighted nuclear norm minimization. arXiv preprint.
[8] Y. Xu, R. Hao, W. Yin, and Z. Su. Parallel matrix factorization for low-rank tensor completion. arXiv preprint.
[9] Z. M. Zhang, G. Ely, S. Aeron, N. Hao, and M. E. Kilmer. Novel methods for multilinear data completion and denoising based on tensor-SVD. In CVPR.
[10] H. Zou and R. Li. One-step sparse estimates in nonconcave penalized likelihood models. Annals of Statistics, 36(4):1509, 2008.
Table 1. The performance comparison of 7 competing TC methods (MC-ALM [3], HaLRTC [4], t-SVD [9], TMac [8], RP-TC_trace, RP-TC_mcp and RP-TC_scad) with different sampling rates on the balloons image.
Table 2. The performance comparison of 7 competing TC methods with different sampling rates on the beads image.
Table 3. The performance comparison of 7 competing TC methods with different sampling rates on the cloth image.
Table 4. The performance comparison of 7 competing TC methods with different sampling rates on the jellybeans image.
Table 5. The performance comparison of 7 competing TC methods with different sampling rates on the peppers image.
Table 6. The performance comparison of 7 competing TC methods with different sampling rates on the watercolors image.
Table 7. The performance comparison of 7 competing TC methods with different sampling rates on Scene 4 of Natural Scenes.
Table 8. The performance comparison of 7 competing TC methods with different sampling rates on Scene 8 of Natural Scenes.
Table 9. The performance comparison of 7 competing TRPCA methods (TDL [5], RPCA [3], HoRPCA [1], t-SVD [9], RP-TRPCA_trace, RP-TRPCA_mcp and RP-TRPCA_scad, with the noisy image as baseline) on the balloons image.
Table 10. The performance comparison of 7 competing TRPCA methods on the beads image.
Table 11. The performance comparison of 7 competing TRPCA methods on the chart and stuffed toy image.
Table 12. The performance comparison of 7 competing TRPCA methods on the cloth image.
Table 13. The performance comparison of 7 competing TRPCA methods on the egyptian statue image.
Table 14. The performance comparison of 7 competing TRPCA methods on the feathers image.
Table 15. The performance comparison of 7 competing TRPCA methods on the flowers image.
Table 16. The performance comparison of 7 competing TRPCA methods on the photo and face image.
Table 17. The performance comparison of 7 competing TRPCA methods on the pompoms image.
Table 18. The performance comparison of 7 competing TRPCA methods on the watercolors image.
Figure 1. (a) The original image from chart and stuffed toy; (b) the corresponding noisy image; (c)-(i) the recovered images by TDL [5], RPCA [3], HoRPCA [1], t-SVD [9], RP-TRPCA_trace, RP-TRPCA_mcp and RP-TRPCA_scad, respectively.
Figure 2. (a) The original image from feathers; (b) the corresponding noisy image; (c)-(i) the recovered images by TDL [5], RPCA [3], HoRPCA [1], t-SVD [9], RP-TRPCA_trace, RP-TRPCA_mcp and RP-TRPCA_scad, respectively.
Figure 3. (a) The original image from flowers; (b) the corresponding noisy image; (c)-(i) the recovered images by TDL [5], RPCA [3], HoRPCA [1], t-SVD [9], RP-TRPCA_trace, RP-TRPCA_mcp and RP-TRPCA_scad, respectively.
Figure 4. (a) The original image from photo and face; (b) the corresponding noisy image; (c)-(i) the recovered images by TDL [5], RPCA [3], HoRPCA [1], t-SVD [9], RP-TRPCA_trace, RP-TRPCA_mcp and RP-TRPCA_scad, respectively.
Beyond Heuristics: Applying Alternating Direction Method of Multipliers in Nonconvex Territory Xin Liu(4Ð) State Key Laboratory of Scientific and Engineering Computing Institute of Computational Mathematics
More informationDiscriminative Feature Grouping
Discriative Feature Grouping Lei Han and Yu Zhang, Department of Computer Science, Hong Kong Baptist University, Hong Kong The Institute of Research and Continuing Education, Hong Kong Baptist University
More informationA direct formulation for sparse PCA using semidefinite programming
A direct formulation for sparse PCA using semidefinite programming A. d Aspremont, L. El Ghaoui, M. Jordan, G. Lanckriet ORFE, Princeton University & EECS, U.C. Berkeley Available online at www.princeton.edu/~aspremon
More informationExact Low Tubal Rank Tensor Recovery from Gaussian Measurements
Exact Low Tubal Rank Tensor Recovery from Gaussian Measurements Canyi Lu, Jiashi Feng 2, Zhouchen Li,4 Shuicheng Yan 5,2 Department of Electrical and Computer Engineering, Carnegie Mellon University 2
More informationUser s Guide for LMaFit: Low-rank Matrix Fitting
User s Guide for LMaFit: Low-rank Matrix Fitting Yin Zhang Department of CAAM Rice University, Houston, Texas, 77005 (CAAM Technical Report TR09-28) (Versions beta-1: August 23, 2009) Abstract This User
More informationSparsity Models. Tong Zhang. Rutgers University. T. Zhang (Rutgers) Sparsity Models 1 / 28
Sparsity Models Tong Zhang Rutgers University T. Zhang (Rutgers) Sparsity Models 1 / 28 Topics Standard sparse regression model algorithms: convex relaxation and greedy algorithm sparse recovery analysis:
More informationContraction Methods for Convex Optimization and Monotone Variational Inequalities No.11
XI - 1 Contraction Methods for Convex Optimization and Monotone Variational Inequalities No.11 Alternating direction methods of multipliers for separable convex programming Bingsheng He Department of Mathematics
More informationVariable Selection for Highly Correlated Predictors
Variable Selection for Highly Correlated Predictors Fei Xue and Annie Qu Department of Statistics, University of Illinois at Urbana-Champaign WHOA-PSI, Aug, 2017 St. Louis, Missouri 1 / 30 Background Variable
More informationNonnegative Tensor Factorization using a proximal algorithm: application to 3D fluorescence spectroscopy
Nonnegative Tensor Factorization using a proximal algorithm: application to 3D fluorescence spectroscopy Caroline Chaux Joint work with X. Vu, N. Thirion-Moreau and S. Maire (LSIS, Toulon) Aix-Marseille
More informationIT IS well known that the problem of subspace segmentation
IEEE TRANSACTIONS ON CYBERNETICS 1 LRR for Subspace Segmentation via Tractable Schatten-p Norm Minimization and Factorization Hengmin Zhang, Jian Yang, Member, IEEE, Fanhua Shang, Member, IEEE, Chen Gong,
More informationRecent Developments of Alternating Direction Method of Multipliers with Multi-Block Variables
Recent Developments of Alternating Direction Method of Multipliers with Multi-Block Variables Department of Systems Engineering and Engineering Management The Chinese University of Hong Kong 2014 Workshop
More informationA Bregman alternating direction method of multipliers for sparse probabilistic Boolean network problem
A Bregman alternating direction method of multipliers for sparse probabilistic Boolean network problem Kangkang Deng, Zheng Peng Abstract: The main task of genetic regulatory networks is to construct a
More informationBlind Deconvolution Using Convex Programming. Jiaming Cao
Blind Deconvolution Using Convex Programming Jiaming Cao Problem Statement The basic problem Consider that the received signal is the circular convolution of two vectors w and x, both of length L. How
More informationA Primal-dual Three-operator Splitting Scheme
Noname manuscript No. (will be inserted by the editor) A Primal-dual Three-operator Splitting Scheme Ming Yan Received: date / Accepted: date Abstract In this paper, we propose a new primal-dual algorithm
More informationElastic-Net Regularization of Singular Values for Robust Subspace Learning
Elastic-Net Regularization of Singular Values for Robust Subspace Learning Eunwoo Kim Minsik Lee Songhwai Oh Department of ECE, ASRI, Seoul National University, Seoul, Korea Division of EE, Hanyang University,
More informationLow Rank Matrix Completion Formulation and Algorithm
1 2 Low Rank Matrix Completion and Algorithm Jian Zhang Department of Computer Science, ETH Zurich zhangjianthu@gmail.com March 25, 2014 Movie Rating 1 2 Critic A 5 5 Critic B 6 5 Jian 9 8 Kind Guy B 9
More informationLow-tubal-rank Tensor Completion using Alternating Minimization
Low-tubal-rank Tensor Completion using Alternating Minimization Xiao-Yang Liu 1,2, Shuchin Aeron 3, Vaneet Aggarwal 4, and Xiaodong Wang 1 1 Department of Electrical Engineering, Columbia University, NY,
More informationA Multi-Affine Model for Tensor Decomposition
Yiqing Yang UW Madison breakds@cs.wisc.edu A Multi-Affine Model for Tensor Decomposition Hongrui Jiang UW Madison hongrui@engr.wisc.edu Li Zhang UW Madison lizhang@cs.wisc.edu Chris J. Murphy UC Davis
More informationGROUP-SPARSE SUBSPACE CLUSTERING WITH MISSING DATA
GROUP-SPARSE SUBSPACE CLUSTERING WITH MISSING DATA D Pimentel-Alarcón 1, L Balzano 2, R Marcia 3, R Nowak 1, R Willett 1 1 University of Wisconsin - Madison, 2 University of Michigan - Ann Arbor, 3 University
More informationAn interior-point stochastic approximation method and an L1-regularized delta rule
Photograph from National Geographic, Sept 2008 An interior-point stochastic approximation method and an L1-regularized delta rule Peter Carbonetto, Mark Schmidt and Nando de Freitas University of British
More informationFundamentals of Multilinear Subspace Learning
Chapter 3 Fundamentals of Multilinear Subspace Learning The previous chapter covered background materials on linear subspace learning. From this chapter on, we shall proceed to multiple dimensions with
More informationDenoising and Completion of 3D Data via Multidimensional Dictionary Learning
Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI-16) Denoising and Completion of 3D Data via Multidimensional Dictionary Learning Ze Zhang and Shuchin Aeron
More informationAn Alternating Proximal Splitting Method with Global Convergence for Nonconvex Structured Sparsity Optimization
An Alternating Proximal Splitting Method with Global Convergence for Nonconvex Structured Sparsity Optimization Shubao Zhang and Hui Qian and Xiaojin Gong College of Computer Science and Technology Department
More informationOn Stochastic Primal-Dual Hybrid Gradient Approach for Compositely Regularized Minimization
On Stochastic Primal-Dual Hybrid Gradient Approach for Compositely Regularized Minimization Linbo Qiao, and Tianyi Lin 3 and Yu-Gang Jiang and Fan Yang 5 and Wei Liu 6 and Xicheng Lu, Abstract We consider
More informationA UNIFIED APPROACH TO MODEL SELECTION AND SPARS. REGULARIZED LEAST SQUARES by Jinchi Lv and Yingying Fan The annals of Statistics (2009)
A UNIFIED APPROACH TO MODEL SELECTION AND SPARSE RECOVERY USING REGULARIZED LEAST SQUARES by Jinchi Lv and Yingying Fan The annals of Statistics (2009) Mar. 19. 2010 Outline 1 2 Sideline information Notations
More informationBlock Coordinate Descent for Regularized Multi-convex Optimization
Block Coordinate Descent for Regularized Multi-convex Optimization Yangyang Xu and Wotao Yin CAAM Department, Rice University February 15, 2013 Multi-convex optimization Model definition Applications Outline
More informationHomework 5. Convex Optimization /36-725
Homework 5 Convex Optimization 10-725/36-725 Due Tuesday April 21 at 4:00pm submitted to Mallory Deptola in GHC 8001 (make sure to submit each problem separately) ***Instructions: you will solve 4 questions,
More information