Efficient Data-Driven Learning of Sparse Signal Models and Its Applications
1 Efficient Data-Driven Learning of Sparse Signal Models and Its Applications Saiprasad Ravishankar Department of Electrical Engineering and Computer Science University of Michigan, Ann Arbor Dec 10,
2 Outline of Talk
- Synthesis & transform models.
- Transform learning: efficient, scalable, effective, with guarantees.
- Transform learning methods: union of transforms (OCTOBOS) learning; online transform learning for big data.
- Applications: compression, denoising, compressed sensing, classification.
- Conclusions.
(Fig. from B. Wen, UIUC.)
3 Synthesis Model for Sparse Representation
Given a signal $y \in \mathbb{R}^n$ and a dictionary $D \in \mathbb{R}^{n \times K}$, we assume $y = Dx$ with $\|x\|_0 \ll K$. Real-world signals are modeled as $y = Dx + e$, where $e$ is a deviation term. Given $D$ and sparsity level $s$, the synthesis sparse coding problem is
$$\hat{x} = \arg\min_x \|y - Dx\|_2^2 \ \text{s.t.} \ \|x\|_0 \le s$$
This problem is NP-hard. Dictionary-based approaches are often computationally expensive.
4 Transform Model for Sparse Representation
Given a signal $y \in \mathbb{R}^n$ and a transform $W \in \mathbb{R}^{m \times n}$, we model $Wy = x + \eta$ with $\|x\|_0 \ll m$ and $\eta$ an error term. Natural signals are approximately sparse in wavelets and the DCT. Given $W$ and sparsity $s$, transform sparse coding is
$$\hat{x} = \arg\min_x \|Wy - x\|_2^2 \ \text{s.t.} \ \|x\|_0 \le s$$
$\hat{x} = H_s(Wy)$ is computed by thresholding $Wy$ to its $s$ largest-magnitude elements. Sparse coding is cheap! The signal is recovered as $W^{\dagger}\hat{x}$. Sparsifying transforms are exploited for compression (JPEG2000), etc.
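The thresholding operator $H_s(\cdot)$ above is only a few lines of numpy; a minimal sketch (the function name is my own):

```python
import numpy as np

def transform_sparse_code(W, y, s):
    """Transform sparse coding: x_hat = H_s(W y), i.e. keep the s
    largest-magnitude entries of W y and zero out the rest."""
    z = W @ y
    x = np.zeros_like(z)
    keep = np.argsort(-np.abs(z))[:s]   # indices of the s largest magnitudes
    x[keep] = z[keep]
    return x

# Example: with W = I, sparse coding just thresholds y itself.
W = np.eye(4)
y = np.array([3.0, 1.0, -2.0, 0.5])
x = transform_sparse_code(W, y, s=2)
```

In contrast to synthesis sparse coding, this step is exact and costs only a matrix-vector product plus a partial sort.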
5 Key Topic of Talk: Sparsifying Transform Learning
Square transform models:
- Unstructured transform learning [IEEE TSP, 2013 & 2015]
- Doubly sparse transform learning [IEEE TIP, 2013]
- Online learning for big data [IEEE JSTSP, 2015]
- Convex formulations for transform learning [ICASSP, 2014]
Overcomplete transform models:
- Unstructured overcomplete transform learning [ICASSP, 2013]
- Learning structured overcomplete transforms with block cosparsity (OCTOBOS) [IJCV, 2015]
Applications: image & video denoising, classification, magnetic resonance imaging (MRI) [SPIE 2015, ICIP 2015].
6 Square Transform Learning Formulation [Ravishankar & Bresler 12]
$$\text{(P0)} \quad \min_{W,X} \underbrace{\|WY - X\|_F^2}_{\text{sparsification error}} + \underbrace{\lambda\left(\|W\|_F^2 - \log|\det W|\right)}_{\text{regularizer } \lambda v(W)} \ \text{s.t.} \ \|X_i\|_0 \le s \ \forall i$$
$Y = [Y_1 \,|\, Y_2 \,|\, \dots \,|\, Y_N] \in \mathbb{R}^{n \times N}$: matrix of training signals. $X = [X_1 \,|\, X_2 \,|\, \dots \,|\, X_N] \in \mathbb{R}^{n \times N}$: matrix of sparse codes of the $Y_i$. The sparsification error measures the deviation of the data in the transform domain from perfect sparsity. $\lambda > 0$. The regularizer $v(W) = \|W\|_F^2 - \log|\det W|$ prevents trivial solutions and fully controls the condition number of $W$. (P0) is limited because it uses a single $W$ for all the data.
7 Why Union of Transforms?
Natural images typically have diverse features or textures.
8 Why Union of Transforms?
Union of transforms: one for each class of textures or features.
9 OCTOBOS Learning Formulation
$$\text{(P1)} \quad \min_{\{W_k, X_i, C_k\}} \underbrace{\sum_{k=1}^{K} \sum_{i \in C_k} \|W_k Y_i - X_i\|_2^2}_{\text{sparsification error}} + \underbrace{\sum_{k=1}^{K} \lambda_k\left(\|W_k\|_F^2 - \log|\det W_k|\right)}_{\text{regularizer}} \ \text{s.t.} \ \|X_i\|_0 \le s \ \forall i, \ \{C_k\}_{k=1}^{K} \in G$$
$C_k$ is the set of indices of signals in class $k$. $G$ is the set of all possible partitions of $[1:N]$ into $K$ disjoint subsets. (P1) jointly learns the union of transforms $\{W_k\}$ and clusters the data $Y$. The regularizer is necessary to control the scaling and conditioning ($\kappa$) of the transforms. $\lambda_k = \lambda_0 \|Y_{C_k}\|_F^2$, with $Y_{C_k}$ the matrix of all $Y_i$, $i \in C_k$, achieves scale invariance of the solution of (P1). As $\lambda_0 \to \infty$, $\kappa(W_k) \to 1$ and $\|W_k\|_2 \to 1/\sqrt{2}$ $\forall k$ for solutions of (P1).
10 Alternating Minimization Algorithm for (P1)
$$\text{(P1)} \quad \min_{\{W_k, X_i, C_k\}} \underbrace{\sum_{k=1}^{K} \sum_{i \in C_k} \|W_k Y_i - X_i\|_2^2}_{\text{sparsification error}} + \underbrace{\sum_{k=1}^{K} \lambda_k\left(\|W_k\|_F^2 - \log|\det W_k|\right)}_{\text{regularizer}} \ \text{s.t.} \ \|X_i\|_0 \le s \ \forall i, \ \{C_k\}_{k=1}^{K} \in G$$
11 Alternating OCTOBOS Learning Algorithm: Step 1
Transform update: solves (P1) for the $\{W_k\}$ only.
$$\min_{\{W_k\}} \sum_{k=1}^{K} \left\{ \sum_{i \in C_k} \|W_k Y_i - X_i\|_2^2 + \lambda_k v(W_k) \right\} \quad (1)$$
Closed-form solution using the singular value decomposition (SVD):
$$\hat{W}_k = 0.5\, R_k \left(\Sigma_k + \left(\Sigma_k^2 + 2\lambda_k I\right)^{\frac{1}{2}}\right) V_k^T L_k^{-1} \ \forall k \quad (2)$$
Here $I$ is the identity matrix, $\lambda_k = \lambda_0 \|Y_{C_k}\|_F^2$, $Y_{C_k} Y_{C_k}^T + \lambda_k I = L_k L_k^T$ with $L_k$ a matrix square root, and $V_k \Sigma_k R_k^T$ is a full SVD of $L_k^{-1} Y_{C_k} X_{C_k}^T$.
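The closed-form update (2) translates almost line-for-line into numpy. A sketch for a single cluster (subscripts dropped; function name is mine):

```python
import numpy as np

def transform_update(Y, X, lam):
    """Closed-form solution of min_W ||W Y - X||_F^2 + lam * (||W||_F^2 - log|det W|).
    Factor Y Y^T + lam*I = L L^T, take the full SVD L^{-1} Y X^T = V S R^T,
    and return W = 0.5 * R * (S + (S^2 + 2*lam*I)^{1/2}) * V^T * L^{-1}."""
    n = Y.shape[0]
    L = np.linalg.cholesky(Y @ Y.T + lam * np.eye(n))
    Linv = np.linalg.inv(L)
    V, sig, Rt = np.linalg.svd(Linv @ Y @ X.T)       # V * diag(sig) * Rt
    S = sig + np.sqrt(sig ** 2 + 2.0 * lam)
    return 0.5 * Rt.T @ np.diag(S) @ V.T @ Linv
```

One can check that the gradient $2W(YY^T + \lambda I) - 2XY^T - \lambda W^{-T}$ vanishes at this $W$, i.e., the update lands exactly on a stationary point of the subproblem.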
12 Alternating OCTOBOS Learning Algorithm: Step 2
Sparse coding & clustering: solves (P1) for the $\{C_k, X_i\}$ only.
$$\min_{\{C_k\},\{X_i\}} \sum_{k=1}^{K} \sum_{i \in C_k} \left\{ \|W_k Y_i - X_i\|_2^2 + \lambda_0 \|Y_i\|_2^2\, v(W_k) \right\} \ \text{s.t.} \ \|X_i\|_0 \le s \ \forall i, \ \{C_k\} \in G \quad (3)$$
Exact clustering: the global optimum $\{\hat{C}_k\}$ of (3) is found as
$$\{\hat{C}_k\} = \arg\min_{\{C_k\}} \sum_{k=1}^{K} \sum_{i \in C_k} \underbrace{\left\{ \|W_k Y_i - H_s(W_k Y_i)\|_2^2 + \lambda_0 \|Y_i\|_2^2\, v(W_k) \right\}}_{\text{clustering measure } M_{k,i}} \quad (4)$$
For each $Y_i$, the optimal cluster index is $\hat{k}_i = \arg\min_k M_{k,i}$.
Exact and cheap sparse coding: $\hat{X}_i = H_s(W_k Y_i)$ $\forall i \in \hat{C}_k$, $\forall k$.
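The clustering rule in (4) scores every signal under every transform and picks the minimizer; a small numpy sketch under the notation above (function name is mine):

```python
import numpy as np

def octobos_cluster(Ws, Y, s, lam0):
    """For each signal Y_i, compute M_{k,i} = ||W_k Y_i - H_s(W_k Y_i)||_2^2
    + lam0 * ||Y_i||_2^2 * v(W_k) and assign it to the minimizing cluster."""
    K, N = len(Ws), Y.shape[1]
    M = np.empty((K, N))
    for k, W in enumerate(Ws):
        Z = W @ Y
        Xk = np.zeros_like(Z)
        idx = np.argsort(-np.abs(Z), axis=0)[:s]       # top-s rows per column
        cols = np.arange(N)
        Xk[idx, cols] = Z[idx, cols]                   # H_s applied columnwise
        v = np.sum(W ** 2) - np.log(np.abs(np.linalg.det(W)))
        M[k] = np.sum((Z - Xk) ** 2, axis=0) + lam0 * np.sum(Y ** 2, axis=0) * v
    return np.argmin(M, axis=0)

# Example: identity vs. normalized Hadamard transform, 1-sparse signals.
H = np.array([[1, 1, 1, 1], [1, -1, 1, -1], [1, 1, -1, -1], [1, -1, -1, 1]]) / 2.0
Y = np.column_stack([np.eye(4)[:, 0], H[:, 0]])   # e_1 and a Hadamard column
labels = octobos_cluster([np.eye(4), H], Y, s=1, lam0=1e-3)
```

Each signal lands in the cluster whose transform sparsifies it best: $e_1$ goes to the identity, the Hadamard column to the Hadamard transform.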
13 Computational Advantages of OCTOBOS
Cost per iteration for learning an OCTOBOS $W \in \mathbb{R}^{Kn \times n}$ (assume the number of training signals $N \gg m = Kn$):
- Clustering & sparse coding step: $O(mnN)$.
- Transform update step: $O(n^2 N)$.
- Cost dominated by clustering.

Model ($m, s \propto n$):   Square $W \in \mathbb{R}^{n \times n}$   OCTOBOS $W \in \mathbb{R}^{m \times n}$   KSVD $D \in \mathbb{R}^{n \times m}$
Per-iter. cost:             $O(n^2 N)$                               $O(n^2 N)$                                $O(n^3 N)$

In practice, OCTOBOS learning converges in a few iterations. OCTOBOS learning is cheaper than dictionary learning by K-SVD [Aharon et al. 06].
14 Global Convergence Guarantees for OCTOBOS
$$\text{(P1)} \quad \min_{\{W_k, X_i, C_k\}} \sum_{k=1}^{K} \sum_{i \in C_k} \|W_k Y_i - X_i\|_2^2 + \sum_{k=1}^{K} \lambda_k\left(\|W_k\|_F^2 - \log|\det W_k|\right) \ \text{s.t.} \ \|X_i\|_0 \le s \ \forall i, \ \{C_k\}_{k=1}^{K} \in G$$
The alternating OCTOBOS learning algorithm is globally convergent to the set of partial minimizers of the objective of (P1). These partial minimizers are global minimizers with respect to $\{W_k\}$ and $\{X_i, C_k\}$, respectively, and local minimizers with respect to $\{W_k, X_i\}$. Under certain (mild) conditions, the algorithm converges to the set of stationary points of the equivalent objective $f(W)$:
$$f(W) = \sum_{i=1}^{N} \min_k \left\{ \|W_k Y_i - H_s(W_k Y_i)\|_2^2 + \lambda_0\, v(W_k)\, \|Y_i\|_2^2 \right\}$$
15 Algorithm Insensitive to Initializations
[Plots: objective function and sparsification error vs. iteration number for various initializations of $\{W_k\}$ (KLT, DCT, random, identity), using 8×8 patches and K = 2; and for various initializations of $\{C_k\}$ (random, k-means, equal, single), including the K = 1 (single) case.]
16 Visualization of Learned OCTOBOS
The square blocks (cluster-specific $W_k$) of a learned OCTOBOS are NOT similar. OCTOBOS transforms $W$ learned with different initializations appear different, yet they sparsify equally well.
[Figures: cross-gram matrix between $W_1$ and $W_2$ for the KLT initialization; transforms learned from random matrix and KLT initializations.]
17 Application: Unsupervised Classification
The overlapping image patches are first clustered by OCTOBOS learning. Each image pixel is then classified by a majority vote among the patches that cover that pixel.
[Figures: two input images with their K-means and OCTOBOS segmentations.]
18 Application: Compressed Sensing (CS)
CS enables accurate recovery of images from far fewer measurements than the number of unknowns. It relies on:
- Sparsity of the image in a transform domain or dictionary.
- A measurement procedure incoherent with the transform.
- Non-linear reconstruction.
The reconstruction problem (NP-hard) is
$$\min_x \underbrace{\|Ax - y\|_2^2}_{\text{data fidelity}} + \lambda \underbrace{\|\Psi x\|_0}_{\text{regularizer}} \quad (5)$$
$x \in \mathbb{C}^P$: vectorized image; $y \in \mathbb{C}^m$: measurements ($m < P$); $A$: fat sensing matrix; $\Psi$: transform. The $\ell_0$ "norm" counts non-zeros. CS with a non-adaptive regularizer is limited to low undersampling in MRI.
19 UNITE-BCS: Union of Transforms Blind CS
$$\text{(P2)} \quad \min_{x, B, \{W_k, C_k\}} \underbrace{\nu \|Ax - y\|_2^2}_{\text{data fidelity}} + \underbrace{\sum_{k=1}^{K} \sum_{j \in C_k} \|W_k R_j x - b_j\|_2^2}_{\text{sparsification error}} + \underbrace{\eta^2 \sum_{k=1}^{K} \sum_{j \in C_k} \|b_j\|_0}_{\text{sparsity penalty}}$$
$$\text{s.t.} \ W_k^H W_k = I \ \forall k, \ \{C_k\} \in G, \ \|x\|_2 \le C$$
$R_j \in \mathbb{R}^{n \times P}$ extracts patches. $W_k \in \mathbb{C}^{n \times n}$ is a cluster-specific transform: $W_k R_j x \approx b_j$ $\forall j \in C_k$, $\forall k$, with $b_j \in \mathbb{C}^n$ sparse, and $B \triangleq [b_1 \,|\, b_2 \,|\, \dots \,|\, b_N]$. (P2) learns a union of unitary transforms, reconstructs $x$, and clusters the patches of $x$, using only the undersampled $y$, so the model adapts to the underlying image. $\|x\|_2 \le C$ is an energy or range constraint, $C > 0$.
20 Block Coordinate Descent (BCD) Algorithm for (P2)
Sparse coding & clustering: solves (P2) for $\{C_k\}$ and $B$ only.
$$\min_{\{C_k\},\{b_j\}} \sum_{k=1}^{K} \sum_{j \in C_k} \left\{ \|W_k R_j x - b_j\|_2^2 + \eta^2 \|b_j\|_0 \right\} \ \text{s.t.} \ \{C_k\} \in G \quad (6)$$
Exact clustering: the global optimum $\{\hat{C}_k\}$ of (6) is found as
$$\{\hat{C}_k\} = \arg\min_{\{C_k\}} \sum_{k=1}^{K} \sum_{j \in C_k} \underbrace{\left\{ \|W_k R_j x - H_\eta(W_k R_j x)\|_2^2 + \eta^2 \|H_\eta(W_k R_j x)\|_0 \right\}}_{\text{clustering measure } M_{k,j}} \quad (7)$$
For each patch $R_j x$, the optimal cluster index is $\hat{k}_j = \arg\min_k M_{k,j}$.
Exact sparse coding by hard-thresholding: $\hat{b}_j = H_\eta(W_k R_j x)$ $\forall j \in \hat{C}_k$, $\forall k$.
21 BCD Algorithm: Transform Update Step
The transform update step solves (P2) for $\{W_k\}$. For each $k$, solve
$$\min_{W_k} \|W_k X_{C_k} - B_{C_k}\|_F^2 \ \text{s.t.} \ W_k^H W_k = I \quad (8)$$
$X_{C_k}$ is the matrix with columns $R_j x$ for $j \in C_k$; $B_{C_k}$ is the matrix of corresponding sparse codes. Closed-form solution:
$$\hat{W}_k = V U^H \quad (9)$$
where $U \Sigma V^H$ is a full SVD of $X_{C_k} B_{C_k}^H$.
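Equation (9) is the classical orthogonal Procrustes solution; a minimal numpy sketch (names are mine; the `.conj().T` keeps it valid for complex data):

```python
import numpy as np

def unitary_transform_update(X, B):
    """Solve min_W ||W X - B||_F^2 s.t. W^H W = I via the SVD
    X B^H = U S V^H, giving W_hat = V U^H."""
    U, _, Vh = np.linalg.svd(X @ B.conj().T)
    return Vh.conj().T @ U.conj().T

# Sanity check: if B = W0 X for a unitary W0 and X has full row rank,
# the update attains zero residual and recovers W0.
rng = np.random.default_rng(0)
W0, _ = np.linalg.qr(rng.standard_normal((6, 6)))
X = rng.standard_normal((6, 40))
W = unitary_transform_update(X, W0 @ X)
```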
22 BCD Algorithm: Image Update Step
The image update step solves (P2) for $x$ with the other variables fixed:
$$\min_x \nu \|Ax - y\|_2^2 + \sum_{k=1}^{K} \sum_{j \in C_k} \|W_k R_j x - b_j\|_2^2 \ \text{s.t.} \ \|x\|_2 \le C \quad (10)$$
This is a least squares problem with an $\ell_2$ norm constraint. Solve the least squares Lagrangian formulation via the normal equation:
$$\left( \sum_{j=1}^{N} R_j^T R_j + \nu A^H A + \hat{\mu} I \right) x = \sum_{k=1}^{K} \sum_{j \in C_k} R_j^T W_k^H b_j + \nu A^H y \quad (11)$$
The optimal multiplier $\hat{\mu} \in \mathbb{R}_+$ is the smallest real such that $\|\hat{x}\|_2 \le C$. Both $\hat{\mu}$ and $\hat{x}$ can be found cheaply in applications such as MRI.
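For intuition, (11) can be assembled and solved directly at toy scale. A minimal sketch assuming real-valued data, a single transform (K = 1), 1-D sliding-window "patches", a unitary W (as (P2) enforces), and an inactive norm constraint so that $\hat{\mu} = 0$; all names are mine:

```python
import numpy as np

def image_update(A, y, W, B, n, nu):
    """Solve (sum_j R_j^T R_j + nu * A^T A) x = sum_j R_j^T W^T b_j + nu * A^T y,
    where R_j extracts the length-n window x[j:j+n] and b_j = B[:, j]."""
    P = A.shape[1]
    M = nu * (A.T @ A)
    rhs = nu * (A.T @ y)
    for j in range(P - n + 1):
        R = np.zeros((n, P))
        R[np.arange(n), j + np.arange(n)] = 1.0      # window extractor R_j
        M += R.T @ R
        rhs += R.T @ (W.T @ B[:, j])
    return np.linalg.solve(M, rhs)
```

Since W is unitary, $W^T W = I$, so the patch terms contribute $R_j^T R_j$ to the normal matrix exactly as in (11); the solution zeros the gradient of the objective in (10).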
23 BCS Convergence Guarantees - Notations
Define the barrier function $\varphi(W)$ as
$$\varphi(W) = \begin{cases} 0, & W^H W = I \\ +\infty, & \text{else} \end{cases}$$
$\chi(x)$ is the barrier function corresponding to $\|x\|_2 \le C$. (P2) can be written in unconstrained form:
$$h(W, B, \Gamma, x) = \nu \|Ax - y\|_2^2 + \sum_{k=1}^{K} \sum_{j \in C_k} \left\{ \|W_k R_j x - b_j\|_2^2 + \eta^2 \|b_j\|_0 \right\} + \sum_{k=1}^{K} \varphi(W_k) + \chi(x)$$
The OCTOBOS $W$ is obtained by stacking the $W_k$'s. $\Gamma$ is a row vector whose entries are the cluster indices of the patches.
24 Union of Transforms BCS Convergence Guarantees
Theorem 1. For the sequence $\{W^t, B^t, \Gamma^t, x^t\}$ generated by the BCD algorithm with initialization $(W^0, B^0, \Gamma^0, x^0)$:
- $\{h(W^t, B^t, \Gamma^t, x^t)\} \to h^* = h^*(W^0, B^0, \Gamma^0, x^0)$.
- $\{W^t, B^t, \Gamma^t, x^t\}$ is bounded, and all its accumulation points are equivalent, i.e., they achieve the same value $h^*$ of the objective.
- $\|x^t - x^{t-1}\|_2 \to 0$ as $t \to \infty$.
- Every accumulation point $(W, B, \Gamma, x)$ satisfies the following partial global optimality conditions:
$$W \in \arg\min_{\tilde{W}} h(\tilde{W}, B, \Gamma, x), \qquad x \in \arg\min_{\tilde{x}} h(W, B, \Gamma, \tilde{x}) \quad (12)$$
$$(B, \Gamma) \in \arg\min_{\tilde{B}, \tilde{\Gamma}} h(W, \tilde{B}, \tilde{\Gamma}, x) \quad (13)$$
25 UNITE-BCS Convergence Guarantees
Theorem 2. Each accumulation point $(W, B, \Gamma, x)$ of $\{W^t, B^t, \Gamma^t, x^t\}$ also satisfies the following partial local optimality condition:
$$h(W, B + \Delta B, \Gamma, x + \Delta x) \ge h(W, B, \Gamma, x) = h^* \quad (14)$$
for all $\Delta x \in \mathbb{C}^P$, and all $\Delta B \in \mathbb{C}^{n \times N}$ satisfying $\|\Delta B\|_\infty < \eta/2$.
26 UNITE-BCS Global Convergence Guarantees
Corollary 1. For each initialization, the iterate sequence of the BCD algorithm converges to an equivalence class (same objective value) of accumulation points that are also partial global and partial local minimizers.
Corollary 2. The BCD algorithm is globally convergent to (a subset of) the set of partial minimizers of the objective.
27 CS MRI Example - 2.5x Undersampling (K = 3)
[Figures: sampling mask; zero-filling reconstruction (24.9 dB); UNITE-MRI reconstruction (37.3 dB); reference.]
28 Convergence Behavior: UTMRI (K = 1) & UNITE-MRI (K = 3)
[Plots vs. iteration number: objective function, $\|x^t - x^{t-1}\|_2 / \|x_{\mathrm{ref}}\|_2$, sparsity fraction of $B$, PSNR, and HFEN, for UTMRI (K = 1) and UNITE-MRI (K = 3).]
29 UNITE-MRI Clustering with K = 3
[Figures: UNITE-MRI reconstruction; clusters 1-3; real and imaginary parts of the learned W for cluster 2.]
30 UNITE-MRI Clustering with K = 4
[Figures: clusters 1-4; real and imaginary parts of the learned W for cluster 4.]
31 Reconstructions - Cartesian 2.5x Undersampling (K = 16)
[Figures: UNITE-MRI reconstruction (37.4 dB, 631 s); DLMRI [Ravishankar & Bresler 11] error (36.6 dB, 1797 s); UNITE-MRI error; UTMRI error (37.2 dB, 125 s).]
32 Example - Cartesian 2.5x Undersampling (K = 16)
[Figures: sampling mask; UTMRI (42.5 dB) and UNITE-MRI (44.3 dB) reconstructions with error maps; plot of PSNR (dB) vs. number of clusters K.]
33 Online Transform Learning 33
34 Why Online Transform Learning?
Batch learning uses all the training data simultaneously. With big data, the large training sets make batch learning computationally expensive in both time and memory. In real-time or streaming applications, data arrives sequentially and must be processed sequentially to limit latency. Online learning uses sequential model adaptation and signal reconstruction, with cheap computations and modest memory requirements.
35 Online Transform Learning 35
36 Online Transform Learning Formulation
For $t = 1, 2, 3, \dots$, solve
$$\text{(P3)} \quad \{\hat{W}_t, \hat{x}_t\} = \arg\min_{W, x_t} \frac{1}{t} \sum_{j=1}^{t} \left\{ \|W y_j - x_j\|_2^2 + \lambda_j v(W) \right\} \ \text{s.t.} \ \|x_t\|_0 \le s, \ x_j = \hat{x}_j, \ 1 \le j \le t-1$$
$\lambda_j = \lambda_0 \|y_j\|_2^2$ with $\lambda_0 > 0$, and $v(W) \triangleq \|W\|_F^2 - \log|\det W|$. $\lambda_0$ controls the condition number and scaling of the learned $\hat{W}_t$. $\hat{W}_t^{-1} \hat{x}_t$ is an (e.g., denoised) estimate of $y_t$, computed efficiently. For non-stationary data, a forgetting factor $\rho \in [0, 1]$ diminishes the influence of old data:
$$\frac{1}{t} \sum_{j=1}^{t} \rho^{\,t-j} \left\{ \|W y_j - x_j\|_2^2 + \lambda_j v(W) \right\} \quad (15)$$
37 Mini-Batch Transform Learning
For $J = 1, 2, 3, \dots$, solve
$$\text{(P4)} \quad \{\hat{W}_J, \hat{X}_J\} = \arg\min_{W, X_J} \frac{1}{JM} \sum_{j=1}^{J} \left\{ \|W Y_j - X_j\|_F^2 + \Lambda_j v(W) \right\} \ \text{s.t.} \ \|x_{JM-M+i}\|_0 \le s, \ 1 \le i \le M$$
$Y_J = [y_{JM-M+1} \,|\, y_{JM-M+2} \,|\, \dots \,|\, y_{JM}]$, with $M$ the mini-batch size; $X_J = [x_{JM-M+1} \,|\, x_{JM-M+2} \,|\, \dots \,|\, x_{JM}]$; $\Lambda_j = \lambda_0 \|Y_j\|_F^2$. Mini-batch learning can reduce the operation count relative to online learning, at the cost of increased latency and memory requirements. Alternative: the sparsity constraints can be replaced with $\ell_0$ penalties.
38 Online Transform Learning Algorithm
Sparse coding: solve (P3) for $x_t$ with fixed $W = \hat{W}_{t-1}$:
$$\min_{x_t} \|W y_t - x_t\|_2^2 \ \text{s.t.} \ \|x_t\|_0 \le s \quad (16)$$
Cheap solution: $\hat{x}_t = H_s(W y_t)$.
Transform update: solves (P3) for $W$ with $x_t = \hat{x}_t$:
$$\min_W \frac{1}{t} \sum_{j=1}^{t} \left\{ \|W y_j - x_j\|_2^2 + \lambda_j \left( \|W\|_F^2 - \log|\det W| \right) \right\} \quad (17)$$
$$\hat{W}_t = 0.5\, R_t \left( \Sigma_t + \left( \Sigma_t^2 + 2\beta_t I \right)^{\frac{1}{2}} \right) Q_t^T L_t^{-1} \quad (18)$$
Here $\frac{1}{t} \sum_{j=1}^{t} \left( y_j y_j^T + \lambda_0 \|y_j\|_2^2 I \right) = L_t L_t^T$ (maintained by a rank-1 update), $\beta_t = \frac{\lambda_0}{t} \sum_{j=1}^{t} \|y_j\|_2^2$, and $Q_t \Sigma_t R_t^T$ is a full SVD of $L_t^{-1} \Theta_t$ with $\Theta_t = \frac{1}{t} \sum_{j=1}^{t} y_j x_j^T$. Using
$$L_t^{-1} \Theta_t \approx \left(1 - \tfrac{1}{t}\right) L_{t-1}^{-1} \Theta_{t-1} + \tfrac{1}{t} L_t^{-1} y_t x_t^T$$
yields a rank-1 SVD update with no matrix-matrix products. The approximation error is bounded and cheaply monitored.
39 Mini-Batch Transform Learning Algorithm
Sparse coding: solve (P4) for $X_J$ with fixed $W = \hat{W}_{J-1}$:
$$\min_{X_J} \|W Y_J - X_J\|_F^2 \ \text{s.t.} \ \|x_{JM-M+i}\|_0 \le s \ \forall i \quad (19)$$
Cheap solution: $\hat{x}_{JM-M+i} = H_s(W y_{JM-M+i})$ $\forall i \in \{1, \dots, M\}$.
Transform update: solves (P4) for $W$ with fixed $\{X_j\}_{j=1}^{J}$:
$$\min_W \frac{1}{JM} \sum_{j=1}^{J} \left\{ \|W Y_j - X_j\|_F^2 + \Lambda_j \left( \|W\|_F^2 - \log|\det W| \right) \right\} \quad (20)$$
Closed-form solution involving SVDs. For $M \ll n$, use rank-$M$ updates; for $M = O(n)$, use direct SVDs.
40 Comparison of Computations, Memory, and Latency

Property                  Online              Mini-batch (small M ≪ n)   Mini-batch (large M)   Batch
Computations per sample   $O(n^2 \log_2 n)$   $O(n^2 \log_2 n)$          $O(n^2)$               $O(Pn^2)$
Memory                    $O(n^2)$            $O(n^2)$                   $O(nM)$                $O(nN)$
Latency                   0                   M − 1                      M − 1                  N − 1

Latency: maximum time between the arrival of a signal and the generation of its output. $P$: number of batch iterations; $N$: total samples; $M$: mini-batch size; $n$: signal size. When $\log_2 n < P$, the online scheme is computationally cheaper than batch. For big data, the online and mini-batch schemes have low memory and latency costs. Online synthesis dictionary learning [Mairal et al. 10] has a higher computational cost per sample: $O(n^3)$.
41 Online Learning Convergence Analysis: Notations
The objective in the transform update step of (P3) is
$$\hat{g}_t(W) = \frac{1}{t} \sum_{j=1}^{t} \left\{ \|W y_j - \hat{x}_j\|_2^2 + \lambda_0 \|y_j\|_2^2\, v(W) \right\} \quad (21)$$
The empirical objective function is
$$g_t(W) = \frac{1}{t} \sum_{j=1}^{t} \left\{ \|W y_j - H_s(W y_j)\|_2^2 + \lambda_0 \|y_j\|_2^2\, v(W) \right\} \quad (22)$$
This is the objective minimized in batch transform learning. In the online setting, the sparse codes of past signals cannot be optimally reset at future times $t$.
42 Expected Transform Learning Cost
Assumption: the $y_t$ are i.i.d. random samples from the sphere $S^n = \{y \in \mathbb{R}^n : \|y\|_2 = 1\}$, with an absolutely continuous probability measure $p$. We consider the minimization of the expected learning cost
$$g(W) = \mathbb{E}_y \left[ \|W y - H_s(W y)\|_2^2 + \lambda_0 \|y\|_2^2\, v(W) \right] \quad (23)$$
It follows from the assumption that $\lim_{t \to \infty} g_t(W) = g(W)$ a.s. Given a specific training set, it is unnecessary to minimize the batch objective $g_t(W)$ to high precision, since $g_t(W)$ only approximates $g(W)$. Even an inaccurate minimizer of $g_t(W)$ could provide the same, or better, value of $g(W)$ than a fully optimized one.
43 Main Convergence Results
Theorem 3. For the sequence $\{\hat{W}_t\}$ generated by our online scheme:
(i) As $t \to \infty$, $\hat{g}_t(\hat{W}_t)$, $g_t(\hat{W}_t)$, and $g(\hat{W}_t)$ all converge a.s. to a common limit, say $g^*$.
(ii) The sequence $\{\hat{W}_t\}$ is bounded. Every accumulation point $\hat{W}$ of $\{\hat{W}_t\}$ satisfies $\nabla g(\hat{W}) = 0$ and $g(\hat{W}) = g^*$ with probability 1.
(iii) The distance between $\hat{W}_t$ and the set of stationary points of $g(W)$ converges to 0 a.s.
(iv) $\hat{g}_{t+1}(\hat{W}_{t+1}) - \hat{g}_t(\hat{W}_t)$ and $\|\hat{W}_{t+1} - \hat{W}_t\|_F$ both decay as $O(1/t)$.
44 Empirical Convergence Behavior
[Plots vs. number of signals processed: objective function, sparsification error, and $\|\hat{W}_{t+1} - \hat{W}_t\|_F$ for the online and mini-batch (M = 320) schemes.]
The $\{y_t\}$ are generated as $\{W^{-1} x_t\}$ with a random unitary $W$ and random $x_t$ with $\|x_t\|_0 = 3$. The objective converges quickly for both the online and mini-batch schemes. The sparsification error converges to zero, and $\kappa(W) \in [1.02, 1.04]$: the schemes learned a good model.
45 Online Video Denoising by 3D Transform Learning
$z_t$ is a noisy video frame and $\hat{z}_t$ is its denoised version. $G_t$ is a tensor with $m$ frames formed using a sliding-window scheme. Overlapping 3D patches in the $G_t$'s are denoised sequentially using adaptive mini-batch denoising. Denoised patches are averaged at their 3D locations to yield the frame estimates.
46 Video Denoising by Online Transform Learning
[Table: denoising PSNRs for the Salesman, Miss America, and Coastguard videos at several noise levels σ, comparing 3D DCT, SK-SVD, VBM3D, VBM4D, and the proposed VIDOLSAT at two patch sizes (n = 512 and n = 768).]
VIDOLSAT provides 1.7 dB, 1.2 dB, 0.8 dB, and 0.8 dB better PSNR than 3D DCT, sparse K-SVD [Rubinstein et al. 10], VBM3D [Dabov et al. 07], and VBM4D [Maggioni et al. 12], respectively.
47 Video Denoising Example: Salesman
[Figures: noisy frame; VIDOLSAT denoised frame and error map; VBM4D error map.]
48 Conclusions
- We introduced several data-driven sparse model adaptation techniques.
- Transform learning methods are highly efficient and scalable, enjoy good theoretical and empirical convergence behavior, and are highly effective in many applications.
- Highly promising results were obtained using transform learning for denoising and compressed sensing.
- Future work: online blind compressed sensing.
Acknowledgments: Yoram Bresler, Bihan Wen.
Transform learning webpage:
49 Thank you! Questions?? 49
More informationCompressed Sensing and Related Learning Problems
Compressed Sensing and Related Learning Problems Yingzhen Li Dept. of Mathematics, Sun Yat-sen University Advisor: Prof. Haizhang Zhang Advisor: Prof. Haizhang Zhang 1 / Overview Overview Background Compressed
More informationMathematical introduction to Compressed Sensing
Mathematical introduction to Compressed Sensing Lesson 1 : measurements and sparsity Guillaume Lecué ENSAE Mardi 31 janvier 2016 Guillaume Lecué (ENSAE) Compressed Sensing Mardi 31 janvier 2016 1 / 31
More informationTutorial: Sparse Signal Processing Part 1: Sparse Signal Representation. Pier Luigi Dragotti Imperial College London
Tutorial: Sparse Signal Processing Part 1: Sparse Signal Representation Pier Luigi Dragotti Imperial College London Outline Part 1: Sparse Signal Representation ~90min Part 2: Sparse Sampling ~90min 2
More informationCombining Sparsity with Physically-Meaningful Constraints in Sparse Parameter Estimation
UIUC CSL Mar. 24 Combining Sparsity with Physically-Meaningful Constraints in Sparse Parameter Estimation Yuejie Chi Department of ECE and BMI Ohio State University Joint work with Yuxin Chen (Stanford).
More informationAn Introduction to Sparse Approximation
An Introduction to Sparse Approximation Anna C. Gilbert Department of Mathematics University of Michigan Basic image/signal/data compression: transform coding Approximate signals sparsely Compress images,
More informationMLCC 2018 Variable Selection and Sparsity. Lorenzo Rosasco UNIGE-MIT-IIT
MLCC 2018 Variable Selection and Sparsity Lorenzo Rosasco UNIGE-MIT-IIT Outline Variable Selection Subset Selection Greedy Methods: (Orthogonal) Matching Pursuit Convex Relaxation: LASSO & Elastic Net
More informationSparse Approximation and Variable Selection
Sparse Approximation and Variable Selection Lorenzo Rosasco 9.520 Class 07 February 26, 2007 About this class Goal To introduce the problem of variable selection, discuss its connection to sparse approximation
More informationEfficiently Implementing Sparsity in Learning
Efficiently Implementing Sparsity in Learning M. Magdon-Ismail Rensselaer Polytechnic Institute (Joint Work) December 9, 2013. Out-of-Sample is What Counts NO YES A pattern exists We don t know it We have
More informationSparse regression. Optimization-Based Data Analysis. Carlos Fernandez-Granda
Sparse regression Optimization-Based Data Analysis http://www.cims.nyu.edu/~cfgranda/pages/obda_spring16 Carlos Fernandez-Granda 3/28/2016 Regression Least-squares regression Example: Global warming Logistic
More informationCompressive Sensing of Sparse Tensor
Shmuel Friedland Univ. Illinois at Chicago Matheon Workshop CSA2013 December 12, 2013 joint work with Q. Li and D. Schonfeld, UIC Abstract Conventional Compressive sensing (CS) theory relies on data representation
More informationNumerical Linear Algebra Primer. Ryan Tibshirani Convex Optimization /36-725
Numerical Linear Algebra Primer Ryan Tibshirani Convex Optimization 10-725/36-725 Last time: proximal gradient descent Consider the problem min g(x) + h(x) with g, h convex, g differentiable, and h simple
More informationRobust Principal Component Analysis
ELE 538B: Mathematics of High-Dimensional Data Robust Principal Component Analysis Yuxin Chen Princeton University, Fall 2018 Disentangling sparse and low-rank matrices Suppose we are given a matrix M
More informationSeismic data interpolation and denoising by learning a tensor tight frame
Seismic data interpolation and denoising by learning a tensor tight frame Lina Liu 1, Gerlind Plonka Jianwei Ma 1 1,Department of Mathematics,Harbin Institute of Technology, Harbin, China,Institute for
More informationRobust Sparse Recovery via Non-Convex Optimization
Robust Sparse Recovery via Non-Convex Optimization Laming Chen and Yuantao Gu Department of Electronic Engineering, Tsinghua University Homepage: http://gu.ee.tsinghua.edu.cn/ Email: gyt@tsinghua.edu.cn
More informationConstructing Explicit RIP Matrices and the Square-Root Bottleneck
Constructing Explicit RIP Matrices and the Square-Root Bottleneck Ryan Cinoman July 18, 2018 Ryan Cinoman Constructing Explicit RIP Matrices July 18, 2018 1 / 36 Outline 1 Introduction 2 Restricted Isometry
More informationSignal Recovery, Uncertainty Relations, and Minkowski Dimension
Signal Recovery, Uncertainty Relations, and Minkowski Dimension Helmut Bőlcskei ETH Zurich December 2013 Joint work with C. Aubel, P. Kuppinger, G. Pope, E. Riegler, D. Stotz, and C. Studer Aim of this
More informationOvercomplete Dictionaries for. Sparse Representation of Signals. Michal Aharon
Overcomplete Dictionaries for Sparse Representation of Signals Michal Aharon ii Overcomplete Dictionaries for Sparse Representation of Signals Reasearch Thesis Submitted in Partial Fulfillment of The Requirements
More informationSparse Approximation of Signals with Highly Coherent Dictionaries
Sparse Approximation of Signals with Highly Coherent Dictionaries Bishnu P. Lamichhane and Laura Rebollo-Neira b.p.lamichhane@aston.ac.uk, rebollol@aston.ac.uk Support from EPSRC (EP/D062632/1) is acknowledged
More informationFast Hard Thresholding with Nesterov s Gradient Method
Fast Hard Thresholding with Nesterov s Gradient Method Volkan Cevher Idiap Research Institute Ecole Polytechnique Federale de ausanne volkan.cevher@epfl.ch Sina Jafarpour Department of Computer Science
More informationProximal Methods for Optimization with Spasity-inducing Norms
Proximal Methods for Optimization with Spasity-inducing Norms Group Learning Presentation Xiaowei Zhou Department of Electronic and Computer Engineering The Hong Kong University of Science and Technology
More informationStability of Modified-CS over Time for recursive causal sparse reconstruction
Stability of Modified-CS over Time for recursive causal sparse reconstruction Namrata Vaswani Department of Electrical and Computer Engineering Iowa State University http://www.ece.iastate.edu/ namrata
More informationA Randomized Approach for Crowdsourcing in the Presence of Multiple Views
A Randomized Approach for Crowdsourcing in the Presence of Multiple Views Presenter: Yao Zhou joint work with: Jingrui He - 1 - Roadmap Motivation Proposed framework: M2VW Experimental results Conclusion
More informationNon-convex Robust PCA: Provable Bounds
Non-convex Robust PCA: Provable Bounds Anima Anandkumar U.C. Irvine Joint work with Praneeth Netrapalli, U.N. Niranjan, Prateek Jain and Sujay Sanghavi. Learning with Big Data High Dimensional Regime Missing
More informationBayesian Paradigm. Maximum A Posteriori Estimation
Bayesian Paradigm Maximum A Posteriori Estimation Simple acquisition model noise + degradation Constraint minimization or Equivalent formulation Constraint minimization Lagrangian (unconstraint minimization)
More informationConvolutional Dictionary Learning and Feature Design
1 Convolutional Dictionary Learning and Feature Design Lawrence Carin Duke University 16 September 214 1 1 Background 2 Convolutional Dictionary Learning 3 Hierarchical, Deep Architecture 4 Convolutional
More informationResearch Overview. Kristjan Greenewald. February 2, University of Michigan - Ann Arbor
Research Overview Kristjan Greenewald University of Michigan - Ann Arbor February 2, 2016 2/17 Background and Motivation Want efficient statistical modeling of high-dimensional spatio-temporal data with
More informationSIGNAL SEPARATION USING RE-WEIGHTED AND ADAPTIVE MORPHOLOGICAL COMPONENT ANALYSIS
TR-IIS-4-002 SIGNAL SEPARATION USING RE-WEIGHTED AND ADAPTIVE MORPHOLOGICAL COMPONENT ANALYSIS GUAN-JU PENG AND WEN-LIANG HWANG Feb. 24, 204 Technical Report No. TR-IIS-4-002 http://www.iis.sinica.edu.tw/page/library/techreport/tr204/tr4.html
More informationConstrained optimization
Constrained optimization DS-GA 1013 / MATH-GA 2824 Optimization-based Data Analysis http://www.cims.nyu.edu/~cfgranda/pages/obda_fall17/index.html Carlos Fernandez-Granda Compressed sensing Convex constrained
More informationSensing systems limited by constraints: physical size, time, cost, energy
Rebecca Willett Sensing systems limited by constraints: physical size, time, cost, energy Reduce the number of measurements needed for reconstruction Higher accuracy data subject to constraints Original
More informationECE G: Special Topics in Signal Processing: Sparsity, Structure, and Inference
ECE 18-898G: Special Topics in Signal Processing: Sparsity, Structure, and Inference Low-rank matrix recovery via nonconvex optimization Yuejie Chi Department of Electrical and Computer Engineering Spring
More informationA Two-Stage Approach for Learning a Sparse Model with Sharp
A Two-Stage Approach for Learning a Sparse Model with Sharp Excess Risk Analysis The University of Iowa, Nanjing University, Alibaba Group February 3, 2017 1 Problem and Chanllenges 2 The Two-stage Approach
More informationBayesian Nonparametric Dictionary Learning for Compressed Sensing MRI
1 Bayesian Nonparametric Dictionary Learning for Compressed Sensing MRI Yue Huang, John Paisley, Qin Lin, Xinghao Ding, Xueyang Fu and Xiao-ping Zhang arxiv:1302.2712v2 [cs.cv] 9 Oct 2013 Abstract We develop
More informationSpatially adaptive alpha-rooting in BM3D sharpening
Spatially adaptive alpha-rooting in BM3D sharpening Markku Mäkitalo and Alessandro Foi Department of Signal Processing, Tampere University of Technology, P.O. Box FIN-553, 33101, Tampere, Finland e-mail:
More informationConvergence of a distributed asynchronous learning vector quantization algorithm.
Convergence of a distributed asynchronous learning vector quantization algorithm. ENS ULM, NOVEMBER 2010 Benoît Patra (UPMC-Paris VI/Lokad) 1 / 59 Outline. 1 Introduction. 2 Vector quantization, convergence
More informationTensor-Based Dictionary Learning for Multidimensional Sparse Recovery. Florian Römer and Giovanni Del Galdo
Tensor-Based Dictionary Learning for Multidimensional Sparse Recovery Florian Römer and Giovanni Del Galdo 2nd CoSeRa, Bonn, 17-19 Sept. 2013 Ilmenau University of Technology Institute for Information
More informationResidual Correlation Regularization Based Image Denoising
IEEE SIGNAL PROCESSING LETTERS, VOL. XX, NO. XX, MONTH 20 1 Residual Correlation Regularization Based Image Denoising Gulsher Baloch, Huseyin Ozkaramanli, and Runyi Yu Abstract Patch based denoising algorithms
More informationGuaranteed Rank Minimization via Singular Value Projection: Supplementary Material
Guaranteed Rank Minimization via Singular Value Projection: Supplementary Material Prateek Jain Microsoft Research Bangalore Bangalore, India prajain@microsoft.com Raghu Meka UT Austin Dept. of Computer
More informationApplied Machine Learning for Biomedical Engineering. Enrico Grisan
Applied Machine Learning for Biomedical Engineering Enrico Grisan enrico.grisan@dei.unipd.it Data representation To find a representation that approximates elements of a signal class with a linear combination
More informationIEEE SIGNAL PROCESSING LETTERS, VOL. 22, NO. 9, SEPTEMBER
IEEE SIGNAL PROCESSING LETTERS, VOL. 22, NO. 9, SEPTEMBER 2015 1239 Preconditioning for Underdetermined Linear Systems with Sparse Solutions Evaggelia Tsiligianni, StudentMember,IEEE, Lisimachos P. Kondi,
More informationCoSaMP. Iterative signal recovery from incomplete and inaccurate samples. Joel A. Tropp
CoSaMP Iterative signal recovery from incomplete and inaccurate samples Joel A. Tropp Applied & Computational Mathematics California Institute of Technology jtropp@acm.caltech.edu Joint with D. Needell
More informationLINEARIZED BREGMAN ITERATIONS FOR FRAME-BASED IMAGE DEBLURRING
LINEARIZED BREGMAN ITERATIONS FOR FRAME-BASED IMAGE DEBLURRING JIAN-FENG CAI, STANLEY OSHER, AND ZUOWEI SHEN Abstract. Real images usually have sparse approximations under some tight frame systems derived
More informationNew Coherence and RIP Analysis for Weak. Orthogonal Matching Pursuit
New Coherence and RIP Analysis for Wea 1 Orthogonal Matching Pursuit Mingrui Yang, Member, IEEE, and Fran de Hoog arxiv:1405.3354v1 [cs.it] 14 May 2014 Abstract In this paper we define a new coherence
More informationMotivation Sparse Signal Recovery is an interesting area with many potential applications. Methods developed for solving sparse signal recovery proble
Bayesian Methods for Sparse Signal Recovery Bhaskar D Rao 1 University of California, San Diego 1 Thanks to David Wipf, Zhilin Zhang and Ritwik Giri Motivation Sparse Signal Recovery is an interesting
More informationSparse Estimation and Dictionary Learning
Sparse Estimation and Dictionary Learning (for Biostatistics?) Julien Mairal Biostatistics Seminar, UC Berkeley Julien Mairal Sparse Estimation and Dictionary Learning Methods 1/69 What this talk is about?
More information