Making Flippy Floppy
Transcription
1 Making Flippy Floppy. James V. Burke (UW Mathematics), Aleksandr Y. Aravkin (IBM T.J. Watson Research), Michael P. Friedlander (UBC Computer Science). Current Topics, May 2013. Title after Talking Heads (1983).
2 Imaging Application: Migration Velocity Analysis
3 Imaging: Migration Velocity Analysis. After collecting seismic data, and given a smooth estimate of the velocity model in the subsurface, high-quality images are obtained by solving an optimization problem for the model update. Smallest 2D images: about half a million variables. Target 3D images: billions of variables. [Figure: 2D model-update images; depth vs. lateral position, in 24-meter units.]
4 Sparse Formulation for Migration. BP_σ: min ‖x‖₁ s.t. ‖r − JCx‖₂ ≤ σ. Problem specification: r = residual at the smooth model estimate; m = smooth velocity estimate; J = Jacobian of the forward model; C = curvelet transform; x = curvelet coefficients of the update; σ = error level. Results: improved recovery compared to LS inversion. [Figure: model-update images; depth vs. lateral position, in 24-meter units.]
5 In the Beginning there was BPDN & LASSO
6–10 BPDN & LASSO. A ∈ R^{m×n} with m ≪ n.
Basis Pursuit (Mallat and Zhang (1993); Chen, Donoho, Saunders (1998)). BP: min ‖x‖₁ s.t. Ax = b. [Optimal value = τ_BP]
Basis Pursuit De-Noising (BPDN) (Chen, Donoho, Saunders (1998)). BP_σ: min ‖x‖₁ s.t. ‖b − Ax‖₂ ≤ σ. [Target problem (SPGL1)]
LASSO (Least Absolute Shrinkage and Selection Operator; Tibshirani (1996)). LS_τ: min ½‖b − Ax‖₂² s.t. ‖x‖₁ ≤ τ. [Easiest to solve (SPG)]
Lagrangian formulation. QP_λ: min ½‖b − Ax‖₂² + λ‖x‖₁.
Candès, Romberg, and Tao (2006): BP gives least-support solutions.
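To make the relationship between these formulations concrete, here is a minimal numpy sketch (not from the slides) of solving the Lagrangian form QP_λ by proximal gradient (ISTA), whose prox step is exactly the soft-thresholding map; the function names, step size 1/‖A‖₂², and iteration count are illustrative choices.

```python
import numpy as np

def soft_threshold(z, t):
    """Prox of t*||.||_1: componentwise shrinkage toward zero."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_qp_lambda(A, b, lam, iters=500):
    """Minimize 0.5*||Ax - b||_2^2 + lam*||x||_1 by proximal gradient (ISTA)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)           # gradient of the smooth part
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

Sweeping λ in QP_λ traces out the same Pareto frontier that σ and τ parameterize in BP_σ and LS_τ, which is why the three formulations are interchangeable in principle even though they differ sharply in how easy they are to solve.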
11 SPGL1: PROBING THE PARETO FRONTIER FOR BASIS PURSUIT SOLUTIONS van den Berg and Friedlander (2008)
12 Optimal Value Function. BP_σ: min ‖x‖₁ s.t. ½‖Ax − b‖₂² ≤ σ. LS_τ: min ½‖Ax − b‖₂² s.t. ‖x‖₁ ≤ τ. The key is the value function v(τ) := min_{‖x‖₁ ≤ τ} ½‖Ax − b‖₂², so v(τ) = ½‖Ax_τ − b‖₂². Algorithm: 1. Evaluate v(τ) by solving LS_τ inexactly (projected gradient). 2. Compute v′(τ) inexactly (duality theory). 3. Solve v(τ) = σ (inexact Newton's method). [Figure: Pareto curve (τ, v(τ)) meeting the level σ at τ_BP.]
13 Optimal Value Function: Variational Properties. v(τ) := min_{‖x‖₁ ≤ τ} ½‖Ax − b‖₂². Theorem [van den Berg & Friedlander, 2008, 2011]: 1. v(τ) is convex. 2. For all τ ∈ (0, τ_BP), v is continuously differentiable with v′(τ) = −λ_τ, where λ_τ = ‖Aᵀr_τ‖_∞, r_τ = Ax_τ − b, and x_τ solves LS_τ.
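As a reading aid (a standard Lagrangian sensitivity argument, not spelled out on the slide), the derivative formula can be seen by differentiating the Lagrangian of LS_τ at the optimal multiplier:

```latex
L(x,\lambda;\tau) = \tfrac{1}{2}\|Ax-b\|_2^2 + \lambda\,(\|x\|_1 - \tau),
\qquad
v'(\tau) = \left.\frac{\partial L}{\partial \tau}\right|_{(x_\tau,\,\lambda_\tau)} = -\lambda_\tau ,
```

and stationarity in x, −Aᵀr_τ ∈ λ_τ ∂‖x_τ‖₁, pins the multiplier at λ_τ = ‖Aᵀr_τ‖_∞ (the dual of the ℓ₁ norm) whenever x_τ ≠ 0.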
14 Root Finding: v(τ) = σ. Approximately solve: minimize ½‖Ax − b‖₂² subject to ‖x‖₁ ≤ τ_k. Newton update: τ_{k+1} ← τ_k − (v_k − σ)/v′_k. Early termination: monitor the duality gap.
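A compact, self-contained sketch of the whole scheme follows. SPGL1 itself uses a spectral (Barzilai-Borwein) projected gradient with a duality-gap stopping test; this sketch substitutes plain projected gradient with fixed iteration counts, and all names and tolerances are illustrative.

```python
import numpy as np

def project_l1_ball(v, tau):
    """Euclidean projection onto {x : ||x||_1 <= tau} (sort-based method)."""
    if tau <= 0:
        return np.zeros_like(v)
    if np.abs(v).sum() <= tau:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    k = np.arange(1, v.size + 1)
    rho = np.max(np.nonzero(u * k > css - tau)[0])
    theta = (css[rho] - tau) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def eval_v(A, b, tau, x0=None, iters=300):
    """Approximate v(tau) = min {0.5||Ax-b||_2^2 : ||x||_1 <= tau} by projected
    gradient; return the value, v'(tau) = -||A^T r||_inf, and the solution."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1]) if x0 is None else x0
    for _ in range(iters):
        x = project_l1_ball(x - A.T @ (A @ x - b) / L, tau)
    r = A @ x - b
    return 0.5 * r @ r, -np.linalg.norm(A.T @ r, np.inf), x

def pareto_root_find(A, b, sigma, newton_iters=20, tol=1e-8):
    """Inexact Newton on v(tau) = sigma, starting from tau = 0 (warm-started)."""
    tau, x = 0.0, None
    for _ in range(newton_iters):
        val, dval, x = eval_v(A, b, tau, x0=x)
        if abs(val - sigma) <= tol or dval == 0.0:
            break
        tau = max(tau - (val - sigma) / dval, 0.0)   # Newton step on the Pareto curve
    return x, tau
```

Note the convention: the value function here uses the squared misfit, so for a BP_σ stated with ‖b − Ax‖₂ ≤ σ′ one would pass sigma = 0.5 * sigma_prime**2.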
15 EXTENSIONS: Sparse Optimization with Least-Squares Constraints (van den Berg and Friedlander, 2011)
16 Gauge Functions. U ⊆ Rⁿ nonempty, closed, and convex (usually 0 ∈ U). The gauge functional associated with U is γ(x | U) := inf {t ≥ 0 : x ∈ tU}. Examples: 1. U = B, the closed unit ball of a norm ‖·‖: γ(x | B) = ‖x‖. 2. U = K, a convex cone: γ(x | K) = δ(x | K) := 0 if x ∈ K, +∞ if x ∉ K. 3. U = B ∩ K: γ(x | B ∩ K) = ‖x‖ + δ(x | K).
17 Optimal Value Function (gauge version). BP_σ: min γ(x | U) s.t. ½‖Ax − b‖₂² ≤ σ. LS_τ: min ½‖Ax − b‖₂² s.t. γ(x | U) ≤ τ. The key is the value function v(τ) := min_{γ(x|U) ≤ τ} ½‖Ax − b‖₂², with v(τ) = ½‖Ax_τ − b‖₂² and τ = γ(x_τ | U) at the solution. Algorithm: 1. Evaluate v(τ) by solving LS_τ inexactly (projected gradient). 2. Compute v′(τ) inexactly (duality theory). 3. Solve v(τ) = σ (inexact Newton's method).
18 Applications for Gauge Functionals. Sparse optimization with least-squares constraints, van den Berg and Friedlander (2011). Non-negative Basis Pursuit: source localization, mass spectrometry. Nuclear-norm minimization: matrix completion problems.
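For the non-negative variant, the only new ingredient the framework needs is Euclidean projection onto the gauge's level set {x ≥ 0 : Σᵢ xᵢ ≤ τ}. A minimal sketch (the simplex step is the standard sort-based algorithm; names and edge-case handling are illustrative):

```python
import numpy as np

def project_nonneg_l1(v, tau):
    """Project v onto {x : x >= 0, sum(x) <= tau}, the level set of the
    gauge arising in non-negative basis pursuit."""
    if tau <= 0:
        return np.zeros_like(v)
    w = np.maximum(v, 0.0)              # nonnegativity is a separable clip
    if w.sum() <= tau:
        return w                        # already feasible
    u = np.sort(w)[::-1]                # otherwise project onto {x >= 0, sum(x) = tau}
    css = np.cumsum(u)
    k = np.arange(1, w.size + 1)
    rho = np.max(np.nonzero(u * k > css - tau)[0])
    theta = (css[rho] - tau) / (rho + 1.0)
    return np.maximum(w - theta, 0.0)
```

Swapping this projection into the projected-gradient evaluation of v(τ) sketched earlier gives the non-negative solver; nuclear-norm problems likewise reduce to projecting the vector of singular values onto a scaled ℓ₁ ball.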
19 HOW DANG FAR DOES THIS FLIPPIN IDEA GO?
20 How far does flipping go? Let ψᵢ : X ⊆ Rⁿ → R ∪ {+∞}, i = 1, 2, be arbitrary functions, with X an arbitrary set. Define epi(ψ) := {(x, µ) : ψ(x) ≤ µ} and
v₁(σ) := inf_{x∈X} ψ₁(x) + δ((x, σ) | epi(ψ₂))   [problem P_{1,2}(σ)]
v₂(τ) := inf_{x∈X} ψ₂(x) + δ((x, τ) | epi(ψ₁))   [problem P_{2,1}(τ)]
S_{1,2} := {σ ∈ R : ∅ ≠ argmin P_{1,2}(σ) ⊆ {x ∈ X : ψ₂(x) = σ}}.
Then, for every σ ∈ S_{1,2}: (a) v₂(v₁(σ)) = σ, and (b) argmin P_{1,2}(σ) = argmin P_{2,1}(v₁(σ)) ∩ {x ∈ X : ψ₁(x) = v₁(σ)}. Moreover, S_{2,1} = {v₁(σ) : σ ∈ S_{1,2}} and {(σ, v₁(σ)) : σ ∈ S_{1,2}} = {(v₂(τ), τ) : τ ∈ S_{2,1}}.
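As a sanity check (an illustration, not on the slide), the motivating pair is recovered by taking

```latex
\psi_1(x) = \|x\|_1, \qquad \psi_2(x) = \tfrac{1}{2}\|Ax - b\|_2^2
\quad\Longrightarrow\quad
P_{1,2}(\sigma) = \mathrm{BP}_\sigma, \qquad P_{2,1}(\tau) = \mathrm{LS}_\tau ,
```

so that conclusion (a), v₂(v₁(σ)) = σ, is exactly the statement that the Pareto curve from the SPGL1 slides can be traced from either axis.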
21 Making Flippy Floppy (Easier to solve). v₁(σ) := inf_{x∈X} ψ₁(x) + δ((x, σ) | epi(ψ₂))  [P_{1,2}(σ)]; v₂(τ) := inf_{x∈X} ψ₂(x) + δ((x, τ) | epi(ψ₁))  [P_{2,1}(τ)]. GOAL: solve P_{1,2}(σ) by solving P_{2,1}(τ) for perhaps several values of τ. The van den Berg-Friedlander method: given σ, solve the equation v₂(τ) = σ for τ = τ_σ. Then argmin P_{2,1}(τ_σ) = argmin P_{1,2}(σ).
22–23 When is the van den Berg-Friedlander method viable? Key considerations: (A) The problem P_{2,1}(τ), v₂(τ) := inf_{x∈X} ψ₂(x) + δ((x, τ) | epi(ψ₁)), must be easily and accurately solvable. (B) We must be able to solve equations of the form v₂(τ) = σ. (C) v₂(τ) should have reasonable variational properties (continuity, differentiability, subdifferentiability).
Fact: v₂ is non-increasing for τ > τ_min, where τ_min := inf {τ : P_{2,1}(τ) is feasible and finite valued} and τ_max := sup {τ : P_{2,1}(τ) is feasible and finite valued}, and so v₂ is differentiable a.e. on (τ_min, τ_max).
24–25 What generalizations should we consider? In the motivating models, we minimize a sparsity-inducing regularizer subject to a linear least-squares misfit measure on the data. Data misfit: ‖Ax − b‖₂²; statistical model: b = Ax + ε, ε ~ N(0, I). Some alternatives:

Statistical model            | Misfit measure   | Error model
Gaussian                     | (aᵢᵀx − bᵢ)²     | εᵢ ~ N(0, 1)
Laplace                      | |aᵢᵀx − bᵢ|      | εᵢ ~ L(0, 1)
Huber                        | ρ_H(aᵢᵀx − bᵢ)   | εᵢ ~ H(0, 1)
Vapnik (ε-insensitive loss)  | ρ_V(aᵢᵀx − bᵢ)   | εᵢ ~ V(0, 1)

And mix-and-match combinations: Gauss-nik? Hube-nik?
26 Gauss, Laplace, Huber, Vapnik. The four penalties (scalar case):
Gauss: V(x) = ½x².
Laplace: V(x) = |x|.
Huber: V(x) = −Kx − ½K² for x < −K; ½x² for −K ≤ x ≤ K; Kx − ½K² for K < x.
Vapnik: V(x) = −x − ε for x < −ε; 0 for −ε ≤ x ≤ ε; x − ε for ε < x.
[Figure: graphs of the four penalties.]
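A direct numpy transcription of the four penalties, vectorized over x (a sketch; the function names are ours):

```python
import numpy as np

def gauss(x):
    return 0.5 * x**2                      # quadratic everywhere

def laplace(x):
    return np.abs(x)                       # linear everywhere

def huber(x, K):
    # quadratic on [-K, K], linear with slope K outside
    return np.where(np.abs(x) <= K, 0.5 * x**2, K * np.abs(x) - 0.5 * K**2)

def vapnik(x, eps):
    # epsilon-insensitive: zero on the deadzone [-eps, eps], linear outside
    return np.maximum(np.abs(x) - eps, 0.0)
```

The where-branches match the piecewise formulas above: |x| ≤ K for Huber and the deadzone [−ε, ε] for Vapnik.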
27 ROBUSTNESS, SPARSENESS, AND BEYOND! Arbitrary Convex Pairs
28–29 Assume ρ and φ are closed, proper, and convex.
P₁(σ): min φ(x) s.t. ρ(b − Ax) ≤ σ  (the target problem)
P₂(τ): min ρ(b − Ax) s.t. φ(x) ≤ τ  (the easier, flipped problem)
Problems P₁(σ) and P₂(τ) are linked by the value function v₂(τ) := min_x ρ(b − Ax) + δ((x, τ) | epi(φ)).
Broad summary of results: 1. v₂(τ) is always convex, but may not be differentiable. 2. The equation v₂(τ) = σ can be solved via an inexact secant method. 3. We have precise knowledge of the variational properties of v₂(τ) for a large class of problems P₂(τ).
30 Convexity of general optimal value functions: inf-projection. Theorem: v₂(τ) is non-increasing and convex. Proof: f(x, τ) := ρ(b − Ax) + δ((x, τ) | epi φ) is convex in (x, τ). Therefore the inf-projection in the variable x is convex in τ: v₂(τ) = inf_x f(x, τ).
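The one line of algebra behind the proof, written out: for any (x₁, τ₁), (x₂, τ₂) and θ ∈ [0, 1], joint convexity of f gives

```latex
v_2\bigl(\theta\tau_1 + (1-\theta)\tau_2\bigr)
\le f\bigl(\theta x_1 + (1-\theta)x_2,\; \theta\tau_1 + (1-\theta)\tau_2\bigr)
\le \theta f(x_1,\tau_1) + (1-\theta) f(x_2,\tau_2),
```

and taking the infimum over x₁ and x₂ on the right yields v₂(θτ₁ + (1−θ)τ₂) ≤ θv₂(τ₁) + (1−θ)v₂(τ₂).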
31 Inexact secant method for v₂(τ) = σ. Theorem: The inexact secant method
τ_{k+1} ← τ_k − (l(τ_k) − σ)/m_k,  m_k = (l(τ_k) − u(τ_{k−1}))/(τ_k − τ_{k−1}),
where l and u are lower and upper estimates with 0 < l(τ_k) ≤ v₂(τ_k) ≤ u(τ_k), is superlinearly convergent as long as: 1. u(τ_k) − l(τ_k) shrinks fast enough; 2. the left Dini derivative of v₂(τ) at τ_σ is negative. [Figure: secant iterates τ₂, τ₃, τ₄, τ₅ approaching τ_σ on the graph of the value function.]
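A toy numpy sketch of the update. The value function, the bracket width standing in for the inexact lower/upper estimates l and u, and the shrink rate are all synthetic stand-ins chosen only to exercise the iteration:

```python
import numpy as np

def v2(tau):
    """Stand-in for the true value function: convex and decreasing."""
    return np.exp(-tau) + 0.05 * (5.0 - tau)

def inexact_secant(sigma, tau0=0.0, tau1=1.0, iters=25, tol=1e-10):
    """tau_{k+1} = tau_k - (l(tau_k) - sigma)/m_k with
    m_k = (l(tau_k) - u(tau_{k-1})) / (tau_k - tau_{k-1})."""
    gap = 1e-2                       # inexactness in the function values
    u_prev = v2(tau0) + gap          # upper estimate at the previous iterate
    tau_prev, tau = tau0, tau1
    for _ in range(iters):
        gap *= 0.25                  # the bracket "shrinks fast enough"
        l = v2(tau) - gap            # lower estimate at the current iterate
        if abs(l - sigma) <= tol:
            break
        m = (l - u_prev) / (tau - tau_prev)
        u_prev, tau_prev = v2(tau) + gap, tau
        tau = tau - (l - sigma) / m  # inexact secant step
    return tau

print(inexact_secant(sigma=0.3))     # converges to the root of v2(tau) = 0.3
```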
32 Derivatives for Quadratic Support Functions
33 Quadratic Support Functions. QS functions: φ(x) := sup_{u∈U} [⟨x, u⟩ − ½uᵀBu], where U ⊆ Rⁿ is nonempty, closed, and convex with 0 ∈ U, and B ∈ R^{n×n} is symmetric positive semi-definite. Examples:
1. Support functionals: B = 0.
2. Gauge functionals: γ(· | U) = δ*(· | U∘).
3. Norms: ‖·‖ = γ(· | B), with B the closed unit ball.
4. Least-squares: U = Rⁿ, B = I.
5. Huber: U = [−ε, ε]ⁿ, B = I.
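Checking the Huber entry directly, in the scalar case U = [−ε, ε], B = 1 (with K = ε in the notation of the earlier penalty slide):

```latex
\sup_{|u|\le\varepsilon}\Bigl[xu - \tfrac{1}{2}u^2\Bigr]
= \begin{cases}
\tfrac{1}{2}x^2, & |x| \le \varepsilon \quad (u^\star = x),\\
\varepsilon|x| - \tfrac{1}{2}\varepsilon^2, & |x| > \varepsilon \quad (u^\star = \varepsilon\operatorname{sign} x),
\end{cases}
```

which is exactly the Huber penalty from slide 26.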
34 Computing Derivatives for QS Functions. φ(x) := sup_{u∈U} [⟨x, u⟩ − ½uᵀBu]; v(b, τ) := min ρ(b − Ax) s.t. φ(x) ≤ τ. Theorem: ∇v(b, τ) = (ū, −µ), i.e. ∇_b v = ū and ∂v/∂τ = −µ, where (x̄, ū) satisfy the KKT conditions for P(b, τ) and µ = max{ γ∘(Aᵀū | U), √(ūᵀABAᵀū / (2τ)) }.
35 More specific examples of derivative computations. v(b, τ) := min ½‖b − Ax‖₂² s.t. φ(x) ≤ τ. Optimal solution: x̄. Optimal residual: r̄ = Ax̄ − b.
1. Support functionals: φ(x) = δ*(x | U), 0 ∈ U ⟹ v₂′(τ) = −γ(Aᵀr̄ | U) (= −δ*(Aᵀr̄ | U∘)).
2. Gauge functionals: φ(x) = γ(x | U), 0 ∈ U ⟹ v₂′(τ) = −γ∘(Aᵀr̄ | U) = −δ*(Aᵀr̄ | U).
3. Norms: φ(x) = ‖x‖ ⟹ v₂′(τ) = −‖Aᵀr̄‖_* (the dual norm).
4. Huber: φ(x) = sup_{u∈[−ε,ε]ⁿ} [⟨x, u⟩ − ½uᵀu] ⟹ v₂′(τ) = −max{ ε‖Aᵀr̄‖₁, ‖Aᵀr̄‖₂/√(2τ) }.
5. Vapnik: φ(x) = Σᵢ [(xᵢ − ε)₊ + (−xᵢ − ε)₊] ⟹ v₂′(τ) = −(‖Aᵀr̄‖_∞ + ε‖Aᵀr̄‖₁).
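These formulas are inexpensive once the optimal residual is in hand; a numpy transcription, following the expressions as reconstructed above (so the Huber and Vapnik lines inherit that reconstruction's assumptions, and the function names are ours):

```python
import numpy as np

def dv_l1(A, r):
    # phi = ||.||_1  =>  v'(tau) = -||A^T r||_inf
    return -np.linalg.norm(A.T @ r, np.inf)

def dv_huber(A, r, eps, tau):
    # reconstructed Huber formula: -max{eps*||A^T r||_1, ||A^T r||_2 / sqrt(2*tau)}
    z = A.T @ r
    return -max(eps * np.linalg.norm(z, 1), np.linalg.norm(z) / np.sqrt(2.0 * tau))

def dv_vapnik(A, r, eps):
    # reconstructed Vapnik formula: -(||A^T r||_inf + eps*||A^T r||_1)
    z = A.T @ r
    return -(np.linalg.norm(z, np.inf) + eps * np.linalg.norm(z, 1))
```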
36–37 Sparse and Robust Formulation. HBP_σ: min_{0 ≤ x} ‖x‖₁ s.t. ρ(b − Ax) ≤ σ. Signal recovery problem specification: x = 20-sparse spike train in R⁵¹²; b = measurements in R¹²⁰; A = measurement matrix satisfying RIP; ρ = Huber function; σ = error level, set at 0.01; 5 outliers in the measurements. Results: in the presence of outliers, the robust formulation recovers the spike train, while the standard formulation does not. [Figure: truth vs. LS and Huber reconstructions, with the corresponding residuals.]
38 Comparison with Student's t.
Density:                 v ~ N(0, 1)  |  v ~ L(0, 1)   |  v ~ T(ν = 1)
Negative log likelihood: ½v_k²        |  √2·|v_k|      |  log(1 + v_k²)
Influence function:      v_k          |  √2·v_k/|v_k|  |  2v_k/(1 + v_k²)
Gaussian, Laplace, and Student's t densities, corresponding negative log likelihoods, and influence functions (for scalar v_k).
39 Comparison with Student's t. minimize_{0 ≤ x} ‖x‖₁ subject to ρ(b − Ax) ≤ σ. Figure: Left, top to bottom: true signal, and reconstructions via least-squares, Huber, and Student's t. Right, top to bottom: true errors, and least-squares, Huber, and Student's t residuals.