Blind Deconvolution Using Convex Programming. Jiaming Cao


Problem Statement: The basic problem. Consider that the received signal is the circular convolution of two vectors $w$ and $x$, both of length $L$:
$$ y = w \circledast x, \qquad y[\ell] = \sum_{\ell'} w[\ell']\, x[\ell - \ell' + 1] \quad (\text{indices modulo } L). $$
How can we recover both unknown vectors $w$ and $x$ from the single received signal $y$?

Problem Statement: Structural assumptions. Assume that $w$ and $x$ live in known subspaces of dimensions $K$ and $N$ respectively, i.e. $w = Bh$ and $x = Cm$, where $B$ is an $L \times K$ matrix and $C$ is an $L \times N$ matrix. Knowing the matrices $B$ and $C$, reconstructing $w$ and $x$ is equivalent to reconstructing $h$ and $m$.
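
To make the setup concrete, here is a minimal numerical sketch (not from the slides; the dimensions, random subspaces, and FFT-based circular convolution are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
L, K, N = 64, 4, 8                             # illustrative sizes with K + N << L

B = rng.standard_normal((L, K))                # subspace for the unknown filter w
C = rng.standard_normal((L, N)) / np.sqrt(L)   # subspace for the unknown signal x
h = rng.standard_normal(K)                     # short coefficient vector for w
m = rng.standard_normal(N)                     # coefficient vector for x

w, x = B @ h, C @ m
# Circular convolution y = w (*) x, computed via the DFT convolution theorem
y = np.real(np.fft.ifft(np.fft.fft(w) * np.fft.fft(x)))
print(y.shape)                                 # (L,) -- the single observed vector
```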

Problem Statement: An intuition (figure from Ahmed et al., 2014).

Proposed Algorithm: Matrix observation. Expand the convolution equation using the structural assumption:
$$ y = m_1 (w \circledast c_1) + \dots + m_N (w \circledast c_N) = \big[\operatorname{circ}(c_1) \;\cdots\; \operatorname{circ}(c_N)\big] \begin{bmatrix} m_1 w \\ \vdots \\ m_N w \end{bmatrix}, $$
where $\operatorname{circ}(c_n)$ denotes the $L \times L$ circulant matrix constructed from the $n$th column of $C$.

Proposed Algorithm: Matrix observation. Take the Fourier transform: let $F$ be the $L \times L$ DFT matrix and set $\hat{C} = FC$, $\hat{B} = FB$. Then
$$ \hat{y} = Fy = \big[\Delta_1 \hat{B} \;\cdots\; \Delta_N \hat{B}\big] \begin{bmatrix} m_1 h \\ \vdots \\ m_N h \end{bmatrix}, \qquad \Delta_n = \operatorname{diag}\big(\sqrt{L}\,\hat{c}_n\big), $$
where $\hat{c}_n$ is the $n$th column of $\hat{C}$. The stacked vectors $m_1 h, \dots, m_N h$ are exactly the columns of the outer product of $h$ and $m$:
$$ h m^* = \begin{bmatrix} m_1 h & \cdots & m_N h \end{bmatrix} \quad (\text{up to conjugation of the } m_n). $$

Proposed Algorithm: Matrix observation. From
$$ \hat{y} = Fy = \big[\Delta_1 \hat{B} \;\cdots\; \Delta_N \hat{B}\big] \begin{bmatrix} m_1 h \\ \vdots \\ m_N h \end{bmatrix}, $$
let $X_0 = h m^*$. Since the map taking $X_0$ to $\hat{y}$ is linear, we can write the observation as
$$ \hat{y} = \mathcal{A}(X_0). $$
Moreover, $X_0$ is a rank-1 matrix by definition. This gives a way to formulate the recovery of $X_0$ as a low-rank matrix recovery problem.
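
A quick numerical sanity check of this linearity (a sketch, not the paper's code; it uses numpy's unnormalized DFT, so the $\sqrt{L}$ scaling is absorbed into $F$): the DFT of $y = w \circledast x$ equals the entrywise product $(\hat{B}h) \odot (\hat{C}m)$, whose $\ell$th entry is $\hat{b}_\ell^{\top} X_0\, \hat{c}_\ell$ with $X_0 = h m^{\top}$, i.e. a linear functional of $X_0$.

```python
import numpy as np

rng = np.random.default_rng(0)
L, K, N = 64, 4, 8
B = rng.standard_normal((L, K))
C = rng.standard_normal((L, N)) / np.sqrt(L)
h, m = rng.standard_normal(K), rng.standard_normal(N)

w, x = B @ h, C @ m
y = np.real(np.fft.ifft(np.fft.fft(w) * np.fft.fft(x)))   # circular convolution

y_hat = np.fft.fft(y)
B_hat = np.fft.fft(B, axis=0)                              # rows are b_hat_l^T
C_hat = np.fft.fft(C, axis=0)                              # rows are c_hat_l^T
X0 = np.outer(h, m)                                        # the rank-1 unknown

# Each frequency sample is a linear functional of X0: y_hat[l] = b_hat_l^T X0 c_hat_l
y_hat_from_X0 = np.array([B_hat[l] @ X0 @ C_hat[l] for l in range(L)])
print(np.allclose(y_hat, y_hat_from_X0))                   # True
```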

Proposed Algorithm: Formulation.
$$ \arg\min_X \ \operatorname{rank}(X) \quad \text{s.t.} \quad \hat{y} = \mathcal{A}(X) $$
Convex relaxation:
$$ \hat{X} = \arg\min_X \ \|X\|_* \quad \text{s.t.} \quad \hat{y} = \mathcal{A}(X) $$
Let $\sigma_1 u_1 v_1^*$ be the best rank-1 approximation to $\hat{X}$ (from its SVD); then set $\hat{h} = \sqrt{\sigma_1}\, u_1$ and $\hat{m} = \sqrt{\sigma_1}\, v_1$.
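
A minimal end-to-end sketch of this relaxation using cvxpy (illustrative, not the authors' implementation; the measurements are written in the time domain as $y[\ell] = \langle M_\ell, X_0 \rangle$, which keeps everything real-valued and is equivalent to the frequency-domain operator above):

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
L, K, N = 32, 3, 4
B = rng.standard_normal((L, K))
C = rng.standard_normal((L, N)) / np.sqrt(L)
h, m = rng.standard_normal(K), rng.standard_normal(N)
w, x = B @ h, C @ m
y = np.real(np.fft.ifft(np.fft.fft(w) * np.fft.fft(x)))      # observations

# Time-domain measurement matrices: y[l] = <M_l, h m^T>
idx = np.arange(L)
M = np.stack([B.T @ C[(l - idx) % L, :] for l in range(L)])   # shape (L, K, N)

# Nuclear-norm minimization subject to the linear measurements
X = cp.Variable((K, N))
constraints = [cp.sum(cp.multiply(M[l], X)) == y[l] for l in range(L)]
prob = cp.Problem(cp.Minimize(cp.normNuc(X)), constraints)
prob.solve()

# Recover the factors from the best rank-1 approximation of the solution
U, s, Vt = np.linalg.svd(X.value)
h_hat, m_hat = np.sqrt(s[0]) * U[:, 0], np.sqrt(s[0]) * Vt[0]
# Small when recovery succeeds (here K + N << L)
print(np.linalg.norm(np.outer(h_hat, m_hat) - np.outer(h, m)))
```

Note that the SVD step can only resolve the factors up to the inherent scalar ambiguity of the bilinear model.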

Performance Guarantee: Definitions and assumptions. WLOG, assume the columns of $B$ are orthonormal, so that
$$ B^* B = \hat{B}^* \hat{B} = \sum_{\ell=1}^{L} \hat{b}_\ell \hat{b}_\ell^* = I_K. $$
Define the coherences
$$ \mu_{\max}^2 = \frac{L}{K} \max_{1 \le \ell \le L} \|\hat{b}_\ell\|^2 \in [1,\, L/K], \qquad \mu_{\min}^2 = \frac{L}{K} \min_{1 \le \ell \le L} \|\hat{b}_\ell\|^2 \in [0,\, 1], $$
$$ \mu_h^2 = L \max_{1 \le \ell \le L} |\langle \hat{b}_\ell, h \rangle|^2 \in [1,\, K \mu_{\max}^2] \quad (h \text{ of unit norm}). $$
Assume the entries of $C$ are i.i.d. Gaussian, $C_{\ell,n} \sim \mathcal{N}(0, L^{-1})$.

Performance Guarantee: Theorem 1. Under the above assumptions, fix $\alpha \ge 1$. Then there exists a constant $C = O(\alpha)$, depending only on $\alpha$, such that if
$$ \max(\mu_{\max}^2 K,\ \mu_h^2 N) \ \le\ \frac{L}{C (\log L)^3}, $$
then $X_0 = h m^*$ is the unique solution to the nuclear norm minimization problem with probability $1 - O(L^{-\alpha+1})$, and we can recover both $w$ and $x$, up to a scalar multiple, from $y = w \circledast x$. When the coherences are low (i.e. $\mu_{\max}$ and $\mu_h$ are on the order of a constant), the inequality is tight to within a logarithmic factor, since in any case we must have $\max(K, N) \le L$.

Performance Guarantee: Theorem 1.
$$ \max(\mu_{\max}^2 K,\ \mu_h^2 N) \ \le\ \frac{L}{C (\log L)^3} $$
Since we would like the left-hand side to be as small as possible (so the condition admits larger $K$ and $N$), small $\mu_{\max}$ and $\mu_h$ (i.e. $B$ spread out in the frequency domain, or "incoherent") are preferred. E.g. when $B$ consists of the first $K$ columns of the identity, $\mu_{\max}^2 = \mu_{\min}^2 = 1$.
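
A small sketch (my own, assuming the unitary-DFT normalization) for computing these coherences for a given $B$, reproducing the identity-column example:

```python
import numpy as np

def coherences(B, h=None):
    """Coherence quantities from the slide's definitions.
    Assumes B has orthonormal columns and h (if given) has unit norm."""
    L, K = B.shape
    B_hat = np.fft.fft(B, axis=0) / np.sqrt(L)    # unitary DFT: columns stay orthonormal
    row_norms_sq = np.sum(np.abs(B_hat) ** 2, axis=1)
    out = {"mu_max^2": (L / K) * row_norms_sq.max(),
           "mu_min^2": (L / K) * row_norms_sq.min()}
    if h is not None:
        out["mu_h^2"] = L * np.max(np.abs(np.conj(B_hat) @ h) ** 2)
    return out

# B = first K columns of the identity (a "short" filter w): mu_max^2 = mu_min^2 = 1
L, K = 128, 8
B = np.eye(L)[:, :K]
print(coherences(B))
```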

Performance Guarantee: Theorem 2 (stability in the presence of noise). Let the noisy observation be
$$ \hat{y} = \mathcal{A}(X_0) + z, $$
where $z$ is an unknown noise vector with $\|z\|_2 \le \delta$. The optimization problem is now
$$ \arg\min_X \ \|X\|_* \quad \text{s.t.} \quad \|\hat{y} - \mathcal{A}(X)\|_2 \le \delta. $$
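
The corresponding change to the earlier cvxpy sketch (again illustrative; `M` and `y` are the time-domain measurement matrices and noisy observations in the same form as above, and `delta` is the assumed noise bound):

```python
import numpy as np
import cvxpy as cp

def denoised_nuclear_min(M, y, delta):
    """Nuclear-norm minimization with the equality constraint relaxed to
    ||y - A(X)||_2 <= delta, as in the noisy formulation."""
    L, K, N = M.shape
    X = cp.Variable((K, N))
    residual = y - cp.hstack([cp.sum(cp.multiply(M[l], X)) for l in range(L)])
    prob = cp.Problem(cp.Minimize(cp.normNuc(X)),
                      [cp.norm(residual, 2) <= delta])
    prob.solve()
    return X.value
```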

Performance Guarantee: Theorem 2 (stability in the presence of noise). Let $\lambda_{\min}$ and $\lambda_{\max}$ be the smallest and largest nonzero eigenvalues of $\mathcal{A}\mathcal{A}^*$. Then with probability $1 - O(L^{-1})$, the solution $\hat{X}$ to the modified optimization problem obeys
$$ \|\hat{X} - X_0\|_F \ \le\ C\, \frac{\lambda_{\max}}{\lambda_{\min}}\, \min(K, N)\, \delta $$
for a fixed constant $C$. When $\mathcal{A}$ is sufficiently underdetermined, $NK \gtrsim L \log L$, then with high probability $\lambda_{\max}/\lambda_{\min} \sim \mu_{\max}/\mu_{\min}$.
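
These eigenvalues are easy to inspect numerically for the small example above (a sketch; $\mathcal{A}\mathcal{A}^*$ is just the $L \times L$ Gram matrix of the vectorized measurement matrices $M_\ell$):

```python
import numpy as np

def operator_eigs(M, tol=1e-10):
    """Nonzero eigenvalues of A A^*, where A(X)[l] = <M[l], X>."""
    L = M.shape[0]
    A_mat = M.reshape(L, -1)                 # each row is vec(M_l)
    eigs = np.linalg.eigvalsh(A_mat @ A_mat.T)
    nonzero = eigs[eigs > tol * eigs.max()]
    return nonzero.min(), nonzero.max()      # (lambda_min, lambda_max)
```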

Performance Guarantee: Theorem 2 (stability in the presence of noise). Set $\delta = \|\hat{X} - X_0\|_F$. Then there exists a constant $C$ such that
$$ \|h - \alpha \hat{h}\|_2 \le C \min\!\big(\delta / \|m\|_2,\ \|h\|_2\big), \qquad \|m - \alpha^{-1} \hat{m}\|_2 \le C \min\!\big(\delta / \|h\|_2,\ \|m\|_2\big) $$
for some scalar multiple $\alpha$.
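
Since the factors are only identifiable up to a scalar, checking recovery against these bounds requires choosing the aligning scalar explicitly; a small sketch (least-squares choice of $\alpha$, my own convention):

```python
import numpy as np

def aligned_errors(h, m, h_hat, m_hat):
    """Errors ||h - a*h_hat|| and ||m - (1/a)*m_hat|| for the least-squares scalar a."""
    alpha = (h_hat @ h) / (h_hat @ h_hat)    # best scalar aligning h_hat to h
    return (np.linalg.norm(h - alpha * h_hat),
            np.linalg.norm(m - m_hat / alpha))
```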

Numerical Simulations: Phase transition (figure from Ahmed et al., 2014).

Numerical Simulations: In the presence of noise (figure from Ahmed et al., 2014).

Toy Example: Image deblurring. Here $x \in \mathbb{R}^L$ represents a 256x256-pixel image and $w \in \mathbb{R}^L$ a blur kernel of the same dimension, so $L = 256 \cdot 256 = 65536$. Let $C$ be a subset of a wavelet basis, with $m$ the active coefficients in the wavelet domain. Let $B$ be formed by a subset of columns of the identity matrix, with $h$ an unknown short vector (the nonzero entries of the blur kernel).
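
At this scale $B$ and $C$ would not be formed as dense $65536$-column matrices; they are applied implicitly. A minimal sketch of the implicit $B$ (identity columns selected by an assumed kernel support; the wavelet synthesis $C$ could be handled analogously with a wavelet toolbox such as PyWavelets):

```python
import numpy as np

L = 256 * 256                                  # vectorized 256x256 image
rng = np.random.default_rng(0)
K = 15 * 15                                    # assumed blur-kernel support size
support = np.sort(rng.choice(L, size=K, replace=False))  # assumed support indices

def B_apply(h):
    """w = B h, where B is the L x K matrix of identity columns indexed by `support`."""
    w = np.zeros(L)
    w[support] = h
    return w

def B_adjoint(w):
    """h = B^T w: restriction to the support."""
    return w[support]
```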

Toy Example: Image deblurring.

Toy Example: Image deblurring. Two cases are compared: knowing the support of the original image in the wavelet domain, versus not knowing the support of the original image in the wavelet domain.

Comments and Related Works. Novelty: casting blind deconvolution as a low-rank matrix recovery problem. Drawback: solving the resulting SDP is tractable but very expensive, especially at large scales. Li et al. speed this up using nonconvex methods (also in the presence of noise):
$$ \min_{h,\, m} \ \|\hat{y} - \mathcal{A}(h m^*)\|_2^2. $$
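
A toy illustration of the nonconvex route (my own sketch of plain alternating least squares on the bilinear objective, not the regularized gradient method of Li et al.; `M` and `y` are the time-domain measurement matrices and observations from the earlier sketches):

```python
import numpy as np

def als_blind_deconv(M, y, iters=100, seed=0):
    """Alternating least squares for min_{h,m} ||y - A(h m^T)||_2^2,
    where A(X)[l] = <M[l], X>. Each half-step is a linear least-squares solve."""
    _, K, N = M.shape
    rng = np.random.default_rng(seed)
    h = rng.standard_normal(K)
    m = rng.standard_normal(N)
    for _ in range(iters):
        # With h fixed, y[l] ~ (M[l]^T h) . m is linear in m
        G = np.stack([Ml.T @ h for Ml in M])
        m, *_ = np.linalg.lstsq(G, y, rcond=None)
        # With m fixed, y[l] ~ (M[l] m) . h is linear in h
        H = np.stack([Ml @ m for Ml in M])
        h, *_ = np.linalg.lstsq(H, y, rcond=None)
    return h, m
```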

Sketch of Proof (Theorem 2). Let $\hat{X} = X_0 + h$ (so $h$ here denotes the error matrix), and let $P_A$ be the projection operator onto the row space of $\mathcal{A}$, with $P_{A^\perp} = I - P_A$. By the triangle inequality and the constraint,
$$ \|\mathcal{A}(h)\| \ \le\ \|\hat{y} - \mathcal{A}(X_0)\| + \|\mathcal{A}(\hat{X}) - \hat{y}\| \ \le\ 2\delta. $$
The recovery error can be decomposed as $h = P_A(h) + P_{A^\perp}(h)$. It can be shown (details not included; Proposition 1) that, since $P_{A^\perp}(h)$ lies in the null space of $\mathcal{A}$,
$$ \|X_0 + P_{A^\perp}(h)\|_* \ \ge\ \|X_0\|_* + C\, \|P_{T^\perp} P_{A^\perp}(h)\|_*, $$
where $T$ is the tangent space associated with the rank-1 matrix $X_0$ and $P_{T^\perp}$ the projection onto its orthogonal complement. By the triangle inequality, after rearranging,
$$ \|P_{T^\perp} P_{A^\perp}(h)\|_* \ \le\ C\, \|P_A(h)\|_* \ \le\ C \min(K, N)\, \|P_A(h)\|_F. $$
It can also be shown (details not included) that, since $P_{A^\perp}(h)$ lies in the null space of $\mathcal{A}$,
$$ \|P_T P_{A^\perp}(h)\|_F \ \le\ 2\lambda_{\max}\, \|P_{T^\perp} P_{A^\perp}(h)\|_F. $$
Therefore,
$$ \|P_{A^\perp}(h)\|_F \ \le\ \|P_T P_{A^\perp}(h)\|_F + \|P_{T^\perp} P_{A^\perp}(h)\|_F \ \le\ (2\lambda_{\max} + 1)\, \|P_{T^\perp} P_{A^\perp}(h)\|_F. $$

Sketch of Proof (Theorem 2). Plugging this into the decomposition of the recovery error, we get
$$ \|h\|_F \ \le\ \|P_A(h)\|_F + (2\lambda_{\max} + 1)\, \|P_{T^\perp} P_{A^\perp}(h)\|_F. $$
Since the Frobenius norm is no greater than the nuclear norm, applying the previous bound on the nuclear norm gives
$$ \|h\|_F \ \le\ \|P_A(h)\|_F + C (2\lambda_{\max} + 1) \min(K, N)\, \|P_A(h)\|_F. $$
Absorbing all constants into $C$,
$$ \|h\|_F \ \le\ C \lambda_{\max} \min(K, N)\, \|P_A(h)\|_F \ \le\ C \lambda_{\max} \min(K, N)\, \|\mathcal{A}^\dagger \mathcal{A}(h)\|_F, $$
where $\mathcal{A}^\dagger$ is the pseudoinverse of $\mathcal{A}$, whose operator norm is $1/\lambda_{\min}$. Using the previously established inequality $\|\mathcal{A}(h)\| \le 2\delta$, the conclusion follows.

Discussion. What is the benefit of viewing blind deconvolution as a low-rank recovery problem?

References
1. Ahmed, Ali, Benjamin Recht, and Justin Romberg. "Blind deconvolution using convex programming." IEEE Transactions on Information Theory 60.3 (2014): 1711-1732.
2. Li, Xiaodong, et al. "Rapid, robust, and reliable blind deconvolution via nonconvex optimization." arXiv preprint arXiv:1606.04933 (2016).