Recovering Data from Underdetermined Quadratic Measurements (CS 229a Project: Final Writeup)


Mahdi Soltanolkotabi
December 16, 2011

1 Introduction

Data that arises from engineering applications often contains some type of low-dimensional structure that enables intelligent representation and processing; that is, the data lies on an unknown low-dimensional structure in a very high-dimensional space. However, in many engineering disciplines we do not have access to the data directly, but only to measurements of the data. In some of these disciplines we would like to take as few measurements as possible while preserving quality (a good example is MR imaging, where the number of measurements corresponds to MR scan time). In others, the number of measurements is limited by the nature of the problem or by some physical constraint (e.g., the Netflix challenge, where we only have access to a few user ratings). Therefore, in many practical settings we face a very challenging problem: recovering data from underdetermined measurements. The past decade has witnessed a tremendous breakthrough in the recovery of low-dimensional data (sparse/low-rank) from underdetermined linear measurements (e.g., compressed sensing and matrix completion). However, in many engineering applications the measurements are not linear; one such example is phase retrieval. Phase retrieval is the problem of recovering a general signal, e.g., an image, from the magnitude of its Fourier transform (equivalent to quadratic measurements of the signal). Phase retrieval has numerous applications: X-ray crystallography, imaging science, diffraction imaging, optics, astronomical imaging, and microscopy, to name just a few. A number of methods have been developed to address the phase retrieval problem over the last century, falling into two classes. The first class consists of methods that are algebraic in nature and thus not robust to noise. The second class consists of methods that rely on non-convex optimization schemes and often get stuck in local minima. In
this class there has been heavy use of machine-learning-type algorithms in the literature, e.g., neural networks, graphical modeling, etc. Very recently, new methods have been developed (based on SDP relaxations) that are both robust to noise and convex in nature. Despite these advances, these new methods do not utilize the low-dimensional structure inherent in many phase retrieval applications (e.g., sparsity of images in a given frame). Using these inherent low-dimensional structures of image signals, one might be able to dramatically decrease the number of measurements needed in many phase retrieval applications. In this paper we present a unified view of how data can be recovered from quadratic measurements. We show that the aforementioned SDP relaxation in the context of phase retrieval can be derived as a special case of this more general formulation. We will show that our general formulation leads to new algorithms and insights in noisy phase retrieval. We also consider the case where the quadratic measurements are underdetermined, i.e., there is no unique solution. In this case, we will see that one can still recover the data using a priori information in the form of sparsity of the data in a dictionary. We will show that this formulation leads to a new algorithm (Sparse Phase Lift) for the phase retrieval problem that utilizes the inherent low-dimensional structure of the data to decrease the number of measurements needed in phase retrieval.

2 Unsupervised Learning of Data from Quadratic Measurements

In the first part of this section we show that one can use Shor's semidefinite relaxation scheme [2] to recover a vector from quadratic measurements by solving a convex optimization problem. In the second part we study the underdetermined case and present a convex optimization formulation for the recovery of data from underdetermined quadratic measurements.

2.1 General Case

Assume that we wish to find a data vector x ∈ R^n that satisfies a set of quadratic equations. We formulate this as a convex optimization problem by adding a quadratic objective
function:¹

  minimize_x   f_0(x) = x^T A_0 x + 2 b_0^T x + c_0
  subject to   y_i = x^T A_i x + 2 b_i^T x + c_i,   i = 1, ..., m.   (2.1)

Notice that this primal problem is non-convex. To bound the optimal value from below we build the dual problem. Writing f_i(x) = x^T A_i x + 2 b_i^T x + c_i, define

  f_λ(x) = f_0(x) + Σ_{i=1}^m λ_i (f_i(x) − y_i) = x^T A(λ) x + 2 b(λ)^T x + c(λ),

where

  A(λ) = A_0 + Σ_i λ_i A_i;   b(λ) = b_0 + Σ_i λ_i b_i;   c(λ) = c_0 + Σ_i λ_i (c_i − y_i).

Notice that min_{x ∈ R^n} f_λ(x) is a lower bound on the optimal value of the original problem. The method we are going to use to relax (2.1) is to find the best lower bound f_λ(x) ≥ η by an appropriate convex search over all λ and η. Now notice that x^T A(λ) x + 2 b(λ)^T x + c(λ) − η ≥ 0 for all x if and only if (λ, η) ∈ R^m × R satisfy

  [ c(λ) − η   b(λ)^T ]
  [ b(λ)       A(λ)   ]  ⪰ 0.

¹ Notice that if one is only interested in recovery from quadratic equality constraints, one can set the objective to zero, i.e., A_0 = 0, b_0 = 0, and c_0 = 0. We keep the general form because it will prove useful in deriving results for noisy phase retrieval.
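The key identity behind this lifting — that any quadratic function of x is linear in the lifted variable X = [1; x][1; x]^T — can be checked numerically. Below is a minimal numpy sketch (the matrices A, b, c are randomly generated for illustration; they stand in for the A_0, b_0, c_0 of the formulation above):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# A random quadratic f(x) = x^T A x + 2 b^T x + c, with A symmetric.
A = rng.standard_normal((n, n))
A = (A + A.T) / 2
b = rng.standard_normal(n)
c = rng.standard_normal()
x = rng.standard_normal(n)

# Lifted variable X = v v^T with v = [1; x], so X_{11} = 1 and X is PSD, rank one.
v = np.concatenate(([1.0], x))
X = np.outer(v, v)

# Stack the coefficients into M = [c b^T; b A]; then f(x) = Tr(M X).
M = np.block([[np.array([[c]]), b[None, :]],
              [b[:, None], A]])

f_direct = x @ A @ x + 2 * b @ x + c
f_lifted = np.trace(M @ X)
assert np.isclose(f_direct, f_lifted)
```

This is why the constraints X_{11} = 1 and X ⪰ 0 together with rank(X) = 1 make the lifted problem equivalent to the original; the relaxation simply drops the rank constraint.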

So, in order to find the best lower bound, we maximize over this set, i.e.,

  maximize_{η, λ}   η
  subject to   [ c_0 + Σ_i λ_i (c_i − y_i) − η   (b_0 + Σ_i λ_i b_i)^T ]
               [ b_0 + Σ_i λ_i b_i              A_0 + Σ_i λ_i A_i     ]  ⪰ 0.

If we write the semidefinite dual of the above problem we get

  minimize_{X ∈ R^{(n+1)×(n+1)}}   Tr( [ c_0  b_0^T ; b_0  A_0 ] X )
  subject to   Tr( [ c_i  b_i^T ; b_i  A_i ] X ) = y_i,   i = 1, ..., m,   (2.2)
               X ⪰ 0;   X_{11} = 1,

where we have excluded the details of the last derivation due to space constraints.

Figure 1: President Obama.   Figure 2: President Bush.

2.2 Underdetermined Case

Assume that the aforementioned quadratic equality constraints are underdetermined, i.e., even if one were able to use combinatorial search, there would not be a unique solution to the quadratic equations. (In fact, in many problems of interest there are exponentially many such solutions.) Furthermore, assume we have some additional information about the data vector x in the form of sparsity in a given dictionary D ∈ R^{n×N} with N ≥ n; i.e., x = Ds, where s is a sparse vector. We will assume that D is a frame and therefore D D^T = I. In this case we have X = x x^T = D s s^T D^T, and using the fact that D is a frame, D^T X D = s s^T. This implies that if s is sparse, then so is D^T X D. Based on this observation, we add a regularization term to (2.2) to promote sparsity. We do this by adding the convex surrogate of sparsity, namely the ℓ1 norm. Thus, we arrive at the following optimization formulation for recovering data from underdetermined quadratic measurements:

  minimize_{X ∈ R^{(n+1)×(n+1)}}   Tr( [ c_0  b_0^T ; b_0  A_0 ] X ) + λ ‖D^T X D‖_{ℓ1}
  subject to   Tr( [ c_i  b_i^T ; b_i  A_i ] X ) = y_i,   i = 1, ..., m,
               X ⪰ 0;   X_{11} = 1.

3 Phase Retrieval

3.1 Basic Formulation

Phase retrieval is about recovering a general signal from the magnitude of its Fourier transform. This problem arises because detectors can often only record the squared modulus of the Fresnel or Fraunhofer diffraction pattern of the radiation scattered from an object. Therefore the phase of the signal is lost, and one faces the very challenging problem of recovering a signal given only magnitude measurements. We now give a 1-D formulation of the
problem using finite-length signals. Suppose we have a 1-D signal x = (x[0], x[1], ..., x[n−1]) ∈ C^n and write its Fourier transform as

  x̂[ω] = (1/√n) Σ_{0 ≤ t < n} x[t] e^{−i2πωt/n},   ω ∈ Ω,

where Ω is a grid of sampled frequencies. The phase retrieval problem is finding x from the magnitude coefficients |x̂[ω]|. Before going further, one should notice that there are some inherent ambiguities in phase retrieval in this basic form: (1) for any c ∈ C with |c| = 1, x and cx have the same magnitudes; (2) the time-reversed form of the signal also has the same magnitudes; (3) a time-shifted version of the signal also has the same magnitudes. To avoid some of these ambiguities, in phase retrieval applications one usually measures the magnitudes of diffracted patterns (rather than just the DFT of the signal); i.e., one measures

  y_i = |⟨a_i, x⟩|^2,   i = 1, 2, ..., m,

where the diffracting waveform can be written as a_i[t] = w[t] e^{i2π⟨ω_k, t⟩}. Therefore the goal is to recover x from measurements of the form y_i = |⟨a_i, x⟩|^2, i = 1, 2, ..., m. The careful reader will notice that there is still one ambiguity left, even in this generalized version: one cannot recover the global phase of x. Therefore the goal of phase retrieval is to recover x up to this global phase ambiguity.

Figure 3: Magnitude of Obama, phase of Bush.   Figure 4: Magnitude of Bush, phase of Obama.

3.2 The Challenge

To emphasize the challenging aspect of this problem, consider Figures 1 and 2, which contain the portraits of the two most recent presidents of the US. Now consider Figure 3; this picture is obtained by combining the magnitude of the Obama picture with the phase of the Bush picture. As can be seen, the dominant picture here is that of Bush. Figure 4 is similar: it contains the magnitude of Bush along with the phase of Obama. These figures highlight the important role that the phase of a signal plays. If the phase of an image (the component that seems to carry most of the information) is lost, recovering it from magnitude measurements alone seems to be a very challenging problem.

3.3 PhaseLift

Inspired by the matrix
completion problem, in a breakthrough result [3] Candès et al. present a novel technique for the phase retrieval problem (up to global phase). The convex relaxation they propose in the presence of noise is depicted in Algorithm 1. The algorithm offers a more general framework for different noisy scenarios. What is remarkable about the result of [3] is that they

Algorithm 1 Phase Lift [3]
Input: Magnitude measurements y_1, y_2, ..., y_m and regularization term λ.
1. Solve
     minimize_{X ∈ R^{n×n}}   Σ_i (1/(2σ_i^2)) (y_i − µ_i)^2 + λ Tr(X)
     subject to   µ_i = a_i^T X a_i,   i = 1, 2, ..., m,
                  X ⪰ 0.
2. Set x̂ = √σ_1 v_1, where σ_1 is the largest eigenvalue of the solution X̂ of the above optimization problem and v_1 is the corresponding eigenvector.
Output: Signal x̂ up to a global phase.

prove that in the noiseless case, when the measurement vectors a_i are Gaussian random vectors (either real or complex) and as long as the number of measurements exceeds m > c n log n, then x̂ = x up to the global phase ambiguity we described earlier!

3.4 Derivation of PhaseLift as a Special Case of the General Formulation

Now we will show that one can arrive at the noiseless Phase Lift algorithm using the general relaxation scheme we presented earlier. Notice that in the noiseless case the measurements are

  y_i = |⟨a_i, x⟩|^2,   i.e.,   y_i = x^T a_i a_i^T x.   (3.1)

The above equation is of the form of the general formulation, and therefore the convex relaxation in this case reduces to

  minimize_{X ∈ R^{n×n}}   0
  subject to   y_i = a_i^T X a_i,   i = 1, 2, ..., m,   (3.2)
               X ⪰ 0.

Notice that this is exactly the Phase Lift formulation if one disregards the trace term. (In fact, the trace in [3] is only inspired by matrix completion and, as the authors note themselves, is not necessary.) Therefore we can recover Phase Lift as a special case of our general framework.

3.5 A New Relaxation for Noisy Phase Retrieval

The noise model considered in [3] and all of its generalizations is based on the assumption that y_i = µ_i + z_i, where the z_i are i.i.d. Gaussian and µ_i = |⟨a_i, x⟩|^2. Depending on the likelihood of z_i, one then minimizes an appropriate cost function. For example, the version of Phase Lift shown above corresponds to the case where the z_i are i.i.d. N(0, σ_i^2). However, a more realistic noise model is

  y_i = |⟨a_i, x⟩ + z_i|^2,   (3.3)

i.e., the noise term is added to the measurements before the magnitude is taken. This form of noise, among other things, models non-linearity in the input. While Phase Lift does
not offer any guidance in this case, the general convex relaxation scheme we presented earlier does provide natural guidance. For example, consider the case where the z_i are i.i.d. N(0, σ_i^2). Notice that

  y_i = x^T a_i a_i^T x + z_i^2 + 2 z_i ⟨a_i, x⟩
      = [x; z]^T [ a_i a_i^T   B_i ; B_i^T   I_i ] [x; z],   (3.4)

where B_i is the matrix with all entries zero except the i-th column, which equals a_i, and I_i is the diagonal matrix with all entries zero except the entry (i, i), which equals one. Therefore the maximum-likelihood estimator for this Gaussian noise model results in

  minimize_{x ∈ R^n, z ∈ R^m}   Σ_i z_i^2 / (2σ_i^2)
  subject to   y_i = [x; z]^T [ a_i a_i^T   B_i ; B_i^T   I_i ] [x; z],   i = 1, ..., m.

Notice that this is of the form of the general formulation (minimize a quadratic subject to quadratic equality constraints), and therefore one can use the general convex relaxation scheme to handle this case as well. Due to space limitations we will not write out the final convex relaxation here.

4 Proposed Method (Sparse Phase Lift)

As mentioned earlier, Phase Lift boasts many advantages: the optimization scheme is convex and therefore has a global solution, it is guaranteed to recover the original signal up to a global phase, and it is robust to certain kinds of noise. However, it has two major flaws: (1) the optimization scheme is slow; (2) it does not take advantage of the structures inherent in many practical applications. We will address the computational cost in the next section. Here we present an algorithm, which we call Sparse Phase Lift, that takes advantage of the inherent low dimensionality of signals. This algorithm utilizes the inherent sparsity that exists in many real-world signals of interest, e.g., images. So the basic assumption is that the input to Sparse Phase Lift is s-sparse with respect to a dictionary D ∈ R^{n×N}. Given the low-dimensional nature of the signal, one expects to be able to recover it from a much smaller number of measurements; we will demonstrate that Sparse Phase Lift is indeed capable of doing so. The main steps of our
suggested Sparse Phase Lift algorithm are depicted in Algorithm 2.

Algorithm 2 Sparse Phase Lift (SPL)
Input: Magnitude measurements y_1, y_2, ..., y_m.
1. Solve
     minimize_{X ∈ R^{n×n}}   ‖D^T X D‖_{ℓ1}
     subject to   y_i = a_i^T X a_i,   i = 1, 2, ..., m,
                  X ⪰ 0.
2. Set x̂ = √σ_1 v_1, where σ_1 is the largest eigenvalue of the solution X̂ of the above optimization problem and v_1 is the corresponding eigenvector.
Output: Signal x̂ up to a global phase.

5 Efficient Implementation

In this section we present a method that reduces the computational complexity involved in Sparse Phase Lift. The main steps of the algorithm are depicted in Algorithm 3. This derivation is inspired by the general scheme of [5, 1], adapted to a dual formulation of this problem. It should be noted that standard optimization methods cannot be applied here; to be concrete, even when the size of the signal is as small as n = 70, the problem becomes prohibitive for standard interior-point solvers, e.g., [4]. With the algorithm we present here we can handle signals of size n = 16,384 (128 × 128 pixel images). Scaling this up to larger sizes is an important question that we hope to address in the future. In the next paragraphs we quickly go through the derivation of this algorithm. Since the objective of SPL is non-smooth, we add a smoothing term of the form (µ/2) ‖X − X_0‖_F^2. We also introduce an epigraph variable t ∈ R.
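Step 2 of Algorithms 1 and 2 is cheap compared to the SDP solve: it extracts x̂ from the top eigenpair of X̂. A small numpy sketch of this step, assuming an idealized rank-one solution X̂ = x x^T (a real solver would return only an approximately rank-one matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16
x = rng.standard_normal(n)

# Idealized lifted solution: rank one and PSD.
X_hat = np.outer(x, x)

# Top eigenpair (np.linalg.eigh returns eigenvalues in ascending order).
eigvals, eigvecs = np.linalg.eigh(X_hat)
sigma1, v1 = eigvals[-1], eigvecs[:, -1]
x_hat = np.sqrt(sigma1) * v1

# Recovery holds only up to a global sign (the real-valued global phase).
err = min(np.linalg.norm(x_hat - x), np.linalg.norm(x_hat + x))
assert err <= 1e-8 * np.linalg.norm(x)
```

For a real x, σ_1 equals ‖x‖², so √σ_1 v_1 reproduces x up to sign, which is exactly the global phase ambiguity discussed in Section 3.1.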

Algorithm 3 Efficient Implementation of Sparse Phase Lift
Input: Magnitude measurements y_1, y_2, ..., y_m; initial primal and dual solutions X_0, Λ_0 ∈ R^{n×n}, λ_0 ∈ R^m, Γ_0 ∈ R^{N×N}; smoothing parameter µ; and step sizes {t_k}.
Initialization: θ_0 ← 1, λ̄^0 ← λ_0, Λ̄^0 ← Λ_0, Γ̄^0 ← Γ_0.
Iteration: for k = 0, 1, 2, ... do
    λ̃^k ← (1 − θ_k) λ^k + θ_k λ̄^k
    Λ̃^k ← (1 − θ_k) Λ^k + θ_k Λ̄^k
    Γ̃^k ← (1 − θ_k) Γ^k + θ_k Γ̄^k
    X^k ← X_0 + (1/µ) ( Λ̃^k + D Γ̃^k D^T − Σ_i (λ̃^k)_i a_i a_i^T )
    λ̄^{k+1} ← λ̄^k − t_k [ y_1 − a_1^T X^k a_1, ..., y_m − a_m^T X^k a_m ]^T
    Λ̄^{k+1} ← S( Λ̄^k − t_k X^k )
    Γ̄^{k+1} ← T( Γ̄^k − t_k D^T X^k D, t_k )
    λ^{k+1} ← (1 − θ_k) λ^k + θ_k λ̄^{k+1}
    Λ^{k+1} ← (1 − θ_k) Λ^k + θ_k Λ̄^{k+1}
    Γ^{k+1} ← (1 − θ_k) Γ^k + θ_k Γ̄^{k+1}
    θ_{k+1} ← 2 / (1 + √(1 + 4/θ_k^2))
end for
Output: Signal x̂ up to a global phase.

Therefore, the optimization in SPL can be approximated by

  minimize_{t ∈ R, X ∈ R^{n×n}}   t + (µ/2) ‖X − X_0‖_F^2
  subject to   y_i = a_i^T X a_i,   ‖D^T X D‖_{ℓ1} ≤ t,   X ⪰ 0.

Putting this into Lagrangian form, we have

  maximize_{ν ∈ R, λ ∈ R^m, Λ ∈ R^{n×n}, Γ ∈ R^{N×N}}   minimize_{t ∈ R, X ∈ R^{n×n}}
      t − νt + (µ/2) ‖X − X_0‖_F^2 − Σ_i λ_i (y_i − a_i^T X a_i) − ⟨X, Λ⟩ − ⟨D^T X D, Γ⟩
  subject to   Λ ⪰ 0,   ‖Γ‖_{ℓ∞} ≤ ν.

Notice that the Lagrangian is unbounded (in t) unless ν = 1, which results in

  maximize_{λ, Λ, Γ}   minimize_X   (µ/2) ‖X − X_0‖_F^2 − Σ_i λ_i y_i − ⟨X, Λ + D Γ D^T − Σ_i λ_i a_i a_i^T⟩
  subject to   Λ ⪰ 0,   ‖Γ‖_{ℓ∞} ≤ 1.

Define the dual function g_µ(λ, Λ, Γ) as the objective of the above maximization problem. The minimizer X(λ, Λ, Γ) is

  X(λ, Λ, Γ) = X_0 + (1/µ) ( Λ + D Γ D^T − Σ_i λ_i a_i a_i^T ).

To maximize the dual function we use a Nesterov-style gradient update [5]: at each step we take a prox-step of size t_k on the linearization of g_µ at the current iterate (λ^k, Λ^k, Γ^k), subject to the dual constraints. The updates are separable, so we need to solve the following optimizations.

λ update:

  λ^{k+1} = argmin_λ   (1/(2t_k)) ‖λ − λ^k‖_{ℓ2}^2 + Σ_i λ_i (y_i − a_i^T X^k a_i)
  ⟹  λ^{k+1} = λ^k − t_k [ y_1 − a_1^T X^k a_1, ..., y_m − a_m^T X^k a_m ]^T.

Λ update:

  Λ^{k+1} = argmin_{Λ ⪰ 0}   (1/(2t_k)) ‖Λ − Λ^k‖_F^2 + ⟨Λ, X^k⟩
  ⟹  Λ^{k+1} = S( Λ^k − t_k X^k ),

where S is the projection onto the positive semidefinite cone, which can be calculated by taking an eigenvalue decomposition and keeping the positive eigenvalues and their corresponding eigenvectors.

Γ update:

  Γ^{k+1} = argmin_{‖Γ‖_{ℓ∞} ≤ 1}   (1/(2t_k)) ‖Γ − Γ^k‖_F^2 + ⟨Γ, D^T X^k D⟩
  ⟹  Γ^{k+1} = T( Γ^k − t_k D^T X^k D, t_k ),

where the operator T(Z, τ)
applies sgn(z) · min{|z|, τ} to each element. Notice that these are almost the update rules in Algorithm 3. We obtain the algorithm by additionally applying Nesterov-style momentum updates to the dual variables to make the algorithm converge faster; this is the reason for the auxiliary variables with bars and tildes on top.² We have used a backtracking method for the step sizes t_k.

6 Simulations on Synthetic Data

In this section we perform some simulations on synthetic data to better understand the regimes of applicability of Phase Lift and Sparse Phase Lift.

6.1 Phase Lift vs. Sparse Phase Lift on a Simple Example

In this section we wish to demonstrate the performance of Sparse Phase Lift on a small synthetic image. For this purpose we generate a random s = 5-sparse signal of size 8 × 8,³ and take its wavelet transform to get a synthetic image x which is sparse in a wavelet frame. The wavelet frame we used was the Coiflet from Stanford Wavelab [6]. We used m = n random Gaussian magnitude measurements. The synthetic image x with its sparse coefficients s in the wavelet frame is shown in Figures 5(a) and 5(b). The images x̂ reconstructed using Phase Lift and Sparse Phase Lift, with their sparse coefficients ŝ in the wavelet frame, are shown in Figures 6(a), 6(b), 7(a), and 7(b). As can be seen, Sparse Phase Lift is successful but Phase Lift fails. This is to be expected, since Phase Lift does not utilize the low-dimensional structure of the signal. In this example Phase Lift and Sparse Phase Lift achieved relative errors⁴ of 0.8747 and 4.4523 × 10^−5, respectively.

6.2 Phase Lift vs. Sparse Phase Lift in Terms of Minimum Measurements

In this section we compare the number of measurements needed by Phase Lift and Sparse Phase Lift. For this purpose we consider an s-sparse signal x ∈ R^n; for now we assume D = I. The measurement vectors a_i were chosen as i.i.d. random vectors with N(0, 1) entries. The number of measurements m was varied from small to large. For each value of m, 100 trials were run. A value of m was considered successful if at least 90 out of the 100 trials resulted in successful
recovery (this corresponds to a probability of success of at least 0.9). The smallest such value of m for both methods is tabulated in Table 1, for n = 50 and s = 2, 5, 10.

² Due to space limitations we refer to [5, 1] for further details.
³ By a random sparse signal we mean that the support of the signal was chosen uniformly at random among all (n choose s) possible supports of size s. The values on the support were chosen i.i.d. N(0, 1). Here, n = 8 × 8 = 64.
⁴ We calculate the relative error as ‖X̂ − X‖_F / ‖X‖_F.
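The experimental setup of this section can be sketched in a few lines of numpy. The sparse-signal model follows footnote 3 and the error metric follows footnote 4; the perturbed X̂ below is only a stand-in for an actual solver output, since solving the SDP itself would require a solver such as [4]:

```python
import numpy as np

rng = np.random.default_rng(2)
n, s, m = 64, 5, 4 * 64

# Random s-sparse signal (footnote 3): support uniform at random,
# values on the support i.i.d. N(0, 1).
x = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x[support] = rng.standard_normal(s)

# m random Gaussian magnitude measurements y_i = |<a_i, x>|^2.
A = rng.standard_normal((m, n))
y = (A @ x) ** 2

# Relative error of a lifted estimate (footnote 4): ||X_hat - X||_F / ||X||_F.
X = np.outer(x, x)
X_hat = X + 1e-6 * rng.standard_normal((n, n))  # stand-in for a solver output
rel_err = np.linalg.norm(X_hat - X) / np.linalg.norm(X)
```

A trial is then counted as a success when rel_err falls below a fixed threshold, and the success probability for a given m is estimated over 100 such trials.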

Figure 5: (a) Original image x; (b) its sparse coefficients s.
Figure 6: Phase Lift: (a) reconstructed image; (b) sparse coefficients ŝ.
Figure 7: Sparse Phase Lift: (a) reconstructed image; (b) sparse coefficients ŝ.

Table 1: Comparison of the minimum number of measurements for different sparsity levels, with n = 50.

  s     Phase Lift    Sparse Phase Lift
  2     122           18
  5     122           26
  10    122           80

As can be seen, Sparse Phase Lift can achieve much lower sampling rates than Phase Lift by exploiting the sparsity of the signal. However, unlike compressed sensing, where the number of measurements is roughly proportional to the sparsity level s, here the relationship seems to be quadratic (O(s^2)).

7 Simulations on Real Images

As stated in previous sections, our proposed efficient solver increases the range of applicability of Sparse Phase Lift. Despite this improvement, it still takes a long time to run Sparse Phase Lift on large images. Therefore, due to time constraints, we chose 10 grayscale 512 × 512 images from the Miscellaneous category of the USC-SIPI Image Database [7] and down-sampled them to 64 × 64.⁵ We present in Table 2 the average relative errors of Phase Lift and Sparse Phase Lift for various forms of measurement mechanisms. The dictionary used for the images was the Coiflet from [6]. The measurement methods are:

1. m = n random Gaussian measurements.
2. m = n measurements consisting of columns of F, where F is the DFT matrix.
3. m = 2n measurements consisting of columns of F and FW, where W is a diagonal matrix with each diagonal entry a complex normal random variable.
4. m = 3n measurements consisting of columns of F, FW, and FW′, where W and W′ are independent diagonal matrices with each diagonal entry a complex normal random variable.

Table 2: Average relative error on 10 grayscale images from the USC image database (down-sampled to 64 × 64), using the Coiflet wavelet of Stanford Wavelab.

  Algorithm           Gaussian (m = n)   Fourier (m = n)   F & FW (m = 2n)   F, FW & FW′ (m = 3n)
  Phase Lift          0.94               0.97              0.73              0.008
  Sparse Phase Lift   0.09               0.92              0.06              0.003

As can be seen in the table, for case (4) both algorithms work well. In cases (1) and (3) Sparse Phase Lift shows superior performance; again, this is to be expected, since Phase Lift does not incorporate the sparsity of the images in the wavelet domain. Both methods fail in case (2). This says that Sparse Phase Lift is still not able to reconstruct the images from magnitudes of the Fourier transform of the signal alone; however, it still dramatically decreases the number of magnitude measurements needed. Finally, we test Sparse Phase Lift on a boat image from the same database. This time we down-sample the image to size 128 × 128 and use case (3) for our measurements. The original image along with its reconstruction is shown in Figures 8(a) and 8(b).

Figure 8: Performance of Sparse Phase Lift on the down-sampled boat image with measurements F and FW: (a) original; (b) reconstructed.

References

[1] S. Becker, E. J. Candès, and M. Grant. Templates for convex cone problems with applications to sparse signal recovery. To appear in Mathematical Programming Computation, 3(3):165–218, 2010.
[2] A. Ben-Tal and A. Nemirovski. Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications. Philadelphia, PA: SIAM, 2001.
[3] E. J. Candès, T. Strohmer, and V. Voroninski. PhaseLift: Exact and stable signal recovery from magnitude measurements via convex programming. arXiv:1010.2955v2.
[4] M. Grant and S. Boyd. CVX: Matlab software for disciplined convex programming, version 1.21. http://cvxr.com/cvx, April 2011.
[5] Y. Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, Series A, 103:127–152, 2005.
[6] Wavelab. Available at http://www-stat.stanford.edu/~wavelab.
[7] USC-SIPI Image Database. Available at http://sipi.usc.edu/database/.

⁵ The reader should be reminded that this is equivalent to solving an SDP of size 4096, which is time-consuming even with our efficient solver.