Fast and Robust Phase Retrieval

Fast and Robust Phase Retrieval
Aditya Viswanathan (aditya@math.msu.edu)
CCAM Lunch Seminar, Purdue University
April 18, 2014

Joint work with Yang Wang and Mark Iwen. Research supported in part by National Science Foundation grant DMS-1043034.

Outline
1. The Phase Retrieval Problem
2. Existing Approaches
3. Computational Framework
4. Mathematical Foundations: Block Circulant Matrices; Angular Synchronization
5. Numerical Results
6. Future Directions

The Phase Retrieval Problem

find x ∈ C^d given |Mx| = b ∈ R^D,

where b ∈ R^D are the magnitude (or intensity) measurements and M ∈ C^{D×d} is the measurement matrix associated with these measurements. Let A : R^D → C^d denote the recovery method. The phase retrieval problem involves designing measurement matrix and recovery method pairs. Note: we are interested in recovering the signal modulo trivial ambiguities, such as multiplication by a unimodular constant.
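The trivial ambiguity is easy to see numerically. A short NumPy sketch (illustrative only; the sizes and the phase 0.7 are arbitrary choices, not from the talk): multiplying x by a unimodular constant leaves the magnitude measurements |Mx| unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 8, 32
M = rng.standard_normal((D, d)) + 1j * rng.standard_normal((D, d))
x = rng.standard_normal(d) + 1j * rng.standard_normal(d)

b = np.abs(M @ x)                                 # magnitude measurements |Mx|
b_rotated = np.abs(M @ (np.exp(1j * 0.7) * x))    # x scaled by a unimodular constant
# b and b_rotated agree entrywise, so x and e^{i phi} x are indistinguishable
```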

Applications of Phase Retrieval
Important applications of phase retrieval include X-ray crystallography, diffraction imaging, and transmission electron microscopy (TEM). In many molecular imaging applications, the detectors capture only intensity measurements. Indeed, the design of such detectors is often significantly simpler than that of detectors which capture phase information.

The Importance of Phase — An Illustration
The phase encapsulates vital information about a signal: key features of the signal are retained even if the magnitude is lost.

Objectives
- Computational efficiency: Can the recovery algorithm A be computed in O(d) time?
- Robustness: The recovery algorithm A should be robust to additive measurement errors (i.e., noise).
- Minimal measurements: The number of linear measurements D should be as small as possible.

Alternating Projection Methods
These methods alternately project the current signal estimate onto two constraint sets. One constraint set enforces the magnitude of the measurements; the other depends on the application (positivity, support constraints, ...).

Gerchberg–Saxton Algorithm
Given: oversampled Fourier magnitude measurements {b[ω]}, ω ∈ Ω, and a known support T (supp(x) ⊆ T).
1. Choose an initial guess x̂₀. Set
   ŷ₀[ω] = b[ω] · x̂₀[ω] / |x̂₀[ω]|.
2. For k = 1, 2, ...
   x_k[t] = (F⁻¹ ŷ_{k−1})[t] if t ∈ T, and 0 otherwise;
   ŷ_k[ω] = b[ω] · x̂_k[ω] / |x̂_k[ω]|.
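A minimal NumPy sketch of the iteration (illustrative only, not the speaker's code; the function name, the random Fourier-phase initialization, and the fixed iteration count are my own choices; the division by |x̂| is replaced by `np.angle`, which avoids dividing by zero):

```python
import numpy as np

def gerchberg_saxton(b, support, n_iter=200, rng=None):
    """Alternating-projection sketch: b = |F x| on an oversampled grid,
    support = boolean mask where x may be nonzero."""
    rng = np.random.default_rng(rng)
    D = b.size
    # start from the measured magnitudes with random phases
    y = b * np.exp(2j * np.pi * rng.random(D))
    for _ in range(n_iter):
        x = np.fft.ifft(y)
        x[~support] = 0.0                        # project onto the support constraint
        y = np.fft.fft(x)
        y = b * np.exp(1j * np.angle(y))         # project onto the magnitude constraint
    return np.fft.ifft(y)
```

By construction the returned iterate satisfies the magnitude constraint exactly; whether it also satisfies the support constraint (i.e., whether the iteration has converged) depends on the problem, which is exactly the stagnation issue noted below.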

Gerchberg–Saxton Algorithm
- Convergence is slow, and the algorithm is likely to stagnate at stationary points.
- Requires careful selection and tuning of parameters.
- The mathematical properties of the algorithm are not well understood; convergence proofs exist only for special cases.

PhaseLift
Recast the problem as finding the rank-1 matrix X = xx*. Let w_m be a mask. Then the measurements may be written as

|⟨w_m, x⟩|² = Tr(x* w_m w_m* x) = Tr(w_m w_m* xx*) =: Tr(A_m X).

Let A be the linear operator mapping positive semidefinite matrices into {Tr(A_m X) : m = 0, ..., L}. The phase retrieval problem then becomes

minimize rank(X) subject to A(X) = b, X ⪰ 0.

Unfortunately, this rank minimization problem is NP-hard! PhaseLift instead uses the convex relaxation

minimize trace(X) subject to A(X) = b, X ⪰ 0,

implemented using a semidefinite program (SDP).
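The lifting identity at the heart of PhaseLift — that the quadratic measurement |⟨w_m, x⟩|² is linear in the lifted variable X = xx* — can be checked in a few lines of NumPy (sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
x = rng.standard_normal(d) + 1j * rng.standard_normal(d)
w = rng.standard_normal(d) + 1j * rng.standard_normal(d)

X = np.outer(x, x.conj())          # lifted rank-1 matrix X = x x*
A = np.outer(w, w.conj())          # A_m = w_m w_m*

lhs = np.abs(np.vdot(w, x)) ** 2   # |<w, x>|^2  (np.vdot conjugates its first argument)
rhs = np.trace(A @ X).real         # Tr(A_m X), linear in X
# lhs == rhs: the nonlinear measurement becomes linear after lifting
```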

Overview of the Computational Framework
1. Use compactly supported masks and correlation measurements to obtain phase difference estimates:
   corr(w, x) → {x_j x̄_{j+k}}, k = 0, ..., δ,
   where w is a mask (window function) with δ + 1 non-zero entries. The product x_j x̄_{j+k} gives the (scaled) difference in phase between the entries x_j and x_{j+k}.
2. Solve an angular synchronization problem on the phase differences to obtain the unknown signal.
Constraint on x: we require x to be non-sparse — the number of consecutive zeros in x should be less than δ.

Correlations with Support-Limited Functions
Let x = [x₀ x₁ ... x_{d−1}]ᵀ ∈ C^d be the unknown signal, and let w = [w₀ w₁ ... w_δ 0 ... 0]ᵀ denote a support-limited mask with δ + 1 non-zero entries. Define the (circular) shift operator

τ(x) = [x_{d−1} x₀ x₁ ... x_{d−2}]ᵀ,

and let v_k = τᵏ(w) denote a (circular) k-shift of the mask. Then the entries of corr(w, x) are given by ⟨v_k, x⟩, k = 0, ..., d − 1.

Correlation Measurements
Now consider the correlation measurements ⟨v_k^m, x⟩ = b_k^m, k = 0, ..., d − 1, m = 0, ..., L. Here, L + 1 distinct masks are used. We have

(b_k^m)² = |⟨v_k^m, x⟩|² = |⟨τᵏ(w^m), x⟩|² = |Σ_{j=0}^{δ} w̄_j^m x_{k+j}|²
         = Σ_{i,j=0}^{δ} w_i^m w̄_j^m x_{k+j} x̄_{k+i} =: Σ_{i,j=0}^{δ} w_i^m w̄_j^m Z_{k+j, i−j},

where Z_{n,l} := x_n x̄_{n+l}, −δ ≤ l ≤ δ.
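The expansion of (b_k^m)² in the unknowns Z_{n,l} can be verified numerically. A NumPy sketch (sizes, seed, and the helper `Z` are my own; indices wrap mod d to match the circular shifts):

```python
import numpy as np

rng = np.random.default_rng(0)
d, delta = 8, 2
x = rng.standard_normal(d) + 1j * rng.standard_normal(d)
w = np.zeros(d, dtype=complex)
w[: delta + 1] = rng.standard_normal(delta + 1) + 1j * rng.standard_normal(delta + 1)

def Z(n, l):
    # phase-difference unknowns Z_{n,l} = x_n conj(x_{n+l}), indices mod d
    return x[n % d] * np.conj(x[(n + l) % d])

errs = []
for k in range(d):
    v_k = np.roll(w, k)                      # v_k = tau^k(w)
    b2 = np.abs(np.vdot(v_k, x)) ** 2        # squared correlation measurement (b_k)^2
    rhs = sum(
        w[i] * np.conj(w[j]) * Z(k + j, i - j)
        for i in range(delta + 1)
        for j in range(delta + 1)
    )
    errs.append(abs(b2 - rhs))
max_err = max(errs)                          # should be at machine-precision level
```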

Solving for Phase Differences
Ordering the unknowns {Z_{n,l}} lexicographically (second index first), we obtain a linear system of equations.

Example: x ∈ R^d, d = 4, δ = 1. Each row of the 8 × 8 system matrix contains the entries (w_0^m)², 2 w_0^m w_1^m, and (w_1^m)², placed so that

(b_k^m)² = (w_0^m)² Z_{k,0} + 2 w_0^m w_1^m Z_{k,1} + (w_1^m)² Z_{k+1,0}   (indices mod 4).

Unknowns: [Z_{0,0} Z_{0,1} Z_{1,0} Z_{1,1} Z_{2,0} Z_{2,1} Z_{3,0} Z_{3,1}]ᵀ
Measurements: [(b_0^0)² (b_1^0)² (b_2^0)² (b_3^0)² (b_0^1)² (b_1^1)² (b_2^1)² (b_3^1)²]ᵀ

Angular Synchronization
Re-ordering the phase difference variables {Z_{n,l}}, we obtain the entries of the rank-1 matrix xx* along a (circular) band:

[ |x₀|²        x₀x̄₁    ...  x₀x̄_δ                0 ]
[              |x₁|²    x₁x̄₂  ...  x₁x̄_{δ+1}       ]
[    ...                                 ...        ]
[ x_{d−1}x̄₀  ...  x_{d−1}x̄_{δ−1}     |x_{d−1}|²   ]

- The magnitude of each component of the signal can be estimated from the diagonal entries.
- The phase of each entry can be computed as the phase of the corresponding entry of the leading eigenvector of this matrix.
- A few iterations of least squares can also be performed to correct for magnitude errors on the diagonal entries.
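In the noiseless case this two-step recovery (magnitudes from the diagonal, phases from a leading eigenvector) is exact: normalizing the band entries to unit modulus gives a matrix whose top eigenvector carries precisely the phases of x. A NumPy sketch under those assumptions (sizes and seed are arbitrary; the normalization step is my own way of making the eigenvector argument exact):

```python
import numpy as np

rng = np.random.default_rng(0)
d, delta = 16, 3
x = rng.standard_normal(d) + 1j * rng.standard_normal(d)

# Banded (circular) portion of the rank-1 matrix x x*: Z_{n,l} for |l| <= delta
B = np.zeros((d, d), dtype=complex)
for n in range(d):
    for l in range(-delta, delta + 1):
        B[n, (n + l) % d] = x[n] * np.conj(x[(n + l) % d])

mags = np.sqrt(np.real(np.diag(B)))          # |x_n| from the diagonal entries

# Normalize the band to unit modulus; the leading eigenvector of this
# Hermitian matrix then has exactly the phases of x (up to a global phase).
H = np.zeros((d, d), dtype=complex)
mask = np.abs(B) > 0
H[mask] = B[mask] / np.abs(B[mask])
_, vecs = np.linalg.eigh(H)
v = vecs[:, -1]                              # eigenvector of the largest eigenvalue
phases = v / np.abs(v)

x_hat = mags * phases
x_hat *= np.exp(1j * (np.angle(x[0]) - np.angle(x_hat[0])))   # fix global phase
err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
```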

Block Circulant Matrices
Rewriting the system matrix from before — grouping its rows and columns by shift index — reveals that it is a block circulant matrix.

Spectral Properties of Block Circulant Matrices
Consider the block circulant matrix

M = [ A₀        A₁        ...  A_{d−1} ]
    [ A_{d−1}   A₀        ...  A_{d−2} ]
    [   ...                      ...   ]
    [ A₁        A₂        ...  A₀      ]

Here, A_k ∈ M_{p×q}(C) are the constituent blocks. We are interested in the singular values of M and, in particular, in the condition number of M.

A Simple Reduction
Let ω_d = e^{2πi/d} be the primitive d-th root of unity, and denote τ_k := ω_d^k, k = 0, 1, ..., d − 1. Then

U_{d,r} = (1/√d) [ I_r   I_r             ...  I_r                  ]
                 [ I_r   τ₁ I_r          ...  τ_{d−1} I_r          ]
                 [ ...                           ...               ]
                 [ I_r   τ₁^{d−1} I_r    ...  τ_{d−1}^{d−1} I_r    ],   r ≥ 1,

is unitary. Moreover, we can easily show that

U_{d,p}* M U_{d,q} = block-diag[J(1), J(τ₁), ..., J(τ_{d−1})],

where J(t) := A₀ + tA₁ + ⋯ + t^{d−1} A_{d−1}.
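This block diagonalization can be verified directly for a small example. A NumPy sketch (sizes and seed are arbitrary; `U` builds U_{d,r} as a Kronecker product of the DFT-type matrix with I_r):

```python
import numpy as np

rng = np.random.default_rng(0)
d, p, q = 4, 2, 3
A = [rng.standard_normal((p, q)) for _ in range(d)]   # blocks A_0, ..., A_{d-1}

# Block circulant M: block (i, j) is A_{(j - i) mod d}
M = np.block([[A[(j - i) % d] for j in range(d)] for i in range(d)])

tau = np.exp(2j * np.pi * np.arange(d) / d)           # tau_k = omega_d^k

def U(r):
    # U_{d,r} = (1/sqrt(d)) [tau_k^m I_r]; unitary by construction
    F = np.array([[tau[k] ** m for k in range(d)] for m in range(d)])
    return np.kron(F, np.eye(r)) / np.sqrt(d)

def J(t):
    # the matrix symbol J(t) = A_0 + t A_1 + ... + t^{d-1} A_{d-1}
    return sum(t ** k * A[k] for k in range(d))

Dm = U(p).conj().T @ M @ U(q)                         # should be block diagonal
expected = np.zeros((d * p, d * q), dtype=complex)
for k in range(d):
    expected[k * p:(k + 1) * p, k * q:(k + 1) * q] = J(tau[k])
```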

FFT Implementation
The unitary matrix U_{d,r} in the previous reduction can be applied efficiently using r FFTs. Therefore, the linear system for the phase differences {Z_{n,l}} can be solved using FFTs in conjunction with a block-diagonal solve. The operation count is

(δ + 1) · O(d log d)  [apply U_{d,q}]  +  O((δ + 1)(L + 1)d)  [block-diagonal solve]  +  (L + 1) · O(d log d)  [apply U_{d,p}],

where δ + 1 is the number of non-zero entries in a mask and L + 1 is the number of masks used. Typical values of δ and L are 8 and 12, respectively.

Singular Value Decomposition
Assume that for each t ∈ C the SVD of J(t) is given by

J(t) = Q(t) diag_{p×q}[σ₁(t), ..., σ_r(t)] R(t)*,   r = min(p, q),

where Q(t) is p × p and unitary, R(t) is q × q and unitary, and

diag_{p×q}[c₁, ..., c_r] = [ diag[c₁, ..., c_r]  0 ]  if q ≥ p,   [ diag[c₁, ..., c_r] ; 0 ]  if q < p.

Then we have the following theorem.

Singular Value Decomposition
Theorem 1. M has the singular value decomposition

M = (U_{d,p} Q) D (U_{d,q} R)*,

where
Q = block-diag[Q(1), Q(τ₁), ..., Q(τ_{d−1})],
R = block-diag[R(1), R(τ₁), ..., R(τ_{d−1})],
D = block-diag[ diag_{p×q}(σ₁(1), ..., σ_r(1)), ..., diag_{p×q}(σ₁(τ_{d−1}), ..., σ_r(τ_{d−1})) ].

Remark: D is not truly diagonal if p ≠ q. A true SVD would require permutations to make D a diagonal matrix, but this is a purely formal point.

Singular Value Decomposition
Corollary 1. The singular values of M are

Σ_M = {σ_j(τ_k) : 1 ≤ j ≤ min(p, q), 0 ≤ k ≤ d − 1},

and the condition number of M (when p ≥ q) is C = max(Σ_M) / min(Σ_M).

Corollary 2. Let σ₁(t) ≥ σ₂(t) ≥ ⋯ ≥ σ_q(t) be the singular values of J(t), and let

σ₁* = max_{|t| = 1} σ₁(t),   σ_q* = min_{|t| = 1} σ_q(t).

Then the condition number C of M satisfies C ≤ σ₁*/σ_q*.

Block Circulant Matrices — Banded Case
Of particular interest is the case A_k = 0 for k > L. Assuming that A₀, ..., A_L are given and fixed (while the size d may vary), the condition number of M is (assuming p ≥ q)

max_k σ₁(τ_k) / min_k σ_q(τ_k).

Since τ_k = ω_d^k, 0 ≤ k ≤ d − 1, the set {τ_k} becomes increasingly dense on the unit circle |z| = 1 as d → ∞. Moreover, σ₁(t) and σ_q(t) are the largest and smallest singular values of J(t) = A₀ + tA₁ + ⋯ + t^L A_L, and they are continuous functions of t.

Condition Number
In the banded case, the condition number of M is asymptotically

max_{|t| = 1} σ₁(t) / min_{|t| = 1} σ_q(t)

as d becomes large.

Representative condition numbers:

         q = 3    q = 6    q = 9
p = q    34.89   124.05   465.32
p = 2q    3.73    11.73    13.30
p = 3q    3.29     6.08     8.60
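Corollary 1 can be checked numerically in the banded case: the singular values of the full block circulant matrix coincide with those of the symbol J(t) sampled at the d-th roots of unity, so both routes give the same condition number. A NumPy sketch (sizes, band length, and seed are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
p, q, L, d = 6, 3, 2, 32                 # p >= q; band A_0, ..., A_L; d blocks
A = [rng.standard_normal((p, q)) for _ in range(L + 1)]

def J(t):
    # banded symbol J(t) = A_0 + t A_1 + ... + t^L A_L
    return sum(t ** k * A[k] for k in range(L + 1))

# Condition number predicted from J sampled at the d-th roots of unity
tau = np.exp(2j * np.pi * np.arange(d) / d)
svals = np.concatenate([np.linalg.svd(J(t), compute_uv=False) for t in tau])
pred = svals.max() / svals.min()

# Condition number computed directly from the full block circulant matrix
blocks = A + [np.zeros((p, q))] * (d - L - 1)
M = np.block([[blocks[(j - i) % d] for j in range(d)] for i in range(d)])
s = np.linalg.svd(M, compute_uv=False)
direct = s.max() / s.min()
```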

Angular Synchronization
The Angular Synchronization Problem: estimate n unknown angles θ₁, θ₂, ..., θ_n ∈ [0, 2π) from m noisy measurements of their differences θ_ij := θ_i − θ_j mod 2π.
- Applications include time synchronization in distributed computer networks and computer vision.
- The problem formulation is similar to that of partitioning a weighted graph.
- The problem can be cast as a semidefinite program (SDP); there is also an eigen-problem formulation.

The Eigenvector Method (Singer, 2011)
Let S be the set of index pairs {i, j} for which the phase angle differences are known. Start with the n × n matrix H,

H_ij = e^{iθ_ij} if {i, j} ∈ S, and 0 otherwise.

Since θ_ji = −θ_ij, H is Hermitian. Now consider the maximization problem

max_{θ₁,...,θ_n ∈ [0,2π)} Σ_{i,j=1}^{n} e^{−iθ_i} H_ij e^{iθ_j}.   (1)

Each correctly determined phase difference contributes

e^{−iθ_i} e^{i(θ_i − θ_j)} e^{iθ_j} = 1

to the sum.

The Eigenvector Method (Singer, 2011)
Unfortunately, (1) is non-convex. Instead, we solve the relaxation

max_{z ∈ C^n, Σ_i |z_i|² = n} Σ_{i,j} z̄_i H_ij z_j,   i.e.,   max_{‖z‖² = n} z* H z.

This is a quadratic maximization problem whose solution is the leading eigenvector of H. The phase angles are then given by

e^{iθ̂_i} = v₁(i) / |v₁(i)|,   i = 1, ..., n,

where v₁ is the eigenvector corresponding to the largest eigenvalue of H.
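A small NumPy demonstration of the eigenvector method (my own toy setup, not from the talk: angles and the observed pair set S are random, measurements are noiseless, and S is assumed dense enough that the measurement graph is connected):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
theta = 2 * np.pi * rng.random(n)          # unknown angles

# Observe theta_i - theta_j on a random ~30% of pairs; H is Hermitian
H = np.zeros((n, n), dtype=complex)
for i in range(n):
    for j in range(i + 1, n):
        if rng.random() < 0.3:
            H[i, j] = np.exp(1j * (theta[i] - theta[j]))
            H[j, i] = np.conj(H[i, j])

_, vecs = np.linalg.eigh(H)                # eigh sorts eigenvalues ascending
v = vecs[:, -1]                            # leading eigenvector
theta_hat = np.angle(v)                    # recovered angles, up to a global rotation

# In the noiseless case the recovery is exact modulo one global rotation:
c = np.exp(1j * (theta - theta_hat))       # should be a constant unit complex number
err = np.max(np.abs(c - c[0]))
```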

Numerical Results
Test signals: i.i.d. Gaussian, uniform random, and sinusoidal signals.
Noise model: b̃ = b + ñ, with ñ ~ U[0, a]; the value of a determines the SNR,

SNR (dB) = 10 log₁₀( noise power / signal power ) = 10 log₁₀( (a²/3) / (‖b‖²/d) ).

Errors are reported as

Error (dB) = 10 log₁₀( ‖x̂ − x‖² / ‖x‖² ),

where x̂ is the recovered signal and x the true signal.

Noiseless Case
i.i.d. Gaussian signal, d = 128, δ = 1 (2d measurements), no noise.
Reconstruction error: ‖x̂ − x‖₂ / ‖x‖₂ = 1.214 × 10⁻¹⁶.

Efficiency (figure)

Robustness
- i.i.d. Gaussian signal, d = 256, δ = 8, oversampling factor 1.5 (figure)
- i.i.d. Gaussian signal, d = 1024, δ = 8, oversampling factor 1.5 (figure)
- i.i.d. Gaussian signal, d = 64, 4d measurements (oversampling factor of 1.5 for our method) (figure)

Current and Future Research Directions
- Use deterministic masks; this will also allow us to write out the condition number explicitly.
- Efficient implementations for two-dimensional signals.
- Extension to sparse signals: the number of required measurements can be reduced.
- Formulations and prescriptions for Fourier measurements: convolutions, bandlimited window/mask functions.

References
1. R. W. Gerchberg and W. O. Saxton. A practical algorithm for the determination of phase from image and diffraction plane pictures. Optik, 35:237–246, 1972.
2. J. R. Fienup. Reconstruction of an object from the modulus of its Fourier transform. Optics Letters, 3:27–29, 1978.
3. E. J. Candès, T. Strohmer, and V. Voroninski. PhaseLift: exact and stable signal recovery from magnitude measurements via convex programming. Comm. Pure Appl. Math., 66:1241–1274, 2013.
4. I. Waldspurger, A. d'Aspremont, and S. Mallat. Phase recovery, MaxCut and complex semidefinite programming. Math. Prog., 2013.
5. A. Singer. Angular synchronization by eigenvectors and semidefinite programming. Appl. Comput. Harmon. Anal., 30:20–36, 2011.