Convex Phase Retrieval without Lifting via PhaseMax


Tom Goldstein* (University of Maryland)   Christoph Studer* (Cornell University)
*Equal contribution. Correspondence to: Tom Goldstein <tomg at cs.umd.edu>.
Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, PMLR 70, 2017. Copyright 2017 by the author(s).

Abstract
Semidefinite relaxation methods transform a variety of non-convex optimization problems into convex problems, but square the number of variables. We study a new type of convex relaxation for phase retrieval problems, called PhaseMax, that convexifies the underlying problem without lifting. The resulting problem formulation can be solved using standard convex optimization routines, while still working in the original, low-dimensional variable space. We prove, using a random spherical distribution measurement model, that PhaseMax succeeds with high probability for a sufficiently large number of measurements. We compare our approach to other phase retrieval methods and demonstrate that our theory accurately predicts the success of PhaseMax.

1. Introduction
Semidefinite relaxation is the technique of replacing a broad range of non-convex optimization problems involving a vector of (possibly discrete) variables with convex problems involving matrices. The relaxed convex problem is then solved to global optimality, and the solution is used to extract a global minimizer of the original non-convex problem. Unfortunately, convexity comes at a steep cost: semidefinite relaxation squares the dimensionality of the problem, resulting in formulations that are convex but computationally intractable in many situations. In fact, the increase in dimensionality, which is called lifting, often prevents the use of this technique in machine learning and computer vision applications involving thousands to millions of variables.

This article studies convex relaxation for phase retrieval, a canonical non-convex problem that can be solved via semidefinite relaxation. We present a relaxation approach that convexifies the problem without lifting, i.e., it solves the problem in its original, low-dimensional space.

2. Phase Retrieval Problems
Phase retrieval deals with the recovery of an n-dimensional signal $x_0 \in \mathbb{H}^n$, with $\mathbb{H}$ either $\mathbb{R}$ or $\mathbb{C}$, from $m \ge n$ magnitude measurements of the form (Candès et al., 2013)

  $b_i = |\langle a_i, x_0 \rangle|, \quad i = 1, 2, \ldots, m,$   (1)

where $a_i \in \mathbb{H}^n$, $i = 1, 2, \ldots, m$, are the (known) measurement vectors. Because of the nonlinearity caused by measuring the magnitude of the linear terms in (1), the phase retrieval problem is non-convex.

A variety of algorithms exist for finding $x_0$ in (1). Classical methods, such as the Gerchberg-Saxton and Fienup algorithms, search for a solution via alternating least-squares steps that are efficient when the measurement ensemble $\{a_i\}$ forms a tight frame (e.g., a collection of Fourier matrices). More recently, there has been significant interest in convex methods for phase retrieval that provably find global solutions without the danger of getting trapped in local minima. These methods, including PhaseLift (Candès et al., 2013) and its dual formulation PhaseCut (Waldspurger et al., 2015), rely on semidefinite relaxation and replace the unknown n-dimensional vector x with a (much larger) n × n matrix of unknowns that is recovered using semidefinite programming.
While such methods come with strong theoretical guarantees and do not get trapped in local minima, they require lifting (semidefinite relaxation squares the number of unknowns), which makes this approach intractable for real-world image processing applications involving thousands or millions of variables. As a result, there has been a flurry of interest in the global convergence properties of non-convex solvers that operate in the original feature space, such as the methods in (Netrapalli et al., 2013; Schniter & Rangan, 2015; Candès et al., 2015; Chen & Candès, 2015; Wang et al., 2016). While these methods come with theoretical guarantees, non-convexity makes it difficult to combine them with commonly used regularizers, and typically precludes the use of sophisticated optimization methods that can handle constraints, non-differentiable terms, etc.

We note that the PhaseMax formulation has also been proposed by (Bahmani & Romberg, 2016) and was recently analyzed by (Hand & Voroninski, 2016). We discuss and compare these results in Section 8.2.
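To make the measurement model (1) concrete, the sketch below generates a synthetic phase retrieval instance with measurement vectors drawn uniformly from the unit sphere, matching the random spherical model used in our analysis. It is a minimal illustration only; the helper name, dimensions, and random seeds are arbitrary choices.

```python
import numpy as np

def random_sphere_vectors(m, n, complex_field=True, rng=None):
    """Draw m vectors uniformly from the unit sphere in R^n or C^n."""
    rng = np.random.default_rng(rng)
    if complex_field:
        A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
    else:
        A = rng.standard_normal((m, n))
    return A / np.linalg.norm(A, axis=1, keepdims=True)

# Synthetic instance of model (1): b_i = |<a_i, x0>| (Hermitian inner product).
n, m = 64, 6 * 64
rng = np.random.default_rng(0)
x0 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
A = random_sphere_vectors(m, n, complex_field=True, rng=1)
b = np.abs(A.conj() @ x0)     # magnitude-only measurements; all phase is lost
print(b.shape, bool(b.min() >= 0))
```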

3. A Word on Notation
Scalars are lower-case letters, and vector quantities are boldface, lower-case letters. Matrices are boldface upper-case letters, and sets are written using upper-case script font. We denote the inner product between the vectors $x, y \in \mathbb{H}^n$ as $\langle x, y\rangle = x^T y$, where $x^T$ is the transpose in the real case or the Hermitian transpose in the complex case. We denote the real part of the inner product as $\langle x, y\rangle_\Re = \Re(x^T y)$ and the imaginary part as $\langle x, y\rangle_\Im$, so that $\langle x, y\rangle = \langle x, y\rangle_\Re + j\langle x, y\rangle_\Im$ with $j^2 = -1$. The non-negative reals are denoted $\mathbb{R}_0^+$. We denote the unit sphere embedded in $\mathbb{R}^n$ as $S_\mathbb{R}^{n-1}$, and use $S_\mathbb{C}^{n-1}$ for the sphere in complex space. To address both of these cases at once, we will often write $S_\mathbb{H}^{n-1} = \{x \in \mathbb{H}^n \mid \|x\|_2 = 1\}$, where $\mathbb{H} \in \{\mathbb{R}, \mathbb{C}\}$ is either the real or complex numbers.

4. Proposed Relaxation
We propose PhaseMax, a formulation of the phase retrieval problem that avoids lifting. Put simply, PhaseMax relaxes the non-convex equality constraints $|\langle a_i, x\rangle| = b_i$ into convex inequality constraints of the form $|\langle a_i, x\rangle| \le b_i$. The resulting convex problem can then be solved using linear programming in the real case or using second-order cone programming in the complex case.

At first glance, the proposed relaxation seems too simplistic to recover $x_0$. Indeed, the relaxed system of inequalities has a trivial solution: the all-zeros vector. To obtain a meaningful solution, we need to force the solution vector to lie on the boundary of the constraint set, where the inequality constraints hold with equality. To force the solution to lie on the boundary, we rely on some intelligent guess $\hat{x} \in \mathbb{H}^n$ of the solution to (1); we will discuss methods for producing such guesses in Section 8.1. We then find the feasible point that lies as far in the direction of $\hat{x}$ as possible. This results in the convex optimization problem

  (PhaseMax)   maximize over $x \in \mathbb{H}^n$: $\langle x, \hat{x}\rangle_\Re$   subject to $|\langle a_i, x\rangle| \le b_i$, $i = 1, \ldots, m$.

The key idea behind PhaseMax is that the objective forces the solution vector to lie along the boundary of the constraint set, where the constraints are active. Evidently, if all of the constraints are active at this solution, then we have recovered a solution to the original non-convex problem (1). Quite surprisingly, the PhaseMax relaxation provably recovers the true solution to (1) in many situations. In particular, we have the following main result:

Theorem 1. Consider the case of recovering a signal $x_0 \in \mathbb{H}^n$ from m measurements of the form (1) with measurement vectors $a_i$, $i = 1, 2, \ldots, m$, sampled independently and uniformly from the unit sphere. Let

  $\mathrm{angle}(x_0, \hat{x}) = \arccos\!\left(\frac{\langle x_0, \hat{x}\rangle_\Re}{\|x_0\|_2\,\|\hat{x}\|_2}\right)$

be the angle between the true vector $x_0$ and the guess $\hat{x}$, and define the constant

  $\alpha = 1 - \frac{2}{\pi}\,\mathrm{angle}(x_0, \hat{x})$

that measures the accuracy of our guess. If $\mathbb{H} = \mathbb{C}$, then whenever $\alpha m > 4n + 1$, the probability that PhaseMax recovers the true signal $x_0$ is at least

  $1 - \exp\!\left(-\frac{(\alpha m - 4n)^2}{4m}\right).$   (2)

Similar results for the real case are derived below.
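In the real case, the PhaseMax problem above is a linear program: each constraint $|a_i^T x| \le b_i$ splits into two linear inequalities. The sketch below is a minimal illustration (not the solver used in our experiments) that builds a random real-valued instance and solves it with scipy.optimize.linprog; the stand-in initial guess, oversampling level, and tolerance are arbitrary assumptions.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m = 30, 12 * 30                      # heavy oversampling so recovery succeeds
x0 = rng.standard_normal(n)
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=1, keepdims=True)   # rows uniform on the unit sphere
b = np.abs(A @ x0)                      # magnitude-only measurements, model (1)

# A crude initial guess: the true signal plus noise (Section 8.1 discusses
# principled initializers; this stand-in just fixes the angle to x0).
xhat = x0 + 1.0 * rng.standard_normal(n)

# PhaseMax (real case): maximize <x, xhat> subject to -b <= A x <= b.
c = -xhat                               # linprog minimizes, so negate the objective
A_ub = np.vstack([A, -A])
b_ub = np.concatenate([b, b])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * n)

x_rec = res.x
# The global sign is unrecoverable from magnitudes, so compare up to sign.
err = min(np.linalg.norm(x_rec - x0), np.linalg.norm(x_rec + x0)) / np.linalg.norm(x0)
print(f"relative error: {err:.2e}")
```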
5. Formulation using Geometric Probability
Conditions for which PhaseMax enables exact recovery can be formulated as a classical problem from geometric probability involving random hemispheres, or caps. We begin with a few simple definitions. Recall that we use $S_\mathbb{H}^{n-1}$ to denote the unit sphere embedded in $\mathbb{H}^n$. Given a vector $a \in S_\mathbb{H}^{n-1}$, the hemisphere cap centered at $a$ is

  $C_\mathbb{H}(a) = \{\delta \in S_\mathbb{H}^{n-1} \mid \langle a, \delta\rangle_\Re \ge 0\}.$   (3)

This cap contains all vectors that form an acute angle with $a$. We also need the concept of aligned vectors. A complex vector $a \in S_\mathbb{H}^{n-1}$ is said to be aligned with $x \in S_\mathbb{H}^{n-1}$ if $\langle x, a\rangle \in \mathbb{R}_0^+$. In words, two vectors are aligned if their inner product is real valued and non-negative.

Given a vector $x_0$ and a measurement vector $a_i$, we have $|\langle a_i, x_0\rangle| = |\langle \omega a_i, x_0\rangle|$ for any unit-magnitude $\omega \in \mathbb{C}$. For this reason, we will often consider the set $\{\tilde{a}_i\} = \{\mathrm{sign}(\langle a_i, x_0\rangle)\, a_i\}$ of measurement vectors aligned with $x_0$ without loss of generality. Note that replacing $\{a_i\}$ with $\{\tilde{a}_i\}$ in PhaseMax does not change the feasible set, and thus does not impact the recovered solution.

Finally, observe that a solution $x$ to (PhaseMax) must be aligned with $\hat{x}$. This is because $\langle \omega x, \hat{x}\rangle_\Re = |\langle x, \hat{x}\rangle| \ge \langle x, \hat{x}\rangle_\Re$ when $\omega = \mathrm{sign}(\langle x, \hat{x}\rangle)$. It follows that $x$ cannot be optimal unless $\mathrm{sign}(\langle x, \hat{x}\rangle) = 1$.

We are now ready to formulate the following exact recovery condition for PhaseMax.

Theorem 2. Consider the recovery of a vector $x_0$ using PhaseMax with guess $\hat{x}$. Assume, without loss of generality, that $\hat{x}$ and the measurement ensemble $\{a_i\}$ are aligned with $x_0$. Let $D = \{\delta \mid \langle \delta, \hat{x}\rangle \in \mathbb{R}_0^+\}$ be the set of unit vectors aligned with $\hat{x}$. Then, $x_0$ is the unique solution of PhaseMax if

  $D \subseteq \bigcup_{i=1}^{m} C_\mathbb{H}(a_i).$   (4)
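Condition (4) can be probed numerically. The sketch below (a rough Monte Carlo check in the real case, not part of the analysis) aligns the measurement vectors with $x_0$, samples unit vectors $\delta$ aligned with a guess $\hat{x}$, and verifies that each sampled $\delta$ lands in at least one cap $C_\mathbb{R}(\tilde{a}_i)$. Sampling cannot certify (4) exactly; it only flags violations. All problem sizes and the guess-noise level are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 20, 30 * 20                       # many measurements so (4) is likely to hold
x0 = rng.standard_normal(n); x0 /= np.linalg.norm(x0)
xhat = x0 + 0.3 * rng.standard_normal(n)
xhat /= np.linalg.norm(xhat)

A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=1, keepdims=True)
A_tilde = np.sign(A @ x0)[:, None] * A   # aligned measurement vectors (real case)

violations = 0
for _ in range(20000):
    d = rng.standard_normal(n)
    d /= np.linalg.norm(d)
    if d @ xhat < 0:                     # force <delta, xhat> >= 0, i.e., delta in D
        d = -d
    if np.max(A_tilde @ d) < 0:          # delta lies in no cap C(a_i)
        violations += 1
print("uncovered samples:", violations)  # 0 suggests (4) holds on the sampled set
```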

Proof. Suppose the conditions of this theorem hold, and let $x$ be a solution to (PhaseMax). Since $x_0$ is in the feasible set, $x$ must produce an objective value at least as large as $x_0$, and so $\langle x, \hat{x}\rangle_\Re \ge \langle x_0, \hat{x}\rangle_\Re$. We know that $x$ is aligned with $\hat{x}$. Since $x_0$ is assumed to be aligned with $\hat{x}$, the vector $\Delta = x - x_0$ is also aligned with $\hat{x}$, and satisfies $\langle \Delta, \hat{x}\rangle_\Re = \langle x, \hat{x}\rangle_\Re - \langle x_0, \hat{x}\rangle_\Re \ge 0$. Since $x$ is a feasible solution for (PhaseMax), we have

  $|\langle a_i, x\rangle|^2 = |\langle a_i, x_0\rangle + \langle a_i, \Delta\rangle|^2 = \langle a_i, x_0\rangle^2 + 2\langle a_i, x_0\rangle\langle a_i, \Delta\rangle_\Re + |\langle a_i, \Delta\rangle|^2 \le b_i^2, \quad \forall i.$   (5)

Now, recall that $\langle a_i, x_0\rangle^2 = b_i^2$, and $a_i$ is aligned with $x_0$. Hence, we get

  $\langle a_i, \Delta\rangle_\Re \le -\frac{1}{2 b_i}|\langle a_i, \Delta\rangle|^2 \le 0, \quad \forall i.$   (6)

If $\|\Delta\|_2 > 0$, then we see from (6) that the unit-length vector $\delta = \Delta/\|\Delta\|_2$ satisfies $\delta \notin C_\mathbb{H}(a_i)$ for every $i$, which contradicts the covering condition (4). It follows that $\|\Delta\|_2 = 0$ and $x = x_0$.

6. Sphere Covering Results
Theorem 2 states that exact recovery of $x_0$ occurs when the measurement ensemble $\{a_i\}$ is aligned with $x_0$, and the set $D$ is covered by the caps $\{C_\mathbb{H}(a_i)\}$. We now study the probability that this condition holds. To do so, we need a few elementary sphere covering results. Our proof builds on the following simple result.

Lemma 1. Suppose we slice the sphere $S_\mathbb{R}^{n-1} \subset \mathbb{R}^n$ using $k$ planes through the origin. Then we divide the sphere into at most

  $r(n, k) = 2\sum_{i=0}^{n-1}\binom{k-1}{i}$   (7)

regions.

Proof. The proof is by induction. As a base case, we have $r(n, 1) = 2$ and $r(2, k) = 2k$ for $k \ge 1$. Now suppose we have a sphere $S^{n-1}$ in $n$ dimensions sliced by $k-1$ planes into $r(n, k-1)$ original regions. Consider the effect of adding a $k$th plane, $p_k$. The number of new regions created is equal to the number of original regions that are intersected by $p_k$. To count the number of new regions, we project the $k-1$ original normal vectors into $p_k$, and count the number of regions formed inside $p_k$ by the resulting projected planes, which is at most $r(n-1, k-1)$. Adding this to the number of original regions yields $r(n, k) = r(n, k-1) + r(n-1, k-1)$. We leave it to the reader to verify that (7) satisfies this recurrence relation.

Lemma 1 appears to have been first proved by (Schläfli, 1953), and simple induction arguments can be found in (Wendel, 1962; Gilbert, 1965; Füredi, 1986).

Before we attack the problem of when (4) holds, we begin with a simpler question: how often is a sphere covered by caps with centers chosen at random from the sphere? This question has been studied in detail by Gilbert (Gilbert, 1965) in the case $n = 3$. For our purposes, we need to study the covering probability in the more general case where caps are only chosen from a subset of the sphere, and the sphere can have arbitrarily high dimension. We show below that calculating this probability is easy when the caps are chosen from a symmetric subset of the sphere. We say that the set $A$ is symmetric if, for all $x \in A$, we also have $-x \in A$. A probability measure defined over $A$ is symmetric if the measure of $a$ is the same as that of $-a$ for every $a \in A$.

Lemma 2. Consider some non-empty symmetric set $A \subseteq S_\mathbb{R}^{n-1}$. Choose some set of $m_A$ measurements $\{a_i\}_{i=1}^{m_A}$ at random from $A$ using a symmetric measure. Then, the caps $\{C_\mathbb{R}(a_i)\}$ cover the sphere $S_\mathbb{R}^{n-1}$ with probability

  $p_{\mathrm{cover}}(m_A, n) \ge 1 - 2^{-m_A+1}\sum_{k=0}^{n-1}\binom{m_A-1}{k}.$

This is the probability of getting $n$ or more heads when flipping $m_A - 1$ fair coins.
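The quantities in Lemma 1 and Lemma 2 are easy to evaluate numerically. The sketch below (illustrative only, with arbitrary example values) computes the region count $r(n, k)$ from (7) and the corresponding lower bound on the covering probability, i.e., the probability of seeing $n$ or more heads in $m_A - 1$ fair coin flips.

```python
from math import comb

def r(n, k):
    """Maximum number of regions when k hyperplanes through the origin slice S^{n-1}; see (7)."""
    return 2 * sum(comb(k - 1, i) for i in range(n))

def p_cover_lower_bound(m_A, n):
    """Lemma 2: 1 - r(n, m_A)/2^m_A = P(at least n heads in m_A - 1 fair flips)."""
    return 1.0 - r(n, m_A) / 2.0 ** m_A

print(r(3, 4))                       # 14 regions: 4 great circles on the 2-sphere
for m_A in (20, 40, 80):
    print(m_A, p_cover_lower_bound(m_A, n=10))
```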
Proof. Let $\{a_i\}$ be a collection of $m_A$ i.i.d. vectors sampled from $A$ using a symmetric measure. Define $a_i' = c_i a_i$, where $\{c_i\}$ are i.i.d. Bernoulli variables that take value $+1$ or $-1$ with probability $\tfrac{1}{2}$. Clearly, the random vectors $\{a_i'\}$ are i.i.d. with the same distribution as $\{a_i\}$. Consider the set

  $\bigcap_i C_\mathbb{R}(-a_i') = \bigcap_i C_\mathbb{R}(-c_i a_i),$   (8)

which contains all points not covered by the caps $\{C_\mathbb{R}(a_i')\}$. The caps $\{C_\mathbb{R}(a_i')\}$ cover the sphere whenever the intersection (8) is empty. There are $2^{m_A}$ such intersections that can be formed, one for each choice of the sequence $\{c_i\}$. Now, from Lemma 1, we know that the $m_A$ random planes $\{x \mid \langle a_i, x\rangle = 0\}$ divide the sphere into at most $r(n, m_A)$ non-empty regions. Each of these regions corresponds to the intersection (8) for one possible choice of $\{c_i\}$. Therefore, of the $2^{m_A}$ possible intersections, at most $r(n, m_A)$ of them are non-empty. Since each intersection is equally likely, the probability of covering the sphere is at least

  $p_{\mathrm{cover}}(m_A, n) \ge 1 - \frac{r(n, m_A)}{2^{m_A}}.$
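As a sanity check on Lemma 2, one can estimate the covering probability empirically. The sketch below (a rough experiment, not from the paper) draws cap centers uniformly on the sphere and decides coverage by testing, via a small linear program, whether any direction has a strictly negative inner product with every center; the instance sizes and trial count are arbitrary. Since the bound is exact for continuous distributions, as noted in the remark that follows, the two printed numbers should roughly agree.

```python
import numpy as np
from scipy.optimize import linprog
from math import comb

def covers_sphere(A):
    """Caps {C(a_i)} cover S^{n-1} iff no x satisfies A x <= -1 (i.e., <a_i, x> < 0 for all i)."""
    m, n = A.shape
    res = linprog(np.zeros(n), A_ub=A, b_ub=-np.ones(m), bounds=[(None, None)] * n)
    return res.status == 2               # status 2: infeasible, so every direction is covered

def lemma2_bound(m, n):
    return 1.0 - 2.0 ** (-m + 1) * sum(comb(m - 1, k) for k in range(n))

rng = np.random.default_rng(0)
n, m, trials = 5, 25, 400
hits = 0
for _ in range(trials):
    A = rng.standard_normal((m, n))
    A /= np.linalg.norm(A, axis=1, keepdims=True)   # centers uniform on the sphere
    hits += covers_sphere(A)
print("empirical:", hits / trials, " Lemma 2 bound:", lemma2_bound(m, n))
```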

Remark: It can be shown that the bound in Lemma 1 is tight/exact when the slicing planes are chosen from a continuous probability distribution. Similarly, the bound in Lemma 2 is exact when the set $\{a_i\}$ is sampled from a continuous distribution over $A$.

We are now ready to present our main result. Exact recovery theorems for PhaseMax will follow immediately from the following geometric theorem. The result considers the case where the measurement vectors are drawn from only one hemisphere.

Lemma 3. Consider two vectors $x, y \in S_\mathbb{R}^{n-1}$, and the caps $C_\mathbb{R}(x)$ and $C_\mathbb{R}(y)$. Let $\alpha = 1 - \frac{2}{\pi}\,\mathrm{angle}(x, y)$ be a measure of the similarity between the vectors $x$ and $y$. Draw some collection $\{a_i \in C_\mathbb{R}(x)\}_{i=1}^{m}$ of $m$ vectors uniformly from $C_\mathbb{R}(x)$ so that $\alpha(m-1) > 2n$. Then,

  $C_\mathbb{R}(y) \subseteq \bigcup_i C_\mathbb{R}(a_i)$

holds with probability at least

  $p_{\mathrm{cover}}(m, n; x, y) \ge 1 - \exp\!\left(-\frac{(\alpha m - \alpha - 2n)^2}{2m - 2}\right).$

Figure 1. An illustration showing the construction of x, y, and x' in the proof of Lemma 3.

Proof. To simplify notation, we assume $y = [1, 0, \ldots, 0]^T$. Because of rotational symmetries this does not change the generality of our proof. Consider the reflection of $x$ over $y$ given by $x' = [x_1, -x_2, \ldots, -x_n]^T$. Suppose we have some collection $\{a_i\}$ independently and uniformly distributed on the entire sphere. Consider the collection of vectors

  $a_i' = \begin{cases} a_i, & \text{if } \langle a_i, x\rangle \ge 0, & (9)\\ a_i - 2\langle a_i, y\rangle y, & \text{if } \langle a_i, x\rangle < 0 \text{ and } \langle a_i, x'\rangle < 0, & (10)\\ -a_i, & \text{if } \langle a_i, x\rangle < 0 \text{ and } \langle a_i, x'\rangle \ge 0. & (11)\end{cases}$

The mapping $a_i \mapsto a_i'$ maps the half sphere $\{a \mid \langle a, x\rangle < 0\}$ onto the half sphere $\{a \mid \langle a, x\rangle > 0\}$ using a combination of reflections and negations (see Figure 1). This makes the mapping $a_i \mapsto a_i'$ onto and (piecewise) isometric, and so $\{a_i'\}$ will be uniformly distributed over the half sphere $\{a \mid \langle a, x\rangle \ge 0\}$ whenever $\{a_i\}$ is independently and uniformly distributed over the entire sphere.

Consider the hourglass-shaped, symmetric set

  $A = \{a \mid \langle a, x\rangle \ge 0,\ \langle a, x'\rangle \ge 0\} \cup \{a \mid \langle a, x\rangle \le 0,\ \langle a, x'\rangle \le 0\}.$

We claim that $C_\mathbb{R}(y) \subseteq \bigcup_i C_\mathbb{R}(a_i')$ whenever

  $S_\mathbb{R}^{n-1} \subseteq \bigcup_{a_i \in A} C_\mathbb{R}(a_i).$   (12)

In words, if the caps defined by the subset of $\{a_i\}$ in $A$ cover the entire sphere, then the caps $\{C_\mathbb{R}(a_i')\}$ (which have centers in $C_\mathbb{R}(x)$) not only cover $C_\mathbb{R}(x)$, but also cover the nearby cap $C_\mathbb{R}(y)$. To justify this claim, suppose that (12) holds. Choose some $\delta \in C_\mathbb{R}(y)$. This point is covered by some cap $C_\mathbb{R}(a_i)$ with $a_i \in A$. If $\langle a_i, x\rangle \ge 0$, then $a_i' = a_i$ and $\delta$ is covered by $C_\mathbb{R}(a_i')$. If $\langle a_i, x\rangle < 0$, then

  $\langle \delta, a_i'\rangle = \langle \delta, a_i - 2\langle a_i, y\rangle y\rangle = \langle \delta, a_i\rangle - 2\langle a_i, y\rangle\langle \delta, y\rangle \ge \langle \delta, a_i\rangle \ge 0.$

Note we have used the fact that $\langle \delta, y\rangle$ is real-valued and non-negative because $\delta \in C_\mathbb{R}(y)$. We have also used $\langle a_i, y\rangle = [a_i]_1 = \frac{1}{2x_1}\big(\langle a_i, x\rangle + \langle a_i, x'\rangle\big) < 0$, which follows from the definition of $x'$ and the definition of $A$. Since $\langle \delta, a_i'\rangle \ge 0$, we have $\delta \in C_\mathbb{R}(a_i')$, which proves our claim.

We can now see that the probability that $C_\mathbb{R}(y) \subseteq \bigcup_i C_\mathbb{R}(a_i')$ is at least as high as the probability that (12) holds. Let $p_{\mathrm{cover}}(m, n; x, y \mid m_A)$ denote the probability of covering $C_\mathbb{R}(y)$ conditioned on the number $m_A$ of points lying in $A$. From Lemma 2, we know that $p_{\mathrm{cover}}(m, n; x, y \mid m_A) \ge p_{\mathrm{cover}}(m_A, n)$. As noted in Lemma 2, this is the chance of turning up $n$ or more heads when flipping $m_A - 1$ fair coins, which is one coin for every measurement $a_i$ in $A$. The probability $p_{\mathrm{cover}}(m, n; x, y)$ satisfies

  $p_{\mathrm{cover}}(m, n; x, y) = \mathbb{E}_{m_A}\big[p_{\mathrm{cover}}(m, n; x, y \mid m_A)\big] \ge \mathbb{E}_{m_A}\big[p_{\mathrm{cover}}(m_A, n)\big].$

Let us evaluate this expectation. The region $A$ is defined by two planes that intersect at an angle of $\beta = \mathrm{angle}(x, x') = 2\,\mathrm{angle}(x, y)$.
The probability of a random point $a_i$ lying in $A$ is given by $\alpha = \frac{2\pi - 2\beta}{2\pi} = 1 - \frac{\beta}{\pi}$, which is the fraction of the unit sphere that lies either above or below both planes. The probability of a measurement $a_i$ contributing to the heads count is half the probability of it lying in $A$, or $\tfrac{1}{2}\alpha$.
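As a quick numerical check of this step (not part of the proof), the sketch below estimates, by Monte Carlo in a small dimension, the fraction of uniformly random unit vectors landing in the hourglass set $A$ and compares it with $\alpha = 1 - \frac{2}{\pi}\mathrm{angle}(x, y)$; the dimension and sample count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
x = rng.standard_normal(n); x /= np.linalg.norm(x)
y = rng.standard_normal(n); y /= np.linalg.norm(y)
if x @ y < 0:
    y = -y                                   # keep angle(x, y) below pi/2

# Reflection of x over y: x' = 2<x, y> y - x.
x_ref = 2 * (x @ y) * y - x
alpha = 1 - 2 * np.arccos(np.clip(x @ y, -1, 1)) / np.pi

a = rng.standard_normal((200000, n))
a /= np.linalg.norm(a, axis=1, keepdims=True)
s, t = a @ x, a @ x_ref
in_A = ((s >= 0) & (t >= 0)) | ((s <= 0) & (t <= 0))   # above both planes or below both
print(f"empirical fraction in A: {in_A.mean():.4f}   alpha: {alpha:.4f}")
```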

The probability of turning up $n$ or more heads is therefore bounded as

  $p_{\mathrm{cover}}(m, n; x, y) \ge 1 - \sum_{k=0}^{n-1}\binom{m-1}{k}\left(\tfrac{1}{2}\alpha\right)^k\left(1 - \tfrac{1}{2}\alpha\right)^{m-1-k}.$   (13)

Our final result is obtained by applying Hoeffding's inequality to (13).

7. Recovery Guarantees for PhaseMax
Using the covering result of Lemma 3, together with the covering formulation of the exact recovery condition (4), proving exact recovery of PhaseMax is now rather straightforward. We begin with the real-valued case.

Theorem 3. Consider the case of recovering a real-valued signal $x_0 \in \mathbb{R}^n$ from $m$ measurements of the form (1) with i.i.d. and uniform vectors $\{a_i \in S_\mathbb{R}^{n-1}\}$. PhaseMax recovers the true signal $x_0$ with probability at least

  $p_\mathbb{R}(m, n) \ge 1 - \exp\!\left(-\frac{(\alpha m - \alpha - 2n)^2}{2m - 2}\right),$

where $\alpha = 1 - \frac{2}{\pi}\,\mathrm{angle}(x_0, \hat{x})$ and $\alpha(m-1) > 2n$.

Proof. Consider the set of $m$ independent and uniformly sampled measurements $\{a_i \in S_\mathbb{R}^{n-1}\}_{i=1}^m$. The aligned vectors $\{\tilde{a}_i = \mathrm{phase}(\langle a_i, x_0\rangle)\, a_i\}$ are uniformly distributed over the half sphere $C_\mathbb{R}(x_0)$. Exact reconstruction happens when the covering condition of Theorem 2 holds. To bound this probability, we invoke Lemma 3 with $x = x_0$ and $y = \hat{x}$.

We have an analogous result in the complex case.

Theorem 4. Consider the case of recovering a complex-valued signal $x_0 \in \mathbb{C}^n$ from $m$ measurements of the form (1) with i.i.d. and uniform vectors $\{a_i \in S_\mathbb{C}^{n-1}\}$. PhaseMax recovers the true signal $x_0$ with probability at least

  $p_\mathbb{C}(m, n) \ge 1 - \exp\!\left(-\frac{(\alpha m - 4n)^2}{4m}\right),$

where $\alpha = 1 - \frac{2}{\pi}\,\mathrm{angle}(x_0, \hat{x})$ and $\alpha m > 4n + 1$.

Proof. Let $\{\tilde{a}_i\} = \{\mathrm{phase}(\langle a_i, x_0\rangle)\, a_i\}$ be the aligned measurement vectors. Define the half sphere of aligned ascent directions

  $D_\mathbb{C} = \{\delta \in S_\mathbb{C}^{n-1} \mid \langle \delta, \hat{x}\rangle \in \mathbb{R}_0^+\}.$

By Theorem 2, PhaseMax recovers $x_0$ if

  $D_\mathbb{C} \subseteq \bigcup_i C_\mathbb{C}(\tilde{a}_i).$   (14)

Let us bound the probability of this event. Consider the set $A = \{\delta \mid \langle \delta, x_0\rangle_\Im = 0\}$. We now claim that (14) holds whenever

  $C_\mathbb{C}(\hat{x}) \cap A \subseteq \bigcup_i C_\mathbb{C}(\tilde{a}_i).$   (15)

To prove this claim, consider some $\delta \in D_\mathbb{C}$. To keep notation light, we will assume without loss of generality that $\|x_0\|_2 = 1$. Form the vector $\delta' = \delta + j\langle \delta, x_0\rangle_\Im x_0$, which is the projection of $\delta$ onto $A$. Suppose now that (15) holds. Since $\delta' \in C_\mathbb{C}(\hat{x}) \cap A$, there is some $i$ with $\delta' \in C_\mathbb{C}(\tilde{a}_i)$. But then

  $0 \le \langle \delta', \tilde{a}_i\rangle_\Re = \langle \delta, \tilde{a}_i\rangle_\Re + \langle j\langle \delta, x_0\rangle_\Im x_0, \tilde{a}_i\rangle_\Re$   (16)
  $\phantom{0 \le \langle \delta', \tilde{a}_i\rangle_\Re} = \langle \delta, \tilde{a}_i\rangle_\Re.$   (17)

We have used the fact that $\langle j\langle \delta, x_0\rangle_\Im x_0, \tilde{a}_i\rangle = -j\langle \delta, x_0\rangle_\Im\langle \tilde{a}_i, x_0\rangle$, which is imaginary valued and thus has no real component. We see that $\delta \in C_\mathbb{C}(\tilde{a}_i)$, and the claim is proved.

We now know that exact reconstruction happens whenever condition (15) holds. Note that the sphere $S_\mathbb{C}^{n-1}$ is isomorphic to $S_\mathbb{R}^{2n-1}$, and the set $A$ is isomorphic to the sphere $S_\mathbb{R}^{2n-2}$. The aligned vectors $\{\tilde{a}_i\}$ are uniformly distributed over the half sphere $C_\mathbb{C}(x_0) \cap A$, which is isomorphic to the upper half sphere in $S_\mathbb{R}^{2n-2}$. The probability of these vectors covering the cap $C_\mathbb{C}(\hat{x}) \cap A$ is thus given by $p_{\mathrm{cover}}(m, 2n-1; x_0, \hat{x})$ from Lemma 3, which is at least

  $1 - \exp\!\left(-\frac{(\alpha m - \alpha - 4n + 2)^2}{4m - 4}\right).$

We can simplify this by using the fact that $\alpha < 1$. We also throw away the small constants in the numerator and denominator, which weakens the bound very slightly but tidies up the result.
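The bounds in Theorems 3 and 4 are explicit and can be evaluated directly. The sketch below (illustrative only, with an arbitrary choice of n and guess angle) computes both lower bounds as functions of the oversampling ratio m/n, which is essentially what the dashed curves in Figure 2 show.

```python
import numpy as np

def p_real(m, n, alpha):
    """Theorem 3 lower bound; meaningful when alpha * (m - 1) > 2 * n."""
    arg = alpha * m - alpha - 2 * n
    return 1 - np.exp(-arg**2 / (2 * m - 2)) if arg > 0 else 0.0

def p_complex(m, n, alpha):
    """Theorem 4 lower bound; meaningful when alpha * m > 4 * n."""
    arg = alpha * m - 4 * n
    return 1 - np.exp(-arg**2 / (4 * m)) if arg > 0 else 0.0

n = 100
beta_deg = 36                                   # angle between x0 and the guess xhat
alpha = 1 - 2 * np.deg2rad(beta_deg) / np.pi
for ratio in (4, 6, 8, 10, 12):
    m = ratio * n
    print(f"m/n = {ratio:2d}   real >= {p_real(m, n, alpha):.3f}   complex >= {p_complex(m, n, alpha):.3f}")
```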
8. Experiments and Discussion

8.1. Initialization Methods
A variety of methods exist for generating the initial guess $\hat{x}$. The simplest approach is to use a random vector; see (Goldstein & Studer, 2016) for a corresponding analysis and exact recovery proof in this case. A more powerful method is the truncated spectral initializer (Chen & Candès, 2015), which is a refinement of the method put forward in (Netrapalli et al., 2013). A detailed analysis of such initializers is provided in (Lu & Li, 2017). As proved in Prop. 8 of (Chen & Candès, 2015), for any $\delta < \sqrt{2}$, there is a constant $c_0$ such that, with probability exceeding $1 - \exp(-c_0 m)$, a unit-length version of the approximation vector $\hat{x}$ computed by the truncated spectral initializer satisfies $1 - \delta^2/2 \le \langle x_0, \hat{x}\rangle$, provided that $m > c_1 n$ for some constant $c_1 > 0$.
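For concreteness, the following sketch shows one common flavor of spectral initializer: the leading eigenvector of a weighted covariance of the measurement vectors, with a simple magnitude-based truncation. It is a rough stand-in, not the exact truncated initializer of Chen & Candès (2015); the truncation rule, scaling, and problem sizes are simplified assumptions.

```python
import numpy as np

def spectral_initializer(A, b, truncate=3.0):
    """Rough spectral guess: top eigenvector of (1/m) * sum_i w_i b_i^2 a_i a_i^H,
    keeping only measurements with b_i^2 below a multiple of their mean."""
    m, n = A.shape
    weights = (b**2 <= truncate * np.mean(b**2)).astype(float)   # crude truncation
    Y = (A.T * (weights * b**2)) @ A.conj() / m                  # Hermitian n x n matrix
    eigvals, eigvecs = np.linalg.eigh(Y)
    xhat = eigvecs[:, -1]                                        # leading eigenvector
    return xhat * np.sqrt(np.mean(b**2) * n)                     # heuristic norm estimate

rng = np.random.default_rng(0)
n, m = 100, 8 * 100
x0 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2 * n)
b = np.abs(A.conj() @ x0)                                        # b_i = |<a_i, x0>|
xhat = spectral_initializer(A, b)
cosang = np.abs(np.vdot(x0, xhat)) / (np.linalg.norm(x0) * np.linalg.norm(xhat))
print(f"cos(angle(x0, xhat)) = {cosang:.3f}")    # close to 1 means an accurate guess
```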

This guarantee implies that the approximation accuracy satisfies, with high probability,

  $\alpha = 1 - \frac{2}{\pi}\,\mathrm{angle}(x_0, \hat{x}) \ge 1 - \frac{2}{\pi}\arccos\!\left(1 - \frac{\delta^2}{2}\right) > 0.$   (18)

This motivates the following process that recovers $x_0$ using a number of measurements that grows linearly with $n$. First, choose some $\delta < \sqrt{2}$ and calculate $\alpha$ from (18). Next, generate a spectral initializer $\hat{x}$ using $c_1 n$ random Gaussian measurements. This initializer has accuracy $\alpha$ with high probability. Finally, using $5n/\alpha$ (the constant in this expression must be larger than 4 to guarantee recovery with high probability) additional random Gaussian measurements, recover the vector $x_0$ using PhaseMax. This recovery step succeeds with high probability when $n$ is large. This process recovers $x_0$ using $(5/\alpha + c_1)n$ measurements, which is linear in $n$.

A simpler recovery approach only samples $\max\{5/\alpha, c_1\}\,n$ measurements, and then uses the same set of measurements for both the initialization and recovery. This process works well in practice, and is used in the experiments below. Technically, our results assume the measurements $\{a_i\}$ are independent of $\hat{x}$, and so our theory does not formally guarantee convergence in this case. For an analysis that considers the case where $\{a_i\}$ and $\hat{x}$ are dependent, we refer to the analysis in (Bahmani & Romberg, 2016).

8.2. Comparison to Other Phase Retrieval Methods
We compare PhaseMax with other recovery methods, including lifted convex methods and non-convex approaches. Table 1 lists the sample complexity (measurements needed to achieve exact recovery with probability 0.5) of various phase retrieval methods as a function of the number of unknowns $n$. We also list the probability of reconstruction from $m$ measurements.

Table 1. Comparison of theoretical recovery guarantees for noiseless phase retrieval.

  Algorithm | Sample complexity | Lower bound on $p_\mathbb{C}(m, n)$
  PhaseMax | $m > (4n - 1)/\alpha$ | $1 - e^{-(\alpha(m-1) - 4n + 2)^2/(2m - 2)}$
  PhaseLift (Candès & Li, 2014) | $m \ge c_0 n$ | $1 - c_1 e^{-c_2 m}$
  TWF (Chen & Candès, 2015) | $m \ge c_0 n$ | $1 - c_1 e^{-c_2 m}$
  TAF (Wang et al., 2016) | $m \ge c_0 n$ | $1 - (m + 5)e^{-n/2} - c_1 e^{-c_2 m} - 1/n^2$
  (Bahmani & Romberg, 2016) | $m > \frac{32}{\sin^4(\alpha_M)}\, n \log\frac{8e}{\sin^4(\alpha_M)}$ | $1 - 8 e^{-\sin^4(\alpha_M)\, m/16}$
  (Hand & Voroninski, 2016)* | $m \ge c_0 n$ | $1 - 6 e^{-c_2 m}$

  *The theory only guarantees recovery for real-valued measurements.

We see that PhaseMax requires the same sample complexity ($O(n)$) as PhaseLift, TWF, and TAF when used together with the truncated spectral initializer proposed in (Chen & Candès, 2015). The recovery bounds for all other methods require unspecified constants ($c_i$ in Table 1) that are generally extremely large and require a lower bound on the initialization accuracy. In contrast, the bounds for PhaseMax contain no unspecified constants, explicitly depend on the approximation factor $\alpha$, and our new analytical approach yields extremely sharp bounds (see below for the details).

We also compare in Table 1 to a different analysis of the PhaseMax formulation by (Bahmani & Romberg, 2016) that appeared shortly before our own, and to a later analysis by (Hand & Voroninski, 2016). By using methods from machine learning theory, (Bahmani & Romberg, 2016) produce exact reconstruction bounds that, for a specified value of $\alpha$, are uniform with respect to the initialization $\hat{x}$, and thus guarantee exact signal recovery in the case that $\hat{x}$ is dependent on the measurement vectors. The analysis presented here is weaker in the sense that it does not have this uniformity property, but stronger in the sense that it produces tighter bounds without unspecified constants.
The bounds by (Hand & Voroninski, 2016) require unspecified constants, but the authors show that PhaseMax can be analyzed using standard concentration of measure arguments.

8.3. Tightness of Guarantees
By using strong concentration bounds for the unit sphere, our analysis produces sharp recovery guarantees that lie close to the behavior observed in practice. In Figure 2, we use random Gaussian test problems and the accelerated gradient-based solver described in (Goldstein et al., 2014) to plot the empirical and theoretical probabilities of exact signal recovery for $n = 100$ and $n = 500$ while varying the accuracy $\beta = \mathrm{angle}(\hat{x}, x_0)$ of the initial guess. We declared exact recovery when the relative error of the recovered signal fell below a small fixed tolerance.

Figure 2. Comparison between the empirical success probability (solid lines) and our theoretical lower bound (dashed lines) for varying angles β between the true signal and the approximation vector (success probability $p_\mathbb{C}(m, n)$ versus the oversampling ratio m/n, for n = 100 and n = 500). Our theoretical results accurately characterize the empirical success probability of PhaseMax. Furthermore, PhaseMax exhibits a sharp phase transition for larger dimensions.

Our theoretical bounds tend to agree fairly closely with observations, and generally require fewer than 50% more measurements than is needed in practice. We also observe a sharp phase transition between inaccurate and accurate recovery, as predicted by our theory.

8.4. Performance Limits of PhaseMax
To compare PhaseMax to other phase retrieval methods, we observe the accuracy of signal reconstruction as a function of the number of measurements. We emphasize that this comparison is only done in the random Gaussian problem setting, and results may differ with different types of signal, measurement, and noise models. The sole purpose of this experiment is to explore the efficacy and limits of PhaseMax, and to test the tightness of the predicted recovery guarantees. For an extensive comparison between existing methods, see (Waldspurger et al., 2015; Jaganathan et al., 2015).

We compare the Gerchberg-Saxton algorithm (Gerchberg & Saxton, 1972), the Fienup algorithm (Fienup, 1982), the truncated Wirtinger flow (Chen & Candès, 2015), and PhaseMax. All methods were initialized using the truncated spectral initializer (Chen & Candès, 2015). We also run simulations using the semidefinite relaxation method PhaseLift (Candès et al., 2013) implemented using a proximal gradient solver. PhaseLift, and its equivalent dual formulation PhaseCut (Waldspurger et al., 2015), is the only convex alternative to PhaseMax. However, unlike PhaseMax, PhaseLift/PhaseCut lifts the problem to a higher dimension and squares the number of unknowns.

Figure 3 reveals that PhaseMax requires larger oversampling ratios m/n to enable faithful signal recovery compared to non-convex phase-retrieval algorithms that operate in the original signal dimension. This is because the truncated spectral initializer requires oversampling ratios of about six or higher to yield sufficiently accurate approximation vectors $\hat{x}$ that enable PhaseMax to succeed. While PhaseMax does not achieve exact reconstruction with the lowest number of measurements, it is convex, operates in the original signal dimension, can be implemented via solvers for Basis Pursuit, and comes with sharp performance guarantees that do not sweep constants under the rug (cf. Figure 2).

The convexity of PhaseMax enables a natural extension to sparse phase retrieval (Jaganathan et al., 2013; Shechtman et al., 2014) or other signal priors (e.g., total variation, group sparsity, or bounded infinity norm) that can be formulated with convex functions. Such non-differentiable priors cannot be efficiently minimized using simple gradient descent methods (which form the basis of Wirtinger or amplitude flow, and many other methods), but can potentially be solved using standard convex solvers when combined with the PhaseMax formulation.
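As an illustration of how convex priors can be attached, the sketch below adds an $\ell_1$-norm constraint to the real-valued PhaseMax linear program, one simple way (not prescribed by this paper) to encode sparsity; the sparsity level, constraint radius, guess quality, and problem sizes are arbitrary choices.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
n, m, k = 60, 6 * 60, 6                     # k-sparse signal, moderate oversampling
x0 = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x0[support] = rng.standard_normal(k)
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=1, keepdims=True)
b = np.abs(A @ x0)
xhat = x0 + 0.3 * rng.standard_normal(n)    # stand-in guess (see Section 8.1)

# Variables z = [x, t] with t >= |x| elementwise; maximize <x, xhat>
# subject to |A x| <= b and sum(t) <= tau (an l1-ball constraint).
tau = 1.2 * np.abs(x0).sum()
c = np.concatenate([-xhat, np.zeros(n)])
I = np.eye(n)
A_ub = np.vstack([
    np.hstack([A, np.zeros((m, n))]),       #  A x <= b
    np.hstack([-A, np.zeros((m, n))]),      # -A x <= b
    np.hstack([I, -I]),                     #  x - t <= 0
    np.hstack([-I, -I]),                    # -x - t <= 0
    np.concatenate([np.zeros(n), np.ones(n)])[None, :],  # sum(t) <= tau
])
b_ub = np.concatenate([b, b, np.zeros(2 * n), [tau]])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (2 * n))
x_rec = res.x[:n]
print("relative error:", np.linalg.norm(x_rec - x0) / np.linalg.norm(x0))
```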
9. Conclusions
We have proposed a convex relaxation for phase retrieval problems, called PhaseMax, that does not require lifting. Using methods from geometric probability, we have provided tight bounds on the probability of correct signal recovery. The proposed problem and its analysis also represent a radical departure, both in theory and algorithms, from conventional methods for convex or semidefinite relaxation.

Figure 3. Comparison of the relative reconstruction error [dB] versus the oversampling ratio m/n for (a) n = 100 and (b) n = 500. We use the truncated spectral initializer for Gerchberg-Saxton (GS), Fienup, truncated Wirtinger flow (TWF), truncated amplitude flow (TAF), and PhaseMax. PhaseMax does not achieve exact recovery for the lowest number of measurements among the considered methods, but is convex, operates in the original dimension, and comes with sharp performance guarantees. PhaseLift only terminates in reasonable computation time for n = 100.

By providing a convex relaxation for phase retrieval in the native parameter space, our approach opens the door for using a broad range of convex optimization routines, regularizers, and priors to solve phase retrieval or related problems in machine learning, computer vision, or signal processing. Finally, the new analytical methods used in this paper have recently been used to prove tight reconstruction bounds for bi-convex problems outside the field of phase retrieval (Aghasi et al., 2017), and may be broadly applicable to a wide range of signal processing problems.

Acknowledgments
The work of T. Goldstein was supported in part by the US National Science Foundation (NSF) under a CCF grant, by the US Office of Naval Research, and by the Sloan Foundation. The work of C. Studer was supported in part by Xilinx, Inc. and by the US NSF under ECCS, CCF, and CAREER grants.

References
Aghasi, A., Ahmed, A., and Hand, P. BranchHull: Convex bilinear inversion from the entrywise product of signals with known signs. arXiv preprint, Feb. 2017.
Bahmani, S. and Romberg, J. Phase retrieval meets statistical learning theory: A flexible convex relaxation. arXiv preprint, Oct. 2016.
Candès, E. J. and Li, X. Solving quadratic equations via PhaseLift when there are about as many equations as unknowns. Found. Comput. Math., 14(5), Oct. 2014.
Candès, E. J., Strohmer, T., and Voroninski, V. PhaseLift: Exact and stable signal recovery from magnitude measurements via convex programming. Commun. Pure Appl. Math., 66(8), 2013.
Candès, E. J., Li, X., and Soltanolkotabi, M. Phase retrieval via Wirtinger flow: Theory and algorithms. IEEE Trans. Inf. Theory, 61(4), Feb. 2015.
Chen, Y. and Candès, E. Solving random quadratic systems of equations is nearly as easy as solving linear systems. In Adv. Neural Inf. Process. Syst., 2015.
Fienup, J. R. Phase retrieval algorithms: a comparison. Appl. Opt., 21(15), Aug. 1982.
Füredi, Z. Random polytopes in the d-dimensional cube. Disc. Comput. Geom., 1(4), Dec. 1986.
Gerchberg, R. W. and Saxton, W. O. A practical algorithm for the determination of phase from image and diffraction plane pictures. Optik, 35, Aug. 1972.
Gilbert, E. N. The probability of covering a sphere with n circular caps. Biometrika, 52(3/4), Dec. 1965.

Goldstein, T. and Studer, C. PhaseMax: convex phase retrieval via basis pursuit. arXiv preprint, Oct. 2016.
Goldstein, T., Studer, C., and Baraniuk, R. A field guide to forward-backward splitting with a FASTA implementation. arXiv preprint, 2014.
Hand, P. and Voroninski, V. An elementary proof of convex phase retrieval in the natural parameter space via the linear program PhaseMax. arXiv preprint, Nov. 2016.
Jaganathan, K., Oymak, S., and Hassibi, B. Sparse phase retrieval: Convex algorithms and limitations. In Proc. IEEE Int. Symp. Inf. Theory (ISIT), Jul. 2013.
Jaganathan, K., Eldar, Y. C., and Hassibi, B. Phase retrieval: An overview of recent developments. arXiv preprint, Oct. 2015.
Lu, Y. M. and Li, G. Phase transitions of spectral initialization for high-dimensional nonconvex estimation. arXiv preprint, 2017.
Netrapalli, P., Jain, P., and Sanghavi, S. Phase retrieval using alternating minimization. In Adv. Neural Inf. Process. Syst., 2013.
Schläfli, L. Gesammelte Mathematische Abhandlungen I. Springer Basel, 1953.
Schniter, P. and Rangan, S. Compressive phase retrieval via generalized approximate message passing. IEEE Trans. Sig. Process., 63(4), Feb. 2015.
Shechtman, Y., Beck, A., and Eldar, Y. C. GESPAR: Efficient phase retrieval of sparse signals. IEEE Trans. Sig. Process., 62(4), Jan. 2014.
Studer, C., Goldstein, T., Yin, W., and Baraniuk, R. G. Democratic representations. arXiv preprint, Jan. 2014.
Waldspurger, I., d'Aspremont, A., and Mallat, S. Phase recovery, MaxCut and complex semidefinite programming. Math. Prog., 149(1-2):47-81, Feb. 2015.
Wang, G., Giannakis, G. B., and Eldar, Y. C. Solving systems of random quadratic equations via truncated amplitude flow. arXiv preprint, Jul. 2016.
Wendel, J. G. A problem in geometric probability. Math. Scand., 11, 1962.


More information

The Phase Problem: A Mathematical Tour from Norbert Wiener to Random Matrices and Convex Optimization

The Phase Problem: A Mathematical Tour from Norbert Wiener to Random Matrices and Convex Optimization The Phase Problem: A Mathematical Tour from Norbert Wiener to Random Matrices and Convex Optimization Thomas Strohmer Department of Mathematics University of California, Davis Norbert Wiener Colloquium

More information

SUBSPACE CLUSTERING WITH DENSE REPRESENTATIONS. Eva L. Dyer, Christoph Studer, Richard G. Baraniuk. ECE Department, Rice University, Houston, TX

SUBSPACE CLUSTERING WITH DENSE REPRESENTATIONS. Eva L. Dyer, Christoph Studer, Richard G. Baraniuk. ECE Department, Rice University, Houston, TX SUBSPACE CLUSTERING WITH DENSE REPRESENTATIONS Eva L. Dyer, Christoph Studer, Richard G. Baraniuk ECE Department, Rice University, Houston, TX ABSTRACT Unions of subspaces have recently been shown to provide

More information

A New Estimate of Restricted Isometry Constants for Sparse Solutions

A New Estimate of Restricted Isometry Constants for Sparse Solutions A New Estimate of Restricted Isometry Constants for Sparse Solutions Ming-Jun Lai and Louis Y. Liu January 12, 211 Abstract We show that as long as the restricted isometry constant δ 2k < 1/2, there exist

More information

Introduction to Real Analysis Alternative Chapter 1

Introduction to Real Analysis Alternative Chapter 1 Christopher Heil Introduction to Real Analysis Alternative Chapter 1 A Primer on Norms and Banach Spaces Last Updated: March 10, 2018 c 2018 by Christopher Heil Chapter 1 A Primer on Norms and Banach Spaces

More information

An iterative hard thresholding estimator for low rank matrix recovery

An iterative hard thresholding estimator for low rank matrix recovery An iterative hard thresholding estimator for low rank matrix recovery Alexandra Carpentier - based on a joint work with Arlene K.Y. Kim Statistical Laboratory, Department of Pure Mathematics and Mathematical

More information

In English, this means that if we travel on a straight line between any two points in C, then we never leave C.

In English, this means that if we travel on a straight line between any two points in C, then we never leave C. Convex sets In this section, we will be introduced to some of the mathematical fundamentals of convex sets. In order to motivate some of the definitions, we will look at the closest point problem from

More information

CS 229r: Algorithms for Big Data Fall Lecture 19 Nov 5

CS 229r: Algorithms for Big Data Fall Lecture 19 Nov 5 CS 229r: Algorithms for Big Data Fall 215 Prof. Jelani Nelson Lecture 19 Nov 5 Scribe: Abdul Wasay 1 Overview In the last lecture, we started discussing the problem of compressed sensing where we are given

More information

Constrained optimization

Constrained optimization Constrained optimization DS-GA 1013 / MATH-GA 2824 Optimization-based Data Analysis http://www.cims.nyu.edu/~cfgranda/pages/obda_fall17/index.html Carlos Fernandez-Granda Compressed sensing Convex constrained

More information

Signal Recovery From Incomplete and Inaccurate Measurements via Regularized Orthogonal Matching Pursuit

Signal Recovery From Incomplete and Inaccurate Measurements via Regularized Orthogonal Matching Pursuit Signal Recovery From Incomplete and Inaccurate Measurements via Regularized Orthogonal Matching Pursuit Deanna Needell and Roman Vershynin Abstract We demonstrate a simple greedy algorithm that can reliably

More information

Research Statement Qing Qu

Research Statement Qing Qu Qing Qu Research Statement 1/4 Qing Qu (qq2105@columbia.edu) Today we are living in an era of information explosion. As the sensors and sensing modalities proliferate, our world is inundated by unprecedented

More information

MAT 585: Johnson-Lindenstrauss, Group testing, and Compressed Sensing

MAT 585: Johnson-Lindenstrauss, Group testing, and Compressed Sensing MAT 585: Johnson-Lindenstrauss, Group testing, and Compressed Sensing Afonso S. Bandeira April 9, 2015 1 The Johnson-Lindenstrauss Lemma Suppose one has n points, X = {x 1,..., x n }, in R d with d very

More information

Provable Alternating Minimization Methods for Non-convex Optimization

Provable Alternating Minimization Methods for Non-convex Optimization Provable Alternating Minimization Methods for Non-convex Optimization Prateek Jain Microsoft Research, India Joint work with Praneeth Netrapalli, Sujay Sanghavi, Alekh Agarwal, Animashree Anandkumar, Rashish

More information

GREEDY SIGNAL RECOVERY REVIEW

GREEDY SIGNAL RECOVERY REVIEW GREEDY SIGNAL RECOVERY REVIEW DEANNA NEEDELL, JOEL A. TROPP, ROMAN VERSHYNIN Abstract. The two major approaches to sparse recovery are L 1-minimization and greedy methods. Recently, Needell and Vershynin

More information

Scalar curvature and the Thurston norm

Scalar curvature and the Thurston norm Scalar curvature and the Thurston norm P. B. Kronheimer 1 andt.s.mrowka 2 Harvard University, CAMBRIDGE MA 02138 Massachusetts Institute of Technology, CAMBRIDGE MA 02139 1. Introduction Let Y be a closed,

More information

SPARSE signal representations have gained popularity in recent

SPARSE signal representations have gained popularity in recent 6958 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 10, OCTOBER 2011 Blind Compressed Sensing Sivan Gleichman and Yonina C. Eldar, Senior Member, IEEE Abstract The fundamental principle underlying

More information

Recovering overcomplete sparse representations from structured sensing

Recovering overcomplete sparse representations from structured sensing Recovering overcomplete sparse representations from structured sensing Deanna Needell Claremont McKenna College Feb. 2015 Support: Alfred P. Sloan Foundation and NSF CAREER #1348721. Joint work with Felix

More information

Covariance Sketching via Quadratic Sampling

Covariance Sketching via Quadratic Sampling Covariance Sketching via Quadratic Sampling Yuejie Chi Department of ECE and BMI The Ohio State University Tsinghua University June 2015 Page 1 Acknowledgement Thanks to my academic collaborators on some

More information

Conditions for Robust Principal Component Analysis

Conditions for Robust Principal Component Analysis Rose-Hulman Undergraduate Mathematics Journal Volume 12 Issue 2 Article 9 Conditions for Robust Principal Component Analysis Michael Hornstein Stanford University, mdhornstein@gmail.com Follow this and

More information

Linear Algebra. Preliminary Lecture Notes

Linear Algebra. Preliminary Lecture Notes Linear Algebra Preliminary Lecture Notes Adolfo J. Rumbos c Draft date April 29, 23 2 Contents Motivation for the course 5 2 Euclidean n dimensional Space 7 2. Definition of n Dimensional Euclidean Space...........

More information

Design of Projection Matrix for Compressive Sensing by Nonsmooth Optimization

Design of Projection Matrix for Compressive Sensing by Nonsmooth Optimization Design of Proection Matrix for Compressive Sensing by Nonsmooth Optimization W.-S. Lu T. Hinamoto Dept. of Electrical & Computer Engineering Graduate School of Engineering University of Victoria Hiroshima

More information

Restricted Strong Convexity Implies Weak Submodularity

Restricted Strong Convexity Implies Weak Submodularity Restricted Strong Convexity Implies Weak Submodularity Ethan R. Elenberg Rajiv Khanna Alexandros G. Dimakis Department of Electrical and Computer Engineering The University of Texas at Austin {elenberg,rajivak}@utexas.edu

More information

On Optimal Frame Conditioners

On Optimal Frame Conditioners On Optimal Frame Conditioners Chae A. Clark Department of Mathematics University of Maryland, College Park Email: cclark18@math.umd.edu Kasso A. Okoudjou Department of Mathematics University of Maryland,

More information

COMPRESSED Sensing (CS) is a method to recover a

COMPRESSED Sensing (CS) is a method to recover a 1 Sample Complexity of Total Variation Minimization Sajad Daei, Farzan Haddadi, Arash Amini Abstract This work considers the use of Total Variation (TV) minimization in the recovery of a given gradient

More information

Fourier phase retrieval with random phase illumination

Fourier phase retrieval with random phase illumination Fourier phase retrieval with random phase illumination Wenjing Liao School of Mathematics Georgia Institute of Technology Albert Fannjiang Department of Mathematics, UC Davis. IMA August 5, 27 Classical

More information