Solving Corrupted Quadratic Equations, Provably

1 Solving Corrupted Quadratic Equations, Provably Yuejie Chi, London Workshop on Sparse Signal Processing, September 2016

2 Acknowledgement Joint work with Yuanxin Li (OSU), Huishuai Zhang (Syracuse) and Yingbin Liang (Syracuse). Research supported by NSF, AFOSR and ONR.

3 Estimation of Low-rank PSD Matrices Consider estimation of a low-rank positive-semidefinite (PSD) matrix $X \in \mathbb{R}^{n \times n}$ from symmetric rank-one measurements: $y_i = \langle a_i a_i^\top, X \rangle = a_i^\top X a_i$, $i = 1, \ldots, m$. The measurements are nonnegative since $X \succeq 0$. If $\mathrm{rank}(X) = r$, decompose $X$ as $X = U U^\top$, where $U \in \mathbb{R}^{n \times r}$; then the measurements are quadratic in $U$: $y_i = a_i^\top U U^\top a_i = \|U^\top a_i\|_2^2$. The rank $r$ may be unknown. Goal: recover $X$ or $U$ from as small a number of measurements as possible. Related to low-rank matrix recovery, but more structured/restricted.
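
To make the model concrete, here is a minimal NumPy sketch of the measurement process (the sizes $n, r, m$ are illustrative choices, not values from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, m = 40, 3, 400
U = rng.standard_normal((n, r))
X = U @ U.T                               # rank-r PSD ground truth
A = rng.standard_normal((m, n))           # rows are the sensing vectors a_i
y = np.einsum('mi,ij,mj->m', A, X, A)     # y_i = a_i^T X a_i
assert np.all(y >= 0)                     # nonnegative since X is PSD
# the same measurements, written as quadratics in the factor U
assert np.allclose(y, np.sum((A @ U) ** 2, axis=1))
```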

4 Application - phase retrieval Quadratic measurements arise in optical applications such as phase retrieval, namely, recover $x \in \mathbb{R}^n/\mathbb{C}^n$ from $y_i = |\langle a_i, x \rangle|^2 = a_i^* (x x^*) a_i$, $i = 1, \ldots, m$. [Figure: a typical setup for structured illuminations in diffraction imaging using a phase mask.] E. J. Candès, Y. C. Eldar, T. Strohmer and V. Voroninski, Phase retrieval via matrix completion, SIAM J. on Imaging Sciences.

5 Application - projection retrieval Quadratic measurements arise in the problem of projection (or subspace) retrieval via energy measurements, namely, recover a subspace $U \in \mathbb{R}^{n \times r}$ from $y_i = \|U^\top a_i\|_2^2 = a_i^\top (U U^\top) a_i$, $i = 1, \ldots, m$. [Figure: vectors $a_i, a_j$ and their projection energies $\|U^\top a_i\|, \|U^\top a_j\|$ onto the subspace $U$.] Useful in SAR imaging. M. Fickus and D. Mixon, Projection Retrieval: Theory and Algorithms, SAMPTA.

6 Application - covariance sketching Question: how do we sketch a high-dimensional data stream in order to recover its covariance matrix? Consider a data stream, possibly observed distributively at $m$ sensors. Quadratic sketching: for each sketching vector $a_i \in \mathbb{R}^n$ with i.i.d. sub-Gaussian entries, $i = 1, \ldots, m$, sketch a substream indexed by $\{l_t^i\}_{t=1}^T$ via $\langle a_i, x_{l_t^i} \rangle^2$ and compute the average:
$y_{i,T} = \frac{1}{T} \sum_{t=1}^T \langle a_i, x_{l_t^i} \rangle^2 = a_i^\top \Big( \frac{1}{T} \sum_{t=1}^T x_{l_t^i} x_{l_t^i}^\top \Big) a_i \approx a_i^\top X a_i$,
where $X = \mathbb{E}[x x^\top]$ is approximately low-rank. Y. Chen, Y. Chi, and A. J. Goldsmith, Exact and stable covariance estimation from quadratic sampling via convex programming, IEEE Trans. on IT, 2015.
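
As a sanity check of the identity above, the following toy simulation (hypothetical sizes, Gaussian data with a planted low-rank covariance) shows each scalar sketch $y_{i,T}$ approaching $a_i^\top X a_i$ as $T$ grows:

```python
import numpy as np

rng = np.random.default_rng(1)
n, r, m, T = 40, 3, 200, 5000
L = rng.standard_normal((n, r)) / np.sqrt(r)
X = L @ L.T                                  # low-rank covariance E[x x^T]
A = rng.standard_normal((m, n))              # sketching vectors a_i
stream = rng.standard_normal((T, r)) @ L.T   # samples x_t with covariance X
# each sensor keeps one scalar: the running average of <a_i, x_t>^2
y = np.mean((stream @ A.T) ** 2, axis=0)
exact = np.einsum('mi,ij,mj->m', A, X, A)    # a_i^T X a_i
print(np.max(np.abs(y - exact) / exact))     # relative error shrinks with T
```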

7 Near-optimal recovery via convex programming When the $a_i$'s are composed of i.i.d. Gaussian entries, we aim to recover $X$ using the following trace minimization algorithm:
$\hat X = \operatorname*{argmin}_{M \succeq 0} \mathrm{Tr}(M)$ subject to $y_i = a_i^\top M a_i$, $i = 1, \ldots, m$.
Theorem (Chen, Chi and Goldsmith, 2013). With probability exceeding $1 - c_1 \exp(-c_2 m)$, the solution $\hat X$ exactly recovers all rank-$r$ matrices $X$, provided that $m > c_0 n r$, where $c_0, c_1, c_2$ are universal constants. Exact recovery with $m = O(nr)$, on the order of the information-theoretic limit; robust against approximate low-rankness and bounded noise; similar guarantees also hold for the sub-Gaussian case. [Figure: empirical phase transition of exact recovery over $m/(n \cdot n)$ and $r/n$, with the information-theoretic limit overlaid.]
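
A minimal sketch of the trace-minimization program, assuming the cvxpy modeling package and toy problem sizes (not code from the talk):

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(2)
n, r, m = 15, 2, 120                          # m on the order of n*r
U = rng.standard_normal((n, r))
X_true = U @ U.T
A = rng.standard_normal((m, n))
y = np.einsum('mi,ij,mj->m', A, X_true, A)

M = cp.Variable((n, n), PSD=True)             # optimize over the PSD cone
meas = cp.sum(cp.multiply(A @ M, A), axis=1)  # vector of a_i^T M a_i
prob = cp.Problem(cp.Minimize(cp.trace(M)), [meas == y])
prob.solve()
print(np.linalg.norm(M.value - X_true, 'fro'))   # near zero on success
```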

8 What about outliers? Outliers occur with sensor failures, malicious attacks, and missing data. For covariance sketching, insufficiently aggregated sketches can be regarded as outliers. In this talk, we're interested in the setting where the measurements are corrupted by both sparse outliers and bounded noise:
$y_i = a_i^\top X a_i + \eta_i + w_i$, $i = 1, \ldots, m$,
or equivalently $y = \mathcal{A}(X) + \eta + w$, where $\eta$ is a sparse vector with $\|\eta\|_0 \le s \cdot m$ and $w$ is dense bounded noise.

9 Pursuit of outlier-robust algorithms Previous approaches are sensitive to outliers. Goal: algorithms that are oblivious to outliers, i.e., that perform equally well with or without outliers, without any special treatment of the outliers, and that are also statistically and computationally efficient: small sample size (hopefully $m$ is linear in $n$); large fraction of outliers (hopefully $s$ is a small constant); low computational complexity and easy implementation. We will outline two approaches, based on convex and non-convex optimization respectively.

10 Outlier-robust recovery by convex programming To motivate, ideally one would like to look for low-rank matrices that maintain outlier sparsity:
$\hat X = \operatorname*{argmin}_{M \succeq 0} \|y - \mathcal{A}(M)\|_0$ s.t. $\mathrm{rank}(M) = r$.
Relaxing the objective function to the $\ell_1$-norm and dropping the rank constraint gives
$\hat X = \operatorname*{argmin}_{M \succeq 0} \|y - \mathcal{A}(M)\|_1$.
We call this algorithm $\ell_1$-regularized PhaseLift, or PhaseLift-$\ell_1$. It is a parameter-free formulation, without trace minimization or tuning parameters; no prior information is required on the matrix rank, corruption level, or bounded noise level.
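
A hedged cvxpy sketch of PhaseLift-$\ell_1$ (toy sizes, Gaussian $a_i$, a small planted outlier fraction; note that nothing needs to be tuned):

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(3)
n, r, m, s = 15, 2, 150, 0.05
U = rng.standard_normal((n, r))
X_true = U @ U.T
A = rng.standard_normal((m, n))
y = np.einsum('mi,ij,mj->m', A, X_true, A)
bad = rng.choice(m, size=int(s * m), replace=False)
y[bad] += 100 * rng.standard_normal(bad.size)    # arbitrary sparse corruptions

M = cp.Variable((n, n), PSD=True)
residual = y - cp.sum(cp.multiply(A @ M, A), axis=1)
prob = cp.Problem(cp.Minimize(cp.norm1(residual)))   # no rank or noise parameters
prob.solve()
print(np.linalg.norm(M.value - X_true, 'fro'))
```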

11 Performance guarantee of PhaseLift-$\ell_1$ Theorem (Li, Sun and Chi, 2016). Suppose that $\|w\|_1 \le \epsilon$. Assume the support of $\eta$ is selected uniformly at random, with the signs of $\eta$ generated from a symmetric Bernoulli distribution. Then for a fixed rank-$r$ PSD matrix $X \in \mathbb{R}^{n \times n}$, there exist some absolute constants $C_1 > 0$ and $0 < s_0 < 1$ such that as long as $m > C_1 n r^2$ and $s \le s_0/r$, the solution to the proposed algorithm satisfies
$\|\hat X - X\|_F \le C_2\, \frac{\sqrt{r}\,\epsilon}{m}$,
with probability exceeding $1 - e^{-\gamma m / r^2}$ for some constants $C_2$ and $\gamma$. Proof by dual certificate construction. Exact recovery when $w = 0$ as long as $m \gtrsim n r^2$ and $s \lesssim 1/r$. When $r = 1$ we obtain a near-optimal guarantee, which recovers the result by Hand for the phase retrieval case (P. Hand, PhaseLift is robust to a constant fraction of arbitrary errors).

12 Numerical Performance: Outlier robustness [Figure: phase transitions of PSD matrix recovery with respect to (a) the number of measurements $m$ and the rank $r$, with 5% of measurements corrupted by arbitrary standard Gaussian variables; (b) the percent of outliers $s$ and the rank $r$, when the number of measurements is $m = 400$. Here $n = 40$.]

13 Robust recovery of Toeplitz PSD Matrices If $X$ is additionally Toeplitz, this structure can be incorporated:
$\hat X = \operatorname*{argmin}_{M \succeq 0,\ \text{Toeplitz}} \|y - \mathcal{A}(M)\|_1$.
[Figure: phase transitions of low-rank Toeplitz PSD matrix recovery w.r.t. the number of measurements and the rank, with 5% of measurements corrupted by standard Gaussian variables, when $n = 64$.]

14 Non-convex approach based on the factored model Can we reduce the computational complexity? If $\mathrm{rank}(X)$ is known a priori to be $r$, using the low-rank factorization $X = U U^\top$ where $U \in \mathbb{R}^{n \times r}$, one can directly recover $U$:
$\hat U = \operatorname*{argmin}_{U \in \mathbb{R}^{n \times r}} \ell(U) := \frac{1}{m} \sum_{i=1}^m \ell(y_i; U)$
for some loss function $\ell(y_i; U)$ (transcribed in code below):
quadratic loss of power: $\ell(U; y_i) = \big(y_i - \|U^\top a_i\|_2^2\big)^2$;
quadratic loss of amplitude: $\ell(U; y_i) = \big(\sqrt{y_i} - \|U^\top a_i\|_2\big)^2$;
Poisson loss: $\ell(U; y_i) = \|U^\top a_i\|_2^2 - y_i \log \|U^\top a_i\|_2^2$.
What are the challenges? $\ell(U)$ can be non-convex and non-smooth. With outliers, we want the loss to sum over only the clean samples.
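
The three candidate losses, transcribed for a single measurement (plain NumPy; the function name is mine):

```python
import numpy as np

def sample_losses(U, a, y):
    """Candidate losses for one measurement y ~ ||U^T a||_2^2."""
    e = np.sum((U.T @ a) ** 2)                   # ||U^T a||_2^2
    power = (y - e) ** 2                         # quadratic loss of power
    amplitude = (np.sqrt(y) - np.sqrt(e)) ** 2   # quadratic loss of amplitude
    poisson = e - y * np.log(e)                  # Poisson loss
    return power, amplitude, poisson
```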

15 Non-convex phase retrieval Rank-one case (phase retrieval): $y_i = |\langle a_i, x \rangle|^2 + \eta_i + w_i$, $i = 1, \ldots, m$, where $\eta$ with $\|\eta\|_0 = s \cdot m$ collects the outliers and $w$ is the additive noise. Exciting developments (without outliers), all following the same recipe: initialize $z^{(0)}$ via the (truncated) spectral method to land in the neighborhood of the ground truth; iteratively update using (truncated) gradient descent. Provable near-optimal performance for the Gaussian measurement model: statistically, $m = O(n)$ near-optimal sample complexity; computationally, geometric convergence with near-linear run time. Examples: Wirtinger Flow (WF) (Candès et al. 2014), Truncated Wirtinger Flow (TWF) (Chen and Candès 2015), Reshaped Wirtinger Flow (Zhang and Liang 2016), Truncated Amplitude Flow (Wang, Giannakis and Eldar, 2016).

16 Non-convex phase retrieval with outliers In the presence of arbitrary outliers, existing approaches fail. Spectral initialization would fail: the leading eigenvector of $Y$ can be arbitrarily perturbed, where
$Y = \frac{1}{m} \sum_{i=1}^m y_i a_i a_i^\top$ (WF) or $Y = \frac{1}{m} \sum_{i=1}^m y_i a_i a_i^\top\, \mathbb{1}\{y_i \le \alpha_m \cdot \mathrm{mean}(\{y_i\})\}$ (TWF).
Gradient descent would fail: the search direction can be arbitrarily perturbed,
$z^{(t+1)} = z^{(t)} - \frac{\mu}{\|z^{(0)}\|^2} \sum_{i \in \mathcal{T}_t} \nabla \ell(z^{(t)}; y_i)$,
where $\mathcal{T}_t = \{1, \ldots, m\}$ for WF and $\mathcal{T}_t = \big\{ i : |y_i - |a_i^\top z^{(t)}|^2| \le \alpha_h \cdot \mathrm{mean}(\{|y_i - |a_i^\top z^{(t)}|^2|\}) \big\}$ for TWF (with some details hidden).
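
A quick numerical illustration of the first failure mode (my own toy experiment, not from the talk): a single gross outlier hijacks the leading eigenvector of $Y$, pulling it toward the corresponding $a_i$ and away from $x$.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 40, 800
x = rng.standard_normal(n)
x /= np.linalg.norm(x)
A = rng.standard_normal((m, n))
y = (A @ x) ** 2
y[0] += 1e6                                  # one arbitrary outlier
Y = (A.T * y) @ A / m                        # (1/m) * sum_i y_i a_i a_i^T
v = np.linalg.eigh(Y)[1][:, -1]              # leading eigenvector (WF init)
print(abs(v @ x))                            # ~ 0: no longer aligned with x
print(abs(v @ A[0]) / np.linalg.norm(A[0]))  # ~ 1: aligned with the outlier's a_0
```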

17 Robust phase retrieval via median-truncation We need a better strategy to eliminate outliers! Key approach: median-truncation, which is well known in robust statistics to be outlier-resilient, but has made little appearance in high-dimensional estimation. The median is more stable than the mean and than top-$k$ truncation (which truncates a fixed number of samples) across various levels of outliers. [Figure: mean, median, and top-$k$ thresholds overlaid on the measurements in three regimes: no outliers, small outlier magnitudes, and large outlier magnitudes.]
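
The contrast is easy to reproduce (toy numbers and a hypothetical contamination model): a handful of large outliers drags the mean arbitrarily far, while the median barely moves.

```python
import numpy as np

rng = np.random.default_rng(5)
clean = rng.chisquare(df=1, size=1000)                   # |a_i^T x|^2-style data
corrupted = clean.copy()
corrupted[:100] = 1e4 * rng.chisquare(df=1, size=100)    # 10% gross outliers
for name, v in [("clean", clean), ("corrupted", corrupted)]:
    print(f"{name:9s} mean = {v.mean():10.2f}   median = {np.median(v):6.3f}")
```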

18 Median-Truncated Wirtinger Flow (median-TWF) We adopt the Poisson loss function (other loss functions work too) and the Gaussian measurement model. Median-truncated spectral initialization: set $z^{(0)} := \lambda_0 \tilde z$, where: Direction estimation: $\tilde z$ is the leading eigenvector of
$Y = \frac{1}{m} \sum_{i=1}^m y_i a_i a_i^\top\, \mathbb{1}\{y_i \le \frac{9}{0.455}\,\mathrm{median}(\{y_i\})\}$.
Norm estimation: $\lambda_0 = \sqrt{\mathrm{median}(\{y_i\})/0.455}$, since $y_i = |a_i^\top x|^2 \sim \|x\|^2 \chi_1^2$ and $\mathrm{median}(\chi_1^2) \approx 0.455$. As long as $m = O(n \log n)$ and $s = O(1)$, the initialization is provably close to the ground truth: $\mathrm{dist}(z^{(0)}, x) \le \frac{1}{10}\|x\|$, where $\mathrm{dist}(z^{(0)}, x) = \min\{\|z^{(0)} + x\|, \|z^{(0)} - x\|\}$.
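
A NumPy sketch of this initialization, following the truncation rule on the slide (treat it as illustrative rather than a reference implementation):

```python
import numpy as np

def median_truncated_init(A, y):
    # keep only measurements that are not too large relative to the median
    keep = y <= (9 / 0.455) * np.median(y)
    Y = (A[keep].T * y[keep]) @ A[keep] / len(y)
    z_dir = np.linalg.eigh(Y)[1][:, -1]       # direction: leading eigenvector
    lam0 = np.sqrt(np.median(y) / 0.455)      # norm estimate of ||x||
    return lam0 * z_dir
```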

19 Median-Truncated Wirtinger Flow (median-TWF) Median-truncated gradient descent:
$z^{(t+1)} = z^{(t)} - \frac{2\mu}{m} \sum_{i \in \mathcal{E}_1 \cap \mathcal{E}_2} \frac{(a_i^\top z^{(t)})^2 - y_i}{a_i^\top z^{(t)}}\, a_i =: z^{(t)} - \mu \nabla \ell_{\mathrm{tr}}(z^{(t)})$,
where
$\mathcal{E}_1 = \Big\{ i : 0.3 \le \frac{|a_i^\top z^{(t)}|}{\|z^{(t)}\|} \le 5 \Big\}$, $\mathcal{E}_2 = \Big\{ i : |r_i^{(t)}| \le \alpha_h \frac{|a_i^\top z^{(t)}|}{\|z^{(t)}\|}\, \mathrm{median}(\{|r_i^{(t)}|\}) \Big\}$, with $r_i^{(t)} = y_i - (a_i^\top z^{(t)})^2$.
As long as $m = O(n \log n)$ and $s = O(1)$, $\ell_{\mathrm{tr}}(z)$ satisfies the Regularity Condition RC$(\mu, \lambda)$ for all $z$ and $h = z - x$ with $\|h\| \le \frac{1}{10}\|z\|$:
$\langle \nabla \ell_{\mathrm{tr}}(z), h \rangle \ge \frac{\mu}{2}\, \|\nabla \ell_{\mathrm{tr}}(z)\|^2 + \frac{\lambda}{2}\, \|h\|^2$,
which guarantees $\mathrm{dist}(z^{(t+1)}, x) \le (1 - \mu\lambda)\, \mathrm{dist}(z^{(t)}, x)$.
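
A sketch of one median-truncated gradient iteration for the Poisson loss (the constants 0.3 and 5 follow the slide; the step size mu and the threshold alpha_h are illustrative placeholders, not tuned values from the paper):

```python
import numpy as np

def median_twf_step(z, A, y, mu=0.2, alpha_h=12.0):
    Az = A @ z
    r = y - Az ** 2                           # residuals r_i
    nz = np.linalg.norm(z)
    E1 = (np.abs(Az) >= 0.3 * nz) & (np.abs(Az) <= 5.0 * nz)
    E2 = np.abs(r) <= alpha_h * (np.abs(Az) / nz) * np.median(np.abs(r))
    keep = E1 & E2                            # i in the intersection of E1, E2
    # gradient of the Poisson loss, summed over the kept samples only
    grad = (2.0 / len(y)) * A[keep].T @ ((Az[keep] ** 2 - y[keep]) / Az[keep])
    return z - mu * grad
```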

20 Performance guarantee of median-TWF Theorem (Zhang, Chi and Liang, 2016). Consider the model $y_i = |\langle a_i, x \rangle|^2 + w_i + \eta_i$, where $\|w\|_\infty \le c\|x\|^2$ and $\|\eta\|_0 \le sm$. If $m \ge c_0 n \log n$ and $s < s_0$, then with probability at least $1 - c_1 \exp(-c_2 m)$, median-TWF yields
$\mathrm{dist}(z^{(t)}, x) \lesssim \frac{\|w\|_\infty}{\|x\|} + (1 - \rho)^t \|x\|$, for all $t \in \mathbb{N}$,
simultaneously for all $x \in \mathbb{R}^n \setminus \{0\}$, for some constants $c_0, c_1, c_2 > 0$ and $0 < \rho < 1$. Exact recovery when $w = 0$ with slightly more samples ($m = O(n \log n)$) but a constant fraction of outliers, $s = O(1)$. Stable recovery with additional bounded noise. Resists outliers obliviously: no prior knowledge of the outliers is required. This is the first non-asymptotic robust recovery guarantee using the median; the analysis is much more involved due to the nonlinearity of the median.

21 Proof sketch Definition (Generalized quantile function). Let $0 < p < 1$. If $F$ is a CDF, the generalized quantile function is $F^{-1}(p) = \inf\{x \in \mathbb{R} : F(x) \ge p\}$. Denote $\theta_p(F) := F^{-1}(p)$ and $\theta_p(\{X_i\}) := \theta_p(\hat F)$, where $\hat F$ is the empirical distribution of the samples $\{X_i\}_{i=1}^m$. Concentration of the sample quantile: assume $\{X_i\}_{i=1}^m$ are i.i.d. drawn from some distribution $F$. Under some minor assumptions, w.h.p. $|\theta_p(\{X_i\}_{i=1}^m) - \theta_p(F)| < \epsilon$. Bounding the median by quantiles of clean samples: consider clean samples $\{\tilde X_i\}_{i=1}^m$ and contaminated samples $\{X_i\}_{i=1}^m$ (an $s$-fraction corrupted). Then $\theta_{\frac{1}{2} - s}(\{\tilde X_i\}) \le \theta_{\frac{1}{2}}(\{X_i\}) \le \theta_{\frac{1}{2} + s}(\{\tilde X_i\})$.
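
The quantile sandwich is easy to verify numerically (a toy check under an arbitrary contamination model of my choosing):

```python
import numpy as np

rng = np.random.default_rng(6)
m, s = 10000, 0.05
clean = rng.chisquare(df=1, size=m)
contaminated = clean.copy()
idx = rng.choice(m, size=int(s * m), replace=False)
contaminated[idx] = 1e3 * rng.standard_normal(idx.size)  # arbitrary corruptions
lo, hi = np.quantile(clean, [0.5 - s, 0.5 + s])
print(lo <= np.median(contaminated) <= hi)   # True: median sandwiched by quantiles
```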

22 Proof sketch Lemma (Concentration of the median). If $m > c_0 n \log n$, then with probability at least $1 - c_1 \exp(-c_2 m)$, there exist constants $\beta_1$ and $\beta_2$ such that
$\beta_1 \|z\| \|h\| \le \mathrm{median}\big(\{ |(a_i^\top x)^2 - (a_i^\top z)^2| \}_{i=1}^m\big) \le \beta_2 \|z\| \|h\|$
holds for all $z$ and $h := z - x$ satisfying $\|h\| < \frac{1}{11}\|z\|$. A similar property is established for the mean when $m = O(n)$; here we lose a factor of $\log n$ by working with the median.

23 Numerical experiments with median-TWF [Figure: success rate of exact recovery versus the outlier fraction $s$ for median-TWF, trimean-TWF, and TWF at different levels of outlier magnitudes: (a) $\|\eta\|_\infty = 0.1\|x\|^2$; (b) $\|\eta\|_\infty = \|x\|^2$; (c) $\|\eta\|_\infty = 10\|x\|^2$; (d) $\|\eta\|_\infty = 100\|x\|^2$.]

24 Numerical experiments with median-TWF Recovery with both dense noise and sparse outliers: with outliers, median-TWF achieves better accuracy than TWF. Moreover, median-TWF with outliers achieves almost the same accuracy as TWF without outliers. [Figure: relative error of median-TWF vs. TWF with respect to the iteration count, when $s = 0.1$, $\|w\|_\infty = 0.01\|x\|^2$, and $\|\eta\|_\infty = \|w\|_\infty$.]

25 Conclusion We have discussed two approaches for combating outliers. Convex optimization based on PhaseLift-$\ell_1$:
$\hat X = \operatorname*{argmin}_{X \succeq 0} \|y - \mathcal{A}(X)\|_1$.
Non-convex optimization based on median-TWF:
$z^{(t+1)} = z^{(t)} - \frac{\mu}{m} \sum_{i=1}^m \nabla \ell(y_i, z^{(t)})\, \mathbb{1}_{\mathcal{E}_i}$.
No prior knowledge of the outliers is required: we can run these algorithms as if outliers did not exist. Exact recovery guarantees for the Gaussian measurement model are obtained, even with a constant proportion of arbitrary outliers. Stability against additional bounded noise. Work in progress: extending median-TWF to the low-rank setting.

26 References 1. Outlier-Robust Recovery of Low-rank Positive Semidefinite Matrices From Magnitude Measurements, ICASSP 2016. 2. Provable Non-convex Phase Retrieval with Outliers: Median Truncated Wirtinger Flow, ICML 2016.
