An iterative hard thresholding estimator for low rank matrix recovery


1 An iterative hard thresholding estimator for low rank matrix recovery. Alexandra Carpentier, based on joint work with Arlene K. Y. Kim. Statistical Laboratory, Department of Pure Mathematics and Mathematical Statistics, University of Cambridge. Workshop HDP-QPh, June 10th, 2015.

2 Introduction. Low rank matrix recovery in high dimension: relevant applications (in particular quantum tomography) and interesting theoretical challenges. This talk is based on joint work with Arlene K. Y. Kim, "An iterative hard thresholding estimator for low rank matrix recovery with explicit limiting distribution", available on arXiv.

3 Outline. The matrix recovery setting (Setting; Results). In high dimension (The problem; Discussion). Hard thresholding estimator (The estimator; Results; Simulations).

4 Outline. The matrix recovery setting (Setting; Results). In high dimension (The problem; Discussion). Hard thresholding estimator (The estimator; Results; Simulations).

5 Setting. Background and notations. Vector notations: let $p > 0$. $\langle \cdot, \cdot \rangle$: classical scalar product in $\mathbb{C}^p$. $\|\cdot\|_q$, $q \geq 0$: $l_q$ norm in $\mathbb{C}^p$ (semi-norm for $q = 0$). Matrix notations: let $d > 0$. $\langle \cdot, \cdot \rangle_{tr}$: scalar product for $d \times d$ Hermitian matrices $(A, B)$: $\langle A, B \rangle_{tr} = \mathrm{tr}(A^* B)$. $\|\cdot\|_F$: Frobenius norm, $\|A\|_F^2 = \langle A, A \rangle_{tr} = \sum_i \lambda_i^2$, where the $(\lambda_i^2)_i$ are the eigenvalues of $A^* A$. $\|\cdot\|_*$: trace (or Schatten 1) norm, $\|A\|_* = \sum_i \lambda_i$. $\|\cdot\|_S$: spectral (or Schatten $\infty$) norm, $\|A\|_S = \sup_i \lambda_i$.

6 Setting. The matrix regression setting. For a parameter $\Theta$ and sensing matrices $X^i$ of dimension $d \times d$, one observes noisy data: for $i \leq n$, $Y_i = \mathrm{tr}\big((X^i)^* \Theta\big) + \epsilon_i = \langle X^i, \Theta \rangle_{tr} + \epsilon_i$, where $\epsilon \in \mathbb{R}^n$ is an i.i.d. noise vector.

7 Setting. The matrix regression setting: objective. Let $\mathcal{X}$ be the linear operator such that $\mathcal{X}(A) = \big(\mathrm{tr}((X^i)^* A)\big)_{i \leq n} = \big(\langle X^i, A \rangle_{tr}\big)_{i \leq n}$. Model: $Y = \mathcal{X}(\Theta) + \epsilon$. Problem: given $(Y, \mathcal{X})$, reconstruct $\Theta$. This is the matrix version of the linear regression problem.
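A minimal NumPy sketch of this data-generating model, assuming a real symmetric parameter and Gaussian sensing matrices; the dimensions d, n, k below are illustrative choices, not from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, k = 10, 2000, 2

# Rank-k (here real symmetric) parameter Theta.
G = rng.standard_normal((d, k))
Theta = G @ G.T

# Sensing matrices X^i with i.i.d. N(0, 1) entries.
X = rng.standard_normal((n, d, d))

def sensing_op(A, X=X):
    """The measurement operator: returns (tr((X^i)^T A))_{i <= n}."""
    return np.einsum('ijk,jk->i', X, A)

eps = rng.standard_normal(n)   # i.i.d. noise vector
Y = sensing_op(Theta) + eps    # observed data: Y = X(Theta) + eps
```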

8 Setting. Application to quantum tomography. [Figure: the source corresponds to the state $\theta$; the measurement corresponds to $X$.] See Gross (2011); Flammia et al. (2012); Kahn and Guta (2009); Guta et al. (2012); Barndorff-Nielsen et al. (2003); Butucea, Guta and Kypraios (2015); Alquier et al. (2013); Liu (2011); Gross et al. (2010), etc.

9 Results. The least squares estimator. Model: $Y = \mathcal{X}(\Theta) + \epsilon$. Most natural idea: least squares, i.e. $\hat{\Theta} = \arg\min_T \|\mathcal{X}(T) - Y\|_2^2$, whose solution is the least squares estimator $\hat{\Theta} = (\mathcal{X}^* \mathcal{X})^{-1} \mathcal{X}^*(Y)$, where $\mathcal{X}^*(u) = \sum_{i=1}^n u_i X^i$ is the adjoint of $\mathcal{X}$.
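A short sketch of this estimator via vectorisation, reusing X and Y from the sketch above; it assumes the complete regime $n \geq d^2$ discussed on the next slides.

```python
# Least squares on vectorized matrices: each row of A is vec(X^i),
# so X*X corresponds to A^T A and the estimator solves A vec(T) = Y.
A = X.reshape(n, d * d)
theta_ls, *_ = np.linalg.lstsq(A, Y, rcond=None)
Theta_ls = theta_ls.reshape(d, d)   # least squares estimate of Theta
```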

10 Results. Properties of the least squares estimator. Under standard assumptions on the noise, and invertibility of the design (i.e. existence of $(\mathcal{X}^* \mathcal{X})^{-1}$), one has asymptotically $\hat{\Theta} - \Theta \rightsquigarrow \mathcal{N}\big(0, (\mathcal{X}^* \mathcal{X})^{-1}\big)$, and for sub-Gaussian noise (e.g. bounded), with probability $1 - \delta$, $\|\hat{\Theta} - \Theta\|_F^2 \leq C \frac{d^2 \log(1/\delta)}{n}$, which is minimax optimal over $d \times d$ matrices. These two results enable inference (estimation + confidence statements).

11 Outline. The matrix recovery setting (Setting; Results). In high dimension (The problem; Discussion). Hard thresholding estimator (The estimator; Results; Simulations).

12 The problem. Main problem. Crucial assumption: $\mathcal{X}^* \mathcal{X}$ invertible, i.e. the measurement system must be complete. We thus require $n \geq d^2$. Question: what if this is not the case, i.e. in the high dimensional setting where $n \ll d^2$? Answer: restrict the set of parameters and impose a condition on the design.

13 The problem. Restriction on the set of parameters. Problem: if $n < d^2$, some parameters have the same image even in the absence of noise, so uniform reconstruction over all $d \times d$ matrices is impossible. Solution: restrict the space of parameters. Natural restriction: low rank matrices. We write $M(k)$ for the set of matrices of rank at most $k$.

14 The problem. Design assumption. Good sampling scheme: $\mathcal{X}$ satisfies the matrix Restricted Isometry Property (RIP) on $M(k)$: $\sup_{T \in M(k)} \frac{\big| \frac{1}{n}\|\mathcal{X}(T)\|_2^2 - \|T\|_F^2 \big|}{\|T\|_F^2} \leq \epsilon$. See e.g. Candès and Recht (2009); Candès and Tao (2010); Candès and Plan (2011); Gross (2011); Liu (2011), etc. Examples: random sub-Gaussian design, or random sampling from an incoherent basis (e.g. Pauli)...

15 The problem. Remark on the following pictures. $M(k)$ is not easy to draw, so to convey the intuitions I will resort to the sparse linear regression model $Y = X\theta + \epsilon$, where $\theta$ is $k$-sparse. [Figure: $M(k)$ depicted via the analogous set of 1-sparse vectors.]

16 The problem. First non-convex solution. If $\mathcal{X}$ satisfies the matrix RIP, the measurement system is complete for $M(k)$. The estimator $\hat{\Theta}_0 = \arg\min_{T \in M(k)} \|\mathcal{X}(T) - Y\|_2$ satisfies, with probability larger than $1 - \delta$, $\|\hat{\Theta}_0 - \Theta\|_F^2 \leq C \frac{kd \log(1/\delta)}{n}$. Problem: non-convex, horrible program.

17 The problem. Convex relaxation. Current estimator: $\hat{\Theta}_0 = \arg\min_{T \in M(k)} \|\mathcal{X}(T) - Y\|_2$. Problem: non-convex, horrible program. Idea = convex relaxation: $\hat{\Theta} = \arg\min_{T : \|T\|_* \leq b} \|\mathcal{X}(T) - Y\|_2$, or rather $\hat{\Theta} = \arg\min_T \|\mathcal{X}(T) - Y\|_2^2 + \lambda \|T\|_*$.

18 The problem. Convex relaxation. Matrix Lasso: $\hat{\Theta} = \arg\min_T \|\mathcal{X}(T) - Y\|_2^2 + \lambda \|T\|_*$. [Figure: geometry of the matrix lasso solution $\hat{\Theta}$.]

19 The problem. Convex relaxation. Theorem: if the design satisfies the matrix RIP, and if $\lambda \geq C \sqrt{\frac{d \log(1/\delta)}{n}}$, then the matrix lasso satisfies, with probability larger than $1 - \delta$, $\|\hat{\Theta} - \Theta\|_F^2 \leq C \frac{kd \log(1/\delta)}{n}$. See e.g. Fazel et al. (2010); Candès and Plan (2011); Gross et al. (2010); Flammia et al. (2012); Koltchinskii et al. (2011), etc.
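One standard way to compute the matrix lasso is proximal gradient descent, where the proximal operator of the nuclear norm soft-thresholds singular values. The sketch below is my illustration of that generic scheme, not the talk's implementation; lam, n_iter and the step size are illustrative choices.

```python
def soft_threshold_svals(M, t):
    """Proximal operator of t * nuclear norm: soft-threshold
    the singular values of M at level t."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - t, 0.0)) @ Vt

def matrix_lasso(A, Y, lam, n_iter=200):
    """Proximal gradient descent for the matrix lasso
    argmin_T (1/n) ||A vec(T) - Y||_2^2 + lam ||T||_*
    (loss normalized by n so that lam can take the slide's rate)."""
    n, d2 = A.shape
    d = int(round(np.sqrt(d2)))
    step = n / (2 * np.linalg.norm(A, 2) ** 2)   # 1 / Lipschitz constant
    T = np.zeros((d, d))
    for _ in range(n_iter):
        grad = (2 / n) * (A.T @ (A @ T.ravel() - Y)).reshape(d, d)
        T = soft_threshold_svals(T - step * grad, step * lam)
    return T

# lam at the rate sqrt(d log(1/delta) / n), here with delta = 0.05.
Theta_lasso = matrix_lasso(A, Y, lam=2 * np.sqrt(d * np.log(20) / n))
```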

20 Discussion. Questions. From a minimax perspective, the problem is solved. But: how to implement the matrix lasso efficiently? Or rather, is there an estimator that is computationally efficient? What is the precision of the estimate? Entry-wise results? Limiting distribution?

21 Discussion. Implementation. $\hat{\Theta}$ is defined by an optimisation program, so it is computationally solvable in theory. But in practice? Projected gradient descent in the noisy case: Agarwal et al. (2012). Many works on this in the regression setting: Agarwal et al. (2012); Goldfarb and Ma (2011); Blumensath and Davies (2009); Tanner and Wei (2012); in particular, hard thresholding in the noiseless case.

22 Discussion. Implementation. [Figure: mean squared error landscapes for projected gradient descent on the convex relaxation of the constraint vs. hard thresholding on the actual constraint.]

23 Discussion. Uncertainty quantification. Uncertainty quantification? Results exist only for the linear regression model... Global vs. local results ($\|\cdot\|_F$ vs. $\|\cdot\|_S$). Estimators with explicit limiting distribution, enabling uncertainty quantification: van de Geer et al. (2014); Javanmard and Montanari (2014); Zhang and Zhang (2014). Remark: a minimax confidence set adapting to the sparsity? It does not exist in the linear regression model (Nickl and van de Geer (2014))... but it does exist in the matrix recovery model (Carpentier, Eisert, Gross and Nickl (2015))!

24 Discussion. Uncertainty quantification. Constrained solution: no obvious limiting distribution. Projected (de-biased) solution $\hat{\Theta} + \frac{1}{n} \mathcal{X}^*\big(Y - \mathcal{X}(\hat{\Theta})\big)$: Gaussian limiting distribution. [Figure illustrating the two solutions.]

25 Outline. The matrix recovery setting (Setting; Results). In high dimension (The problem; Discussion). Hard thresholding estimator (The estimator; Results; Simulations).

26 The estimator. Prerequisites. Let $K$ be an upper bound on the rank of the parameter, i.e. $\Theta \in M(K)$. Assume that $\mathcal{X}$ satisfies the matrix RIP: $\sup_{T \in M(2K)} \frac{\big| \frac{1}{n}\|\mathcal{X}(T)\|_2^2 - \|T\|_F^2 \big|}{\|T\|_F^2} \leq c_n(2K)$. We will need: $c_n(2K) \sqrt{K} < 1/4$. For e.g. Gaussian design, or random Pauli design, we have (up to a $\log(d)$ factor for Pauli) $c_n(2K) \lesssim \sqrt{\frac{Kd}{n}}$, and so the condition is satisfied whenever $\frac{K^2 d}{n} \lesssim 1$.
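The RIP constant $c_n(2K)$ is a supremum over all rank-$2K$ matrices and is hard to evaluate exactly; the Monte Carlo probe below (my illustration, not from the talk) only lower-bounds it by sampling random low rank directions.

```python
def rip_deviation_lower_bound(X, rank, n_samples=500):
    """Sample random rank-`rank` matrices T and record the worst
    relative deviation |(1/n) ||X(T)||_2^2 / ||T||_F^2 - 1|.
    This lower-bounds c_n(rank); the true sup may be larger."""
    n, d = X.shape[0], X.shape[1]
    worst = 0.0
    for _ in range(n_samples):
        T = rng.standard_normal((d, rank)) @ rng.standard_normal((rank, d))
        ratio = np.sum(sensing_op(T, X) ** 2) / n / np.linalg.norm(T, 'fro') ** 2
        worst = max(worst, abs(ratio - 1.0))
    return worst
```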

27 The estimator. The hard thresholding estimator. Initial values for the estimator $\hat{\Theta}_0$ and the threshold $T_0$: $\hat{\Theta}_0 = 0 \in \mathbb{R}^{d \times d}$, $T_0 = B \in \mathbb{R}_+$. Set now recursively, for $r \in \mathbb{N}$, $r \geq 1$, $T_r = 4 c_n(2K) \sqrt{K}\, T_{r-1} + C \sqrt{\frac{d \log(1/\delta)}{n}}$, and $\hat{\Theta}_r = \Big[ \hat{\Theta}_{r-1} + \frac{1}{n} \mathcal{X}^*\big(Y - \mathcal{X}(\hat{\Theta}_{r-1})\big) \Big]_{T_r}$, where $[M]_T$ is the matrix where all singular values of $M$ smaller than $T$ are thresholded (set to 0).
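A direct NumPy sketch of these recursions, reusing the vectorised design A from the earlier sketches; the constant C and the number of iterations are hypothetical choices (the talk takes r of order log(n)).

```python
def hard_threshold_svals(M, t):
    """[M]_t: set to zero all singular values of M smaller than t."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.where(s >= t, s, 0.0)) @ Vt

def iht_estimator(A, Y, c_n, K, B, delta, C=1.0, n_steps=50):
    """Iterative hard thresholding: gradient step of size exactly 1,
    then singular value thresholding at the recursively defined T_r."""
    n, d2 = A.shape
    d = int(round(np.sqrt(d2)))
    Theta_r, T_r = np.zeros((d, d)), B
    for _ in range(n_steps):
        T_r = 4 * c_n * np.sqrt(K) * T_r + C * np.sqrt(d * np.log(1 / delta) / n)
        update = Theta_r + (A.T @ (Y - A @ Theta_r.ravel())).reshape(d, d) / n
        Theta_r = hard_threshold_svals(update, T_r)
    return Theta_r
```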

28 The estimator. Interpretations. Low rank projected gradient descent: a projected gradient descent on the set of low rank matrices, but the gradient step is 1 (so not really a gradient descent...). The gradient step of 1 is very important... [Figure: mean squared error landscape with the actual constraint.]

29 The estimator. Interpretations. Application of a contracting operator: we have $\hat{\Theta}_r = \Big[ \hat{\Theta}_{r-1} + \frac{1}{n} \mathcal{X}^*\big(\mathcal{X}(\Theta - \hat{\Theta}_{r-1}) + \epsilon\big) \Big]_{T_r}$. By the condition $c_n(2K)\sqrt{K} < 1/4$, we have $\big\| \frac{1}{n} \mathcal{X}^* \mathcal{X} - \mathrm{Id} \big\|_S \leq 1/4$ on low rank matrices. So the estimator is a multiple application of a spectral contraction (with thresholdings). [Figure: $\Theta - \hat{\Theta}_{r-1}$ and its image $\frac{1}{n} \mathcal{X}^* \mathcal{X}(\Theta - \hat{\Theta}_{r-1})$.]

30 The estimator. Interpretations. Taylor expansion of the inverse function: least squares is $(\mathcal{X}^* \mathcal{X})^{-1} \mathcal{X}^* Y$. Problem: $(\mathcal{X}^* \mathcal{X})^{-1}$ does not exist here. Taylor expansion at order $r$ of $\big(\mathrm{Id}_d - (\mathrm{Id}_d - \frac{1}{n} \mathcal{X}^* \mathcal{X})\big)^{-1}$: $L(r) = \sum_{m=0}^{r} \big(\mathrm{Id}_d - \frac{1}{n} \mathcal{X}^* \mathcal{X}\big)^m$. If the thresholding step is suppressed, the estimator we constructed is of the form $\frac{1}{n} L(r) \mathcal{X}^* Y$. Thresholding between each step controls the small eigenvalues...
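A small numerical check of this identity, under the vectorised setup of the earlier sketches: with the thresholding removed, the iterates coincide with the truncated Neumann series $\frac{1}{n} L(r) \mathcal{X}^* Y$.

```python
# Id - (1/n) X*X acting on vectorized matrices.
M = np.eye(d * d) - A.T @ A / n

# L(r) = sum_{m=0}^{r} M^m, accumulated term by term.
r, L_r, P = 10, np.zeros((d * d, d * d)), np.eye(d * d)
for _ in range(r + 1):
    L_r += P
    P = P @ M
series_est = L_r @ (A.T @ Y) / n

# r + 1 gradient steps of size 1, starting from 0, without thresholding.
iterate = np.zeros(d * d)
for _ in range(r + 1):
    iterate = iterate + A.T @ (Y - A @ iterate) / n

print(np.allclose(series_est, iterate))  # True
```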

31 Results. General result. Theorem: let $r$ be of order $\log(n)$. With probability larger than $1 - \delta$, for any $k \leq K/2$, $\sup_{\Theta \in M(k), \|\Theta\|_F \leq B} \|\Theta - \hat{\Theta}_r\|_S \leq C_1 \sqrt{\frac{d \log(1/\delta)}{n}}$, and also $\sup_{\Theta \in M(k), \|\Theta\|_F \leq B} \mathrm{rank}(\hat{\Theta}_r) \leq k$. Note: minimax optimal results in Frobenius and trace norm follow immediately.

32 Results. Discussion. The bounds are minimax-optimal. The spectral norm bound yields bounds on the entry-wise risk. Adaptive estimator: no need to know $k$. The results also hold in the linear regression setting, with an estimator in the same vein.

33 Results. Result in Gaussian design. Theorem: assume that the entries of the design matrices $X^i$ are i.i.d. Gaussian with mean 0 and variance 1. Then, writing $Z := \frac{1}{\sqrt{n}} \mathcal{X}^*(\epsilon)$ and $\Delta := \sqrt{n}(\hat{\Theta}_r - \Theta) - \frac{1}{\sqrt{n}} \mathcal{X}^* \mathcal{X}(\hat{\Theta}_r - \Theta)$, the de-biased estimator $\hat{\Theta} = \hat{\Theta}_r + \frac{1}{n} \mathcal{X}^*\big(Y - \mathcal{X}(\hat{\Theta}_r)\big)$ satisfies $\sqrt{n}(\hat{\Theta} - \Theta) = \Delta + Z$, where $Z \mid \mathcal{X} \sim \mathcal{N}\big(0, \frac{1}{n} \mathcal{X}^* \mathcal{X}\big)$. Assuming that $\max(K^2 d, K d \log(d)) = o(n)$, we have that $\Delta = o_P(1)$.

34 Results. Discussion. The limiting distribution yields entrywise confidence sets. Bound on the risk of each entry of order $1/n$ (Gaussian concentration). The results also hold in the linear regression setting.

35 Simulations. Simulations for a Gaussian design. Gaussian uncorrelated noise $\epsilon \sim \mathcal{N}(0, I_n)$. Parameter $\Theta$ of rank $k$: $\Theta = \sum_{l=1}^{k} N_l N_l^T$, where $N_l \sim \mathcal{N}(0, I_d)$. We compute the estimator and entrywise confidence intervals.
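A hypothetical end-to-end run of this simulation, stitching together the earlier sketches; the constants (K, delta, the plug-in rate for $c_n(2K)$) are illustrative choices, and the $1/n$ entrywise variance comes from the Gaussian-design theorem above.

```python
from scipy import stats

K, delta = 2, 0.05
Nmat = rng.standard_normal((d, k))
Theta = Nmat @ Nmat.T                      # rank-k parameter
Y = sensing_op(Theta) + rng.standard_normal(n)

c_n = np.sqrt(2 * K * d / n)               # plug-in rate for c_n(2K)
Theta_r = iht_estimator(A, Y, c_n, K,
                        B=np.linalg.norm(Theta, 'fro'), delta=delta)

# De-biased estimate; each entry is approximately N(Theta_ij, 1/n).
Theta_deb = Theta_r + (A.T @ (Y - A @ Theta_r.ravel())).reshape(d, d) / n
half_width = stats.norm.ppf(1 - delta / 2) / np.sqrt(n)
coverage = np.mean(np.abs(Theta_deb - Theta) <= half_width)
print("empirical entrywise coverage:", coverage)
```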

36 Simulations. [Figure: logarithm of the rescaled Frobenius risk of the estimate, as a function of n, for p ∈ {64, 100} and k ∈ {3, 10}.]

37 Simulations. [Figure: logarithm of the rescaled confidence interval length, as a function of n, for p ∈ {64, 100} and k ∈ {3, 10}.]

38 Simulations. [Figure: coverage probability of the confidence intervals, as a function of n, for p ∈ {64, 100} and k ∈ {3, 10}.]

39 Conclusion. We have: minimax-optimal bounds, in particular in spectral norm; an estimator that is very fast to compute; a limiting distribution in the case of a Gaussian design. We want: a limiting distribution for non-Gaussian designs? Sharper bounds on entries for non-Gaussian designs? Results under the true quantum model?

40 Thank you!

41 References I. Agarwal, A., S. Negahban, and M. J. Wainwright (2012). Fast global convergence of gradient methods for high-dimensional statistical recovery. Ann. Statist. 40(5). Alquier, P., C. Butucea, M. Hebiri, K. Meziani, and T. Morimae (2013). Rank penalized estimation of a quantum system. Physical Review A 88. Barndorff-Nielsen, O. E., R. D. Gill, and P. E. Jupp (2003). On quantum statistical inference (with discussion). J. R. Statist. Soc. B 65(5). Blumensath, T. and M. E. Davies (2009). Iterative hard thresholding for compressed sensing. Appl. Comput. Harmon. Anal. 27(3).

42 References II. Butucea, C., M. Guta, and T. Kypraios (2015). Spectral thresholding quantum tomography for low rank states. arXiv preprint. Candès, E. and T. Tao (2010). The power of convex relaxation: near-optimal matrix completion. IEEE Trans. Inform. Theory 56. Candès, E. J. and Y. Plan (2011). Tight oracle bounds for low-rank matrix recovery from a minimal number of random measurements. IEEE Trans. Inform. Theory 57(4). Candès, E. and B. Recht (2009). Exact matrix completion via convex optimization. Found. Comput. Math. 9.

43 References III. Carpentier, A., J. Eisert, D. Gross, and R. Nickl (2015). Uncertainty quantification for matrix compressed sensing and quantum tomography problems. arXiv preprint. Flammia, S. T., D. Gross, Y.-K. Liu, and J. Eisert (2012). Quantum tomography via compressed sensing: error bounds, sample complexity and efficient estimators. New J. Phys. 14(9), 095022. van de Geer, S., P. Bühlmann, Y. Ritov, and R. Dezeure (2014). On asymptotically optimal confidence regions and tests for high-dimensional models. Ann. Statist. 42(3). Recht, B., M. Fazel, and P. A. Parrilo (2010). Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Review 52(3).

44 References IV. Goldfarb, D. and S. Ma (2011). Convergence of fixed-point continuation algorithms for matrix rank minimization. Found. Comput. Math. 11. Gross, D. (2011). Recovering low-rank matrices from few coefficients in any basis. IEEE Trans. Inform. Theory 57(3). Gross, D., Y.-K. Liu, S. T. Flammia, S. Becker, and J. Eisert (2010). Quantum state tomography via compressed sensing. Physical Review Letters 105(15), 150401. Guta, M., T. Kypraios, and I. Dryden (2012). Rank based model selection for multiple ions quantum tomography. New Journal of Physics 14, 105002.

45 References V. Haffner, H., et al. (2005). Scalable multiparticle entanglement of trapped ions. Nature 438. Javanmard, A. and A. Montanari (2014). Confidence intervals and hypothesis testing for high-dimensional regression. J. Mach. Learn. Res. 15(1). Kahn, J. and M. Guta (2009). Local asymptotic normality for finite dimensional quantum systems. Commun. Math. Phys. Koltchinskii, V., K. Lounici, and A. B. Tsybakov (2011). Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion. Ann. Statist. 39(5). Liu, Y.-K. (2011). Universal low-rank matrix recovery from Pauli measurements. Adv. Neural Inf. Process. Syst.

46 References VI. Nickl, R. and S. van de Geer (2014). Confidence sets in sparse regression. Ann. Statist. 41(6). Tanner, J. and K. Wei (2012). Normalized iterative hard thresholding for matrix completion. SIAM J. Sci. Comput. 35, S104-S125. Zhang, C.-H. and S. S. Zhang (2014). Confidence intervals for low dimensional parameters in high dimensional linear models. J. R. Stat. Soc. Ser. B Stat. Methodol. 76.
