Large-Scale Multiple Testing in Medical Informatics


1 Large-Scale Multiple Testing in Medical Informatics
[Diagram: experiment space, model space, data]
IMA Workshop on Large Data Sets in Medical Informatics, November
Rob Nowak
Joint work with R. Castro, J. Haupt, M. Malloy and B. Nazer

2 Motivation: Inferring Biological Pathways
[Figure: microwell array, virus, fruit fly]
13,071 single-gene knock-down cell strains; infect each strain with fluorescing virus; assay after 72 h.
Approximately 100 significant genes/proteins discovered by the Ahlquist and Kawaoka labs at UW-Madison.

3 Motivation: Inferring Biological Pathways
[Figure: microwell array, virus, fruit fly]
13,071 single-gene knock-down cell strains; infect each strain with fluorescing virus; assay after 72 h.
Challenges:
1. very low SNR data and a huge number of experiments and tests
2. non-linear interactions

4 Challenge 1: High-Dimensionality and Low SNR
Drosophila RNAi screen identifies host genes important for influenza virus replication
Linhui Hao, Akira Sakurai, Tokiko Watanabe, Ericka Sorensen, Chairul A. Nidom, Michael A. Newton, Paul Ahlquist & Yoshihiro Kawaoka, Nature, August 2008, doi:10.1038/nature07151
How do they confidently determine the ~100 out of 13K genes hijacked for virus replication from extremely noisy data?
Sequential Experimental Design:
Stage 1: assay all 13K strains, twice; keep all with significant fluorescence in one or both assays for the 2nd stage (13K -> 1K)
Stage 2: assay the remaining 1K strains, 6-12 times; retain only those with statistically significant fluorescence (1K -> 100)
vastly more efficient than replicating all 13K experiments many times
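A minimal simulation sketch of this two-stage idea; the thresholds, effect size, and replicate counts below are assumed for illustration, not taken from the published screen:

```python
import numpy as np

# Sketch of the two-stage screen: Stage 1 assays every strain twice and keeps anything
# significant in either assay; Stage 2 re-assays the survivors with more replicates.
rng = np.random.default_rng(0)
n, n_hits, mu = 13_071, 100, 3.0           # strains, true hits, per-assay effect size (assumed)
signal = np.zeros(n)
signal[:n_hits] = mu

# Stage 1: two noisy assays per strain; loose threshold (high FPR, low FNR).
stage1 = signal[:, None] + rng.normal(size=(n, 2))
keep = (stage1 > 2.0).any(axis=1)
print("Stage 1 survivors:", keep.sum())

# Stage 2: average 8 replicate assays of the survivors; strict threshold.
stage2 = signal[keep][:, None] + rng.normal(size=(int(keep.sum()), 8))
hits = np.flatnonzero(keep)[stage2.mean(axis=1) > 1.0]
print("Stage 2 discoveries:", hits.size, "| true hits among them:", (hits < n_hits).sum())
```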

5 Feedback from Data Analysis to Data Collection
[Diagram: experiment space, model space, data; high-throughput experiments; model space = sets of genes critical to a certain function/process; data = microarray or assay datasets]
Optimized multi-stage designs controlling the false discovery or the family-wise error rate, S. Zehetmayer, P. Bauer and M. Posch, Statist. Med. 2008; 27:

6 Challenge 2: Sparsity & Nonlinear Effects
[Figure: genome -> knockdown -> hijacked proteins, for a linear pathway vs. redundant pathways]
Linear Sparsity: a single knockdown produces a detectable effect.
Multilinear Sparsity: with redundant pathways, a single knockdown has little or no observable effect!
Detecting redundant pathways requires pairwise knockdowns, and brute-force examination of all such knockdowns would require millions of experiments!

7 Outline of Talk
1. Sequential Experimental Designs for High-Dimensional Testing
   thresholds for recovery in the high-dimensional limit:
   non-adaptive designs: SNR ~ log n
   sequential designs: SNR ~ arbitrarily slowly growing function of n
2. Compressed Sensing of Sparse Multilinear Functions
   number of compressed sensing measurements for sparse recovery:
   linear sparsity: K ~ S log n
   multilinear sparsity: K ~ min{S^2 log n, S log^3(S) log n, S^α log^α n}, where α ≥ 1 depends on the pattern of sparsity

8 Multiple Testing Problem
[Figure: microwell array, fruit fly; plot of fluorescence level vs. genes]
13,071 single-gene knock-down cell strains; infect each strain with fluorescing virus; assay after 72 h.
Which strains have statistically significant deviations in fluorescence?

9 Idealized Example
non-sequential design (n cell strains, k samples each): measure each cell strain with equal precision/SNR, then threshold to control false-positive error; it may be impossible to reliably separate signals from noise.
two-stage design (adaptively allocate kn samples): the first stage has a large false-positive rate but low false-negative rate; the larger SNR in the second stage may make it easier to separate signals from noise.
Under a fixed sensing/experimental budget, does this two-stage design (or some other sequential design) provide better error control than the non-sequential design?

10 Model Selection
Challenge: huge number of possible models, each with a different pattern of sparsity
[Wired Science article by Alexis Madrigal, September 18: "Scanning Dead Salmon in fMRI Machine Highlights Risk of Red Herrings"]

11 The Multiple Testing Problem

12

13 Single Experiment Model
y_i = x_i + z_i, i = 1, ..., n, with z_i iid N(0, 1)
sparsity: x_i = 0 except on a small subset S ⊂ {1, ..., n} where x_i = µ > 0
[Plot: y vs. x]
Suppose we want to locate just one signal component: î = arg max_i y_i
Even if no signal is present, max_i y_i ≈ sqrt(2 log n)
It is impossible to reliably detect signal components if µ < sqrt(2 log n)
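A quick numerical check of the sqrt(2 log n) noise level; this is a sketch with assumed sizes, not taken from the slides:

```python
import numpy as np

# Pure noise alone already reaches roughly the sqrt(2 log n) level, so weaker signal
# components cannot be reliably located from a single non-adaptive experiment.
rng = np.random.default_rng(0)
n = 13_071
z = rng.normal(size=(200, n))              # 200 independent noise-only experiments
print("average of max_i z_i:", z.max(axis=1).mean())
print("sqrt(2 log n)       :", np.sqrt(2 * np.log(n)))
```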

14 An Alternative: Sequential Experimental Design
Instead of the usual non-adaptive observation model
y_i = x_i + z_i, i = 1, ..., n,
suppose we are able to sequentially collect several independent measurements of each component of x, according to
y_{i,j} = x_i + γ_{i,j}^{-1/2} z_{i,j}, i = 1, ..., n, j = 1, ..., k,
where
j indexes the measurement steps
k denotes the total number of steps
z_{i,j} iid N(0, 1), so that γ_{i,j}^{-1/2} z_{i,j} ~ N(0, γ_{i,j}^{-1})
γ_{i,j} ≥ 0 controls the precision of each measurement
The total precision budget is constrained, but the choice of γ_{i,j} can depend on past observations {y_{i,l}}_{l<j}.

15 Experimental (Precision) Budget
sequential measurement model: y_{i,j} = x_i + γ_{i,j}^{-1/2} z_{i,j}, i = 1, ..., n, j = 1, ..., k
The precision parameters {γ_{i,j}} are required to satisfy
sum_{j=1}^{k} sum_{i=1}^{n} γ_{i,j} ≤ n
For example, the usual non-adaptive, single-measurement model corresponds to taking k = 1 and γ_{i,1} = 1, i = 1, ..., n. This baseline can be compared with adaptive procedures by allowing k > 1 and variable {γ_{i,j}} satisfying the budget.
Precision parameters control the SNR per component. SNR is increased/decreased by more/fewer repeated samples or longer/shorter observation times: allocate precision sequentially and adaptively.

16 Sequential Thresholding
initialize: S_0 = {1, ..., n}, γ_{i,j}^{-1} = 2
for j = 1, ..., k:
  1) measure: y_{i,j} ~ N(x_i, 2), i ∈ S_{j-1}
  2) threshold: S_j = {i : y_{i,j} > 0}
end
output: S_k = {i : y_{i,k} > 0}
precision: γ_{i,j} = 1/2 if component i is measured at step j, 0 otherwise
define the sparsity level s := |S|
total precision budget:
E[ sum_{i,j} γ_{i,j} ] = (1/2) sum_{j=1}^{k} E|S_{j-1}| ≈ sum_{j=1}^{k} ( (n - s)/2^j + s/2 ) ≤ n - s + ks/2 ≈ n (when n >> s)
probability of error:
P(S_k ≠ S) = P( {S^c ∩ S_k ≠ ∅} ∪ {S ∩ S_k^c ≠ ∅} ) ≤ P(S^c ∩ S_k ≠ ∅) + P(S ∩ S_k^c ≠ ∅)
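A short sketch of the procedure above in code; the parameters n, s, µ, and k are assumed for illustration:

```python
import numpy as np

# Sequential thresholding sketch: each pass spends precision 1/2 per surviving index
# (observation variance 2), keeps only positive observations, and the final pass
# thresholds at zero to produce S_k.
def sequential_thresholding(x, k, rng):
    survivors = np.arange(x.size)                  # S_0 = {1, ..., n}
    for _ in range(k - 1):
        y = x[survivors] + np.sqrt(2) * rng.normal(size=survivors.size)
        survivors = survivors[y > 0]               # S_j = {i : y_{i,j} > 0}
    y = x[survivors] + np.sqrt(2) * rng.normal(size=survivors.size)
    return survivors[y > 0]                        # S_k

rng = np.random.default_rng(0)
n, s, mu, k = 2**14, 128, 4.0, 14                  # assumed sizes
x = np.zeros(n)
true_support = rng.choice(n, s, replace=False)
x[true_support] = mu
estimate = sequential_thresholding(x, k, rng)
print("false positives:", np.setdiff1d(estimate, true_support).size,
      "| misses:", np.setdiff1d(true_support, estimate).size)
```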

17 Idealized Example
threshold at zero and re-measure only those components that survive; repeat several times
most of the true signal components survive several thresholding steps; almost all of the noise components do not

18 Low False-Negative, High False-Positive Rates
[Figure: axis marked at 0 and µ]

19 False Positives
P(S_k ≠ S) ≤ P(S^c ∩ S_k ≠ ∅) + P(S ∩ S_k^c ≠ ∅)
P(S^c ∩ S_k ≠ ∅) = P( ∪_{i ∉ S} ∩_{j=1}^{k} {y_{i,j} > 0} )
  ≤ sum_{i ∉ S} P( ∩_{j=1}^{k} {y_{i,j} > 0} )
  = sum_{i ∉ S} 2^{-k} = (n - s) 2^{-k}

20 False Negatives
P(S_k ≠ S) ≤ P(S^c ∩ S_k ≠ ∅) + P(S ∩ S_k^c ≠ ∅)
P(S ∩ S_k^c ≠ ∅) = P( ∪_{j=1}^{k} ∪_{i ∈ S} {y_{i,j} < 0} )
  ≤ (ks/2) exp(-µ^2/4)

21 Probability of Error Bound
P(S_k ≠ S) ≤ P(S^c ∩ S_k ≠ ∅) + P(S ∩ S_k^c ≠ ∅)
  ≤ (n - s) 2^{-k} + (ks/2) exp(-µ^2/4)
  = (n - s) 2^{-k} + (1/2) exp( -(µ^2 - 4 log(ks)) / 4 )
Consider the high-dimensional limit as n → ∞ and take k = (1 + ε) log_2 n, so that (n - s) 2^{-k} → 0:
P(S_k ≠ S) ≤ (n - s) 2^{-k} + (1/2) exp( -(µ^2 - 4 log(s (1 + ε) log_2 n)) / 4 )
The second term tends to zero if µ ≥ sqrt(c log(s log_2 n)), for any c > 4
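A small worked evaluation of the two terms of this bound, with assumed parameters, to see how they behave as µ grows:

```python
import numpy as np

# Evaluate the false-positive and false-negative terms of the union bound above
# for concrete (assumed) n, s, eps, with k = (1 + eps) log2(n) passes.
n, s, eps = 2**14, 128, 0.1
k = int(np.ceil((1 + eps) * np.log2(n)))
for mu in (3.0, 4.0, 5.0, 6.0):
    false_pos_term = (n - s) * 2.0**(-k)
    false_neg_term = 0.5 * k * s * np.exp(-mu**2 / 4)
    print(f"mu={mu}: false-positive term {false_pos_term:.3f}, "
          f"false-negative term {false_neg_term:.3f}")
```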

22 Gains of Sequential Design
non-sequential (necessary): µ ≥ sqrt(2 log n)
sequential thresholding (sufficient): µ ≥ sqrt(c (log s + log log_2 n)), with c > 4; with a bit more work we can show c > 2 suffices
sequential lower bound (necessary for P(S_k ≠ S) ≤ ε): µ ≥ sqrt(2 log(s/ε)); achievable by the SPRT, but requires perfect knowledge of s
greater sensitivity for the same precision budget, or lower experimental requirements for equivalent sensitivity
Rui Castro (Eindhoven), Jarvis Haupt (Minnesota), Matt Malloy (Madison)

23 Biology Example
13,071 single-gene knock-down cell strains
[Plot: prob(error) vs. SNR (dB), showing a 3 dB gain; log s vs. log n]
sequential thresholding is about twice as sensitive (for an equal experimental budget) or requires half the number of experiments (for the same sensitivity)

24 Breaking the log(s) Barrier
probability of error: P(one or more false discoveries) ≤ α (too conservative)
Alternative: # false discoveries ≤ α |Ŝ|
false-discovery proportion: FDP(Ŝ) := |Ŝ \ S| / |Ŝ| = # falsely discovered components / # discovered components
non-discovery proportion: NDP(Ŝ) := |S \ Ŝ| / |S| = # missed components / # true non-zero components
assumptions: Gaussian model, sublinear sparsity level s = n^{1-β}, β ∈ (0, 1)
To guarantee that FDP and NDP tend to zero as n → ∞:
non-sequential: µ ≥ sqrt(2β log n) (Ingster, Jin & Donoho)
sequential thresholding: µ ≥ arbitrarily slowly growing function of n
sequential thresholding has virtually no dependence on dimension or sparsity level, effectively eliminating the curse of dimensionality altogether
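Small helper functions computing FDP and NDP as defined above; this is a sketch and the function names are our own:

```python
# False-discovery and non-discovery proportions for a discovered set S_hat
# against the true support S, following the definitions on this slide.
def fdp(S_hat, S):
    S_hat, S = set(S_hat), set(S)
    return len(S_hat - S) / max(len(S_hat), 1)    # falsely discovered / discovered

def ndp(S_hat, S):
    S_hat, S = set(S_hat), set(S)
    return len(S - S_hat) / max(len(S), 1)        # missed / true non-zero

print(fdp([1, 2, 3, 4], [2, 3, 5]), ndp([1, 2, 3, 4], [2, 3, 5]))  # 0.5, 0.333...
```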

25 Example
n = 2^14, ||x||_0 = sqrt(n) = 128
[Plot: non-discovery rate of a single experiment vs. non-discovery rate of the sequential test; gain ≈ log(16384) ≈ 10; 5% false discovery rate for both methods]

26 Challenge 2: Nonlinearities
[Figure: genome -> knockdown(s) -> hijacked proteins; single knockdown: no effect; double knockdown: detectable effect]
must knock down both redundant genes to see an effect!
(13,071 choose 2) ≈ 85,000,000 possible two-fold gene deletion strains!

27 Sparse Interaction Models
Knockdown model: y = sum_i a_i x_i^{(1)} + sum_{i<j} a_i a_j x_{ij}^{(2)}
Approximate the output (virus reproduction) with a sparse bilinear system.
x_i^{(1)}: non-zero iff gene i is critical to the pathway
x_{ij}^{(2)}: non-zero iff gene pair (i, j) is critical to the pathway
a_i: 1 if gene i is knocked down; 0 otherwise
sparsity = most coefficients are 0

28 Sensing Sparse Interactions
Linear model: y(x) = sum_i x_i a_i
Bilinear model: y(x) = sum_i x_i^{(1)} a_i + sum_{i<j} x_{i,j}^{(2)} a_i a_j
(13,071 choose 2) ≈ 85,000,000 possible pairwise interactions!
Since most coefficients, {x_i} or {x_i^{(1)}, x_{ij}^{(2)}}, are zero, our goal is to identify the critical components and interactions using very few measurements.
a = {+, -, +, +, -, -, -, +}: a random bit sequence, e.g., a random selection of multiple simultaneous gene knock-downs
input a -> model x -> output y(x)
collect K << size(x) measurements y_1, y_2, ..., y_K using K random inputs
(related work also by G. Giannakis, U. Minnesota)

29 (Linear) Compressed Sensing
This is the conventional compressed sensing problem for the linear model:
y_k = sum_i a_{ki} x_i, k = 1, ..., K
x ∈ R^n, a ∈ {-1, +1}^n
sparse x: ||x||_0 = S << n
find a sparse solution to y = Ax
If the measurement matrix satisfies the restricted isometry property (RIP) with δ_{2S} < sqrt(2) - 1, where for all S-sparse vectors
(1 - δ_S) ||x||_2^2 ≤ ||Ax||_2^2 ≤ (1 + δ_S) ||x||_2^2,
then x can be recovered from y by convex optimization:
min ||z||_1 subject to Az = y
RIP holds with high probability if K ≥ c S log(n/S)
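A minimal sketch of this recovery pipeline, assuming a random ±1 measurement matrix and solving the l1 problem as a linear program with SciPy; the sizes are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

# Basis-pursuit sketch: recover an S-sparse x in R^n from K random +/-1 measurements
# by solving  min ||z||_1  s.t.  Az = y,  rewritten as an LP in (u, v) with z = u - v.
rng = np.random.default_rng(0)
n, S, K = 200, 5, 60                                     # assumed sizes, K > c*S*log(n/S)
x = np.zeros(n)
x[rng.choice(n, S, replace=False)] = rng.normal(size=S)

A = rng.choice([-1.0, 1.0], size=(K, n)) / np.sqrt(K)    # normalized Rademacher matrix
y = A @ x

c = np.ones(2 * n)                                       # objective: sum(u) + sum(v) = ||z||_1
A_eq = np.hstack([A, -A])                                # equality constraint A(u - v) = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n), method="highs")
z_hat = res.x[:n] - res.x[n:]
print("recovery error:", np.linalg.norm(z_hat - x))
```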

30 Multilinear Compressed Sensing
Multilinear model: y = sum_{i_1 < i_2 < ... < i_D} a_{i_1} a_{i_2} ... a_{i_D} x_{i_1 i_2 ... i_D}
||x||_0 = S << N, x ∈ R^N, a ∈ {-1, 1}^n, N = (n choose D)
y is called a Rademacher chaos of order D
Compressed sensing problem for the multilinear model:
K measurements of this form: y = [y_1 ... y_K]^T
find a sparse solution to y = Ax
the matrix A is now composed of monomials in the a_{i,j}
Does it satisfy the RIP?
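A small sketch of how such a measurement matrix can be assembled from monomials of random ±1 inputs in the bilinear case (D = 2); the sizes are assumed for illustration:

```python
import numpy as np
from itertools import combinations

# Build the order-2 (bilinear) measurement matrix whose columns are the monomials
# a_i * a_j of K random +/-1 input vectors.
rng = np.random.default_rng(1)
n, K = 12, 40                                        # assumed sizes
pairs = list(combinations(range(n), 2))              # N = (n choose 2) interaction terms
inputs = rng.choice([-1.0, 1.0], size=(K, n))        # K random knockdown patterns
A = np.stack([[a[i] * a[j] for (i, j) in pairs] for a in inputs]) / np.sqrt(K)

G = A.T @ A                                          # empirical Gram matrix
off_diag = G - np.diag(np.diag(G))
print("A has shape", A.shape)                        # (K, N): rows = measurements
print("max |off-diagonal Gram entry|:", np.abs(off_diag).max())  # shrinks like 1/sqrt(K)
```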

31 On Average, Things Look Good
RIP: (1 - δ_S) ||x||_2^2 ≤ ||Ax||_2^2 ≤ (1 + δ_S) ||x||_2^2
isotropic measurements: E[||Ax||^2] = ||x||_2^2
symmetric binary random inputs: P(a_{i,j} = +1) = P(a_{i,j} = -1) = 1/2
Linear CS: n = 3 inputs, K = 2 measurements and D = 1,
A = (1/sqrt(2)) [ a_{1,1} a_{1,2} a_{1,3} ; a_{2,1} a_{2,2} a_{2,3} ],  E[A^T A] = Identity
Bilinear CS: n = 3 inputs, K = 2 measurements and D = 2,
A = (1/sqrt(2)) [ a_{1,1}a_{1,2}  a_{1,1}a_{1,3}  a_{1,2}a_{1,3} ; a_{2,1}a_{2,2}  a_{2,1}a_{2,3}  a_{2,2}a_{2,3} ],  E[A^T A] = Identity

32 What about the distributions?
y = Ax,  E||y||_2^2 = ||x||_2^2
P( | ||y||_2^2 - ||x||_2^2 | > t ||x||_2^2 ) ≤ exp(-poly(t)) ?
Gaussian tails (as in linear CS) or heavy tails?

33 Tail Behavior
Best case: decoupled chaos
y = a_1 a_2 x_{1,2} + a_3 a_4 x_{3,4} + ... + a_{2k-1} a_{2k} x_{2k-1,2k}
  = ã_1 x_{1,2} + ã_2 x_{3,4} + ... + ã_k x_{2k-1,2k}
equivalent to iid binary symmetric sensing
subgaussian tails, independent of D: P( |y^2 - ||x||_2^2| > t ) ≤ exp(-ct)
Worst case: strongly coupled chaos
y = sum_{1 ≤ i < j ≤ l} a_i a_j x_{i,j},  k := (l choose 2)
significant probability of large deviations from the mean: if x_{i,j} = 1/sqrt(k), then P(y^2 ≥ k) = 2^{-l} = 2^{-c k^{1/2}}
heavy tails depending on D: P( |y^2 - ||x||_2^2| > t ) ≈ exp(-c t^{1/D})
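A Monte-Carlo sketch contrasting the two cases; l, the number of trials, and the thresholds are assumed, and both chaoses are normalized so that E[y^2] = 1:

```python
import numpy as np

# Decoupled vs. strongly coupled order-2 chaos: same second moment, different tails.
rng = np.random.default_rng(0)
trials, l = 200_000, 8
k = l * (l - 1) // 2                                   # (l choose 2) coefficients

# decoupled chaos: k independent +/-1 terms, each with coefficient 1/sqrt(k)
decoupled = rng.choice([-1.0, 1.0], size=(trials, k)).sum(axis=1) / np.sqrt(k)

# coupled chaos: y = (1/sqrt(k)) * sum_{i<j} a_i a_j for one length-l +/-1 vector a,
# using the identity sum_{i<j} a_i a_j = ((sum_i a_i)^2 - l) / 2
a = rng.choice([-1.0, 1.0], size=(trials, l))
coupled = ((a.sum(axis=1) ** 2 - l) / 2) / np.sqrt(k)

for t in (4.0, 9.0, float(k)):
    print(f"t={t:5.1f}:  P(decoupled y^2 >= t) = {(decoupled**2 >= t).mean():.5f}   "
          f"P(coupled y^2 >= t) = {(coupled**2 >= t).mean():.5f}")
```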

34 Combinatorial Dimension of Rademacher Chaos
The combinatorial dimension 1 ≤ α ≤ D measures the level of dependence introduced by a particular pattern of sparsity.
y = sum_{i_1 < i_2 < ... < i_D} a_{i_1} a_{i_2} ... a_{i_D} x_{i_1 i_2 ... i_D}
Blei-Janson '04: a Rademacher chaos with combinatorial dimension α satisfies
exp(-c_1 t^{1/α}) ≤ P(y^2 > t) ≤ exp(-c_2 t^{1/α})
tails are light to heavy, depending on 1 ≤ α ≤ D

35 Dependencies Matter (in practice)
RIP: (1 - δ_S) ||x||_2^2 ≤ ||Ax||_2^2 ≤ (1 + δ_S) ||x||_2^2
[Plot: empirical RIP constant vs. support size S for linear, bilinear, and trilinear measurement matrices, with K = 5S measurements]

36 Dependencies Matter (in theory)
K = number of measurements needed to recover S-sparse multilinear forms
Gershgorin: K ~ S^2 log N (empirical 2nd moment bounds, union bound)
Rudelson-Vershynin: K ~ S (log^3 S)(log N) (empirical 2nd moment bounds, no union bound)
Rademacher chaos: K ~ S^α log^α(N/S), 1 ≤ α ≤ D (tail bounds, union bound)
compare with the linear CS bound: K ~ S log(N/S)

37 Gershgorin Bound
i) Control each element of the (partial) Gram matrix G_T = A_T^T A_T using Hoeffding's inequality, and bound the probability that G_T is approximately diagonal: G_T ∈ R^{S×S} with diagonal entries close to 1 and off-diagonal entries of magnitude at most δ_S/S.
ii) Gershgorin's disc theorem guarantees that the eigenvalues lie in the range
g_ii - sum_{j ≠ i} |g_ij| ≤ λ_i(G_T) ≤ g_ii + sum_{j ≠ i} |g_ij|
iii) union bound over all (N choose S) sparsity patterns.
RIP holds if K ≥ c S^2 log N
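A numerical sketch of step ii), assuming a small bilinear monomial matrix and one sparsity pattern; the eigenvalues of the partial Gram matrix are compared with the Gershgorin intervals:

```python
import numpy as np
from itertools import combinations

# Form G_T = A_T^T A_T for S columns of a bilinear +/-1 monomial matrix and check that
# its eigenvalues lie inside the hull of the Gershgorin intervals
# [g_ii - sum_{j != i} |g_ij|, g_ii + sum_{j != i} |g_ij|].
rng = np.random.default_rng(0)
n, K, S = 12, 400, 6                                          # assumed sizes
pairs = list(combinations(range(n), 2))                       # all (i, j) monomial indices
T = rng.choice(len(pairs), size=S, replace=False)             # one sparsity pattern
inputs = rng.choice([-1.0, 1.0], size=(K, n))
A_T = np.stack([[a[pairs[t][0]] * a[pairs[t][1]] for t in T]
                for a in inputs]) / np.sqrt(K)

G_T = A_T.T @ A_T
radii = np.abs(G_T).sum(axis=1) - np.abs(np.diag(G_T))
print("eigenvalues        :", np.round(np.linalg.eigvalsh(G_T), 3))
print("Gershgorin interval:", round(float((np.diag(G_T) - radii).min()), 3),
      "to", round(float((np.diag(G_T) + radii).max()), 3))
```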

38 Heavy-Tailed Restricted Isometries
Theorem 1 (Vershynin). Let Ã be a K × N measurement matrix whose rows ã_i^T are independent isotropic random vectors in R^N. Let B be a number such that all entries |ã_{ij}| ≤ B almost surely. Then the normalized matrix A = (1/sqrt(K)) Ã satisfies the following for K ≤ N, for every sparsity level S ≤ N and 0 < ε < 1: if the number of measurements satisfies
K ≥ C ε^{-2} S log N log^3(S)
then the RIP constant δ_S of A satisfies E[δ_S] ≤ ε.
Check the conditions:
isotropy: G := A^T A, E[G] = Identity
the elements of Ã are bounded by 1.

39 Chaos Tail Bound
Lemma 1. Assume that y_k, k = 1, ..., K, are i.i.d. Rademacher chaos variables of order D with combinatorial dimension 1 ≤ α ≤ D and E[y_k^2] = 1. There exist constants c, C > 0 such that
P( | (1/K) sum_{k=1}^{K} y_k^2 - 1 | > t ) ≤ C exp( -c min(K t^2, K^{1/α} t^{1/α}) )
proof technique:
Blei-Janson chaos tail bounds
moment bound for sums of symmetric i.i.d. variables due to R. Latala
apply the lemma and a union bound over an ε-net for sparse vectors (technique from Baraniuk-Davenport-DeVore-Wakin '08)
RIP holds if K ≥ C S^α log^α(N/S)

40 Conclusions
Compressed Sensing of Sparse Multilinear Functions
number of compressed sensing measurements for sparse recovery:
linear sparsity: K ~ S log n
multilinear sparsity: K ~ min{S^2 log n, S log^3(S) log n, S^α log^α n}, where α ≥ 1 depends on the pattern of sparsity
Sparse Interactions: Identifying High-Dimensional Multilinear Systems via Compressed Sensing, B. Nazer and RN, Allerton 2010
Sequential Experimental Designs for High-Dimensional Testing
thresholds for recovery in the high-dimensional limit:
non-adaptive designs: SNR ~ log n
sequential designs: SNR ~ arbitrarily slowly growing function of n
Distilled Sensing: Adaptive Sampling for Sparse Detection and Estimation, J. Haupt, R. Castro, and RN, arXiv (v2)
