Action Recognition on Distributed Body Sensor Networks via Sparse Representation


1 Action Recognition on Distributed Body Sensor Networks via Sparse Representation. Allen Y. Yang <yang@eecs.berkeley.edu>. June 28, 2008. CVPR4HB.

2-3 New Paradigm of Distributed Pattern Recognition

Centralized recognition: powerful processors; (virtually) unlimited memory; (virtually) unlimited bandwidth; observations in a controlled environment; simple sensor management.

Distributed recognition: mobile processors; limited onboard memory; band-limited communications; high percentage of occlusions/outliers; complex sensor networks.

Main constraints for sensor network applications:
1 Conserve power consumption
2 Adapt to network configuration changes

4 d-WAR: Distributed Wearable Action Recognition

Architecture: eight sensors on the human body; locations are given and fixed. Each sensor carries a triaxial accelerometer and a biaxial gyroscope. Sampling frequency: 20 Hz.

Figure: readings from the 8 x-axis accelerometers and x-axis gyroscopes for a stand-kneel-stand sequence.

5-8 Challenges

1 Simultaneous segmentation and classification
2 Individual sensors are not sufficient to classify full-body motions: a single sensor on the upper body cannot recognize lower-body motions, and vice versa
3 Simulate sensor failure and network congestion by activating different subsets of sensors
4 Identity independence (Figure: the same actions performed by two subjects)

9 Outline

1 Overview: a classification framework via $\ell_1$-minimization
2 Individual sensor: obtains limited classification ability and becomes active only when certain events are locally detected
3 Global classifier: adapts to the change of active sensors in the network and boosts multi-sensor recognition

10-11 Sparse Representation

Sparsity: a signal is sparse if most of its coefficients are (approximately) zero.

1 Sparsity in the frequency domain (Figure: 2-D DCT transform)
2 Sparsity in the spatial domain (Figure: gene microarray data)

12-13 Sparsity in the Human Visual Cortex [Olshausen & Field 1997, Serre & Poggio 2006]

1 Feed-forward: no iterative feedback loop
2 Redundancy: many neurons, on average, for each feature representation
3 Recognition: information exchange between stages is not about individual neurons, but rather about how many neurons fire together as a group

14-15 Sparsity and $\ell_1$-Minimization

1 Black gold age [Claerbout & Muir 1973; Taylor, Banks & McCoy 1979] (Figure: deconvolution of a spike train)
2 Basis pursuit [Chen & Donoho 1999]: given $y = Ax$ with $x$ unknown,
  $\hat{x} = \arg\min \|x\|_1 \ \text{subject to} \ y = Ax$
3 The Lasso (least absolute shrinkage and selection operator) [Tibshirani 1996]:
  $\hat{x} = \arg\min \|x\|_1 \ \text{subject to} \ \|y - Ax\|_2 \le \epsilon$
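As a minimal illustration (ours, not from the slides; the matrix sizes, alpha, and threshold are arbitrary choices), scikit-learn's Lasso solves the Lagrangian form of the constrained problem above and recovers a sparse vector from underdetermined measurements:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))       # underdetermined: 50 equations, 200 unknowns
x_true = np.zeros(200)
x_true[[5, 42, 123]] = [1.0, -2.0, 1.5]  # a 3-sparse ground truth
y = A @ x_true + 0.01 * rng.standard_normal(50)

# sklearn minimizes (1/(2n))||y - Ax||_2^2 + alpha*||x||_1, the Lagrangian
# form that matches the constrained Lasso above for a suitable alpha.
lasso = Lasso(alpha=0.01, max_iter=10000).fit(A, y)
support = np.flatnonzero(np.abs(lasso.coef_) > 0.1)
print(support)  # should recover the true support [5, 42, 123]
```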

16-19 Sparse Representation Encodes Membership

1 Notation. Training: for $K$ classes, collect training samples $\{v_{1,1},\dots,v_{1,n_1}\},\dots,\{v_{K,1},\dots,v_{K,n_K}\} \subset \mathbb{R}^D$. Test: present a new $y \in \mathbb{R}^D$, solve for $\mathrm{label}(y) \in \{1,2,\dots,K\}$.
2 Subspace model. Assume $y$ belongs to Class $i$:
  $y = \alpha_{i,1} v_{i,1} + \alpha_{i,2} v_{i,2} + \cdots + \alpha_{i,n_i} v_{i,n_i} = A_i \alpha_i$,
  where $A_i = [v_{i,1}, v_{i,2}, \dots, v_{i,n_i}]$.
3 Nevertheless, Class $i$ is the unknown variable we need to solve for. Sparse representation:
  $y = [A_1, A_2, \dots, A_K]\,[\alpha_1^T, \alpha_2^T, \dots, \alpha_K^T]^T = Ax$.
  Sparse representation encodes membership:
  $x_0 = [0, \dots, 0, \alpha_i^T, 0, \dots, 0]^T \in \mathbb{R}^n$.
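A hedged sketch of the membership readout this encodes: given a sparse code $x$ with $y \approx Ax$, zero out all but one class block ($\delta_i$) and pick the class whose block best explains $y$. The residual rule follows the face-recognition reference cited later; the function and variable names are our own.

```python
import numpy as np

def classify_from_sparse_code(A, labels, x, y):
    """A: D x n training matrix; labels: length-n class id of each column;
    x: sparse code with y ~= A @ x. Returns the estimated class label."""
    best_label, best_residual = None, np.inf
    for i in np.unique(labels):
        delta_i = np.where(labels == i, x, 0.0)   # keep only Class i coefficients
        residual = np.linalg.norm(y - A @ delta_i)
        if residual < best_residual:
            best_label, best_residual = i, residual
    return best_label
```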

20-23 Distributed Action Recognition

1 Training samples: manually segment the actions and normalize them to duration $h$.
2 On each sensor node $i$, stack a training action into vector form:
  $v_i = [x(1),\dots,x(h),\ y(1),\dots,y(h),\ z(1),\dots,z(h),\ \theta(1),\dots,\theta(h),\ \rho(1),\dots,\rho(h)]^T \in \mathbb{R}^{5h}$.
3 Full-body motion. Training sample: $v = [v_1^T, \dots, v_8^T]^T \in \mathbb{R}^{8 \times 5h}$. Test sample: $y = [y_1^T, \dots, y_8^T]^T \in \mathbb{R}^{8 \times 5h}$.
4 Action subspace: if $y$ is from Class $i$,
  $y = \alpha_{i,1} v^{(1)} + \cdots + \alpha_{i,n_i} v^{(n_i)} = A_i \alpha_i$,
  where $v^{(1)}, \dots, v^{(n_i)}$ are the stacked full-body training samples of Class $i$.
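A sketch (assumptions ours: the array names and linear-interpolation resampling) of the per-node stacking above, concatenating $h$ resampled readings of the three accelerometer axes and two gyroscope axes into $v_i \in \mathbb{R}^{5h}$:

```python
import numpy as np

def stack_action(acc_xyz, gyro_tp, h):
    """acc_xyz: (T, 3) accelerometer samples; gyro_tp: (T, 2) gyroscope samples
    (same length T assumed). Resamples each channel to duration h, then stacks."""
    t_old = np.linspace(0.0, 1.0, acc_xyz.shape[0])
    t_new = np.linspace(0.0, 1.0, h)
    channels = [np.interp(t_new, t_old, c) for c in np.hstack([acc_xyz, gyro_tp]).T]
    return np.concatenate(channels)  # [x(1..h), y(1..h), z(1..h), theta(1..h), rho(1..h)]

# full-body sample: stack the 8 node vectors into R^{8 x 5h}
# v = np.concatenate([stack_action(a, g, h) for (a, g) in node_readings])
```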

24-26 Sparse Representation

1 Nevertheless, $\mathrm{label}(y) = i$ is the unknown membership function to solve for. Sparse representation:
  $y = [A_1, A_2, \dots, A_K]\,[\alpha_1^T, \alpha_2^T, \dots, \alpha_K^T]^T = Ax \in \mathbb{R}^{8 \times 5h}$.
2 One solution: $x = [0, 0, \dots, \alpha_i^T, 0, \dots, 0]^T$. Sparse representation encodes membership.
3 Two problems: directly solving the linear system is intractable, and we must seek the sparsest solution.

27-29 Dimensionality Reduction

1 Construct Fisher/LDA features $R_i \in \mathbb{R}^{10 \times 5h}$ on each node $i$:
  $\tilde{y}_i = R_i y_i = R_i A_i x = \tilde{A}_i x \in \mathbb{R}^{10}$.
2 Globally:
  $\tilde{y} = [\tilde{y}_1^T, \dots, \tilde{y}_8^T]^T = \mathrm{diag}(R_1, \dots, R_8)\,[y_1^T, \dots, y_8^T]^T = RAx = \tilde{A}x \in \mathbb{R}^{8 \times 10}$.
3 During the transformation, the data matrix $A$ and the sparse representation $x$ remain unchanged.
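A minimal sketch (ours) of the global projection: the per-node $10 \times 5h$ LDA matrices act as one block-diagonal operator, mapping the stacked measurement to an 80-D feature while $A$ and $x$ are untouched:

```python
import numpy as np
from scipy.linalg import block_diag

def project_global(R_list, y_list):
    """R_list: eight 10 x 5h projections; y_list: eight 5h node vectors."""
    R = block_diag(*R_list)      # 80 x 40h block-diagonal matrix diag(R_1,...,R_8)
    y = np.concatenate(y_list)   # stacked full-body measurement in R^{8 x 5h}
    return R @ y                 # global 80-D feature y_tilde
```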

30-32 Seeking the Sparsest Solution: $\ell_1$-Minimization

1 Ideal solution: $\ell_0$-minimization
  $(P_0)\quad \hat{x} = \arg\min_x \|x\|_0 \ \text{s.t.}\ \tilde{y} = \tilde{A}x$,
  where $\|\cdot\|_0$ simply counts the number of nonzero terms. However, this problem is in general NP-hard.
2 Compressed sensing: under mild conditions, the equivalence relation holds with
  $(P_1)\quad \hat{x} = \arg\min_x \|x\|_1 \ \text{s.t.}\ \tilde{y} = \tilde{A}x$.
3 The $\ell_1$-ball: $\ell_1$-minimization is convex, and its solution equals that of $\ell_0$-minimization.

Reference: Robust face recognition via sparse representation. PAMI, 2008 (in press).
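To make $(P_1)$ concrete, here is a hedged sketch (ours, not the authors' solver) casting the equality-constrained $\ell_1$ problem as a linear program over $z = [x; t]$, using only standard scipy calls:

```python
import numpy as np
from scipy.optimize import linprog

def l1_min(A, y):
    """Solve min ||x||_1 s.t. A x = y via the LP: min sum(t), -t <= x <= t."""
    d, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])   # objective: sum of t
    A_ub = np.block([[np.eye(n), -np.eye(n)],       #  x - t <= 0
                     [-np.eye(n), -np.eye(n)]])     # -x - t <= 0
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([A, np.zeros((d, n))])         # equality constraint A x = y
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * n + [(0, None)] * n)
    return res.x[:n]
```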

33-36 Distributed Recognition Framework

1 Distributed sparse representation:
  $[\tilde{y}_1^T, \dots, \tilde{y}_8^T]^T = R\,[v^{(1)}, \dots, v^{(n)}]\,x$, with per-node components $\tilde{y}_i = R_i\,[v_{i,1}, \dots, v_{i,n}]\,x$.
2 Local classifier: for each node $i$, solve $\tilde{y}_i = R_i\,[v_{i,1}, \dots, v_{i,n}]\,x \in \mathbb{R}^{10}$.
3 Adaptive global classifier: suppose only the first $L$ sensors are available ($L \le 8$). Then
  $[\tilde{y}_1^T, \dots, \tilde{y}_L^T]^T = \mathrm{diag}(R_1, \dots, R_L)\,[v^{(1)}, \dots, v^{(n)}]\,x \in \mathbb{R}^{10L}$,
  where each training column $v^{(j)}$ keeps only the rows belonging to the first $L$ nodes.
4 The sparse representation $x$ and the training matrix $A$ remain invariant.
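A sketch (names ours) of how the global system adapts: only the live nodes contribute their 10 rows, while $x$ is still solved against the same training data, so no retraining is needed when the network configuration changes:

```python
import numpy as np

def adaptive_global_system(R_list, V_list, y_tilde_list, active):
    """R_list[i]: 10 x 5h projection of node i; V_list[i]: 5h x n training
    sub-rows of node i; y_tilde_list[i]: its received 10-D feature;
    active: indices of the currently live sensors."""
    A_tilde = np.vstack([R_list[i] @ V_list[i] for i in active])  # 10L x n
    y_tilde = np.concatenate([y_tilde_list[i] for i in active])   # 10L vector
    return A_tilde, y_tilde  # feed into the l1 solver sketched above
```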

37-39 Rejection of Invalid Actions (Segmentation)

1 Multiple segmentation hypotheses
2 Outlier rejection: when the $\ell_1$ solution is not sparse or is not concentrated on one subspace, the test sample is invalid.

Sparsity Concentration Index:
  $\mathrm{SCI}(x) = \dfrac{K \cdot \max_i \|\delta_i(x)\|_1 / \|x\|_1 - 1}{K - 1} \in [0, 1]$,
  where $\delta_i(x)$ keeps only the coefficients of $x$ associated with Class $i$.
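A direct transcription (ours) of the SCI formula above; $\mathrm{SCI}(x)$ is 1 when all of the $\ell_1$ energy of $x$ sits in one class block and 0 when it is spread evenly over all $K$ classes:

```python
import numpy as np

def sci(x, labels, K):
    """labels: class id of each entry of x; K: total number of classes."""
    l1_norm = np.abs(x).sum()
    max_block = max(np.abs(x[labels == i]).sum() for i in np.unique(labels))
    return (K * max_block / l1_norm - 1.0) / (K - 1.0)

# reject a segmentation hypothesis when sci(x, labels, K) falls below a threshold
```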

40-41 Experiment

Table: precision [%] vs. recall [%] for different subsets of active sensors: {2}, {7}, {2,7}, {1,2,7}, {1-3,7,8}, and {1-8}.

Figure: segmentation results using all 8 sensors: (a) Stand-Sit-Stand, (b) Sit-Lie-Sit, (c) Rotate-Left, (d) Go-Downstairs.

42 Confusion Tables

Figure: confusion tables using (a) sensors 1-8 and (b) sensor 7 only.

43-47 Conclusion

1 Mixture subspace model: 10-D action feature spaces suffice for data communication.
2 Accurate recognition via sparse representation:
  - Full-body network: 99% accuracy with 95% recall
  - Keeping one sensor on the upper body and one on the lower body: 94% accuracy and 82% recall
  - Reduced to single sensors: about 90% accuracy
3 Applications beyond Wii controllers and iPhones:
  - Eldertech: fall detection, mobility monitoring
  - Energy expenditure: lifestyle-related chronic diseases

49 Acknowledgments

Collaborators: Ruzena Bajcsy, Shankar Sastry, Sameer Iyengar (Berkeley); Philip Kuryloski (Cornell); Roozbeh Jafari (UT-Dallas).

Funding support: NSF TRUST Center; ARO MURI: Heterogeneous Sensor Networks (HSN); Telecom Italia Lab at Berkeley.

References:
- Robust face recognition via sparse representation. PAMI, 2008 (in press).
- Donoho. For most large underdetermined systems of equations, the minimal $\ell_1$-norm near-solution approximates the sparsest near-solution. 2004.
- Candes. Compressive sampling. 2006.
- Donoho. Neighborly polytopes and sparse solutions of underdetermined linear equations. 2004.
