Machine Learning for Signal Processing Sparse and Overcomplete Representations
1 Machine Learning for Signal Processing Sparse and Overcomplete Representations Abelino Jimenez (slides from Bhiksha Raj and Sourish Chaudhuri) Oct 1, 2017
2 So far: Data = Bases x Weights. Data-independent bases and bases learned from data (ICA, PCA, NNMF), with #(bases) <= dim(X)
3 So far Can we use the same structure to identify basic units that compose the signal?
4 Key Topics in this Lecture: Basics; component-based representations; overcomplete and sparse representations, dictionaries; pursuit algorithms; how to learn a dictionary; why is an overcomplete representation powerful?
5 Representing Data Dictionary (codebook)
6 Representing Data Dictionary Atoms
7 Representing Data Dictionary Atoms Each atom is a basic unit that can be used to compose larger units.
8 Representing Data Atoms (slides 8 through 13 show image examples of atoms composing data)
14 Representing Data Weights w1, w2, ..., w7 applied to the dictionary atoms
15 Representing Data Linear combination of elements in the Dictionary (slides 15 through 17 build up the example)
18 Quick Linear Algebra Refresher Remember, #(Basis Vectors) = #unknowns. Basis Vectors (from Dictionary), Weights, Input data
19 Overcomplete Representations What is the dimensionality of the input image? (say a 64x64 image) 4096. What is the dimensionality of the dictionary? (each image = 64x64 pixels) 4096 x N
20 Overcomplete Representations The dictionary is 4096 x N ???
21 Overcomplete Representations The dictionary is 4096 x N VERY LARGE!!!
22 Overcomplete Representations If N > 4096 (as it likely is) we have an overcomplete representation
23 Overcomplete Representations More generally: if #(basis vectors) > dimensionality of the input, we have an overcomplete representation
24 Quick Linear Algebra Refresher Remember, #(Basis Vectors) = #unknowns. Basis Vectors (from Dictionary), Weights, Input data
25 Dictionary-based Representations Overcomplete dictionary-based representations are composition-based representations with more bases than the dimensionality of the data: X = Dα, where the bases matrix D is wide (more bases than dimensions)
26 Why Dictionary-based Representations? Dictionary-based representations are semantically more meaningful. They enable content-based description: bases can capture entire structures in data, e.g. notes in music, or image structures (such as faces) in images. They enable content-based processing: reconstructing, separating, denoising, and manipulating speech/music signals; coding, compression, etc. Statistical reasons: we will get to that shortly.
27 Problems How to obtain the dictionary, which will give us meaningful representations? How to compute the weights? X = Dα
29 Quick Linear Algebra Refresher Remember, #(Basis Vectors) = #unknowns. Basis Vectors, Weights, Input data. When can we solve for α?
30 Quick Linear Algebra Refresher A square, full-rank D gives a unique solution; a tall D (more equations than unknowns) may have no exact solution; a wide D (more unknowns than equations) has infinite solutions
31 Quick Linear Algebra Refresher Our case is the wide D: infinite solutions
32 Using Pseudo-Inverse?
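The pseudo-inverse question deserves a concrete answer. Below is a minimal numpy sketch (not from the original slides; all variable names are illustrative): for a wide D, the pseudo-inverse returns the minimum-L2-norm solution, which reproduces X exactly but is dense rather than sparse.

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))    # wide dictionary: 20-dim data, 50 atoms

# A 3-sparse ground truth and the signal it generates
alpha_true = np.zeros(50)
alpha_true[[3, 17, 41]] = [1.0, -2.0, 0.5]
x = D @ alpha_true

# The pseudo-inverse picks the minimum-L2-norm solution of x = D @ alpha...
alpha_pinv = np.linalg.pinv(D) @ x

# ...which fits x exactly but is dense: typically all ~50 entries are non-zero
print(np.allclose(D @ alpha_pinv, x))        # True
print(np.sum(np.abs(alpha_pinv) > 1e-8))     # close to 50, not 3
```

So the pseudo-inverse answers the wrong question for us: it spreads energy over all atoms instead of selecting a few.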
33 Overcomplete Representation X = Dα, with α unknown; #(Basis Vectors) > dimensionality of the input
34 Representing Data Using bases that we know But no more than k=4 bases
35 Overcompleteness and Sparsity To solve an overcomplete system of the type X = Dα: make assumptions about the data. Suppose we say that X is composed of no more than a fixed number k of bases from D (k << dim(X)). The term bases is an abuse of terminology. Now we can find the set of k bases that best fit the data point X.
36 Overcompleteness and Sparsity Atoms But no more than k=4 bases are active
37 Overcompleteness and Sparsity Atoms But no more than k=4 bases
38 No more than 4 bases X = Dα
39 No more than 4 bases ONLY THE α COMPONENTS CORRESPONDING TO THE 4 KEY DICTIONARY ENTRIES ARE NON-ZERO: X = Dα
40 No more than 4 bases ONLY THE α COMPONENTS CORRESPONDING TO THE 4 KEY DICTIONARY ENTRIES ARE NON-ZERO: X = Dα. MOST OF α IS ZERO!! α IS SPARSE
41 Sparsity - Definition Sparse representations are representations that account for most or all information of a signal with a linear combination of a small number of atoms.
42 The Sparsity Problem We don't really know k. You are given a signal X. Assuming X was generated using the dictionary, can we find the α that generated it?
43 The Sparsity Problem We want to use as few basis vectors as possible to do this. min_α ||α||_0 s.t. X = Dα
44 The Sparsity Problem We want to use as few basis vectors as possible to do this. min_α ||α||_0 s.t. X = Dα, where ||α||_0 counts the number of non-zero elements in α
45 The Sparsity Problem We want to use as few basis vectors as possible to do this. Ockham's razor: choose the simplest explanation, invoking the fewest variables. min_α ||α||_0 s.t. X = Dα
46 The Sparsity Problem We want to use as few basis vectors as possible to do this. min_α ||α||_0 s.t. X = Dα. How can we solve the above?
47 Obtaining Sparse Solutions We will look at 2 algorithms: Matching Pursuit (MP), Basis Pursuit (BP)
48 Matching Pursuit (MP) Greedy algorithm Finds an atom in the dictionary that best matches the input signal Remove the weighted value of this atom from the signal Again, find an atom in the dictionary that best matches the remaining signal. Continue till a defined stop condition is satisfied.
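To make the greedy loop concrete, here is a minimal Matching Pursuit sketch in numpy (not from the original slides; it assumes the dictionary columns are normalized to unit length, and the function name is illustrative):

```python
import numpy as np

def matching_pursuit(x, D, max_iter=10, tol=1e-6):
    """Greedy MP. D: (dim, n_atoms) with unit-norm columns. Returns sparse weights."""
    alpha = np.zeros(D.shape[1])
    residual = x.astype(float).copy()
    for _ in range(max_iter):
        correlations = D.T @ residual             # match every atom to the residual
        k = int(np.argmax(np.abs(correlations)))  # best-matching atom
        w = correlations[k]
        alpha[k] += w                             # MP may revisit an atom, so accumulate
        residual -= w * D[:, k]                   # remove this atom's contribution
        if np.linalg.norm(residual) < tol:        # stopping condition from the slides
            break
    return alpha
```

The stopping test mirrors the slides' condition norm(residual) < threshold; max_iter caps the number of greedy selections.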
49 Matching Pursuit Find the dictionary atom that best matches the given signal. Weight = w1
50 Matching Pursuit Remove the weighted atom to obtain the updated signal. Find the best match for this signal from the dictionary
51 Matching Pursuit Find the best match for the updated signal
52 Matching Pursuit Find the best match for the updated signal. Iterate till you reach a stopping condition, e.g. norm(residual signal) < threshold
53 Matching Pursuit (illustration)
54 Matching Pursuit Problems???
55 Matching Pursuit Main Problem Computational complexity The entire dictionary has to be searched at every iteration
56 Comparing MP and BP Matching Pursuit: hard thresholding; greedy optimization at each step; weights obtained using greedy rules. Basis Pursuit: (remember the equations)
57 Basis Pursuit (BP) Remember: min_α ||α||_0 s.t. X = Dα
58 Basis Pursuit Remember: min_α ||α||_0 s.t. X = Dα. In the general case, this is intractable
59 Basis Pursuit Remember: min_α ||α||_0 s.t. X = Dα. In the general case, this is intractable: it requires combinatorial optimization
60 Basis Pursuit Replace the intractable expression by an expression that is solvable: min_α ||α||_1 s.t. X = Dα
61 Basis Pursuit min_α ||α||_1 s.t. X = Dα. This will provide identical solutions when D obeys the Restricted Isometry Property.
62 Basis Pursuit min_α ||α||_1 s.t. X = Dα. Objective: ||α||_1. Constraint: X = Dα
63 Basis Pursuit We can formulate the optimization term as: min_α { ||X - Dα||_2^2 + λ||α||_1 } (the constraint is folded into the objective)
64 Basis Pursuit min_α { ||X - Dα||_2^2 + λ||α||_1 } λ is a penalty term on the non-zero elements and promotes sparsity
65 Basis Pursuit This is equivalent to the LASSO; for more details, see the paper by Tibshirani. min_α { ||X - Dα||_2^2 + λ||α||_1 }
66 Basis Pursuit ||α||_1 is not differentiable at α_j = 0. Subgradient of ||α||_1 for the gradient-descent update: ∂|α_j| = sign(α_j) if α_j ≠ 0, and ∂|α_j| ∈ [-1, 1] if α_j = 0. At the optimum of min_α { ||X - Dα||_2^2 + λ||α||_1 }, the following conditions hold: 2 d_j^T (X - Dα) = λ sign(α_j) if α_j ≠ 0, and |2 d_j^T (X - Dα)| ≤ λ if α_j = 0
67 Basis Pursuit There are efficient ways to solve the LASSO formulation. Simplest solution: coordinate descent algorithms (reference on the course webpage).
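As a concrete illustration of the coordinate-descent idea, here is a minimal sketch (not from the original slides; names are illustrative). Each coordinate update is exactly the soft-thresholding condition from slide 66; the threshold is λ/2 because the objective uses ||X - Dα||_2^2 without a 1/2 factor.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of the L1 norm: shrink toward zero by t
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(x, D, lam, n_iter=100):
    """Coordinate descent for min_a ||x - D a||_2^2 + lam * ||a||_1."""
    n_atoms = D.shape[1]
    alpha = np.zeros(n_atoms)
    col_norms = np.sum(D * D, axis=0)      # ||d_j||^2 for every atom
    residual = x.astype(float).copy()      # equals x - D @ alpha (alpha starts at 0)
    for _ in range(n_iter):
        for j in range(n_atoms):
            if col_norms[j] == 0.0:
                continue
            # Correlation of atom j with the partial residual (atom j added back in)
            rho = D[:, j] @ residual + col_norms[j] * alpha[j]
            a_new = soft_threshold(rho, lam / 2.0) / col_norms[j]
            residual += D[:, j] * (alpha[j] - a_new)   # keep the residual consistent
            alpha[j] = a_new
    return alpha
```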
68 L1 vs L0 min_α ||α||_0 s.t. X = Dα. X in an overcomplete set of 6 bases; L0 minimization gives a two-sparse solution; ANY pair of bases can explain X with zero error
69 L1 vs L0 min_α ||α||_1 s.t. X = Dα. X in an overcomplete set of 6 bases; L1 minimization gives a two-sparse solution; all else being equal, the two closest bases are chosen
70 Comparing MP and BP Matching Pursuit: hard thresholding; greedy optimization at each step; weights obtained using greedy rules. Basis Pursuit: soft thresholding (remember the equations); global optimization; can force N-sparsity with appropriately chosen weights
71 General Formalisms L0 minimization: min_α ||α||_0 s.t. X = Dα. L0 constrained optimization: min_α ||X - Dα||_2^2 s.t. ||α||_0 ≤ C. L1 minimization: min_α ||α||_1 s.t. X = Dα. L1 constrained optimization: min_α ||X - Dα||_2^2 s.t. ||α||_1 ≤ C
72 Many Other Methods: Iterative Hard Thresholding (IHT), CoSaMP, OMP
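scikit-learn ships an OMP solver, so a quick experiment is easy to run. A small usage sketch (not from the original slides; the data here is synthetic):

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)           # unit-norm atoms

alpha_true = np.zeros(256)
alpha_true[[10, 100, 200]] = [2.0, -1.5, 1.0]
x = D @ alpha_true                       # a 3-sparse signal

# Ask OMP for a 3-sparse code of x in the dictionary D
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=3, fit_intercept=False)
omp.fit(D, x)
print(np.flatnonzero(omp.coef_))         # typically recovers {10, 100, 200}
```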
73 Problems How to obtain the dictionary, which will give us meaningful representations? How to compute the weights? X = Dα
74 Trivial Solution D = Training data Impractical and useless
75 Dictionaries: Compressive Sensing Just random vectors!
76 More Structured Ways of Constructing Dictionaries Dictionary entries must be structurally meaningful: they represent true compositional units of the data. We have already encountered two ways of building dictionaries: NMF for non-negative data, and K-means.
77 K-Means for Composing Dictionaries Train the codebook from training data using K-means. Every vector is approximated by the centroid of the cluster it falls into. Cluster means are codebook entries, i.e. dictionary entries, and also the compositional units that compose the data.
78 K-Means for Dictionaries Each column of D is a codeword (centroid) from the codebook. α must be 1-sparse, and the single non-zero entry must equal 1: X = Dα
79 K-Means Learn codewords to minimize the total squared distance of the training vectors from the closest codeword
80 Length-unconstrained K-Means for Dictionaries Each column of D is a codeword (centroid) from the codebook. α must be 1-sparse, with no restriction on the value of the non-zero entry: X = Dα
81 SVD K-Means Learn codewords to minimize the total squared projection error of the training vectors from the closest codeword
82 SVD K-Means 1. Initialize a set of centroids randomly. 2. For each data point x, find the projection onto each cluster's centroid m_cluster: p_cluster = x^T m_cluster. 3. Put the data point in the cluster of the closest centroid, i.e. the cluster for which p_cluster is maximum. 4. When all data points are clustered, recompute the centroids: m_cluster = PrincipalEigenvector({x x^T : x ∈ cluster}). 5. If not converged, go back to 2
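A compact numpy sketch of the procedure above (not from the original slides; names are illustrative). Each centroid is a unit direction; assignment uses the magnitude of the projection, and the update takes the principal eigenvector of each cluster's scatter:

```python
import numpy as np

def svd_kmeans(X, k, n_iter=20, seed=0):
    """X: (dim, n_points). Returns k unit directions and point assignments."""
    rng = np.random.default_rng(seed)
    M = rng.standard_normal((X.shape[0], k))
    M /= np.linalg.norm(M, axis=0)                  # unit-norm directions
    assign = np.zeros(X.shape[1], dtype=int)
    for _ in range(n_iter):
        # Steps 2-3: assign each point to the direction with maximal |projection|
        assign = np.argmax(np.abs(M.T @ X), axis=0)
        # Step 4: new centroid = principal left singular vector of the cluster
        for c in range(k):
            pts = X[:, assign == c]
            if pts.shape[1] == 0:
                continue                            # keep empty clusters as-is
            U, _, _ = np.linalg.svd(pts, full_matrices=False)
            M[:, c] = U[:, 0]
    return M, assign
```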
83 Problem Only represents radial patterns
84 What about this pattern? Dictionary entries that represent radial patterns will not capture this structure 1-sparse representations will not do
85 What about this pattern? We need AFFINE patterns
86 What about this pattern? We need AFFINE patterns Each vector is modeled by a linear combination of K (here 2) bases
87 What about this pattern? Every line is a (constrained) combination of two bases: 2-sparse, with the constraint Line = a·b1 + (1-a)·b2. We need AFFINE patterns. Each vector is modeled by a linear combination of K (here 2) bases
88 Codebooks for K-sparsity? Each column of D is a codeword (centroid) from the codebook. α must be k-sparse, with no restriction on the values: X = Dα
89 Formalizing Given training data X_1, ..., X_N, we want to find a dictionary D such that X_i ≈ Dα_i, with each α_i sparse
90 Formalizing Two objectives: approximation (small ||X_i - Dα_i||^2) and sparsity in the coefficients. The joint problem is NON-convex!!!
91 An iterative method Given D, estimate the α_i: we can use any method to get a sparse solution. Given the α_i, estimate D: difficult!
92 K-SVD Initialize the codebook D. 1. For every vector, compute a k-sparse α using any pursuit algorithm
93 K-SVD 2. For each codeword k: for each vector x, subtract the contribution of all other codewords to obtain the codeword-specific residual e_k(x) = x - Σ_{j≠k} α_j(x) D_j; then set D_k to the principal eigenvector of {e_k(x)}. 3. Return to step 1
94 K-SVD At the termination of each iteration: an updated dictionary. Conclusion: a dictionary where any data vector can be composed of at most K dictionary entries; more generally, a sparse composition
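A minimal sketch of the K-SVD dictionary-update step described above (not from the original slides; names are illustrative). It assumes the sparse codes A were already computed by a pursuit algorithm, and it updates each atom together with the coefficients of the signals that actually use it:

```python
import numpy as np

def ksvd_dictionary_update(D, X, A):
    """One K-SVD sweep. D: (d, K) dictionary, X: (d, N) data, A: (K, N) sparse codes."""
    for k in range(D.shape[1]):
        users = np.flatnonzero(A[k, :])       # signals whose code uses atom k
        if users.size == 0:
            continue
        # Codeword-specific residual e_k(x): remove every atom's contribution but k's
        E_k = X[:, users] - D @ A[:, users] + np.outer(D[:, k], A[k, users])
        # Best rank-1 fit of E_k: the new atom is the principal left singular vector
        U, S, Vt = np.linalg.svd(E_k, full_matrices=False)
        D[:, k] = U[:, 0]
        A[k, users] = S[0] * Vt[0, :]         # refresh the coefficients for atom k
    return D, A
```

Restricting the update to the "users" of atom k is what preserves the sparsity structure: entries of A that were zero stay zero.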
95 Problems How to obtain the dictionary, which will give us meaningful representations? How to compute the weights? X = Dα
96 Applications of Sparse Representations Many, many applications: signal representation, statistical modelling, etc. We've seen one: compressive sensing. Another popular use: denoising
97 Denoising As the name suggests, remove noise!
98 A toy example
99 A toy example Dictionary: an identity matrix plus translations of a Gaussian pulse
100 Image Denoising Here's what we want (slides 100 through 102: example images)
103 The Image Denoising Problem Given an image, remove Gaussian additive noise from it
104 Image Denoising Noisy input Y = X + ε, where X is the original image and ε ~ N(0, σ²) is Gaussian noise
105 Image Denoising Remove the noise from Y, to obtain X as best as possible.
106 Image Denoising Remove the noise from Y, to obtain X as best as possible, using sparse representations over learned dictionaries
107 Image Denoising Remove the noise from Y, to obtain X as best as possible, using sparse representations over learned dictionaries. We will learn the dictionaries
108 Image Denoising Remove the noise from Y, to obtain X as best as possible, using sparse representations over learned dictionaries. We will learn the dictionaries. What data will we use? The corrupted image itself!
109 Image Denoising We use the data to be denoised to learn the dictionary. Training and denoising become an iterated process. We use small image patches (e.g. for a 64x64 image, 8x8 patches), each unrolled into an n-dimensional vector (n = 64 for an 8x8 patch)
110 Image Denoising The patch dictionary D: size = n x k (k > n). This is known and fixed to start with. Every image patch can be sparsely represented using D
111 Image Denoising Recall our equations from before. We want to find α so as to minimize: min_α { ||X - Dα||_2^2 + λ||α||_1 }
112 Image Denoising min_α { ||X - Dα||_2^2 + λ||α||_1 } In the above, X is a patch.
113 Image Denoising min_α { ||X - Dα||_2^2 + λ||α||_1 } In the above, X is a patch. If the larger image is to be fully expressed by every patch in it, how can we go from patches to the image?
114 Image Denoising min_{α_ij, X} { μ||X - Y||_2^2 + Σ_ij ||R_ij X - Dα_ij||_2^2 + Σ_ij λ_ij ||α_ij||_0 }
115 Image Denoising In the objective above, (X - Y) is the error between the input and the denoised image, and μ is a penalty on that error.
116 Image Denoising Error bounding in each patch: R_ij selects the (ij)-th patch, and the number of terms in the summation equals the number of patches.
117 Image Denoising λ forces sparsity
118 Image Denoising But we don't know our dictionary D. We want to estimate D as well.
119 Image Denoising But we don't know our dictionary D. We want to estimate D as well: min_{D, α_ij, X} { μ||X - Y||_2^2 + Σ_ij ||R_ij X - Dα_ij||_2^2 + Σ_ij λ_ij ||α_ij||_0 } We can use the previous equation itself!!!
120 Image Denoising min_{D, α_ij, X} { μ||X - Y||_2^2 + Σ_ij ||R_ij X - Dα_ij||_2^2 + Σ_ij λ_ij ||α_ij||_0 } How do we estimate all 3 (D, the α_ij, and X) at once?
121 Image Denoising How do we estimate all 3 at once? We cannot estimate them at the same time!
122 Image Denoising How do we estimate all 3 at once? Fix 2, and find the optimal 3rd.
123 Image Denoising Initialize X = Y
124 Image Denoising Initialize X = Y and initialize D. The remaining portion, min_{α_ij} { Σ_ij ||R_ij X - Dα_ij||_2^2 + Σ_ij λ_ij ||α_ij||_0 }, you know how to solve for α: MP, BP!
125 Image Denoising Now, update the dictionary D. Update D one column at a time, following the K-SVD algorithm. K-SVD maintains the sparsity structure
126 Image Denoising Now, update the dictionary D. Update D one column at a time, following the K-SVD algorithm. K-SVD maintains the sparsity structure. Iteratively update α and D
127 Image Denoising Learned dictionary for face image denoising. From: M. Elad and M. Aharon, Image denoising via learned dictionaries and sparse representation, CVPR, 2006
128 Image Denoising min_X { μ||X - Y||_2^2 + Σ_ij ||R_ij X - Dα_ij||_2^2 } The sparsity term is constant w.r.t. X, and we know D and α. The quadratic objective above has a closed-form solution
129 Image Denoising min_X { μ||X - Y||_2^2 + Σ_ij ||R_ij X - Dα_ij||_2^2 } We know D and α, so: X = (μI + Σ_ij R_ij^T R_ij)^(-1) (μY + Σ_ij R_ij^T Dα_ij)
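Because each R_ij^T R_ij just marks the pixels covered by patch (ij), the matrix being inverted is diagonal, and the closed form reduces to a per-pixel weighted average of the noisy image and the overlapping denoised patches. A small sketch of that final step (not from the original slides; names and the patch bookkeeping are illustrative):

```python
import numpy as np

def reconstruct_image(Y, denoised_patches, corners, p, mu):
    """Closed form X = (mu*I + sum R^T R)^(-1) (mu*Y + sum R^T D alpha).
    Y: noisy image; denoised_patches: list of (p, p) arrays (reshaped D @ alpha_ij);
    corners: matching list of top-left (i, j) positions; mu: error penalty."""
    num = mu * Y.astype(float)           # mu * Y term
    den = mu * np.ones(Y.shape)          # mu * I term (a diagonal matrix)
    for patch, (i, j) in zip(denoised_patches, corners):
        num[i:i+p, j:j+p] += patch       # R_ij^T (D alpha_ij): paste the patch back
        den[i:i+p, j:j+p] += 1.0         # R_ij^T R_ij: count how often pixels overlap
    return num / den                     # diagonal system: elementwise divide
```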
130 Image Denoising Summarizing: we wanted to obtain 3 things
131 Image Denoising Summarizing: we wanted to obtain 3 things: the weights α, the dictionary D, and the denoised image X
132 Image Denoising Summarizing: weights α (your favorite pursuit algorithm), dictionary D (using K-SVD), denoised image X
133 Image Denoising Summarizing: weights α (your favorite pursuit algorithm), dictionary D (using K-SVD), denoised image X, iterating
134 Image Denoising Summarizing: weights α, dictionary D, denoised image X (closed-form solution)
135 Image Denoising Here's what we want
136 Image Denoising Here's what we want
137 Comparing to Other Techniques Non-Gaussian data: PCA or ICA? Which is which? Images from Lewicki and Sejnowski, Learning Overcomplete Representations, 2000
138 Comparing to Other Techniques Non-Gaussian data, with the PCA and ICA fits labeled. Images from Lewicki and Sejnowski, Learning Overcomplete Representations, 2000
139 Comparing to Other Techniques PCA predicts data in some regions but not in others; ICA does pretty well. Images from Lewicki and Sejnowski, Learning Overcomplete Representations, 2000
140 Comparing to Other Techniques With the data still in 2-D space, ICA doesn't capture the underlying representation, which overcomplete representations can
141 Summary Overcomplete representations can be more powerful than component analysis techniques. The dictionary can be learned from data. The pursuit algorithms have relative advantages and disadvantages.