Matrix-Tensor and Deep Learning in High Dimensional Data Analysis
1 Matrix-Tensor and Deep Learning in High Dimensional Data Analysis. Tien D. Bui, Department of Computer Science and Software Engineering, Concordia University. 14th ICIAR, Montréal, July 5-7, 2017
2 Introduction. Large-scale and high-dimensional data problems: face recognition using sparse representation (face images); background subtraction via matrix decomposition (surveillance videos); latent semantic indexing (text with highlighted keywords); predicting movie users' ratings via matrix factorization.
3 The Matrix Problems
4 The Matrix Problems. Matrix factorization: decomposing a matrix into a product of two matrices, X ≈ AB with X of size m×n, A of size m×k, and B of size k×n, as in PCA, SVD, compressive sensing, and sparse dictionary learning. Matrix decomposition: decomposing a matrix into a sum of two matrices with specific properties (e.g. sparsity, rank), as in Robust PCA [Candès et al. 2009]: M (original) = L (low-rank) + S (sparse).
5 Matrix Problem Definition. M = L + DS + Y, where D is an over-complete dictionary (m < l) and Y represents error or noise. Special cases: 1) Robust PCA (Candès et al.): absence of D. 2) Compressed sensing: absence of L. 3) PCA: absence of D and S (i.e. recovery of L from M). 4) The subspace clustering problem. These problems require an optimization formulation and an associated norm.
6 Non-Convex and L_p-norm Optimization. An example in compressed sensing (CS): aiming to find a signal's sparse coefficient vector x from measurements y = Ax. The l_0 formulation min ||x||_0 s.t. y = Ax recovers x exactly with very few measurements but is NP-hard. Convex relaxation: min ||x||_1 s.t. y = Ax, solvable in polynomial time using linear programming. Non-convex approximation: min ||x||_p^p s.t. y = Ax with 0 < p < 1, which needs fewer measurements than the l_1-norm but is harder to solve.
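A minimal NumPy sketch of one standard heuristic for the non-convex l_p problem, iteratively reweighted least squares (IRLS); this is an illustrative approach, not the talk's algorithm, and the sensing matrix, sizes, and eps smoothing are assumptions made for the demo.

```python
import numpy as np

def irls_lp(A, y, p=0.5, n_iter=50, eps=1e-6):
    """Approximate min ||x||_p^p s.t. y = Ax via iteratively
    reweighted least squares (a common heuristic for 0 < p < 1)."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]        # least-squares init
    for _ in range(n_iter):
        # weights w_i = (x_i^2 + eps)^{p/2 - 1} turn the l_p objective
        # into a weighted l_2 problem min x^T W x s.t. Ax = y
        w = (x**2 + eps) ** (p / 2 - 1)
        W_inv = np.diag(1.0 / w)
        x = W_inv @ A.T @ np.linalg.solve(A @ W_inv @ A.T, y)
    return x

# toy demo: recover a 5-sparse signal from 40 random measurements
rng = np.random.default_rng(0)
n, m, k = 100, 40, 5
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = irls_lp(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true))               # small for a good recovery
```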
7 Matrix Decomposition: The L_p Solution of Robust PCA, M = L + S
8 Matrix Decomposition: Robust PCA. $\min_{L,S} \|\sigma(L)\|_p^p + \lambda \|S\|_p^p$ s.t. $L + S = M$, with $0 < p < 1$.
9 Matrix Decomposition: Robust PCA. Off-line L_p RPCA: $\min_{L,S} \sum_{j=1}^{d} g(\sigma_j) + \lambda \sum_{i,j=1}^{m,n} g(s_{ij})$ s.t. $L + S = M$ (2), where $\sigma_j$ is the jth singular value of L and $s_{ij}$ is an element of S. Using ADMM (Alternating Direction Method of Multipliers) to solve (2): Step 1, fix L and solve for the matrix S; Step 2, fix S and solve for the matrix L; Step 3, update the Lagrangian multiplier. On-line L_p RPCA: $\min_{l_t, s_t} \sum_{t=1}^{n} \big( g(\sigma(L_t)) + \lambda g(s_t) \big)$ s.t. $L + S = M$ (3). Perform all three steps once for each sample/column: compute the columns of the sparse matrix S by soft-thresholding on each sample; compute the columns of the matrix L by updating a weighted SVT incrementally (incremental singular value decomposition); update the multiplier Y for each sample. Complexity is O(mnk), k ≪ m, i.e. linear in the sample dimension and the number of samples. Our paper in CVIU March 2017.
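For concreteness, a compact NumPy sketch of the three ADMM steps above, instantiated at p = 1 so both subproblems have the classic closed forms (soft-thresholding for S, singular value thresholding for L); the penalty mu, the default lambda, and the stopping rule are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def soft(X, tau):
    """Elementwise soft-thresholding (prox of the l1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca_admm(M, lam=None, mu=1.0, n_iter=200, tol=1e-7):
    """Solve min ||L||_* + lam*||S||_1  s.t. L + S = M by ADMM."""
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(n_iter):
        # Step 1: fix L, solve for S (soft-thresholding)
        S = soft(M - L + Y / mu, lam / mu)
        # Step 2: fix S, solve for L (singular value thresholding)
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(soft(sig, 1.0 / mu)) @ Vt
        # Step 3: update the Lagrange multiplier
        Y = Y + mu * (M - L - S)
        if np.linalg.norm(M - L - S) <= tol * np.linalg.norm(M):
            break
    return L, S
```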
10 L_p-RPCA: Analysis. Convergence analysis, two main theorems. Theorem 1: the objective function decreases monotonically. Theorem 2: the algorithm converges to a stationary point. [CVIU March 2017] (Figure: objective function value and relative error (RE) of the L_p-RPCA algorithm on synthetic data while varying p; (a) shows the convergence curves, (b) shows the performance (RE).)
11 Off-line L_p-RPCA [CVIU March 2017]
12 Online L_p-RPCA [CVIU March 2017]
13 Matrix Factorization: The L_p Solution of Robust SVD
14 Matrix Factorization: L_p-SVD. $X = U \Sigma V^T$ s.t. $U U^T = V V^T = I$. Singular values: the diagonal elements of $\Sigma$ (in descending order). Singular vectors: the columns of U and the rows of $V^T$. The rank of X equals the number of nonzero singular values. Compact/truncated SVD: $U \in \mathbb{R}^{m \times r}$, $\Sigma \in \mathbb{R}^{r \times r}$, $V \in \mathbb{R}^{n \times r}$.
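A quick NumPy illustration of the compact/truncated SVD just described; the sizes and rank are arbitrary demo choices.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 60, 40, 5
X = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-r matrix

U, sig, Vt = np.linalg.svd(X, full_matrices=False)
print(np.sum(sig > 1e-10))          # rank = number of nonzero singular values

# truncated SVD keeps the top-r triple: U_r (m x r), sig_r (r), Vt_r (r x n)
U_r, sig_r, Vt_r = U[:, :r], sig[:r], Vt[:r, :]
X_r = U_r @ np.diag(sig_r) @ Vt_r
print(np.linalg.norm(X - X_r))      # ~0 for an exactly rank-r matrix
```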
15 Robust L_p-SVD. Non-convex l_p approach: minimize the l_p-norm of the reconstruction error (4) s.t. $U U^T = V V^T = I$. Rewrite the objective function (5); then linearize (6) and form the augmented Lagrangian function. Use ADMM to solve (6) iteratively. Robust l_p-norm Singular Value Decomposition, NIPSW 2015.
16 Robust L_p-SVD. Rewrite the objective function; linearize it and form the augmented Lagrangian function; use ADMM with iterative steps: Step 1, find L; Step 2, find R; Step 3, find E; Step 4, update the Lagrangian multiplier Y. Robust l_p-norm Singular Value Decomposition, NIPSW 2015.
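The elementwise l_p step in such schemes is often handled by a generalized shrinkage; below is a sketch of Chartrand-style p-shrinkage (cf. [Chartrand, IEEE TSP 2012], cited in the comparison slide later), offered as an assumption about the flavor of the E-update rather than the paper's exact operator.

```python
import numpy as np

def p_shrink(X, tau, p=0.5):
    """Chartrand-style p-shrinkage on an ndarray X: a generalized
    soft-thresholding used as an inexpensive surrogate for the l_p
    proximal map when 0 < p < 1."""
    absX = np.abs(X)
    with np.errstate(divide='ignore'):
        # entries with |x| = 0 produce -inf inside the max and stay 0
        mag = np.maximum(absX - tau * absX ** (p - 1.0), 0.0)
    return np.sign(X) * mag
```

For p = 1 this reduces to the ordinary soft-thresholding used in the RPCA sketch above.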
17 Applications of Matrix Problems: face modeling; online video processing; video inpainting; voice-music separation; 3D structure from motion; comparison with the state of the art.
18 Face Modeling: Recognition under Occlusions. Sparse representation (SR) and low-rank approximation (LRA) for robust face recognition: two modules (ICPR 2014).
19 Face Modeling: Removing Noise, Shadows, and Darkness. Face modeling: from face images with illumination artifacts to clear faces. [CVIU March 2017]
20 Online Video Processing. Online background subtraction (video demo), CVPR Change Detection Workshop 2014. Videos of 360x288, 1200 frames: 33 fps without GPU. Videos of 160x120, 3200 frames: 200 fps without GPU. [CVIU March 2017]
21 Video Background Subtraction. Online background subtraction: performance. (Table: time per frame, decreasing.)
22 Online Video Inpainting. Video inpainting (video demo). [CVIU March 2017]
23 Voice-Music Separation. Separate a music-audio (song) stream X into the low-rank music part L and the sparse singing-voice part S. Audio samples from the MIR-1K dataset: music-audio (X), low-rank music (L), sparse singing voice (S), original music (L_0), original singing voice (S_0). Huang et al., "Singing-Voice Separation from Monaural Recordings Using Robust Principal Component Analysis," ICASSP 2012. Matlab code available.
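A short sketch of the Huang et al. recipe under stated assumptions: run RPCA on the magnitude spectrogram, take L as the repetitive accompaniment and S as the voice, and resynthesize each part with the mixture phase. The STFT settings are arbitrary, and rpca_admm is the l1 sketch from the RPCA slide earlier, not the authors' implementation.

```python
import numpy as np
from scipy.signal import stft, istft

def separate_voice(x, fs, nperseg=1024):
    """Voice/music separation via RPCA on the magnitude spectrogram
    (low-rank ~ repetitive accompaniment, sparse ~ singing voice)."""
    f, t, Z = stft(x, fs, nperseg=nperseg)
    mag, phase = np.abs(Z), np.angle(Z)
    L, S = rpca_admm(mag)            # l1 sketch defined earlier
    # resynthesize with the mixture phase (a real implementation would
    # clip negative magnitudes or apply a time-frequency mask)
    _, music = istft(L * np.exp(1j * phase), fs, nperseg=nperseg)
    _, voice = istft(S * np.exp(1j * phase), fs, nperseg=nperseg)
    return music, voice
```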
24 3D Reconstruction from 2D Data. Structure from motion: factor the matrix X of 2D points into 3D points U and a transformation V^T. Bui et al., NIPS 2015.
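A minimal sketch of the rank-constrained factorization idea behind structure from motion, in the spirit of Tomasi-Kanade; the synthetic affine cameras and the rank-3 truncation are demo assumptions, not the NIPS 2015 method.

```python
import numpy as np

rng = np.random.default_rng(2)
n_frames, n_points = 8, 50
S = rng.standard_normal((3, n_points))          # 3D structure
M = rng.standard_normal((2 * n_frames, 3))      # per-frame affine cameras
X = M @ S                                       # 2F x P matrix of 2D tracks

# the (centered) track matrix has rank <= 3; recover factors by truncated SVD
U, sig, Vt = np.linalg.svd(X, full_matrices=False)
M_hat = U[:, :3] @ np.diag(np.sqrt(sig[:3]))    # motion (up to a 3x3 ambiguity)
S_hat = np.diag(np.sqrt(sig[:3])) @ Vt[:3, :]   # structure (same ambiguity)
print(np.linalg.norm(X - M_hat @ S_hat))        # ~0
```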
25 Comparing State-of-the-Art Methods for Low-Rank and Sparse Decomposition. (Intel Core CPU, 16.00 GB RAM.) References: [3] Candès et al., JACM 2011; [7] Chartrand, IEEE TSP 2012; [8] Netrapalli et al., NIPS 2014; [9] Lu et al., AAAI 2015; [10] Lu et al., IEEE TIP 2015; [11] Hage et al., Comp Stat 2014; [12] Meng et al., ICCV 2013; [13] Qiu et al., IEEE TIT 2014; [14] Feng et al., NIPS 2013. [CVIU March 2017]
26 Tensor Decomposition
27 Tensor Decomposition. Tucker decomposition: a tensor is approximated by a core tensor Z and three factor matrices. Tensor Train decomposition: represents a tensor compactly in terms of factors; it is a generalization of the SVD from matrices to tensors.
28 Tensor Decomposition (MPCA). Data is presented as tensors instead of vectors or matrices; Higher-Order Singular Value Decomposition (HOSVD) is used to decompose these tensors: a face tensor X decomposes into a core tensor Z and a factor matrix for each mode, pixels (factor 1), subjects (factor 2), and poses (factor 3). [TPAMI April 2017]
29 Multilinear Principal Component Analysis (MPCA). Multilinear PCA via the Kronecker product: the matricized tensor factors as $X = U Z (V^f \otimes V^s)^T$ with low-dimensional parameters U, Z, V^f, V^s (reminder, SVD: $X = U S V^T$). (Figure: a 2x3 example with entries $x_{1,1}, \dots, x_{2,3}$, 2 subjects and 3 poses.) [TPAMI April 2017]
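A compact NumPy sketch of HOSVD on a third-order tensor, verifying the Kronecker identity above; the sizes, mode ordering, and C-order unfolding convention are demo assumptions (with C-order unfolding the Kronecker factors appear in the opposite order from Matlab-style unfolding, as noted in the comment).

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` first, flatten the rest (C-order)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T):
    """Higher-Order SVD: factor matrices from each unfolding's left
    singular vectors; core obtained by projecting T onto them."""
    factors = [np.linalg.svd(unfold(T, k), full_matrices=False)[0]
               for k in range(T.ndim)]
    Z = T
    for k, U in enumerate(factors):
        Z = np.moveaxis(np.tensordot(U.T, np.moveaxis(Z, k, 0), axes=1), 0, k)
    return Z, factors

# demo on a pixels x subjects x poses tensor
X = np.random.default_rng(3).standard_normal((16, 2, 3))
Z, (U, Vs, Vf) = hosvd(X)
# Kronecker identity for the mode-0 unfolding; with C-order unfolding the
# order is (Vs kron Vf), while Matlab-style unfolding gives (Vf kron Vs)
lhs = unfold(X, 0)
rhs = U @ unfold(Z, 0) @ np.kron(Vs, Vf).T
print(np.linalg.norm(lhs - rhs))   # ~0
```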
30 Problems with HOSVD. SVD-L1 is robust against outliers and noise; it can also deal with missing data. (Figure: factors 1-3 comparing SVD-L2 and SVD-L1 under outliers.) [TPAMI April 2017]
31 Algorithm for [U, V] = l1svd(X). [TPAMI April 2017]
32 High-Order L1-SVD. The corresponding augmented Lagrangian function and the Alternating Direction Method of Multipliers (ADMM). [TPAMI April 2017] S. Boyd et al., "Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers," Foundations and Trends in Machine Learning, Vol. 3, No. 1, 2011.
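As a reminder of the generic scheme from Boyd et al. (the slide's specific Lagrangian for high-order L1-SVD is an image and is not reproduced here): for $\min_{x,z} f(x) + g(z)$ s.t. $Ax + Bz = c$,

```latex
\begin{aligned}
\mathcal{L}_\rho(x, z, y) &= f(x) + g(z) + y^{\top}(Ax + Bz - c)
                           + \tfrac{\rho}{2}\,\lVert Ax + Bz - c \rVert_2^2,\\
x^{k+1} &= \arg\min_{x}\; \mathcal{L}_\rho(x, z^{k}, y^{k}),\\
z^{k+1} &= \arg\min_{z}\; \mathcal{L}_\rho(x^{k+1}, z, y^{k}),\\
y^{k+1} &= y^{k} + \rho\,(Ax^{k+1} + Bz^{k+1} - c).
\end{aligned}
```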
33 High-Order L1-SVD. Degraded image (outliers, missing pixels) vs. reconstructed image vs. original image: 30% missing pixels, PSNR = 36.3 dB; 50% missing pixels, PSNR = 33.1 dB; at a higher missing-pixel rate, PSNR = 27.1 dB. [TPAMI April 2017]
34 High-Order L1-SVD. Original video, foreground, and background via SVD-L1. [TPAMI April 2017]
35 Our Works on Matrix-Tensor LRA and SR: Sparse Support Vector Machine for Pattern Recognition, Concurrency and Computation: Practice and Experience; Non-convex Online Robust PCA: Enhance Sparsity via Lp-norm Minimization, IJCV 2017; Robust Lp-norm Singular Value Decomposition, NIPSW 2015; Are Sparse Representation and Dictionary Learning Good for Handwritten Character Recognition?, ICFHR 2014; Sparse Representation and Low-Rank Approximation for Robust Face Recognition, ICPR 2014; Tensor Decomposition: Compressed Sub-manifold Multifactor Analysis, TPAMI 2017.
36 Deep Learning in Face Analysis
37 Our Recent Works in DL. DBMs and DRBMs: 1) Deep Appearance Models: A Deep Boltzmann Machine Approach to Face Modeling, IJCV 2017; 2) Beyond Principal Components: Deep Boltzmann Machines for Face Modeling, CVPR 2015. Robust DBMs: Robust Deep Appearance Models, ICPR 2016. TDRBMs: 1) Longitudinal Face Modeling via Temporal Deep Restricted Boltzmann Machines, CVPR 2016; 2) Temporal Restricted Boltzmann Machines: A Survey, pre-print. RNN and CNN: Depth-Based 3D Hand Pose Tracking, ICPR 2016.
38 Deep Learning in Automatic Face Analytics: system and framework. [CVPR 2015, CVPR 2016]
40 FDDB: Face Detection Data Set and Benchmark. (ROC curves: true positive rate vs. false positives.)
41 Annotated Faces in the Wild (AFW), University of California, Irvine. (Precision-recall curves.)
44 Deep Appearance Models. Active Appearance Models (AAMs) cannot cope with nonlinear variations in face modeling (poses, illumination, blur, occlusions, low resolution, expressions), but DAMs can. [CVPR 2015, CVPR 2016, ICPR 2016, and IJCV 2017]
45 Deep Appearance Models. Gaussian-Bernoulli RBM energy: $E(\mathbf{v}, \mathbf{h}; \mathbf{w}) = \sum_i \frac{(v_i - b_i)^2}{2\sigma_i^2} - \sum_{i,j} \frac{v_i}{\sigma_i} w_{ij} h_j - \sum_j a_j h_j$. To stack layers, consider $\mathbf{h}^{(1)}$ as $\mathbf{v}$ and $\mathbf{h}^{(2)}$ as $\mathbf{h}$.
47 [CVPR 2015, CVPR 2016, and IJCV 2017] 1) Original images; 2) images warped to shape-free; 3) down-sampled by 8 (from 120x120); 4) bicubic interpolation; 5) AAM; 6) DAM.
48 [CVPR 2015, CVPR 2016, and IJCV 2017] 1) Original images; 2) images warped to shape-free; 3) AAM; 4) DAM.
49 1) Original images; 2) images warped to shape-free; 3) AAM; 4) DAM. [CVPR 2015, CVPR 2016, and IJCV 2017]
50 [CVPR 2015, CVPR 2016, and IJCV 2017]
51 Combined effects of occlusions, poses, and illuminations. [CVPR 2015, CVPR 2016, and IJCV 2017]
52 Longitudinal Face Modeling via TRBM. C. N. Duong, K. Luu, K. G. Quach, and T. D. Bui, "Longitudinal Face Modeling via Temporal Deep Restricted Boltzmann Machines," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
53 Age Progression. [CVPR 2015, CVPR 2016, and IJCV 2017]
54 Depth-Based 3D Hand Pose Tracking. [ICPR 2016]
55 Matrix and Tensor in Deep Networks
56 What are the Issues? Why do we care about memory? State-of-the-art deep networks are resource hungry, e.g. they cannot fit on mobile devices; up to 95% of their parameters are in the fully connected layers; shallow networks with huge fully connected layers can achieve almost the same accuracy as an ensemble of deep CNNs.
57 Low-Rank Matrices in Deep Networks. Fully connected layers: use a low-rank decomposition of the weight matrices. Continuous speech recognition (CSR): networks are trained with millions of parameters and a large number of output targets, and most of these parameters are in the final fully connected layers. Acoustic and language modeling: low-rank matrices have shown astonishing results; the number of parameters is reduced by up to 50%, with the same reduction in training time and without loss in recognition accuracy, compared to a full-rank representation. "Low-Rank Matrix Factorization for Deep Neural Network Training with High-Dimensional Output Targets," T. N. Sainath et al.
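A small NumPy sketch of the idea: replace an m-by-n weight matrix W with a rank-k pair A, B, cutting parameters from m*n to k*(m+n); the layer sizes and rank are illustrative assumptions.

```python
import numpy as np

m, n, k = 4096, 4096, 256                  # hidden sizes and bottleneck rank
W = np.random.default_rng(4).standard_normal((m, n))

# rank-k factorization via truncated SVD (in practice the two factors are
# trained directly as consecutive linear layers with no nonlinearity between)
U, sig, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :k] * sig[:k]                      # m x k
B = Vt[:k, :]                               # k x n

x = np.random.default_rng(5).standard_normal(n)
y_full, y_lr = W @ x, A @ (B @ x)           # one big matmul vs. two thin ones
print(m * n, k * (m + n))                   # ~16.8M vs. ~2.1M parameters
print(np.linalg.norm(y_full - y_lr) / np.linalg.norm(y_full))
```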
58 Tensor Train in Deep Neural Nets. The Tensor Train (or linear tensor network) format is used to represent the dense weight matrices of the fully connected layers such that (1) the number of parameters is reduced by a huge factor, and (2) the expressive power of the layer is preserved. For very deep networks the compression factor of the dense weight matrix of a fully connected layer can be enormous, leading to a compression factor for the whole network of up to 7 times. Deep multi-task representation learning: sparse representations are shared across multiple tasks. "Tensorizing Neural Networks," A. Novikov et al.; "Deep Multi-task Representation Learning: A Tensor Factorisation Approach," Y. Yang et al., ICLR.
59 How to Find the Tensor Train? Special cases: analytical formulas. Medium-size tensors: an exact algorithm based on the SVD. Large tensors (e.g. 2^50 elements): approximate algorithms that look at only a fraction of the tensor elements. Python codes are available. Neural networks use fully connected layers whose weight matrix W has millions of parameters; storing and training W in the TT-format can speed up training. VGG-16 is trained on more than a million images from ImageNet and can classify images into 1000 object categories; for the VGG-16 net we compressed a weight matrix to 320 parameters without loss of accuracy.
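A minimal sketch of the exact SVD-based algorithm mentioned above (TT-SVD); the sequential-SVD scheme is standard, but the absence of rank truncation and the toy sizes are simplifications for the demo.

```python
import numpy as np

def tt_svd(T):
    """Exact Tensor Train decomposition of a d-way tensor by
    sequential SVDs of reshaped unfoldings (TT-SVD, no truncation)."""
    shape = T.shape
    cores, r = [], 1
    C = T.reshape(r * shape[0], -1)
    for k in range(len(shape) - 1):
        U, sig, Vt = np.linalg.svd(C, full_matrices=False)
        r_next = len(sig)
        cores.append(U.reshape(r, shape[k], r_next))      # TT core G_k
        C = (np.diag(sig) @ Vt).reshape(r_next * shape[k + 1], -1)
        r = r_next
    cores.append(C.reshape(r, shape[-1], 1))              # last core
    return cores

def tt_to_full(cores):
    """Contract the TT cores back into the full tensor."""
    T = cores[0]
    for G in cores[1:]:
        T = np.tensordot(T, G, axes=([-1], [0]))
    return T.squeeze(axis=(0, -1))

X = np.random.default_rng(6).standard_normal((4, 5, 6, 7))
cores = tt_svd(X)
print(np.linalg.norm(X - tt_to_full(cores)))  # ~0 (exact, no truncation)
```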
60 Thank you