On the convergence of higher-order orthogonality iteration and its extension
1 On the convergence of higher-order orthogonality iteration and its extension
Yangyang Xu
IMA, University of Minnesota
SIAM Conference on Applied Linear Algebra (LA15), Atlanta
October 27, 2015
2 Best low-multilinear-rank approximation of tensors

$$\min_{\mathcal{C},A} \|\mathcal{X} - \mathcal{C} \times_1 A_1 \cdots \times_N A_N\|_F^2, \quad \text{subject to } A_n \in \mathcal{O}^{I_n \times r_n} \triangleq \{A_n \in \mathbb{R}^{I_n \times r_n} : A_n^\top A_n = I\}, \ \forall n.$$

- Any tensor has a multilinear SVD [De Lathauwer-De Moor-Vandewalle '00]
- The truncated multilinear SVD is not the best approximation [L-M-V '00]
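For later reference, here is a minimal NumPy sketch (my addition, not code from the talk) of the truncated multilinear SVD mentioned above; `unfold`, `mode_mult`, and `truncated_hosvd` are hypothetical helper names reused in the sketches that follow.

```python
import numpy as np

def unfold(X, n):
    """Mode-n unfolding: rows indexed by mode n, remaining modes flattened."""
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

def mode_mult(X, M, n):
    """Mode-n product X x_n M, defined by unfold_n(X x_n M) = M @ unfold_n(X)."""
    Y = M @ unfold(X, n)
    shape = (M.shape[0],) + tuple(s for i, s in enumerate(X.shape) if i != n)
    return np.moveaxis(Y.reshape(shape), 0, n)

def truncated_hosvd(X, ranks):
    """Truncated multilinear SVD: feasible, but generally not the best approximation."""
    A = [np.linalg.svd(unfold(X, n), full_matrices=False)[0][:, :r]
         for n, r in enumerate(ranks)]
    C = X
    for n, An in enumerate(A):
        C = mode_mult(C, An.T, n)   # core C = X x_1 A_1^T ... x_N A_N^T
    return C, A
```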
3 Face recognition by low-rank tensor approximation

- Matrix decomposition: $D \approx UC$
- Tensor decomposition: $\mathcal{D} \approx \mathcal{C} \times_1 U_1 \cdots \times_5 U_5$
- Tensor decomposition can improve recognition accuracy by over 50% [Vasilescu-Terzopoulos '02]
4 Higher-order orthogonality iteration (HOOI)

Given $A$, the optimal core is $\mathcal{C} = \mathcal{X} \times_1 A_1^\top \cdots \times_N A_N^\top$. With this $\mathcal{C}$ plugged in, the approximation problem becomes

$$\min_A \|\mathcal{X} - \mathcal{X} \times_1 A_1 A_1^\top \cdots \times_N A_N A_N^\top\|_F^2, \quad \text{subject to } A_n \in \mathcal{O}^{I_n \times r_n}, \ \forall n,$$

which is equivalent to

$$\max_A \|\mathcal{X} \times_1 A_1^\top \cdots \times_N A_N^\top\|_F^2 = \|A_n^\top \mathrm{unfold}_n(\mathcal{X} \times_{i \neq n} A_i^\top)\|_F^2, \quad \text{subject to } A_n \in \mathcal{O}^{I_n \times r_n}, \ \forall n.$$
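The min/max equivalence can be checked numerically; the snippet below (my addition, reusing `mode_mult` from the sketch above) verifies that minimizing the residual is the same as maximizing the core norm, via $\|\mathcal{X} - \mathcal{X} \times_1 A_1 A_1^\top \cdots\|_F^2 = \|\mathcal{X}\|_F^2 - \|\mathcal{X} \times_1 A_1^\top \cdots\|_F^2$.

```python
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 7, 8))
A = [np.linalg.qr(rng.standard_normal((I, r)))[0]   # random orthonormal factors
     for I, r in zip(X.shape, (2, 3, 4))]

proj, core = X, X
for n, An in enumerate(A):
    proj = mode_mult(proj, An @ An.T, n)   # X x_n A_n A_n^T (orthogonal projection)
    core = mode_mult(core, An.T, n)        # X x_n A_n^T

lhs = np.linalg.norm(X)**2 - np.linalg.norm(X - proj)**2
rhs = np.linalg.norm(core)**2
assert np.isclose(lhs, rhs)   # minimizing the residual == maximizing the core norm
```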
5 Higher-order orthogonality iteration (HOOI)

Algorithm 1: Higher-order orthogonality iteration (HOOI)
Input: $\mathcal{X}$ and $(r_1, \ldots, r_N)$
Initialization: choose $(A_1^0, \ldots, A_N^0)$ with $A_n^0 \in \mathcal{O}^{I_n \times r_n}, \forall n$, and set $k = 0$
while not convergent do
  for $n = 1, \ldots, N$ do
    Set $A_n^{k+1}$ to an orthonormal basis of the dominant $r_n$-dimensional left singular subspace of $G_n^k = \mathrm{unfold}_n(\mathcal{X} \times_{i<n} (A_i^{k+1})^\top \times_{i>n} (A_i^k)^\top)$
  increase $k \leftarrow k + 1$

- Widely used and well suited to large-scale problems
- Better scalability than Newton-type methods such as Newton-Grassmann [Elden-Savas '09] and Riemannian trust region [Ishteva et al. '11]
- Convergence is still an open question
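A compact NumPy rendering of Algorithm 1 (a sketch under my own choices of initialization and stopping rule, reusing the helpers above):

```python
def hooi(X, ranks, max_iter=100, tol=1e-8):
    _, A = truncated_hosvd(X, ranks)           # HOSVD warm start (one common choice)
    prev = -np.inf
    for _ in range(max_iter):
        for n in range(len(ranks)):
            G = X
            for i, Ai in enumerate(A):         # A[i] already holds A_i^{k+1} for i < n
                if i != n:
                    G = mode_mult(G, Ai.T, i)
            U, _, _ = np.linalg.svd(unfold(G, n), full_matrices=False)
            A[n] = U[:, :ranks[n]]             # dominant r_n-dim left singular subspace
        C = X
        for n, An in enumerate(A):
            C = mode_mult(C, An.T, n)
        obj = np.linalg.norm(C)**2             # objective being maximized
        if obj - prev <= tol * max(1.0, obj):
            break
        prev = obj
    return C, A
```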
6 This talk addresses the open question: subspace convergence of HOOI

Why care about convergence?
- Without convergence, HOOI can give severely different multilinear subspaces when run for different numbers of iterations

Challenges
- Non-convexity
- Non-uniqueness of the dominant subspace
- Non-uniqueness of the orthonormal basis

How to establish convergence
- Greedy selection of the orthonormal basis
- Kurdyka-Łojasiewicz property
7 Greedy higher-order orthogonality iteration

Algorithm 2: Greedy HOOI
Input: $\mathcal{X}$ and $(r_1, \ldots, r_N)$
Initialization: choose $(A_1^0, \ldots, A_N^0)$ with $A_n^0 \in \mathcal{O}^{I_n \times r_n}, \forall n$, and set $k = 0$
while not convergent do
  for $n = 1, \ldots, N$ do
    Set $A_n^{k+1} \in \operatorname*{argmin}_{A_n} \|A_n - A_n^k\|_F$ over all orthonormal bases of the dominant $r_n$-dimensional left singular subspace of $G_n^k = \mathrm{unfold}_n(\mathcal{X} \times_{i<n} (A_i^{k+1})^\top \times_{i>n} (A_i^k)^\top)$
  increase $k \leftarrow k + 1$

- The iterate mapping is continuous
- Will be used to establish convergence of the original HOOI
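The greedy step admits a closed form: every orthonormal basis of the dominant subspace $\mathrm{span}(U)$ is $UQ$ with $Q$ orthogonal, so minimizing $\|UQ - A_n^k\|_F$ is an orthogonal Procrustes problem. A sketch (my derivation from the slide, not the talk's code):

```python
def greedy_basis(U, A_prev):
    """Orthonormal basis of span(U) closest in Frobenius norm to A_prev."""
    P, _, Wt = np.linalg.svd(U.T @ A_prev)   # Procrustes: Q = P @ Wt maximizes tr(Q^T U^T A_prev)
    return U @ (P @ Wt)
```

In the `hooi` sketch above, replacing `A[n] = U[:, :ranks[n]]` with `A[n] = greedy_basis(U[:, :ranks[n]], A[n])` turns it into greedy HOOI, since the two choices span the same subspace.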
8 Block-nondegeneracy

$A$ is a block-nondegenerate point if

$$\sigma_{r_n}(G_n) > \sigma_{r_n + 1}(G_n), \ \forall n, \quad \text{where } G_n = \mathrm{unfold}_n(\mathcal{X} \times_{i \neq n} A_i^\top).$$

- The dominant multilinear subspace is unique
- Equivalent to negative definiteness of the block Hessian on $\mathcal{O}^{I_n \times r_n}$ at a local maximizer
- Necessary condition for convergence: a degenerate first-order optimal point is not stationary
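Block-nondegeneracy is directly checkable along the iterates; a small sketch (my addition, assuming $r_n$ is strictly smaller than the number of singular values of $G_n$):

```python
def is_block_nondegenerate(X, A, ranks, gap_tol=1e-10):
    """True if sigma_{r_n}(G_n) > sigma_{r_n+1}(G_n) for every mode n."""
    for n, r in enumerate(ranks):
        G = X
        for i, Ai in enumerate(A):
            if i != n:
                G = mode_mult(G, Ai.T, i)
        s = np.linalg.svd(unfold(G, n), compute_uv=False)
        if s[r - 1] - s[r] <= gap_tol:   # zero-based: s[r-1] is the r_n-th singular value
            return False
    return True
```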
9 Convergence analysis of greedy HOOI

Let $\{A^k\}_{k \geq 1}$ be the sequence from greedy HOOI. Assume there is a block-nondegenerate limit point $\bar{A}$.

- $\bar{A}$ is a block-wise maximizer and a critical point
- There is $\alpha > 0$ such that if $A^k$ is sufficiently close to $\bar{A}$, then
  $$\alpha \|A^{k+1} - A^k\|_F^2 \leq F(A^{k+1}) - F(A^k), \quad \text{where } F(A) = \|\mathcal{X} \times_1 A_1^\top \cdots \times_N A_N^\top\|_F^2 - \sum_{n=1}^N \iota_{\mathcal{O}^{I_n \times r_n}}(A_n)$$
- If $A^k$ is sufficiently close to $\bar{A}$, then $\mathrm{dist}(0, \partial F(A^k)) \leq C \|A^k - A^{k-1}\|_F$
- Further, with the Kurdyka-Łojasiewicz property: $A^k \to \bar{A}$ as $k \to \infty$
- Local convergence to a globally optimal solution
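The sufficient-increase inequality can be monitored empirically; a diagnostic sketch (my addition) that estimates the constant $\alpha$ from a pair of consecutive iterates:

```python
def F_val(X, A):
    """F(A) = ||X x_1 A_1^T ... x_N A_N^T||_F^2 (on the feasible set)."""
    C = X
    for n, An in enumerate(A):
        C = mode_mult(C, An.T, n)
    return np.linalg.norm(C)**2

def increase_ratio(X, A_old, A_new):
    """Empirical alpha: (F(A^{k+1}) - F(A^k)) / ||A^{k+1} - A^k||_F^2."""
    step = sum(np.linalg.norm(An - Ao)**2 for Ao, An in zip(A_old, A_new))
    return (F_val(X, A_new) - F_val(X, A_old)) / max(step, 1e-16)
```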
10 Convergence analysis of greedy HOOI

Kurdyka-Łojasiewicz property: the following quantity is bounded near $\bar{A}$ for a certain $\theta \in [0, 1)$:

$$\frac{|F(A) - F(\bar{A})|^\theta}{\mathrm{dist}(0, \partial F(A))}.$$

Use $(1 - \theta)(a - b) \leq a^\theta (a^{1-\theta} - b^{1-\theta})$ for all $a \geq b \geq 0$ and the previous inequalities to obtain

$$\begin{aligned}
\|A^{k+1} - A^k\|_F^2 &\lesssim F(A^{k+1}) - F(A^k) \\
&\lesssim (F(\bar{A}) - F(A^k))^\theta \big((F(\bar{A}) - F(A^k))^{1-\theta} - (F(\bar{A}) - F(A^{k+1}))^{1-\theta}\big) \\
&\lesssim \mathrm{dist}(0, \partial F(A^k)) \big((F(\bar{A}) - F(A^k))^{1-\theta} - (F(\bar{A}) - F(A^{k+1}))^{1-\theta}\big) \\
&\lesssim \|A^k - A^{k-1}\|_F \big((F(\bar{A}) - F(A^k))^{1-\theta} - (F(\bar{A}) - F(A^{k+1}))^{1-\theta}\big).
\end{aligned}$$

Together with Young's inequality, this gives $\sum_k \|A^{k+1} - A^k\|_F < \infty$.
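The final step, spelled out (my expansion of the standard Kurdyka-Łojasiewicz argument): abbreviating $\Delta_k = (F(\bar{A}) - F(A^k))^{1-\theta} - (F(\bar{A}) - F(A^{k+1}))^{1-\theta}$, the last inequality reads $\|A^{k+1} - A^k\|_F^2 \leq c \|A^k - A^{k-1}\|_F \Delta_k$, and Young's inequality gives

$$\|A^{k+1} - A^k\|_F \leq \big(c \|A^k - A^{k-1}\|_F \Delta_k\big)^{1/2} \leq \tfrac{1}{2} \|A^k - A^{k-1}\|_F + \tfrac{c}{2} \Delta_k.$$

Summing over $k$, half of each step length cancels against the left-hand side while $\Delta_k$ telescopes, so $\sum_k \|A^{k+1} - A^k\|_F < \infty$ and $\{A^k\}$ is a Cauchy sequence.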
11 Convergence analysis of original HOOI

Starting from $A^{k_0}$ sufficiently close to a block-nondegenerate limit point $\bar{A}$ of $\{A^k\}_{k \geq 1}$, we have

$$A_n^k (A_n^k)^\top = \tilde{A}_n^k (\tilde{A}_n^k)^\top, \quad \forall n, \ \forall k \geq k_0,$$

where $\{A^k\}_{k \geq k_0}$ and $\{\tilde{A}^k\}_{k \geq k_0}$ are the sequences from the original and greedy HOOI, respectively.

- $\bar{A}$ is a block-wise maximizer and a critical point
- $A_n^k (A_n^k)^\top \to \bar{A}_n \bar{A}_n^\top$ as $k \to \infty$
- Local convergence to the globally optimal multilinear subspace
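Since the identity is between subspaces rather than bases, the natural numerical check compares the basis-independent orthogonal projectors; a sketch (my addition):

```python
def same_subspaces(A, A_tilde, tol=1e-10):
    """True if A_n and A_tilde_n span the same subspace for every mode n."""
    return all(np.linalg.norm(An @ An.T - Tn @ Tn.T) <= tol
               for An, Tn in zip(A, A_tilde))
```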
12 Best low-rank tensor approximation with missing values

$$\min_{\mathcal{X}, \mathcal{C}, A} \|\mathcal{X} - \mathcal{C} \times_1 A_1 \cdots \times_N A_N\|_F^2, \quad \text{subject to } A_n \in \mathcal{O}^{I_n \times r_n}, \ \forall n; \quad P_\Omega(\mathcal{X}) = P_\Omega(\mathcal{M}).$$

- Reduces to the best low-rank tensor approximation if $\Omega$ contains all elements
- Can estimate the original tensor $\mathcal{M}$ simultaneously
13 Best low-rank tensor approximation with missing values

[Figure: low-rank tensor approximation with missing values [Geng-Miles-Zhou-Wang '11]]
14 Incomplete higher-order orthogonality iteration

Algorithm 3: Incomplete HOOI (iHOOI)
Input: $P_\Omega(\mathcal{M})$ and $(r_1, \ldots, r_N)$
Initialization: choose $\mathcal{X}^0$ and $(A_1^0, \ldots, A_N^0)$ and set $k = 0$
while not convergent do
  for $n = 1, \ldots, N$ do
    Set $A_n^{k+1}$ to an orthonormal basis of the dominant $r_n$-dimensional left singular subspace of $G_n^k = \mathrm{unfold}_n(\mathcal{X}^k \times_{i<n} (A_i^{k+1})^\top \times_{i>n} (A_i^k)^\top)$
  $\mathcal{X}^{k+1} \leftarrow P_\Omega(\mathcal{M}) + P_{\Omega^c}\big(\mathcal{X}^k \times_{n=1}^N A_n^{k+1} (A_n^{k+1})^\top\big)$
  increase $k \leftarrow k + 1$

The $\mathcal{X}$-update is one step of alternating minimization for

$$\min_{\mathcal{C}, \mathcal{X}} \|\mathcal{X} - \mathcal{C} \times_1 A_1^{k+1} \cdots \times_N A_N^{k+1}\|_F^2, \quad \text{subject to } P_\Omega(\mathcal{X}) = P_\Omega(\mathcal{M}).$$
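A NumPy sketch of Algorithm 3 (my rendering; `mask` is the Boolean indicator of the observed set $\Omega$, `M_obs` holds $P_\Omega(\mathcal{M})$, and the zero-filled $\mathcal{X}^0$ is my own choice):

```python
def ihooi(M_obs, mask, ranks, max_iter=200):
    X = np.where(mask, M_obs, 0.0)             # X^0: observed entries, zeros elsewhere
    _, A = truncated_hosvd(X, ranks)
    for _ in range(max_iter):
        for n in range(len(ranks)):
            G = X
            for i, Ai in enumerate(A):
                if i != n:
                    G = mode_mult(G, Ai.T, i)
            U, _, _ = np.linalg.svd(unfold(G, n), full_matrices=False)
            A[n] = U[:, :ranks[n]]
        Y = X
        for n, An in enumerate(A):
            Y = mode_mult(Y, An @ An.T, n)     # X^k x_n A_n^{k+1} (A_n^{k+1})^T
        X = np.where(mask, M_obs, Y)           # P_Omega(M) + P_{Omega^c}(Y)
    return X, A
```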
15 Experimental results

- Low-multilinear-rank tensor approximation: relation between the iterate sequences of the original and greedy HOOI
- Low-rank tensor completion: phase transition plots (how many samples are needed to guarantee exact reconstruction)
16 Comparing HOOIs on subspace learning

[Figures: subspace relative changes vs. number of iterations for greedy and original HOOI on a Gaussian random tensor; in all runs $A_n^k (A_n^k)^\top = \tilde{A}_n^k (\tilde{A}_n^k)^\top, \forall n, k$. Second panel: $r_n$-th vs. $(r_n + 1)$-th singular values for $n = 1, 2, 3$.]
17 Comparing HOOIs on subspace learning

[Figures: the same comparison on the Yale Face Database; again $A_n^k (A_n^k)^\top = \tilde{A}_n^k (\tilde{A}_n^k)^\top, \forall n, k$, with the $r_n$-th vs. $(r_n + 1)$-th singular values shown for each mode.]
18 Phase transition plots

[Figure: phase transition plots (sample ratio vs. rank $r$) for iHOOI (proposed), WTucker, and geomcg.]

- Randomly generated tensors with rank $(r, r, r)$
- True rank provided (an adaptive rank estimate can be applied)
- WTucker [Filipovic-Jukic '13]: alternating minimization for $\min_{A, \mathcal{C}} \|P_\Omega(\mathcal{C} \times_{i=1}^N A_i) - P_\Omega(\mathcal{M})\|_F^2$
- geomcg [Kressner-Steinlechner-Vandereycken '13]: Riemannian optimization method
- The proposed method achieves near-optimal reconstruction
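One cell of such a phase-transition experiment might look like the sketch below (my addition; the sizes, sampling rule, and success threshold are illustrative, not the talk's exact protocol):

```python
def trial(shape=(50, 50, 50), r=10, sr=0.3, seed=0):
    """One run: random rank-(r,...,r) tensor, uniform sampling, iHOOI recovery."""
    rng = np.random.default_rng(seed)
    M = rng.standard_normal((r,) * len(shape))     # random core
    for n, I in enumerate(shape):
        M = mode_mult(M, rng.standard_normal((I, r)), n)
    mask = rng.random(shape) < sr                  # observe a fraction sr of entries
    X, _ = ihooi(np.where(mask, M, 0.0), mask, (r,) * len(shape))
    return np.linalg.norm(X - M) / np.linalg.norm(M) < 1e-4
```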
19 Convergence to globally optimal solution

- Observation: $\|P_\Omega(\mathcal{C} \times_{n=1}^N A_n) - P_\Omega(\mathcal{M})\|_F / \|P_\Omega(\mathcal{M})\|_F$ is always small
- Low sampling ratio: possibly more than one feasible low-rank solution
- If $A^0$ is sufficiently close to the bases of $\mathcal{M}$, then $\mathcal{C}^k \times_{n=1}^N A_n^k \to \mathcal{M}$

[Figure: number of successes out of 50 runs vs. noise level $\delta$; tensor size $(50, 50, 50)$, rank $(20, 20, 20)$, sample ratio 10%; $A^0$ from the truncated HOSVD of $\mathcal{M} + \delta \|\mathcal{M}\|_F \mathcal{N} / \|\mathcal{N}\|_F$.]
20 Conclusions

- Gave subspace sequence convergence of the HOOI method
  - first result on the convergence of HOOI
  - via the convergence of a greedy HOOI and the Kurdyka-Łojasiewicz inequality
  - block-nondegeneracy is sufficient and necessary
- Proposed an incomplete HOOI method for applications with missing values
  - subspace learning and tensor completion simultaneously
  - empirically, near-optimal recovery of low-rank tensors
21 References

- Y. Xu. On the convergence of higher-order orthogonality iteration. arXiv preprint.
- Y. Xu. On higher-order singular value decomposition from incomplete data. arXiv preprint.