Theoretical Performance Analysis of Tucker Higher Order SVD in Extracting Structure from Multiple Signal-plus-Noise Matrices


Theoretical Performance Analysis of Tucker Higher Order SVD in Extracting Structure from Multiple Signal-plus-Noise Matrices

Himanshu Nayar, Dept. of EECS, University of Michigan, Ann Arbor, Michigan
Raj Rao Nadakuditi, Dept. of EECS, University of Michigan, Ann Arbor, Michigan

Abstract: The Tucker Higher Order SVD (HOSVD) is a popular algorithm for uncovering structure from tensor datacubes. This algorithm has been successfully used in many signal processing, machine learning, and data mining applications. In this work we use recent results from random matrix theory to analyze the performance of the HOSVD algorithm. In particular, we focus on the performance of the HOSVD on 3-D tensors for the extraction of structure from signal-plus-noise tensors. We also analyze the missing data setting where the entries of the signal-plus-noise tensor are randomly deleted. Our analysis brings into focus a phase transition phenomenon which separates a regime where the HOSVD can accurately estimate the latent signal matrix from a regime where it cannot. The threshold depends on the signal-to-noise ratio and the fraction of observed entries in a manner we make explicit. Finally, we illustrate the predicted performance curves using numerical simulations and illustrate the implications of our predictions on a widely used facial recognition dataset.

I. INTRODUCTION

The Tucker HOSVD is a popular algorithm for extracting structure from datacubes [1]–[3]. In the signal processing literature the HOSVD has been shown to successfully tease out weak signals from multiple datasets that are organized as a tensor or a datacube; see, e.g., [4]–[8]. The authors of these methods have pointed out the need to rigorously quantify the performance of the HOSVD so that it may be compared to the performance of other, non-HOSVD-based methods for extracting structure from datacubes [9], [10].

In this work we consider a setting where we are given $n$ signal-plus-noise matrices $X(:,:,1), \ldots, X(:,:,n)$, each of size $m_1 \times m_2$. Motivated by problems arising in EEG analysis, image processing, and other such data fusion applications [11]–[15], we model these matrices as

$$X(:,:,i) = \sum_{j=1}^{k} v(i,j)\, A_{(j)} + Z(:,:,i) \quad \text{for } i = 1, \ldots, n,$$

and analyze the performance of the HOSVD algorithm via its ability to recover the latent $A_{(j)}$ matrices.

The main contribution of this paper is the first mathematically rigorous characterization of the performance of the HOSVD in extracting underlying latent signal matrices. The results build on recent results from random matrix theory [19], [20]. We expect that this work can form the basis for improving the understanding of the fundamental performance limits of the HOSVD in the context of the many signal processing applications listed earlier. Our results bring into sharp focus a phase transition which separates a regime where the HOSVD recovers correlated estimates of the latent signal matrices from a regime where the HOSVD recovers (asymptotically) uncorrelated estimates of the latent signal matrices. We precisely characterize the phase transition boundary and how it is affected in the missing data setting.

The paper is organized as follows. In Section II we discuss the signal-plus-noise models for the datacube; we summarize the HOSVD algorithm and the quantity of interest in Section III. We present the main theoretical results in Section IV, provide a sketch of the proof in Section V, and corroborate the theoretical results with numerical simulations in Section VI. We present some concluding remarks in Section VII.
II. SIGNAL-PLUS-NOISE TENSOR MODELS

We are given a 3-D tensor $\mathcal{X}$ whose $i$-th layer $X(:,:,i)$ is modeled as

$$X(:,:,i) = \underbrace{\sum_{j=1}^{k} v(i,j)\, A_{(j)}}_{L(:,:,i)} + Z(:,:,i) \quad \text{for } i = 1, \ldots, n, \qquad (1)$$

where $v(i,j)$ are the entries of an arbitrarily chosen matrix $v$ with orthonormal columns, the $A_{(j)} \in \mathbb{R}^{m_1 \times m_2}$ for $j \in \{1, \ldots, k\}$ are arbitrary 2-D matrices such that $\langle A_{(i)}, A_{(j)} \rangle = \mathrm{Tr}(A_{(i)}^T A_{(j)}) = \theta_i \theta_j \delta_{ij}$, and $k$ is fixed with respect to $n$. The tensor $\mathcal{L}$ should be considered the signal tensor. Each entry of the noise-only tensor $\mathcal{Z}$ is assumed to be i.i.d. Gaussian with mean zero and variance $1/n$. Thus (1) describes $\mathcal{X}$ as a signal-plus-noise tensor.

We also consider the missing data setting. Here we are given a 3-D tensor $\mathcal{X}$ whose $i$-th layer, for $i = 1, \ldots, n$, is modeled as

$$X(:,:,i) = \Big[\sum_{j=1}^{k} v(i,j)\, A_{(j)} + Z(:,:,i)\Big] \odot M_p(:,:,i), \qquad (2)$$

where $v$ and $\{A_{(j)} : j \in \{1, \ldots, k\}\}$ are defined as in (1) and $\odot$ denotes the Hadamard or element-wise product.
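The models (1) and (2) are straightforward to simulate. Below is a minimal NumPy sketch (our own illustrative code, not from the paper); the helper name signal_plus_noise_tensor and the use of QR factorizations to produce the orthonormal $v$ and mutually orthogonal $A_{(j)}$ are our assumptions, and the Bernoulli($p$) mask anticipates the definition of $\mathcal{M}_p$ given next.

```python
import numpy as np

def signal_plus_noise_tensor(m1, m2, n, thetas, p=1.0, rng=None):
    """Draw a tensor X from model (1); if p < 1, apply the mask of model (2)."""
    rng = np.random.default_rng(rng)
    k = len(thetas)
    # v: n x k matrix with orthonormal columns (a random isotropic choice).
    v, _ = np.linalg.qr(rng.standard_normal((n, k)))
    # A_(j): mutually orthogonal m1 x m2 matrices with Tr(A_(i)^T A_(j)) = theta_i theta_j delta_ij.
    G, _ = np.linalg.qr(rng.standard_normal((m1 * m2, k)))
    A = [theta * G[:, j].reshape(m1, m2) for j, theta in enumerate(thetas)]
    # Noise: i.i.d. N(0, 1/n) entries.
    Z = rng.standard_normal((m1, m2, n)) / np.sqrt(n)
    X = Z.copy()
    for i in range(n):
        for j in range(k):
            X[:, :, i] += v[i, j] * A[j]
    # Bernoulli(p) observation mask, as in (2).
    if p < 1.0:
        X *= rng.random((m1, m2, n)) < p
    return X, A, v
```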

For $x = 1, \ldots, m_1$, $y = 1, \ldots, m_2$, and $i = 1, \ldots, n$, the tensor $\mathcal{M}_p$ in (2) is defined as

$$M_p(x, y, i) = \begin{cases} 1 & \text{with probability } p \\ 0 & \text{with probability } 1 - p, \end{cases}$$

where $p \in (0, 1]$ is the probability of observing a given entry of the signal-plus-noise tensor. Note that when $p = 1$, (2) is equivalent to (1).

III. THE HIGHER ORDER SVD

If $\mathcal{X} \in \mathbb{R}^{m_1 \times m_2 \times n}$ is a 3-D tensor, then its Higher Order SVD is given by [17, page 95]

$$\mathcal{X} = \mathcal{S} \times_1 U_1 \times_2 U_2 \times_3 U_3,$$

where $\mathcal{S} \in \mathbb{R}^{m_1 \times m_2 \times n}$ is the so-called core tensor, which satisfies the super-orthogonality property, i.e.,

$$\langle S(i,:,:), S(j,:,:) \rangle = \langle S(:,i,:), S(:,j,:) \rangle = \langle S(:,:,i), S(:,:,j) \rangle = 0 \quad \text{for } i \neq j,$$

and $U_1 \in \mathbb{R}^{m_1 \times m_1}$, $U_2 \in \mathbb{R}^{m_2 \times m_2}$, and $U_3 \in \mathbb{R}^{n \times n}$ are orthogonal matrices which are obtained as

$$U_d = \mathrm{LSVD}(\mathrm{unfold}_d(\mathcal{X})). \qquad (3)$$

In (3), $\mathrm{LSVD}(\cdot)$ returns the left singular vectors of its argument. If $\mathcal{X} \in \mathbb{R}^{m_1 \times m_2 \times n}$, then the unfold operator is defined as [17, pages 93–94]

$$\mathrm{unfold}_1(\mathcal{X}) := [X(:,1,:) \ \cdots \ X(:,m_2,:)] \in \mathbb{R}^{m_1 \times n m_2},$$
$$\mathrm{unfold}_2(\mathcal{X}) := [X(:,:,1)^T \ \cdots \ X(:,:,n)^T] \in \mathbb{R}^{m_2 \times m_1 n},$$
$$\mathrm{unfold}_3(\mathcal{X}) := [X(1,:,:)^T \ \cdots \ X(m_1,:,:)^T] \in \mathbb{R}^{n \times m_1 m_2}.$$

The $d$-mode vectors of $\mathcal{X}$ are the column vectors of $\mathrm{unfold}_d(\mathcal{X})$. For a vector $v \in \mathbb{R}^{m_1 m_2}$, the operation fold is defined as

$$\mathrm{fold}(v, [m_1, m_2]) := \begin{bmatrix} v(1) & \cdots & v(m_2) \\ \vdots & & \vdots \\ v((m_1 - 1) m_2 + 1) & \cdots & v(m_1 m_2) \end{bmatrix}.$$

The unfold operation should be viewed as the inverse of the fold operation.

It can be shown that if the signal tensor $\mathcal{L}$ has HOSVD given by $\mathcal{L} = \mathcal{S} \times_1 U_1 \times_2 U_2 \times_3 U_3$, then the matrix $A_{(j)}$ in (1) can be expressed as

$$A_{(j)} = S(:,:,j) \times_1 U_1 \times_2 U_2.$$

These $A_{(j)}$ images are the primary HOSVD-related objects that appear in detection and classification algorithms that utilize the HOSVD of tensor datacubes; see, e.g., [18, eq. (6), page 996] for their appearance in the context of handwritten digit recognition. Thus it is of interest to quantify the accuracy of the estimates of $A_{(j)}$ formed from the signal-plus-noise tensor $\mathcal{X}$. To that end, let the HOSVD of the tensor $\mathcal{X}$ be given by

$$\mathcal{X} = \widetilde{\mathcal{S}} \times_1 \widetilde{U}_1 \times_2 \widetilde{U}_2 \times_3 \widetilde{U}_3.$$

Then an estimate of $A_{(j)}$, which we denote by $\widetilde{A}_{(j)}$, can be computed from $\mathcal{X}$ as

$$\widetilde{A}_{(j)} = \widetilde{S}(:,:,j) \times_1 \widetilde{U}_1 \times_2 \widetilde{U}_2. \qquad (4)$$

In this paper we are interested in characterizing how well $A_{(j)}$ is approximated by its estimate $\widetilde{A}_{(j)}$. We shall quantify this using their cosine similarity, which is given by

$$\Big\langle \frac{\widetilde{A}_{(j)}}{\|\widetilde{A}_{(j)}\|}, \frac{A_{(j)}}{\|A_{(j)}\|} \Big\rangle = \mathrm{trace}\left( \frac{\widetilde{A}_{(j)}^T A_{(j)}}{\|\widetilde{A}_{(j)}\| \, \|A_{(j)}\|} \right). \qquad (5)$$

In what follows we will theoretically characterize when we can expect the images to be correlated and when they will be uncorrelated.

IV. MAIN RESULTS

We now state our main results. In what follows, $\xrightarrow{\mathrm{a.s.}}$ denotes almost sure convergence.

Theorem 1. For the setup in (1), let $\theta_1 > \theta_2 > \cdots > \theta_k$. Then as $n, m_1, m_2 \to \infty$ with $m_1 m_2 / n \to c$, we have that

$$\big\langle \widetilde{A}_{(j)}, A_{(j)} \big\rangle^2 \xrightarrow{\mathrm{a.s.}} \begin{cases} 1 - \dfrac{c\,(1 + \theta_j^2)}{\theta_j^2 (c + \theta_j^2)} & \text{if } \theta_j > c^{1/4} \\ 0 & \text{otherwise} \end{cases}$$

and

$$\widetilde{\theta}_j \xrightarrow{\mathrm{a.s.}} \begin{cases} \sqrt{\dfrac{(1 + \theta_j^2)(c + \theta_j^2)}{\theta_j^2}} & \text{if } \theta_j > c^{1/4} \\ 1 + \sqrt{c} & \text{otherwise,} \end{cases}$$

where $\widetilde{\theta}_j$ is the $j$-th singular value of $\mathrm{unfold}_3(\mathcal{X})$, defined in Section V.

This result implies that the estimate $\widetilde{A}_{(j)}$ is asymptotically localized around the latent $A_{(j)}$ and that this estimate is generically biased and hence inconsistent. An additional insight from Theorem 1 is that the cosine similarity undergoes a phase transition, i.e., below a certain critical SNR the estimate $\widetilde{A}_{(j)}$ is orthogonal to $A_{(j)}$. We now provide an analogous characterization for the missing data setting.

Theorem 2. In (2), let $\theta_1 > \theta_2 > \cdots > \theta_k$ and assume that the $A_{(j)}$ and $v$ satisfy the low coherence condition, i.e., there exist non-negative constants $\eta_A$, $\eta_v$, $C_A$, $C_v$, independent of $n$, such that for each $j = 1, \ldots, k$ we have that

$$\max_j \|A_{(j)}\|_\infty \le \eta_A \frac{\log^{C_A}(m_1 m_2)}{\sqrt{m_1 m_2}} \quad \text{and} \quad \max_j \|v(:,j)\|_\infty \le \eta_v \frac{\log^{C_v}(n)}{\sqrt{n}}.$$

Then for $p \in (0, 1]$, as $n, m_1, m_2 \to \infty$ with $m_1 m_2 / n \to c$, we have that

$$\big\langle \widetilde{A}_{(j)}, A_{(j)} \big\rangle^2 \xrightarrow{\mathrm{a.s.}} \begin{cases} 1 - \dfrac{c\,(1 + p\,\theta_j^2)}{p\,\theta_j^2 (c + p\,\theta_j^2)} & \text{if } \theta_j > c^{1/4}/\sqrt{p} \\ 0 & \text{otherwise} \end{cases}$$

and

$$\widetilde{\theta}_j \xrightarrow{\mathrm{a.s.}} \begin{cases} \sqrt{\dfrac{(1 + p\,\theta_j^2)(c + p\,\theta_j^2)}{\theta_j^2}} & \text{if } \theta_j > c^{1/4}/\sqrt{p} \\ \sqrt{p}\,(1 + \sqrt{c}) & \text{otherwise.} \end{cases}$$
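For reference when reading the figures later, the following small Python sketch (our own code; the function names are illustrative) evaluates the limits reconstructed above; Theorem 2 reduces to Theorem 1 when $p = 1$.

```python
import numpy as np

def predicted_sq_cosine(theta, c, p=1.0):
    """Theorem 2 limit of the squared cosine similarity <A_tilde_(j), A_(j)>^2."""
    if theta <= c**0.25 / np.sqrt(p):   # below the phase transition threshold
        return 0.0
    t2 = p * theta**2                   # effective SNR^2 after masking
    return 1.0 - c * (1.0 + t2) / (t2 * (c + t2))

def predicted_sv(theta, c, p=1.0):
    """Theorem 2 limit of the singular value theta_tilde_j of unfold_3(X)."""
    if theta <= c**0.25 / np.sqrt(p):   # signal buried in the noise bulk
        return np.sqrt(p) * (1.0 + np.sqrt(c))
    t2 = p * theta**2
    return np.sqrt((1.0 + t2) * (c + t2) / theta**2)
```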

Theorem 2 shows that there is a phase transition induced by randomly missing data. The key difference between Theorem 1 and Theorem 2 is that the SNR parameter $\theta_j$ is scaled by $\sqrt{p}$ in Theorem 2. In other words, the critical SNR for the phase transition has increased by a factor of $1/\sqrt{p}$. An inspection of Theorem 2 reveals that for fixed $\theta_j$ there is a certain critical value $p_{\mathrm{crit}} = \sqrt{c}/\theta_j^2$ such that if $p < p_{\mathrm{crit}}$ the estimate $\widetilde{A}_{(j)}$ is asymptotically orthogonal to $A_{(j)}$. Above this critical threshold the estimates are correlated. The low coherence assumption echoes the results in [16] and the matrix completion literature.

V. SKETCH OF THE PROOF

The results follow from an extension of recent results from random matrix theory [19], [20]. To that end, we first note that the underlying matrix $A_{(j)}$ and its estimate $\widetilde{A}_{(j)}$ can be written in terms of the matrix unfolding of the signal-corrupted tensor along the third dimension. Let the SVD of the unfolded signal tensor $\mathcal{L}$ be given by

$$\mathrm{unfold}_3(\mathcal{L}) = U_3 \Theta_3 V_3^H, \qquad (6)$$

where $U_3$ and $V_3$ are, respectively, the left and right singular vector matrices and $\Theta_3$ is the diagonal matrix of the singular values of the unfolded signal tensor. Analogously, let the SVD of the unfolded signal-plus-noise tensor $\mathcal{X}$ be given by

$$\mathrm{unfold}_3(\mathcal{X}) = \widetilde{U}_3 \widetilde{\Theta}_3 \widetilde{V}_3^H, \qquad (7)$$

where $\widetilde{U}_3$ and $\widetilde{V}_3$ are, respectively, the left and right singular vector matrices and $\widetilde{\Theta}_3$ is the diagonal matrix of singular values of the unfolded signal-plus-noise tensor. The key observation, which is a straightforward consequence of tensor algebra, is that $A_{(j)}$ can be expressed as

$$A_{(j)} = \theta_j \, \mathrm{fold}(V_3(:,j), [m_1, m_2]) \quad \text{with } \theta_j = \Theta_3(j,j),$$

while $\widetilde{A}_{(j)}$ can be expressed as

$$\widetilde{A}_{(j)} = \widetilde{\theta}_j \, \mathrm{fold}(\widetilde{V}_3(:,j), [m_1, m_2]) \quad \text{with } \widetilde{\theta}_j = \widetilde{\Theta}_3(j,j).$$

Consequently, the cosine similarity between $A_{(j)}$ and its estimate $\widetilde{A}_{(j)}$ is given by

$$\Big\langle \frac{\widetilde{A}_{(j)}}{\|\widetilde{A}_{(j)}\|}, \frac{A_{(j)}}{\|A_{(j)}\|} \Big\rangle = \big\langle V_3(:,j), \widetilde{V}_3(:,j) \big\rangle.$$

From (6) and (7) we have that

$$\widetilde{U}_3 \widetilde{\Theta}_3 \widetilde{V}_3^H = \underbrace{\mathrm{unfold}_3(\mathcal{L})}_{= U_3 \Theta_3 V_3^H} + \mathrm{unfold}_3(\mathcal{Z}),$$

where $\mathrm{unfold}_3(\mathcal{Z})$ is a noise-only matrix with i.i.d. $\mathcal{N}(0, 1/n)$ entries and the matrix $U_3 \Theta_3 V_3^H$ is a rank-$k$ matrix. Thus a direct application of the result in [19, Section 3.1] gives us the limit of $\langle \widetilde{A}_{(j)}, A_{(j)} \rangle^2$ as stated in Theorem 1. An application of the same result gives us the stated result for $\widetilde{\theta}_j$ in Theorem 1. For the missing data setting in Theorem 2, the only difference is the presence of the masking matrix $\mathrm{unfold}_3(\mathcal{M}_p)$ on the right-hand side of (2). Consequently, we have that

$$\widetilde{U}_3 \widetilde{\Theta}_3 \widetilde{V}_3^H = \big[ U_3 \Theta_3 V_3^H + \mathrm{unfold}_3(\mathcal{Z}) \big] \odot \mathrm{unfold}_3(\mathcal{M}_p).$$

Via the same argument as for the preceding theorem, we can apply the results of [20, Theorem 2.4]. This gives us the results stated in Theorem 2.

VI. SIMULATION RESULTS

We now validate our theoretical predictions using numerical simulations. For the model in (2) we set $m_1 = 25$ and $m_2 = 4$, generated a random (incoherent) 2-D matrix $A_{(1)}$, scaled so that $\|A_{(1)}\| = \theta$, and constructed a 3-D tensor by taking the outer product of this 2-D matrix with a random isotropic vector $v$. This signal tensor was then corrupted with additive Gaussian noise and masked with a binary tensor as in (2).

Fig. 1. The cosine similarity as a function of $\theta$ for different values of $p$ for the model in (2). Here $k = 1$, $m_1 = m_2 = 5$, and $n = 25$. The empirical averages were computed over independent trials. The solid line is the theoretical prediction from Theorem 2.

Fig. 2. The cosine similarity as a function of $p$ for different values of $\theta$ for the model in (2). Here $k = 1$, $m_1 = m_2 = 5$, and $n = 25$. The empirical averages were computed over independent trials. The solid line is the theoretical prediction from Theorem 2.
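The mode-3 unfolding, the fold operation, and the estimate (4), via its SVD form derived in Section V, are easy to express in code. The following NumPy sketch (our illustrative code, following the conventions reconstructed in Section III) computes the quantities compared in the figures:

```python
import numpy as np

def unfold3(X):
    """unfold_3(X) in R^{n x m1 m2}: row i is the vectorized slice X(:, :, i)."""
    m1, m2, n = X.shape
    return X.transpose(2, 0, 1).reshape(n, m1 * m2)

def fold(v, shape):
    """fold(v, [m1, m2]): reshape a length-m1*m2 vector into an m1 x m2 matrix."""
    return v.reshape(shape)

def hosvd_estimate(X, j):
    """Estimate A_(j) as in (4), via the SVD form of Section V: theta_tilde_j * fold(V3(:, j))."""
    m1, m2, n = X.shape
    U, s, Vh = np.linalg.svd(unfold3(X), full_matrices=False)
    return s[j] * fold(Vh[j, :], (m1, m2))

def cosine_similarity(A_hat, A):
    """Eq. (5): trace(A_hat^T A) normalized by the Frobenius norms.
    The sign is arbitrary (SVD sign ambiguity); Theorems 1-2 predict the squared value."""
    return np.trace(A_hat.T @ A) / (np.linalg.norm(A_hat) * np.linalg.norm(A))
```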
Figure 1 shows the agreement between the prediction of the cosine similarity given by Theorem 2 and the empirical average of the same quantity computed over Monte-Carlo trials.
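A comparison of this kind can be reproduced in miniature by combining the sketches above. This assumes the illustrative helpers signal_plus_noise_tensor, hosvd_estimate, cosine_similarity, and predicted_sq_cosine defined earlier; at such small dimensions the empirical average only approximately matches the asymptotic limit.

```python
import numpy as np

m1, m2, n, theta, p = 5, 5, 25, 2.0, 0.5
c = m1 * m2 / n             # aspect ratio of unfold_3(X)
sq_cos = []
for trial in range(200):    # Monte-Carlo trials
    X, A, _ = signal_plus_noise_tensor(m1, m2, n, [theta], p=p, rng=trial)
    sq_cos.append(cosine_similarity(hosvd_estimate(X, 0), A[0]) ** 2)

print("empirical:", np.mean(sq_cos))
print("predicted:", predicted_sq_cosine(theta, c, p))
```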

Figure 2 shows the agreement between the theoretical prediction in Theorem 2 and the empirical simulation for the model in (2). Figure 3 shows a heatmap of the empirically averaged cosine similarity, on a log scale, as a function of $\theta$ and $p$. The heatmap clearly illustrates the phase transition and confirms the accuracy of the threshold (shown with the solid white line) predicted by Theorem 2.

Fig. 3. Empirical heatmap of the logarithm of the cosine similarity given by (5), averaged over independent trials, for the model in (2) with $k = 1$, $m_1 = 25$, and $m_2 = 4$. The deep blue region is the regime where the estimates are nearly orthogonal to the latent signal matrix. The solid white line denotes the phase transition threshold $\theta_{\mathrm{crit}} = c^{1/4}/\sqrt{p}$ as predicted by Theorem 2.

Finally, we present a simulation that shows how the results can provide a principled justification for the empirical observation that the HOSVD is robust to missing data [10]. To illustrate its robustness, we compare the ground-truth $A_{(j)}$'s (see Figure 4(a)) with the $\widetilde{A}_{(j)}$'s (see Figure 4(b)) for images from the Yale Faces dataset that were corrupted and subsampled as in (2) with $p = 0.75$ and an associated vector of signal strengths $\theta_1 > \cdots > \theta_6$; here $k = 6$ in (1). The figure shows that there is a strong correlation between the similarity of an estimated representative image $\widetilde{A}_{(j)}$ with the original representative image $A_{(j)}$ and the visual contrast of the estimated representative images. The higher the cosine similarity, the more features are preserved in $\widetilde{A}_{(j)}$. When the cosine similarity is low (for $j = 6$), the images are uncorrelated, as predicted by Theorem 2, and utilizing them in an inferential scheme will result in a degradation in performance.

Fig. 4. The HOSVD is robust to missing data and noise. (a) The ground truth images $A_{(j)}$ for $j = 1, \ldots, 6$, with panel titles giving the associated $\theta_j$ values; the images were corrupted as in (2) with $p = 0.75$. (b) The images estimated using (4); each panel reports the observed cosine similarity between the ground truth and estimated image, computed using (5), alongside the prediction from Section IV.

VII. CONCLUSION

We theoretically analyzed the performance of the HOSVD and showed that when the SNR is strong enough, the HOSVD produces estimates of the signal matrix that are correlated with the latent signal matrix. When the SNR is below a critical threshold, the estimates are asymptotically uncorrelated. We analyzed the performance with missing data and showed the presence of a similar critical threshold for the probability of observing an entry, below which the estimates were uncorrelated. We corroborated the theoretical predictions with empirical simulations.

A natural next step is to analyze the performance of other tensor decomposition schemes such as PARAFAC/CANDECOMP [1]. This can facilitate a comparison of their performance relative to the fundamental limits of the HOSVD uncovered in this paper and provide a better understanding of which technique(s) best uncover weak latent structure in the presence of missing data from noisy datacubes.

ACKNOWLEDGMENT

This work was supported by an ONR Young Investigator Award N466, an AFOSR Young Investigator Award FA, an NSF award CCF-65, and an ARO MURI grant W9NF.

REFERENCES

[1] T. G. Kolda and B. W. Bader, "Tensor decompositions and applications," SIAM Review, vol. 51, no. 3, pp. 455–500, 2009.
[2] A. Cichocki, Y. Washizawa, T. M. Rutkowski, H. Bakardjian, A. H. Phan, S. Choi, H. Lee, Q. Zhao, L. Zhang, and Y. Li, "Noninvasive BCIs: Multiway signal-processing array decompositions," IEEE Computer, vol. 41, no. 10, 2008.
[3] G. Bergqvist and E. G. Larsson, "The higher-order singular value decomposition: Theory and an application [lecture notes]," IEEE Signal Processing Magazine, vol. 27, no. 3, 2010.
[4] N. D. Sidiropoulos, R. Bro, and G. B. Giannakis, "Parallel factor analysis in sensor array processing," IEEE Trans. on Signal Processing, vol. 48, no. 8, 2000.
[5] M. A. O. Vasilescu and D. Terzopoulos, "Multilinear analysis of image ensembles: TensorFaces," in Computer Vision - ECCV 2002, 2002.
[6] C. Tian, G. Fan, X. Gao, and Q. Tian, "Multiview face recognition: From TensorFace to V-TensorFace and K-TensorFace," IEEE Trans. on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 2, 2012.
[7] D. Terzopoulos, Y. Lee, and M. A. O. Vasilescu, "Model-based and image-based methods for facial image synthesis, analysis, and recognition," in Proc. Sixth IEEE Int. Conf. on Automatic Face and Gesture Recognition, pp. 3–8, 2004.
[8] T. G. Kolda and J. Sun, "Scalable tensor decompositions for multi-aspect data mining," in Eighth IEEE International Conference on Data Mining, 2008.
[9] A. Karami, M. Yazdi, and G. Mercier, "Compression of hyperspectral images using discrete wavelet transform and Tucker decomposition," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 5, no. 2, 2012.
[10] M. Signoretto, R. Van de Plas, B. De Moor, and J. A. K. Suykens, "Tensor versus matrix completion: A comparison with application to spectral data," IEEE Signal Processing Letters, vol. 18, no. 7, 2011.
[11] M. Weis, F. Romer, M. Haardt, D. Jannek, and P. Husar, "Multi-dimensional space-time-frequency component analysis of event related EEG data using closed-form PARAFAC," in Proc. of IEEE Int. Conf. on Acoustics, Speech and Signal Processing, 2009.
[12] Y. Washizawa, H. Higashi, T. Rutkowski, T. Tanaka, and A. Cichocki, "Tensor based simultaneous feature extraction and sample weighting for EEG classification," in Neural Information Processing: Models and Applications, 2010.
[13] M. Weis, D. Jannek, T. Guenther, P. Husar, F. Roemer, and M. Haardt, "Temporally resolved multi-way component analysis of dynamic sources in event-related EEG data using PARAFAC2," in Proceedings of the 18th European Signal Processing Conference (EUSIPCO-2010), Aalborg, Denmark, 2010.
[14] S. L. Freire and T. J. Ulrych, "Application of singular value decomposition to vertical seismic profiling," Geophysics, vol. 53, no. 6, 1988.
[15] J.-F. Cardoso, "Eigen-structure of the fourth-order cumulant tensor with application to the blind source separation problem," in Proc. of IEEE Conf. on Acoustics, Speech, and Signal Processing, 1990.
[16] E. J. Candès, X. Li, Y. Ma, and J. Wright, "Robust principal component analysis?," Journal of the ACM, vol. 58, no. 3, 2011.
[17] L. Eldén, Matrix Methods in Data Mining and Pattern Recognition, vol. 4, SIAM, 2007.
[18] B. Savas and L. Eldén, "Handwritten digit classification using higher order singular value decomposition," Pattern Recognition, vol. 40, no. 3, pp. 993–1003, 2007.
[19] F. Benaych-Georges and R. R. Nadakuditi, "The singular values and vectors of low rank perturbations of large rectangular random matrices," Journal of Multivariate Analysis, vol. 111, pp. 120–135, 2012.
[20] R. R. Nadakuditi, "OptShrink: An algorithm for improved low-rank signal matrix denoising by optimal, data-driven singular value shrinkage," IEEE Transactions on Information Theory, vol. 60, no. 5, May 2014.
