Nonnegative Tensor Factorization using a proximal algorithm: application to 3D fluorescence spectroscopy


1 Nonnegative Tensor Factorization using a proximal algorithm: application to 3D fluorescence spectroscopy
Caroline Chaux, joint work with X. Vu, N. Thirion-Moreau and S. Maire (LSIS, Toulon)
Aix-Marseille Univ., I2M
IHP, Feb. 2017

2 Outline
Introduction: 3D fluorescence spectroscopy, tensor definition, goal
Proximal tools: criterion formulation, proximity operator, proximal algorithm
Application to CPD: minimization problem, algorithm
Numerical simulations: synthetic case; real case: water monitoring
Conclusion and future work

3 3D fluorescence spectroscopy
Figure: normalized excitation spectra, normalized emission spectra, and normalized concentrations versus experiment.

4 Compounds characterisation
Figure: FEEM (fluorescence excitation-emission matrix), with normalized emission and excitation spectra.

5 Tensor
What is a tensor? An Nth-order tensor is represented by an N-way array in a chosen basis.
Examples: N = 1: a vector; N = 2: a matrix.

6 Third-order tensors
A special case: nonnegative third-order tensors ($N = 3$): $\bar{\mathcal{T}} = (\bar{t}_{i_1 i_2 i_3})_{i_1, i_2, i_3} \in \mathbb{R}_+^{I_1 \times I_2 \times I_3}$.
The Canonical Polyadic (CP) decomposition:
$$\bar{\mathcal{T}} = \sum_{r=1}^{R} \bar{a}^{(1)}_r \circ \bar{a}^{(2)}_r \circ \bar{a}^{(3)}_r = [\![\bar{A}^{(1)}, \bar{A}^{(2)}, \bar{A}^{(3)}]\!]$$
where $R$ is the tensor rank, $\circ$ is the outer product, the loading vectors satisfy $\bar{a}^{(n)}_r \in \mathbb{R}_+^{I_n}$ for every $n \in \{1, 2, 3\}$, and the loading matrices are $\bar{A}^{(n)} \in \mathbb{R}^{I_n \times R}$. Entry-wise form:
$$\bar{t}_{i_1 i_2 i_3} = \sum_{r=1}^{R} \bar{a}^{(1)}_{i_1 r}\, \bar{a}^{(2)}_{i_2 r}\, \bar{a}^{(3)}_{i_3 r}, \quad \forall (i_1, i_2, i_3).$$
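To make the entry-wise form concrete, here is a minimal NumPy sketch (shapes and values are illustrative, not taken from the talk):

```python
import numpy as np

# Sketch (assumed shapes): reconstruct a nonnegative third-order tensor
# from CP loading matrices A1, A2, A3 of shapes (I1,R), (I2,R), (I3,R),
# i.e. T = sum_r a1_r o a2_r o a3_r.
rng = np.random.default_rng(0)
I1, I2, I3, R = 4, 5, 6, 3
A1, A2, A3 = (rng.random((I, R)) for I in (I1, I2, I3))

# Entry-wise form t_{ijk} = sum_r A1[i,r] A2[j,r] A3[k,r] as one einsum.
T = np.einsum('ir,jr,kr->ijk', A1, A2, A3)
assert T.shape == (I1, I2, I3) and (T >= 0).all()
```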

7 Standard operations
Outer product: let $u \in \mathbb{R}^I$, $v \in \mathbb{R}^J$; then $u \circ v = u v^\top \in \mathbb{R}^{I \times J}$.
Khatri-Rao product: let $U = [u_1, u_2, \ldots, u_J] \in \mathbb{R}^{I \times J}$ and $V = [v_1, v_2, \ldots, v_J] \in \mathbb{R}^{K \times J}$; then $U \odot V = [u_1 \otimes v_1, u_2 \otimes v_2, \ldots, u_J \otimes v_J] \in \mathbb{R}^{IK \times J}$, where $u \otimes v = [u_1 v; \ldots; u_I v] \in \mathbb{R}^{IK}$ is the Kronecker product.
Hadamard division: let $U, V \in \mathbb{R}^{I \times J}$; then $U \oslash V = (u_{ij} / v_{ij})_{i,j} \in \mathbb{R}^{I \times J}$.
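These operations map directly onto NumPy; the following short sketch (example values are mine) mirrors the three definitions:

```python
import numpy as np

# Outer product u o v = u v^T, shape (I, J) = (3, 2).
u, v = np.array([1.0, 2.0, 3.0]), np.array([1.0, 2.0])
outer = np.outer(u, v)

# Khatri-Rao product: column-wise Kronecker product, (I,J) x (K,J) -> (I*K, J).
U = np.arange(6.0).reshape(3, 2)             # I = 3, J = 2
V = np.arange(8.0).reshape(4, 2)             # K = 4, J = 2
KR = np.einsum('ij,kj->ikj', U, V).reshape(12, 2)

# Hadamard division U ⊘ W, element-wise.
W = np.full((3, 2), 2.0)
H = U / W
```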

8 Tensor flattening: example
Objective: to handle matrices instead of tensors.
Figure: a tensor and its mode-1, mode-2 and mode-3 unfoldings.

9 3D fluorescence spectroscopy and tensors
$$\bar{\mathcal{T}} = \sum_{r=1}^{R} \bar{a}^{(1)}_r \circ \bar{a}^{(2)}_r \circ \bar{a}^{(3)}_r$$
Figure: normalized excitation spectra, normalized emission spectra, and normalized concentrations versus experiment, one factor per mode of the tensor.

10 Objective: tensor decomposition
Input: observed tensor $\mathcal{T}$, an observation of an original (unknown) tensor $\bar{\mathcal{T}}$, possibly degraded (noise).
Output: estimated loading matrices $\hat{A}^{(n)}$ for all $n \in \{1, 2, 3\}$.
Difficulty: the true rank $\bar{R}$ is unknown (i.e. possibly $R \neq \bar{R}$); it is generally i) estimated or ii) overestimated.
Proposed approach: formulate the problem under a variational approach.

11 Minimization problem
Standard problem:
$$\underset{x \in \mathbb{R}^L}{\text{minimize}}\ \underbrace{F(x)}_{\text{Fidelity}} + \underbrace{R(x)}_{\text{Regularization}}.$$
Taking into account several regularizations ($J$ terms): $R(x) = \sum_{j=1}^{J} R_j(x)$.
For large-size problems, or for other reasons, it can be interesting to work on data blocks $x^{(j)}$ of size $L_j$ (with $x = (x^{(j)})_{1 \le j \le J}$): $R(x) = \sum_{j=1}^{J} R_j(x^{(j)})$.
Technical assumptions: $F$, $R$ and the $R_j$ are proper lower semi-continuous functions; $F$ is differentiable with a $\beta$-Lipschitz gradient; each $R_j$ is bounded from below by an affine function, and its restriction to its domain is continuous.

12 Proximity operator
Let $\varphi : \mathbb{R} \to ]-\infty, +\infty]$ be a proper lower semi-continuous function. The proximity operator is defined as
$$\operatorname{prox}_{\varphi} : \mathbb{R} \to \mathbb{R} : v \mapsto \underset{u \in \mathbb{R}}{\arg\min}\ \tfrac{1}{2}|u - v|^2 + \varphi(u).$$
Let $\varphi : \mathbb{R}^L \to ]-\infty, +\infty]$ be a proper lower semi-continuous function. The proximity operator associated with a Symmetric Positive Definite (SPD) matrix $P$ is defined as
$$\operatorname{prox}_{P,\varphi} : \mathbb{R}^L \to \mathbb{R}^L : v \mapsto \underset{u \in \mathbb{R}^L}{\arg\min}\ \tfrac{1}{2}\|u - v\|_P^2 + \varphi(u),$$
where $\|x\|_P^2 = \langle x, Px \rangle$ for every $x \in \mathbb{R}^L$ and $\langle \cdot, \cdot \rangle$ is the inner product.
Remark: if $P$ reduces to the identity matrix, the two definitions coincide.
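For intuition, here is a small sketch of a proximity operator under a diagonal SPD metric $P = \operatorname{Diag}(p)$. The choice $\varphi(u) = \alpha \|u\|_1$ is my example, not one of the talk's penalties: the problem then separates across coordinates and reduces to soft-thresholding with per-coordinate threshold $\alpha / p_i$.

```python
import numpy as np

# prox under a diagonal metric P = diag(p) for phi(u) = alpha * ||u||_1:
# each coordinate solves min_u  0.5 * p_i * (u - v_i)^2 + alpha * |u|,
# whose solution is soft-thresholding at alpha / p_i.
def prox_l1_diag_metric(v, p, alpha):
    thresh = alpha / p
    return np.sign(v) * np.maximum(np.abs(v) - thresh, 0.0)

v = np.array([1.0, -0.3, 0.05])
p = np.array([2.0, 1.0, 0.5])
print(prox_l1_diag_metric(v, p, alpha=0.2))  # [ 0.9 -0.1  0. ]
```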

13 Criterion to be minimized
$$\underset{x \in \mathbb{R}^L}{\text{minimize}}\ F(x) + \sum_{j=1}^{J} R_j(x^{(j)})$$
Some solutions (non-exhaustive list, CPD-oriented):
Proximal Alternating Linearized Minimization (PALM) [Bolte et al., 2014];
a Block Coordinate Descent method for both CPD and Tucker decomposition [Xu and Yin, 2013];
an accelerated projected-gradient-based algorithm [Zhang et al., 2016];
the Block-Coordinate Variable Metric Forward-Backward (BC-VMFB) algorithm [Chouzenoux et al., 2016].
Advantages of BC-VMFB: flexible, stable, integrates preconditioning, relatively fast.

14 Block coordinate proximal algorithm
1: Let $x_0 \in \operatorname{dom} R$ and, for every $k \in \mathbb{N}$, $\gamma_k \in\, ]0, +\infty[$ // Initialization step
2: for $k = 0, 1, \ldots$ do // $k$-th iteration of the algorithm
3:   Let $j_k \in \{1, \ldots, J\}$ // block to be processed (chosen, here, according to a quasi-cyclic rule)
4:   Let $P_{j_k}(x_k)$ be an SPD matrix // construction of the preconditioner $P_{j_k}(x_k)$
5:   Compute $\nabla_{j_k} F(x_k)$ // calculation of the partial gradient
6:   $\tilde{x}^{(j_k)}_k = x^{(j_k)}_k - \gamma_k P_{j_k}(x_k)^{-1} \nabla_{j_k} F(x_k)$ // updating of block $j_k$ according to a gradient step
7:   $x^{(j_k)}_{k+1} \in \operatorname{prox}_{\gamma_k^{-1} P_{j_k}(x_k), R_{j_k}}(\tilde{x}^{(j_k)}_k)$ // updating of block $j_k$ according to a proximal step
8:   $x^{(\bar{j}_k)}_{k+1} = x^{(\bar{j}_k)}_k$, where $\bar{j}_k = \{1, \ldots, J\} \setminus \{j_k\}$ // other blocks are kept unchanged
9: end for
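A structural sketch of this iteration in Python (not the authors' implementation; `grad`, `precond` and `prox` are user-supplied callables, and the diagonal metric is stored entry-wise):

```python
import numpy as np

# x: list of J blocks (NumPy arrays); grad(x, j) returns the partial
# gradient for block j; precond(x, j) returns the diagonal of an SPD
# metric for block j; prox(v, M, j) applies prox of R_j in metric diag(M).
def bc_vmfb(x, grad, precond, prox, gamma, n_iter):
    J = len(x)
    for k in range(n_iter):
        j = k % J                                # quasi-cyclic block rule
        P = precond(x, j)                        # variable metric for block j
        x_tilde = x[j] - gamma * grad(x, j) / P  # gradient step on block j
        x[j] = prox(x_tilde, P / gamma, j)       # proximal step on block j;
    return x                                     # other blocks stay unchanged
```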

15 Prox for CP decomposition
CP decomposition: decompose a tensor into a (minimal) sum of rank-1 terms. Order 3:
$$\bar{\mathcal{T}} = \sum_{r=1}^{R} \bar{a}^{(1)}_r \circ \bar{a}^{(2)}_r \circ \bar{a}^{(3)}_r = [\![\bar{A}^{(1)}, \bar{A}^{(2)}, \bar{A}^{(3)}]\!]. \quad (1)$$
Tensor structure: naturally leads to considering 3 blocks corresponding to the loading matrices $A^{(1)}$, $A^{(2)}$ and $A^{(3)}$.
Proposed optimization problem:
$$\underset{A^{(n)} \in \mathbb{R}^{I_n \times R},\ n \in \{1,2,3\}}{\text{minimize}}\ F(A^{(1)}, A^{(2)}, A^{(3)}) + R_1(A^{(1)}) + R_2(A^{(2)}) + R_3(A^{(3)}).$$
Some of the fastest classical approaches: fast HALS [Phan et al., 2013] and N-way [Bro, 1997].

16 Tensor matricization
Let $\bar{T}^{(n)}_{I_n, I_{\bar{n}}} \in \mathbb{R}_+^{I_n \times I_{\bar{n}}}$ denote the matrix obtained by unfolding the tensor $\bar{\mathcal{T}}$ in the $n$-th mode, where the size $I_{\bar{n}}$ is equal to $I_1 I_2 I_3 / I_n$. The tensor can then be expressed under matrix form as
$$\bar{T}^{(n)}_{I_n, I_{\bar{n}}} = \bar{A}^{(n)} (Z^{(-n)})^\top, \quad n \in \{1, 2, 3\},$$
where
$$Z^{(-1)} = \bar{A}^{(3)} \odot \bar{A}^{(2)} \in \mathbb{R}_+^{I_{\bar{1}} \times R}, \quad Z^{(-2)} = \bar{A}^{(3)} \odot \bar{A}^{(1)} \in \mathbb{R}_+^{I_{\bar{2}} \times R}, \quad Z^{(-3)} = \bar{A}^{(2)} \odot \bar{A}^{(1)} \in \mathbb{R}_+^{I_{\bar{3}} \times R}.$$
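As a sanity check on these identities, here is a small NumPy sketch; the unfolding convention, chosen so that it matches $Z^{(-1)} = \bar{A}^{(3)} \odot \bar{A}^{(2)}$, is an assumption on my part:

```python
import numpy as np

# Mode-n unfolding consistent with T_(n) = A(n) (Z(-n))^T, with the
# remaining modes flattened in decreasing order (assumed convention).
def unfold(T, n):
    axes = [n] + sorted(set(range(T.ndim)) - {n}, reverse=True)
    return np.transpose(T, axes).reshape(T.shape[n], -1)

def khatri_rao(U, V):
    I, K = U.shape[0], V.shape[0]
    return np.einsum('ir,kr->ikr', U, V).reshape(I * K, -1)

rng = np.random.default_rng(1)
A1, A2, A3 = rng.random((4, 3)), rng.random((5, 3)), rng.random((6, 3))
T = np.einsum('ir,jr,kr->ijk', A1, A2, A3)
# Z(-1) = A3 ⊙ A2; the mode-1 unfolding equals A1 Z(-1)^T.
assert np.allclose(unfold(T, 0), A1 @ khatri_rao(A3, A2).T)
```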

17 Function choice
$F(A^{(1)}, A^{(2)}, A^{(3)})$: quadratic data fidelity term
$$F(A^{(1)}, A^{(2)}, A^{(3)}) = \tfrac{1}{2}\|\mathcal{T} - [\![A^{(1)}, A^{(2)}, A^{(3)}]\!]\|_F^2 = \tfrac{1}{2}\|T^{(n)}_{I_n, I_{\bar{n}}} - A^{(n)} (Z^{(-n)})^\top\|_F^2.$$
$R_n(A^{(n)})$: block-dependent penalty terms enforcing sparsity and nonnegativity
$$R_n(A^{(n)}) = \sum_{i_n=1}^{I_n} \sum_{r=1}^{R} \rho_n(a^{(n)}_{i_n r}), \quad n \in \{1, 2, 3\},$$
where the loading matrices are defined element-wise as $A^{(n)} = (a^{(n)}_{i_n r})_{(i_n, r) \in \{1,\ldots,I_n\} \times \{1,\ldots,R\}}$ and
$$\rho_n(\omega) = \begin{cases} \alpha^{(n)} \omega^{\pi^{(n)}} & \text{if } \eta^{(n)}_{\min} \le \omega \le \eta^{(n)}_{\max}, \\ +\infty & \text{otherwise,} \end{cases}$$
with $\alpha^{(n)} \in\, ]0, +\infty[$, $\pi^{(n)} \in \mathbb{N}$, $\eta^{(n)}_{\min} \in [0, +\infty[$ and $\eta^{(n)}_{\max} \in [\eta^{(n)}_{\min}, +\infty]$: block-dependent, but constant within a block, regularization parameters.
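As an illustration, a sketch evaluating this composite criterion for the special case $\pi^{(n)} = 1$ (my assumption for concreteness), where each penalty reduces to a weighted sum of the entries over the box domain:

```python
import numpy as np

# Composite criterion F + sum_n R_n for pi = 1 (assumption): quadratic
# fidelity plus weighted l1-type penalties with a box/nonnegativity domain.
def criterion(T, A, alphas, eta_min=0.0, eta_max=np.inf):
    if any(((An < eta_min) | (An > eta_max)).any() for An in A):
        return np.inf                            # outside dom R_n
    T_hat = np.einsum('ir,jr,kr->ijk', *A)       # CP model [[A1, A2, A3]]
    fidelity = 0.5 * np.sum((T - T_hat) ** 2)    # F: quadratic data term
    penalty = sum(a * An.sum() for a, An in zip(alphas, A))
    return fidelity + penalty
```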

18 Preconditioning
Preconditioning similar to the one used in NMF [Lee and Seung, 2001]. The matrix $P$ for the $n$-th block can be defined, for every $n \in \{1, 2, 3\}$, through the entry-wise quotient
$$P^{(n)}(A^{(1)}, A^{(2)}, A^{(3)}) = A^{(n)} \big( (Z^{(-n)})^\top Z^{(-n)} \big) \oslash A^{(n)}.$$
Remark: for every $n \in \{1, 2, 3\}$, the entries of $A^{(n)}$ must be nonzero.

19 Gradient and proximity operator
Gradient matrices of $F$ with respect to $A^{(n)}$, for all $n = 1, \ldots, 3$:
$$\nabla_n F(A^{(1)}, A^{(2)}, A^{(3)}) = -\big(T^{(n)}_{I_n, I_{\bar{n}}} - A^{(n)} (Z^{(-n)})^\top\big) Z^{(-n)}.$$
Proximity operator: for every $y = (y^{(i)})_{i \in \{1,\ldots,R I_n\}} \in \mathbb{R}^{R I_n}$,
$$\operatorname{prox}_{\gamma[k]^{-1} P^{(n)}[k],\, R_n}(y) = \Big( \operatorname{prox}_{\gamma[k]^{-1} p^{(n)}_i[k],\, \rho_n}\big(y^{(i)}\big) \Big)_{i \in \{1,\ldots,R I_n\}},$$
where, for every $i \in \{1, \ldots, R I_n\}$ and every $\upsilon \in \mathbb{R}$,
$$\operatorname{prox}_{\gamma[k]^{-1} p^{(n)}_i[k],\, \rho_n}(\upsilon) = \min\Big\{ \eta^{(n)}_{\max},\ \max\big\{ \eta^{(n)}_{\min},\ \operatorname{prox}_{\gamma[k] \alpha^{(n)} (p^{(n)}_i[k])^{-1} (\cdot)^{\pi^{(n)}}}(\upsilon) \big\} \Big\}$$
(separable structure, diagonal preconditioning matrices, component-wise calculation).
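Putting the pieces together, here is a sketch of one full block update for $\pi^{(n)} = 1$ (again an assumption; the names are mine). With $\gamma = 1$ and $\alpha^{(n)} = 0$, the gradient step reduces to the Lee-Seung multiplicative NMF update, which is the motivation for this choice of metric.

```python
import numpy as np

# One BC-VMFB block update for the CPD criterion with pi = 1 (assumed):
# Tn is the mode-n unfolding, Z the matching Khatri-Rao factor,
# A the current loading matrix of that block.
def block_update(Tn, A, Z, gamma, alpha, eta_min=0.0, eta_max=np.inf):
    eps = 1e-12                                 # guard: A must stay nonzero
    P = (A @ (Z.T @ Z)) / np.maximum(A, eps)    # diagonal metric, entry-wise
    grad = -(Tn - A @ Z.T) @ Z                  # partial gradient of F
    A_tilde = A - gamma * grad / P              # variable-metric gradient step
    A_new = A_tilde - gamma * alpha / P         # prox of alpha*w in metric P/gamma
    return np.clip(A_new, eta_min, eta_max)     # then clip to [eta_min, eta_max]
```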

20 Proximal algorithm for tensor decomposition
Figure: BC-VMFB algorithm for CPD (flowchart). Inputs: 1) initial loading matrices $A^{(n)}[0]$; 2) stepsize choice $\gamma$; 3) iteration $k = 0$. At each iteration: choose randomly a block $n \in \{1, 2, 3\}$ to be updated; form the preconditioner $P^{(n)}[k]$ and the partial gradient $\nabla_n[k]$; gradient step $\tilde{A}^{(n)}[k] = A^{(n)}[k] - \gamma\, P^{(n)}[k]^{-1} \nabla_n[k]$; proximal step $\operatorname{prox}_{\gamma^{-1} P^{(n)}[k],\, R_n}(\tilde{A}^{(n)}[k])$; other blocks unchanged at iteration $k$; set $k \leftarrow k + 1$ until the stopping criterion is reached. Outputs: estimated loading matrices $\hat{A}^{(n)}$.

21 Computer simulation: simulated spectroscopy-like data
Simulated tensor: (uni- or bimodal type) emission and excitation spectra, random concentrations; $\bar{\mathcal{T}} \in \mathbb{R}_+^{I_1 \times I_2 \times I_3}$ and $\bar{R} = 5$.
Simulated observed tensor: $\mathcal{T} = \bar{\mathcal{T}} + \mathcal{B}$, where $\mathcal{B}$ stands for an additive white Gaussian noise.
2 considered cases:
1. Perturbed case (noiseless): no noise added and $R = 6$ (overestimation).
2. Perturbed case (noisy): $\mathcal{B}$ fixed such that SNR = 17.6 dB and $R = 6$ (overestimation).
Error measures:
1. Signal-to-Noise Ratio: $\mathrm{SNR} = 20 \log_{10}\big( \|\bar{\mathcal{T}}\|_F / \|\mathcal{T} - \bar{\mathcal{T}}\|_F \big)$
2. Relative Reconstruction Error: $\mathrm{RRE} = 20 \log_{10}\big( \|\hat{\mathcal{T}} - \bar{\mathcal{T}}\|_1 / \|\bar{\mathcal{T}}\|_1 \big)$
3. Estimation error: $E_1 = \frac{1}{3} \sum_{n=1}^{3} 10 \log_{10}\big( \|\hat{A}^{(n)}(1{:}\bar{R}) - \bar{A}^{(n)}\|_1 / \|\bar{A}^{(n)}\|_1 \big)$
4. Over-factoring error: $E_2 = 10 \log_{10} \big\| \sum_{r=\bar{R}+1}^{R} \hat{a}^{(1)}_r \circ \hat{a}^{(2)}_r \circ \hat{a}^{(3)}_r \big\|_1$
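A sketch of the first two error measures as reconstructed above (the norm choices, Frobenius for the SNR and $\ell_1$ for the RRE, are assumptions):

```python
import numpy as np

# SNR of the observation T against the clean tensor T_bar (Frobenius norms).
def snr_db(T_bar, T):
    return 20 * np.log10(np.linalg.norm(T_bar) / np.linalg.norm(T - T_bar))

# Relative reconstruction error of the estimate T_hat (l1 norms assumed).
def rre_db(T_hat, T_bar):
    return 20 * np.log10(np.abs(T_hat - T_bar).sum() / np.abs(T_bar).sum())
```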

22 Numerical results
Computation time comparison of BC-VMFB in two cases (with or without penalty) with N-way [Bro, 1997] and fast HALS [Phan et al., 2013], using the same initial value, in the noiseless and noisy cases. Iterations needed to reach the stopping conditions, and resulting error measures (column order: BC-VMFB without penalty | BC-VMFB with penalty | N-way | fast HALS):

Noisy case
  iterations to reach stopping conditions:  (485) | (365) | (43) | (1856)
  (SNR, E1, E2) in dB:  (31.3, -12.5, 3.6) | (32.7, -11.2, -49) | (31.3, -12.5, 3.6) | (31.3, -12.5, 3.6)

Noiseless case
  iterations to reach stopping conditions:  (1) | (365) | (838) | (38)
  (RRE, E1, E2) in dB:  (-75.1, -12.4, 25.6) | (-44.7, -15, -49) | (-127.9, -8.7, 31.7) | (-63.9, -6.1, 31.7)

23 Influence of the initialization
Figure: performance versus different initializations (noisy, overestimated case): error index E1 (dB) and over-factoring index E2 (dB), for BC-VMFB with penalty, BC-VMFB without penalty, N-way and fHALS.

24 Visual results: noiseless case
Figure: FEEM of reference (left); FEEM reconstructed using BC-VMFB without regularization (middle) and with regularization, α = 0.5 (right).

25 Visual results: noiseless case
Figure: R = 6; reference spectra / BC-VMFB without penalty / BC-VMFB with penalty, α = 0.5. Panels: excitation spectra, emission spectra, concentrations versus experiment.

26 Visual results: noisy case
Figure: FEEM of reference (left); FEEM reconstructed using BC-VMFB without regularization (middle) and with regularization, α = 0.5 (right).

27 Visual results: noisy case
Figure: R = 6; reference spectra / BC-VMFB without penalty / BC-VMFB with penalty, α = 0.5. Panels: excitation spectra, emission spectra, concentrations versus experiment.

28 Computer simulation: real experimental data - water monitoring to detect pollutants
Data were acquired automatically every 3 minutes during a 10-day monitoring campaign performed on water extracted from an urban river, yielding a third-order data tensor. The excitation wavelengths range from 225 nm to 400 nm with a 5 nm bandwidth, whereas the emission wavelengths range from 280 nm to 500 nm with a 2 nm bandwidth. The FEEMs have been pre-processed using Zepp's method (negative values were set to 0).
Contamination: during this experiment, a contamination with diesel oil appeared 7 days after the beginning of the monitoring.

29 Results: what about the rank?
Figure: estimated FEEMs of components (1)-(4): penalized BC-VMFB algorithm (left) vs. Bro's N-way algorithm (right). Case R = 4.

30 Results: what about the rank?
Figure: estimated FEEMs of components (1)-(6): penalized BC-VMFB algorithm (left) vs. Bro's N-way algorithm (right). Case R = 6.

31 Results: concentrations
Figure: normalized concentrations of components (1)-(4) versus experiment: penalized BC-VMFB algorithm (left) vs. Bro's N-way algorithm (right). Case R = 4.

32 Concentrations estimated by BC-VMFB
Figure: normalized concentrations of components (1)-(4) versus experiment. Case R = 4.

33 Concentrations estimated by BC-VMFB
Figure (case R = 4).

34 Figure: normalized concentrations of components (1)-(6) versus experiment: penalized BC-VMFB algorithm (left) vs. Bro's N-way algorithm (right). Case R = 6.

35 Concentrations estimated by BC-VMFB
Figure: normalized concentrations of components (1)-(6) versus experiment. Case R = 6.

36 Concentrations estimated by BC-VMFB
Figure (case R = 6).

38 Conclusion
Clear theoretical and mathematical framework for the CPD; interesting properties of the proposed approach: reliability, robustness versus noise and overestimation of the rank, good performance despite model errors, and relative quickness; promising results on simulated and real data.
Perspectives: extension to higher-order tensors (order N; LVA-ICA Grenoble, Feb. 2017); possibility of considering missing data; study of other preconditioning strategies.
Thank you! Questions?
