Towards A New Multiway Array Decomposition Algorithm: Elementwise Multiway Array High Dimensional Model Representation (EMAHDMR)
MUZAFFER AYVAZ
İTU Informatics Institute, Computational Sci. and Eng. Program, İTU Ayazağa Campus, 34469, İstanbul, TÜRKİYE (Turkey), muzaffer.ayvaz@be.itu.edu.tr

METİN DEMİRALP
İTU Informatics Institute, Computational Sci. and Eng. Program, İTU Ayazağa Campus, 34469, İstanbul, TÜRKİYE (Turkey), metin.demiralp@be.itu.edu.tr

Abstract: In this study, the very early steps of a new algorithm for multiway array decomposition via High Dimensional Model Representation (HDMR) are proposed. HDMR was originally developed for multivariate functions and represents them as a sum of lower variate functions including a constant term; HDMR is therefore an inherent candidate for decomposing multiway arrays. The proposed algorithm represents a given multiway array valued multivariate function as the sum of same-type multiway array valued functions with lower multivariances, starting from constancy and ascending in multivariance. This algorithm generalizes and unifies the recently proposed Vector HDMR and Matrix HDMR algorithms by Demiralp and his group, and thus illuminates the big picture of a new family of multiway array decomposition algorithms based on High Dimensional Model Representation.

Key Words: Multilinear algebra, multidimensional arrays, tensor decomposition, singular value decomposition, High Dimensional Model Representation.

1 Introduction

In the last decades, multilinear algebra has become an important scientific field, with the primary emphasis on decomposing a multiway array (also called a multidimensional array, tensor, N-way array, etc.).
Since processing large amounts of multidimensional data generally requires compression, decomposing a multiway array and approximating this N dimensional array by lower rank N-way arrays are of great importance in areas including psychometrics, chemometrics, neuroscience, signal and image processing, network analysis, and human motion recognition, among others [1, 2]. The Tucker and CANDECOMP/PARAFAC decompositions are the most famous. The Tucker decomposition represents a given multiway array as a reduced dimensional core multiway array multiplied along each mode by orthonormal matrices. HOSVD and N-way SVD (SVD abbreviates singular value decomposition) are the originally proposed algorithms for computing the Tucker decomposition of a multiway array [1]. Since these algorithms are natural generalizations of the matrix SVD, they preserve some of its properties, such as orthogonality and the equality of the Frobenius norms of the given multiway array and the core multiway array. However, some very important properties of the matrix SVD are not preserved by these algorithms. While the matrix SVD transforms a given matrix to diagonal form, HOSVD transforms a given multiway array only to a dimensionally reduced multiway array. While the matrix SVD guarantees the best rank-1, rank-2, ..., rank-n approximations to a matrix by the Eckart-Young theorem, HOSVD does not guarantee the best rank-n approximation [1, 2] (see footnote 1). CANDECOMP (canonical decomposition) and PARAFAC (parallel factors) approximate the given multiway array by a finite sum of rank-one tensors. The main problem for the algorithms that produce the CANDECOMP/PARAFAC decomposition is determining the rank of the given higher order array [2]. Only an upper bound on the rank of a multiway array is known, by Kruskal's theorem [2, 4, 5]. Determining the tensor (multiway array) rank has recently been proven to be an NP hard problem.
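The Tucker/HOSVD construction described above can be sketched in a few lines of NumPy. The snippet below is our illustration, not code from the paper: the factor matrices are the left singular vectors of the mode unfoldings, and with square orthogonal factors the reconstruction is exact, while the Frobenius norms of the core and the original array coincide, as stated above.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T):
    """HOSVD sketch: factor matrices are the left singular vectors of each
    mode unfolding; the core is T multiplied along each mode by U_n^T."""
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0] for n in range(T.ndim)]
    S = T
    for n, Un in enumerate(U):
        # mode-n product of S with Un.T
        S = np.moveaxis(np.tensordot(Un.T, S, axes=(1, n)), 0, n)
    return S, U

T = np.random.rand(3, 4, 5)
S, U = hosvd(T)

# Reconstruction: multiply the core along each mode by the factor matrix.
R = S
for n, Un in enumerate(U):
    R = np.moveaxis(np.tensordot(Un, R, axes=(1, n)), 0, n)

assert np.allclose(R, T)
# Orthogonality of the factors preserves the Frobenius norm:
assert np.isclose(np.linalg.norm(S), np.linalg.norm(T))
```

Truncating the columns of each factor matrix gives a lower multilinear-rank approximation, but, unlike the matrix case, not necessarily the best one, which is exactly the Eckart-Young failure discussed above.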
Despite these many difficulties in defining the rank of multiway arrays, many algorithms have been presented in the literature. Most of the introduced algorithms are iterative procedures that run until the desired accuracy is achieved. Higher order generalizations of the power method and of Rayleigh Quotient Iteration have been presented to calculate lower rank approximations to a multiway array. These algorithms do not guarantee convergence to global optimum points; for different initial values, they may converge to different local optimum points [6]. Simultaneous diagonalization of a set of matrices is also an effective way to compute a low rank approximation of a multiway array, or to compute the CANDECOMP/PARAFAC decomposition of the tensor under consideration. Several methods have been developed, including simultaneous generalized Schur decomposition, simultaneous EVD, and Jacobi-type algorithms. Jacobi-type (also called Jacobi-like) algorithms are generalizations of the Jacobi algorithm for matrix diagonalization [7]. For specifically structured multiway arrays, such as symmetric multiway arrays and multiway arrays having Hermitian slices, Jacobi-type algorithms are presented in the literature [8]. A more general form of the Jacobi-like algorithm has been presented for third order multiway arrays to maximize the trace and the square sum of the elements on the multiway array diagonal. This algorithm produces quite good results in the sense of diagonal dominancy as measured by the Frobenius norm [7] (see footnote 2).

In the last 3-4 years, new multiway array representation algorithms named TT (tensor train) and QTT (quantics tensor train) have also been proposed [9-11]. TT and QTT represent a given multiway array as products of lower dimensional multiway arrays and can be computed by a recursive procedure. These representations have been used in some high complexity applications like the multiparticle Schrödinger equation and the DMRG (density matrix renormalization group) algorithm, and have produced very promising results in breaking the bounds of the curse of dimensionality (see footnote 3).

Footnote 1: A counterexample to the Eckart-Young theorem for the multidimensional case is given by Tamara Kolda. The interested reader should see reference [3].
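The recursive procedure behind TT can be sketched as a sequence of SVDs applied to reshaped unfoldings (the TT-SVD idea of [9, 11]). The following NumPy snippet is our illustration, kept exact with no rank truncation; in practice the singular values would be truncated at each step to obtain compression.

```python
import numpy as np

def tt_svd(T):
    """Exact TT decomposition: sweep over the modes; at each step an SVD
    splits off one index. Core G[k] has shape (r_{k-1}, n_k, r_k)."""
    shape = T.shape
    cores, r = [], 1
    C = T.reshape(r * shape[0], -1)
    for k, n in enumerate(shape[:-1]):
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        rk = len(s)  # keep full rank (no truncation)
        cores.append(U.reshape(r, n, rk))
        C = s[:, None] * Vt
        r = rk
        if k + 1 < len(shape) - 1:
            C = C.reshape(r * shape[k + 1], -1)
    cores.append(C.reshape(r, shape[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract the train of cores back into a full array."""
    R = cores[0]
    for G in cores[1:]:
        R = np.tensordot(R, G, axes=(-1, 0))
    return R.reshape([G.shape[1] for G in cores])

T = np.random.rand(2, 3, 4, 5)
cores = tt_svd(T)
assert np.allclose(tt_to_full(cores), T)
```

Because each core carries only one mode index, storage grows linearly in the number of modes for bounded TT ranks, which is the sense in which TT breaks the curse of dimensionality.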
In this study, the introductory steps of the theory of a new framework, Elementwise Multiway Array HDMR, are discussed in some detail. The proposed algorithm represents a given multidimensional array valued function as the sum of same-type multiway array valued functions with lower multivariances, starting from constancy. A single constant multiway array of the same type as the target array is followed by same-type multiway array valued univariate functions. Then bivariate arrays come, and so on. In the representation, all components are arrays of the same type. The changing property is the multivariance, as in Vector HDMR [13] and Matrix HDMR [14]. Since this algorithm is based on HDMR, it can be used both for function valued multiway arrays and in applications where only discrete data is available. The theory presented in this paper can be extended straightforwardly to EMPR (Enhanced Multivariate Product Representation), which is beyond the scope of this paper.

The remaining part of the paper is organized as follows. The second section is devoted to the explanation of High Dimensional Model Representation. The detailed description of the proposed algorithm is given in the third section. The folded weight matrix construction, which is the most important part of the algorithm, is explained in the fourth section. The paper is finalized with future directions and concluding remarks in the fifth section.

Footnote 2: This paper does not cover a full literature review of the presented works. A comprehensive review of multiway array decomposition and low rank approximation algorithms can be found in reference [2].

Footnote 3: The term curse of dimensionality was originally used by Bellman [12].

2 Basics of High Dimensional Model Representation

High Dimensional Model Representation, originally introduced by I. M.
Sobol, inspired by the theoretical work of Kolmogorov [15], represents a multivariate function as a finite sum of lower variate, mutually orthogonal functions and a constant term as follows [16]:

$$ f(x_1,\ldots,x_N) = f_0 + \sum_{i_1=1}^{N} f_{i_1}(x_{i_1}) + \sum_{\substack{i_1,i_2=1\\ i_1<i_2}}^{N} f_{i_1 i_2}(x_{i_1},x_{i_2}) + \cdots + f_{12\ldots N}(x_1,\ldots,x_N) \qquad (1) $$

The right hand side of the above equation consists of 2^N terms. When all 2^N components are used, the multivariate function under consideration is expressed exactly. It is observed that truncation at most at the bivariate terms produces approximations of acceptable quality to the function under consideration. All the right hand side components of HDMR can be determined uniquely under the vanishing-under-integration condition [16-18]:

$$ \int_{[a_i,b_i]} dx_i\, w_i(x_i)\, f_{i_1 i_2 \ldots i_k}(x_{i_1},x_{i_2},\ldots,x_{i_k}) = 0, \qquad i \in \{\, i_1, i_2, \ldots, i_k \,\} \qquad (2) $$

Here, w_i(x_i) denotes a weight function which has only one independent variable. The integration of this
weight function over the domain of its independent variable should be equal to one for simplicity:

$$ \int_{[a_i,b_i]} w_i(x_i)\, dx_i = 1 \qquad (3) $$

The vanishing-under-integration condition also assures the orthogonality of the right hand side components of the HDMR expansion. Using the abovementioned properties, every component of the expansion can be determined as follows:

$$ f_0 = \int_{\Omega_N} D[x] \prod_{k=1}^{N} w_k(x_k)\, f(x_1,\ldots,x_N) \qquad (4) $$

$$ f_i(x_i) = \int_{\Omega_{N-1}} D[x \setminus \{x_i\}] \prod_{\substack{k=1\\ k\neq i}}^{N} w_k(x_k)\, f(x_1,\ldots,x_N) \;-\; f_0 \qquad (5) $$

$$ f_{ij}(x_i,x_j) = \int_{\Omega_{N-2}} D[x \setminus \{x_i,x_j\}] \prod_{\substack{k=1\\ k\neq i,j}}^{N} w_k(x_k)\, f(x_1,\ldots,x_N) \;-\; f_0 - f_i(x_i) - f_j(x_j) \qquad (6) $$

Here, Ω_N denotes an N dimensional hyperprism while D[x] represents integration over all independent variables; D[x \ {x_i}] stands for integration over all independent variables except x_i, and so on. It is quite important to note here that the total weight function used in the integrations has a multiplicative nature:

$$ W(x_1,\ldots,x_N) = \prod_{i=1}^{N} w_i(x_i) \qquad (7) $$

There may be many measures for the quality of the approximation. The most preferred ones in the literature are the additivity measures [18-29]:

$$ \sigma_0 = \frac{\|f_0\|^2}{\|f\|^2}, \qquad \sigma_1 = \frac{1}{\|f\|^2} \sum_{i=1}^{N} \|f_i\|^2 + \sigma_0, \qquad \ldots, \qquad \sigma_N = \frac{\|f_{12\ldots N}\|^2}{\|f\|^2} + \sigma_{N-1} \qquad (8) $$

Here, ‖f‖² denotes the norm square, that is, the inner product of the function under consideration with itself. It is quite easy to notice that all σ values are well ordered:

$$ 0 \le \sigma_0 \le \sigma_1 \le \cdots \le \sigma_N = 1 \qquad (9) $$

The presented HDMR method is generally called Plain HDMR. There are many other HDMR methods, like Generalized HDMR [24], Logarithmic HDMR, Factorized HDMR [26], Hybrid HDMR [23, 27], RS-HDMR [28], Transformational HDMR, and EMPR, among others. All these methods can produce different results depending on the function under consideration. In this study, only the Plain HDMR method is used for function valued multiway arrays and their decomposition. However, it is possible to extend the presented algorithm to the other methods and to discrete data sets.
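As a minimal numerical illustration of (2), (4)-(6), and (8) (ours, not from the paper), take f(x1, x2) = x1 + x2 + x1 x2 on [0, 1]^2 with the uniform weight w_i ≡ 1. Gauss-Legendre quadrature reproduces the constant term, the vanishing-under-integration condition, and the additivity measures summing to one:

```python
import numpy as np

# f(x1, x2) = x1 + x2 + x1*x2 on [0,1]^2 with the uniform weight w_i = 1.
f = lambda x1, x2: x1 + x2 + x1 * x2

# Gauss-Legendre quadrature mapped from [-1,1] to [0,1]; weights sum to 1.
t, w = np.polynomial.legendre.leggauss(20)
x = 0.5 * (t + 1.0)
w = 0.5 * w

X1, X2 = np.meshgrid(x, x, indexing="ij")
W2 = np.outer(w, w)          # product weight on the grid, eq. (7)
F = f(X1, X2)

f0 = np.sum(W2 * F)          # eq. (4): integrate over both variables
f1 = F @ w - f0              # eq. (5): integrate out x2
f2 = w @ F - f0              # eq. (5): integrate out x1
f12 = F - f0 - f1[:, None] - f2[None, :]   # eq. (6): bivariate residual

assert np.isclose(f0, 1.25)              # 1/2 + 1/2 + 1/4
# vanishing-under-integration condition, eq. (2):
assert np.isclose(f1 @ w, 0.0)
assert np.isclose(np.sum(W2 * f12), 0.0)
# additivity measures, eq. (8): the last sigma must equal 1.
nrm2 = np.sum(W2 * F**2)
s0 = f0**2 / nrm2
s1 = s0 + (f1**2 @ w + f2**2 @ w) / nrm2
s2 = s1 + np.sum(W2 * f12**2) / nrm2
assert np.isclose(s2, 1.0)
```

Here s1 being close to 1 already indicates that a truncation at the univariate terms is a good approximation for this f, in line with the remark above that bivariate truncations usually suffice.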
3 Elementwise Multiway Array High Dimensional Model Representation (EMAHDMR)

In this section, we demonstrate how to evaluate the HDMR components, in particular the constant term, of a multiway array valued multivariate function F_{i_1 i_2 ... i_N}(x_1, x_2, ..., x_M). We do not intend to expand this entity to HDMR with respect to its indices. The indices will be kept unchanged in the expansion, which will therefore be in the independent variables x_1, ..., x_M. Hence what we are going to obtain will be called Elementwise Multiway Array High Dimensional Model Representation (EMAHDMR). F_{i_1 i_2 ... i_N}(x_1, x_2, ..., x_M) represents an N index array each of whose entries is an M variate function. It is important to remark here that, if the function values are only discretely available, F_{i_1 i_2 ... i_N}(x_1, ..., x_M) represents an (N + M) index multiway array. The simplest case is an M variate function F(x_1, ..., x_M), which can be represented by Plain HDMR. The second and third cases are F_{i_1}(x_1, ..., x_M) and F_{i_1 i_2}(x_1, ..., x_M), which can be represented by Vector HDMR [13] and Matrix HDMR [14] respectively, the algorithms recently developed in related works. The most general case is F_{i_1 i_2 ... i_N}(x_1, ..., x_M), and its HDMR expansion, when we keep the indices unchanged in the components, is as follows:

$$ F_{i_1 i_2 \ldots i_N}(x_1,\ldots,x_M) = F_0^{(i_1 i_2 \ldots i_N)} + \sum_{j_1=1}^{M} F_{j_1}^{(i_1 i_2 \ldots i_N)}(x_{j_1}) + \cdots + F_{12\ldots M}^{(i_1 i_2 \ldots i_N)}(x_1,\ldots,x_M) \qquad (10) $$
where we have used the superscript (i_1 i_2 ... i_N) to show the index dependence explicitly (especially its unchanging nature). Here F_0^{(i_1 i_2 ... i_N)} denotes an N index array whose elements are constants, F_{j_1}^{(i_1 i_2 ... i_N)}(x_{j_1}) stands for an N index array whose elements are univariate functions of the same independent variable x_{j_1}, and so on. The evaluation of F_0^{(i_1 i_2 ... i_N)} can be realized explicitly using (4), and the result is as follows:

$$ F_0^{(i_1 i_2 \ldots i_N)} = \int_{\Omega_M} D[x] \left[ \prod_{k=1}^{M} w_k(x_k) \right] F_{i_1 i_2 \ldots i_N}(x_1,\ldots,x_M) \qquad (11) $$

In this form the product type weight stands independent of the i indices. This means there is no difference between this representation and plain HDMR, since a single HDMR decomposes all elements of the multiway array valued function in the same fashion. However, our purpose is to bring more flexibility to this type of HDMR. This can be done by extending the weight function structure. We are going to focus on this issue in the next section.

4 Weight Considerations in Elementwise Multiway Array High Dimensional Model Representation

The product type weight in (11) is too restricted, as we stated above: the weight does not depend on the i indices. So, as an extension, we may bring index dependence into each factor of the weight. Then the overall weight can be rewritten as follows:

$$ W(x_1,\ldots,x_M) \equiv \prod_{k=1}^{M} W_k^{(\{i\})}(x_k) \qquad (12) $$

where we have used the shorthand notation {i} to represent all i indices. This extension separates the HDMR for each individual element of the EMAHDMR above. However, it does not permit interaction between different elements of the multiway array, whereas this interaction is quite important for controlling the decomposition, since it can bring more flexibility, as the Vector and Matrix HDMRs do. Therefore it is better to extend the weight definition of (12) further and to write the following formula for the new extension:
$$ W(x_1,\ldots,x_M) \equiv \prod_{k=1}^{M} W_k^{(\{i\},\{j\})}(x_k) \qquad (13) $$

where the new indices denoted by the shorthand notation {j} are individually symbolized as j_1, ..., j_N, and each j_k corresponds to i_k by taking the same positive integer values. We have deliberately used a comma to distinguish the roles of the i and j indices. As a matter of fact, the i indices play the role of the row indices of a matrix while the j indices somehow correspond to the column indices of the same matrix. Now we propose to employ another widely used shorthand notation, the summation convention. The statement of the rule is as follows: if any index is repeated in a formula, then there is an implicit sum in the formula over all possible values of that index; the more repeated indices, the more sums. With this rule we can rewrite (11) as follows:

$$ F_0^{(\{i\})} = \int_{\Omega_M} D[x] \left[ \prod_{k=1}^{M} W_k^{(\{i\},\{j\})}(x_k) \right] F_{\{j\}}(x_1,\ldots,x_M) \qquad (14) $$

where the multiway arrays given below

$$ \left[\, W_k(x_k) \,\right] \equiv \left[\, W_k^{(\{i\},\{j\})}(x_k) \,\right], \qquad k = 1,2,\ldots,M \qquad (15) $$

act as if they were matrices. Each array maps a multiway array with N indices to another multiway array with N indices; it behaves like a square matrix. We call this entity a folded matrix, or briefly a folmat [14]. This is because each row of a matrix can be divided into segments and those segments reordered to get a multiway array, and so can the columns of the same matrix, of course, by obeying certain compatibility conditions. Folded matrices, or folmats in our shorthand terminology, can be added, multiplied by scalars, and multiplied with each other, as in the case of matrices, the standard items of linear algebra. In the addition procedure, the sum of the corresponding elements of two folmats gives the corresponding element of the resulting folmat. Similarly, multiplication by a scalar produces a folmat whose elements are the products of the corresponding elements of the target folmat by that scalar.
In the multiplication of two folmats, first a set of row index values and a set of column index values are chosen; then the elements of the first folmat having these row index values are multiplied by the elements of the second folmat having these column index values, whenever the column indices of the first factor match the row indices of the second factor. The sum over all possible matching index values gives the corresponding element of the resulting folmat, which has the same row indices as the first folmat element and the same column indices as the second folmat element. All elements of the zero folmat, which is the neutral element of addition, are zero, while the unit or identity
folmat, which does not change its operand in multiplication, is defined as follows:

$$ I \equiv \left[\, \prod_{k=1}^{N} \delta_{i_k j_k} \,\right] \qquad (16) $$

where the row indices i_k and the column indices j_k take values from the index domain of the multiway array under consideration for HDMR, and δ stands for Kronecker's delta. The weight folmats above have univariate function elements. To facilitate the HDMR construction, they are expected to possess the following features:

- The integral of each weight folmat factor over its independent variable and the corresponding interval should be equal to the unit or identity folmat:

$$ \int_{[a_k,b_k]} W_k(x_k)\, dx_k = I, \qquad k = 1,\ldots,M \qquad (17) $$

- All folded weight matrices must commute with each other:

$$ W_i W_j - W_j W_i = 0, \qquad i,j \in \{1,2,\ldots,M\} \qquad (18) $$

- All folded weight matrices should be positive definite and symmetric.

Since the unfolding of folmats to ordinary matrices (two index arrays) converts all folmat sums and folmat products to ordinary matrix sums and products, EMAHDMR can in fact be considered a folded version of Vector HDMR [13]. Even Matrix HDMR [14] can be related to this case via appropriate foldings. Of course, unfolding is more suitable for computational purposes on computers, as long as they use linear arrays for data storage. However, in the sense of lumping, EMAHDMR may be preferred. Depending on the application area and/or the structure of the multiway arrays under consideration, EMAHDMR may sometimes mean difficulty and sometimes flexibility. Setting a strategy to find the optimum unfolding of the multiway arrays for special types is left as future work. Before closing the section, we need to mention how the weight folmat is chosen. Of course, the easiest form is one proportional to the unit folmat. The next easiest case involves the super diagonal folmats, where each element is proportional to the product of the Kronecker deltas appearing in the definition of the unit folmat, with element dependent proportionality functions. We do not intend to go beyond this level of discussion here.
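The folmat arithmetic described in this section can be checked numerically. The sketch below is our illustration (the 2x3 index domain is an arbitrary choice): a folmat acting on 2x3 arrays is stored as a 4-index array, the folmat product is the implicit sum over the matching column/row index pairs, and unfolding turns folmat products into ordinary matrix products, as claimed above.

```python
import numpy as np

# A folmat acting on 2x3 arrays: a 4-index array W[i1, i2, j1, j2],
# where (i1, i2) are the folded row indices and (j1, j2) the column indices.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3, 2, 3))
B = rng.standard_normal((2, 3, 2, 3))

# Folmat product: implicit sum over the matching column/row index pairs.
C = np.einsum("abkl,klcd->abcd", A, B)

# Unfolding each folmat to an ordinary 6x6 matrix converts the folmat
# product into a plain matrix product.
unfold = lambda W: W.reshape(6, 6)
assert np.allclose(unfold(C), unfold(A) @ unfold(B))

# The identity folmat, eq. (16): a product of Kronecker deltas.
I = np.einsum("ik,jl->ijkl", np.eye(2), np.eye(3))
assert np.allclose(np.einsum("abkl,klcd->abcd", I, A), A)

# Applying a folmat to an N-index array, as the weight does in eq. (14):
V = rng.standard_normal((2, 3))
assert np.allclose(np.einsum("abkl,kl->ab", I, V), V)
```

This is exactly the unfolding correspondence used above to relate EMAHDMR to Vector HDMR: every folmat statement has a literal matrix counterpart after reshaping.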
5 Conclusion

In this study, the fundamental concepts of the theory of a new family of multiway array valued multivariate function decomposition methods based on the Plain HDMR philosophy are discussed in some detail. The proposed method is a generalization of the Vector HDMR [13] and Matrix HDMR [14] methods. Even though the method presented here is given for multiway array valued functions, it is quite straightforward to use it for discretely available data, as is done for data partitioning through HDMR. An interpolation stage may take us from discreteness to the realm of functions. This kind of research is at the focus of our group and will be reported elsewhere in the future. The presented method is also a powerful candidate for the decomposition of block structured multiway arrays. With the same philosophy mentioned in this study, it is possible to develop more efficient methods based on Enhanced Multivariate Product Representation (EMPR), which is left as future work.

Acknowledgements: The first author thanks the Scientific and Technological Research Council of Turkey (TUBITAK) for its support, and the second author is grateful to the Turkish Academy of Sciences, where he is a principal member, for its support and motivation.

References:

[1] De Lathauwer, L.; De Moor, B.; Vandewalle, J.: A Multilinear Singular Value Decomposition, SIAM J. Matrix Anal. Appl., Vol. 21, No. 4.
[2] Kolda, T.G.; Bader, B.W.: Tensor Decompositions and Applications, SIAM Review, Vol. 51, No. 3.
[3] Kolda, T.G.: A Counterexample to the Possibility of an Extension of the Eckart-Young Low-Rank Approximation Theorem for the Orthogonal Rank Tensor Decomposition, SIAM J. Matrix Anal. Appl., Vol. 24, No. 3.
[4] Kruskal, J.B.: Three-Way Arrays: Rank and Uniqueness of Trilinear Decompositions with Applications to Arithmetic Complexity and Statistics, Linear Algebra and its Applications, Vol. 18, 1977.
[5] Kruskal, J.B.: Rank, Decomposition, and Uniqueness for 3-Way and N-Way Arrays, in Multiway Data Analysis, R. Coppi and S. Bolasco (Eds.), pp. 7-18, North-Holland, 1989.
[6] Zhang, T.; Golub, G.H.: Rank-One Approximation to High Order Tensors, SIAM J. Matrix Anal. Appl., Vol. 23, No. 2, 2001.
[7] Martin, C.D.M.; Van Loan, C.F.: A Jacobi-Type Method for Computing Orthogonal Tensor Decompositions, SIAM J. Matrix Anal. Appl., Vol. 30, No. 3.
[8] Badeau, R.; Boyer, R.: Fast Multilinear Singular Value Decomposition for Structured Tensors, SIAM J. Matrix Anal. Appl., Vol. 30, No. 3.
[9] Oseledets, I.V.; Tyrtyshnikov, E.E.: Breaking the Curse of Dimensionality, or How to Use SVD in Many Dimensions, SIAM J. Sci. Comput., Vol. 33, No. 5.
[10] Oseledets, I.V.: Approximation of 2^d x 2^d Matrices Using Tensor Decompositions, SIAM J. Matrix Anal. Appl., Vol. 31, No. 4.
[11] Oseledets, I.V.; Tyrtyshnikov, E.E.: Recursive Decomposition of Multidimensional Tensors, Doklady Mathematics, Vol. 80, No. 1.
[12] Bellman, R.E.: Dynamic Programming, Princeton University Press, 1957.
[13] Tunga, B.; Demiralp, M.: Basic Issues in Vector High Dimensional Model Representation, 9th International Conference of Numerical Analysis and Applied Mathematics, ICNAAM 2011 (to be published).
[14] Tuna, S.; Demiralp, M.: Matrix HDMR with the Weight Matrices Generated by Subspace Construction, 9th International Conference of Numerical Analysis and Applied Mathematics, ICNAAM 2011 (to be published).
[15] Kolmogorov, A.N.: On the Representation of Continuous Functions of Many Variables by Superposition of Continuous Functions of One Variable and Addition, English translation: American Math. Soc. Transl., 2, 28, 1963.
[16] Sobol, I.M.: Sensitivity Estimates for Nonlinear Mathematical Models, English translation: MMCE, Vol. 1, No. 4, 1993.
[17] Alış, Ö.F.; Rabitz, H.: General Foundations of High Dimensional Model Representation, Journal of Mathematical Chemistry, 25, 1999.
[18] Alış, Ö.F.; Shorter, J.; Shim, K.; Rabitz, H.: Efficient Input-Output Model Representation, Computer Physics Communications, 117, 1999.
[19] Alış, Ö.F.; Rabitz, H.: Efficient Implementation of HDMR, Journal of Mathematical Chemistry, 29-2, 2001.
[20] Li, G.; Rosenthal, C.; Rabitz, H.: High Dimensional Model Representation, Journal of Physical Chemistry A, 2001.
[21] Demiralp, M.: High Dimensional Model Representation and Its Application Varieties, The Fourth International Conference on Tools for Mathematical Modeling, St. Petersburg, Russia, June 23-28, 2003.
[22] Kurşunlu, A.; Demiralp, M.: Additive and Factorized HDMR Applications to the Multivariate Diffusion Equation Under Vanishing Derivative Boundary Conditions, The Fourth International Conference on Tools for Mathematical Modeling, St. Petersburg, Russia, June 23-28, 2003.
[23] Tunga, B.; Demiralp, M.: Hybrid HDMR Approximants and Their Utilization in Applications, The Fourth International Conference on Tools for Mathematical Modeling, St. Petersburg, Russia, June 23-28, 2003.
[24] Tunga, M.A.; Demiralp, M.: Data Partitioning via Generalized HDMR and Multivariate Interpolative Applications, The Fourth International Conference on Tools for Mathematical Modeling, St. Petersburg, Russia, June 23-28, 2003.
[25] Yaman, İ.; Demiralp, M.: HDMR Approximation of an Evolution Operator with a First Order Partial Differential Operator Argument, Applied Numerical Analysis and Computational Mathematics, Wiley, Vol. 1, 2003.
[26] Tunga, M.A.; Demiralp, M.: A Factorized High Dimensional Model Representation on the Nodes of a Finite Hyperprismatic Regular Grid, Applied Mathematics and Computation, Vol. 164, Issue 3, 2005.
[27] Tunga, B.; Demiralp, M.: A Novel Hybrid High Dimensional Model Representation (HHDMR) Based on the Combination of Plain and Logarithmic High Dimensional Model Representations, WSEAS 2007 Proceedings, 12th WSEAS International Conference on Applied Mathematics for Science and Engineering, Vol. 1, 2007.
[28] Li, G.; Wang, S.; Rabitz, H.: Practical Approaches to Construct RS-HDMR Component Functions, Journal of Physical Chemistry A, 106, 2002.
[29] Li, G.; Schoendorf, J.; Ho, T.; Rabitz, H.: Multicut-HDMR with an Application to an Ionospheric Model, Journal of Computational Chemistry, 25-9, 2004.
An Effective Tensor Completion Method Based on Multi-linear Tensor Ring Decomposition Jinshi Yu, Guoxu Zhou, Qibin Zhao and Kan Xie School of Automation, Guangdong University of Technology, Guangzhou,
More informationTENSOR APPROXIMATION TOOLS FREE OF THE CURSE OF DIMENSIONALITY
TENSOR APPROXIMATION TOOLS FREE OF THE CURSE OF DIMENSIONALITY Eugene Tyrtyshnikov Institute of Numerical Mathematics Russian Academy of Sciences (joint work with Ivan Oseledets) WHAT ARE TENSORS? Tensors
More informationTENSORS AND COMPUTATIONS
Institute of Numerical Mathematics of Russian Academy of Sciences eugene.tyrtyshnikov@gmail.com 11 September 2013 REPRESENTATION PROBLEM FOR MULTI-INDEX ARRAYS Going to consider an array a(i 1,..., i d
More informationNEW TENSOR DECOMPOSITIONS IN NUMERICAL ANALYSIS AND DATA PROCESSING
NEW TENSOR DECOMPOSITIONS IN NUMERICAL ANALYSIS AND DATA PROCESSING Institute of Numerical Mathematics of Russian Academy of Sciences eugene.tyrtyshnikov@gmail.com 11 October 2012 COLLABORATION MOSCOW:
More informationAn Introduction to Hierachical (H ) Rank and TT Rank of Tensors with Examples
An Introduction to Hierachical (H ) Rank and TT Rank of Tensors with Examples Lars Grasedyck and Wolfgang Hackbusch Bericht Nr. 329 August 2011 Key words: MSC: hierarchical Tucker tensor rank tensor approximation
More informationc 2008 Society for Industrial and Applied Mathematics
SIAM J MATRIX ANAL APPL Vol 30, No 3, pp 1219 1232 c 2008 Society for Industrial and Applied Mathematics A JACOBI-TYPE METHOD FOR COMPUTING ORTHOGONAL TENSOR DECOMPOSITIONS CARLA D MORAVITZ MARTIN AND
More informationTensor networks, TT (Matrix Product States) and Hierarchical Tucker decomposition
Tensor networks, TT (Matrix Product States) and Hierarchical Tucker decomposition R. Schneider (TUB Matheon) John von Neumann Lecture TU Munich, 2012 Setting - Tensors V ν := R n, H d = H := d ν=1 V ν
More informationQuantum Computing Lecture 2. Review of Linear Algebra
Quantum Computing Lecture 2 Review of Linear Algebra Maris Ozols Linear algebra States of a quantum system form a vector space and their transformations are described by linear operators Vector spaces
More informationLinear Algebra March 16, 2019
Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented
More informationProperties of Matrices and Operations on Matrices
Properties of Matrices and Operations on Matrices A common data structure for statistical analysis is a rectangular array or matris. Rows represent individual observational units, or just observations,
More informationMultilinear Singular Value Decomposition for Two Qubits
Malaysian Journal of Mathematical Sciences 10(S) August: 69 83 (2016) Special Issue: The 7 th International Conference on Research and Education in Mathematics (ICREM7) MALAYSIAN JOURNAL OF MATHEMATICAL
More informationTensor rank-one decomposition of probability tables
Tensor rank-one decomposition of probability tables Petr Savicky Inst. of Comp. Science Academy of Sciences of the Czech Rep. Pod vodárenskou věží 2 82 7 Prague, Czech Rep. http://www.cs.cas.cz/~savicky/
More informationAPPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.
APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product
More informationThis work has been submitted to ChesterRep the University of Chester s online research repository.
This work has been submitted to ChesterRep the University of Chester s online research repository http://chesterrep.openrepository.com Author(s): Daniel Tock Title: Tensor decomposition and its applications
More informationA Simpler Approach to Low-Rank Tensor Canonical Polyadic Decomposition
A Simpler Approach to Low-ank Tensor Canonical Polyadic Decomposition Daniel L. Pimentel-Alarcón University of Wisconsin-Madison Abstract In this paper we present a simple and efficient method to compute
More informationIntroduction to Tensors. 8 May 2014
Introduction to Tensors 8 May 2014 Introduction to Tensors What is a tensor? Basic Operations CP Decompositions and Tensor Rank Matricization and Computing the CP Dear Tullio,! I admire the elegance of
More informationA Multi-Affine Model for Tensor Decomposition
Yiqing Yang UW Madison breakds@cs.wisc.edu A Multi-Affine Model for Tensor Decomposition Hongrui Jiang UW Madison hongrui@engr.wisc.edu Li Zhang UW Madison lizhang@cs.wisc.edu Chris J. Murphy UC Davis
More informationWafer Pattern Recognition Using Tucker Decomposition
Wafer Pattern Recognition Using Tucker Decomposition Ahmed Wahba, Li-C. Wang, Zheng Zhang UC Santa Barbara Nik Sumikawa NXP Semiconductors Abstract In production test data analytics, it is often that an
More informationChapter 2. Linear Algebra. rather simple and learning them will eventually allow us to explain the strange results of
Chapter 2 Linear Algebra In this chapter, we study the formal structure that provides the background for quantum mechanics. The basic ideas of the mathematical machinery, linear algebra, are rather simple
More informationEfficient algorithms for finding the minimal polynomials and the
Efficient algorithms for finding the minimal polynomials and the inverses of level- FLS r 1 r -circulant matrices Linyi University Department of mathematics Linyi Shandong 76005 China jzh108@sina.com Abstract:
More informationJOS M.F. TEN BERGE SIMPLICITY AND TYPICAL RANK RESULTS FOR THREE-WAY ARRAYS
PSYCHOMETRIKA VOL. 76, NO. 1, 3 12 JANUARY 2011 DOI: 10.1007/S11336-010-9193-1 SIMPLICITY AND TYPICAL RANK RESULTS FOR THREE-WAY ARRAYS JOS M.F. TEN BERGE UNIVERSITY OF GRONINGEN Matrices can be diagonalized
More informationLecture 10: A (Brief) Introduction to Group Theory (See Chapter 3.13 in Boas, 3rd Edition)
Lecture 0: A (Brief) Introduction to Group heory (See Chapter 3.3 in Boas, 3rd Edition) Having gained some new experience with matrices, which provide us with representations of groups, and because symmetries
More informationLet p 2 ( t), (2 t k), we have the scaling relation,
Multiresolution Analysis and Daubechies N Wavelet We have discussed decomposing a signal into its Haar wavelet components of varying frequencies. The Haar wavelet scheme relied on two functions: the Haar
More informationStat 159/259: Linear Algebra Notes
Stat 159/259: Linear Algebra Notes Jarrod Millman November 16, 2015 Abstract These notes assume you ve taken a semester of undergraduate linear algebra. In particular, I assume you are familiar with the
More informationIntroduction to Group Theory
Chapter 10 Introduction to Group Theory Since symmetries described by groups play such an important role in modern physics, we will take a little time to introduce the basic structure (as seen by a physicist)
More informationFast multilinear Singular Values Decomposition for higher-order Hankel tensors
Fast multilinear Singular Values Decomposition for higher-order Hanel tensors Maxime Boizard, Remy Boyer, Gérard Favier, Pascal Larzabal To cite this version: Maxime Boizard, Remy Boyer, Gérard Favier,
More informationKrylov Techniques for Model Reduction of Second-Order Systems
Krylov Techniques for Model Reduction of Second-Order Systems A Vandendorpe and P Van Dooren February 4, 2004 Abstract The purpose of this paper is to present a Krylov technique for model reduction of
More informationDecomposing a three-way dataset of TV-ratings when this is impossible. Alwin Stegeman
Decomposing a three-way dataset of TV-ratings when this is impossible Alwin Stegeman a.w.stegeman@rug.nl www.alwinstegeman.nl 1 Summarizing Data in Simple Patterns Information Technology collection of
More informationTECHNISCHE UNIVERSITÄT BERLIN
TECHNISCHE UNIVERSITÄT BERLIN On best rank one approximation of tensors S. Friedland V. Mehrmann R. Pajarola S.K. Suter Preprint 2012/07 Preprint-Reihe des Instituts für Mathematik Technische Universität
More information12.4 Known Channel (Water-Filling Solution)
ECEn 665: Antennas and Propagation for Wireless Communications 54 2.4 Known Channel (Water-Filling Solution) The channel scenarios we have looed at above represent special cases for which the capacity
More informationImage Registration Lecture 2: Vectors and Matrices
Image Registration Lecture 2: Vectors and Matrices Prof. Charlene Tsai Lecture Overview Vectors Matrices Basics Orthogonal matrices Singular Value Decomposition (SVD) 2 1 Preliminary Comments Some of this
More informationOrthogonal tensor decomposition
Orthogonal tensor decomposition Daniel Hsu Columbia University Largely based on 2012 arxiv report Tensor decompositions for learning latent variable models, with Anandkumar, Ge, Kakade, and Telgarsky.
More informationHOSVD Based Image Processing Techniques
HOSVD Based Image Processing Techniques András Rövid, Imre J. Rudas, Szabolcs Sergyán Óbuda University John von Neumann Faculty of Informatics Bécsi út 96/b, 1034 Budapest Hungary rovid.andras@nik.uni-obuda.hu,
More informationNotes on basis changes and matrix diagonalization
Notes on basis changes and matrix diagonalization Howard E Haber Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, CA 95064 April 17, 2017 1 Coordinates of vectors and matrix
More informationCVPR A New Tensor Algebra - Tutorial. July 26, 2017
CVPR 2017 A New Tensor Algebra - Tutorial Lior Horesh lhoresh@us.ibm.com Misha Kilmer misha.kilmer@tufts.edu July 26, 2017 Outline Motivation Background and notation New t-product and associated algebraic
More informationS.F. Xu (Department of Mathematics, Peking University, Beijing)
Journal of Computational Mathematics, Vol.14, No.1, 1996, 23 31. A SMALLEST SINGULAR VALUE METHOD FOR SOLVING INVERSE EIGENVALUE PROBLEMS 1) S.F. Xu (Department of Mathematics, Peking University, Beijing)
More informationTutorial on MATLAB for tensors and the Tucker decomposition
Tutorial on MATLAB for tensors and the Tucker decomposition Tamara G. Kolda and Brett W. Bader Workshop on Tensor Decomposition and Applications CIRM, Luminy, Marseille, France August 29, 2005 Sandia is
More informationStochastic Processes
qmc082.tex. Version of 30 September 2010. Lecture Notes on Quantum Mechanics No. 8 R. B. Griffiths References: Stochastic Processes CQT = R. B. Griffiths, Consistent Quantum Theory (Cambridge, 2002) DeGroot
More informationMatrix-Product-States/ Tensor-Trains
/ Tensor-Trains November 22, 2016 / Tensor-Trains 1 Matrices What Can We Do With Matrices? Tensors What Can We Do With Tensors? Diagrammatic Notation 2 Singular-Value-Decomposition 3 Curse of Dimensionality
More informationAMS526: Numerical Analysis I (Numerical Linear Algebra)
AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 1: Course Overview & Matrix-Vector Multiplication Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 20 Outline 1 Course
More informationMATH 2030: MATRICES ,, a m1 a m2 a mn If the columns of A are the vectors a 1, a 2,...,a n ; A is represented as A 1. .
MATH 030: MATRICES Matrix Operations We have seen how matrices and the operations on them originated from our study of linear equations In this chapter we study matrices explicitely Definition 01 A matrix
More informationRobust Low-Rank Modelling on Matrices and Tensors
Imperial College London Department of Computing MSc in Advanced Computing Robust Low-Ran Modelling on Matrices and Tensors by Georgios Papamaarios Submitted in partial fulfilment of the requirements for
More informationLinear Algebra and Robot Modeling
Linear Algebra and Robot Modeling Nathan Ratliff Abstract Linear algebra is fundamental to robot modeling, control, and optimization. This document reviews some of the basic kinematic equations and uses
More information0.1. Linear transformations
Suggestions for midterm review #3 The repetitoria are usually not complete; I am merely bringing up the points that many people didn t now on the recitations Linear transformations The following mostly
More information3D INTERPOLATION USING HANKEL TENSOR COMPLETION BY ORTHOGONAL MATCHING PURSUIT A. Adamo, P. Mazzucchelli Aresys, Milano, Italy
3D INTERPOLATION USING HANKEL TENSOR COMPLETION BY ORTHOGONAL MATCHING PURSUIT A. Adamo, P. Mazzucchelli Aresys, Milano, Italy Introduction. Seismic data are often sparsely or irregularly sampled along
More informationHigher-Order Singular Value Decomposition (HOSVD) for structured tensors
Higher-Order Singular Value Decomposition (HOSVD) for structured tensors Definition and applications Rémy Boyer Laboratoire des Signaux et Système (L2S) Université Paris-Sud XI GDR ISIS, January 16, 2012
More informationReview of Linear Algebra
Review of Linear Algebra Dr Gerhard Roth COMP 40A Winter 05 Version Linear algebra Is an important area of mathematics It is the basis of computer vision Is very widely taught, and there are many resources
More informationFaloutsos, Tong ICDE, 2009
Large Graph Mining: Patterns, Tools and Case Studies Christos Faloutsos Hanghang Tong CMU Copyright: Faloutsos, Tong (29) 2-1 Outline Part 1: Patterns Part 2: Matrix and Tensor Tools Part 3: Proximity
More informationSymmetries, Groups, and Conservation Laws
Chapter Symmetries, Groups, and Conservation Laws The dynamical properties and interactions of a system of particles and fields are derived from the principle of least action, where the action is a 4-dimensional
More informationMultiple integrals: Sufficient conditions for a local minimum, Jacobi and Weierstrass-type conditions
Multiple integrals: Sufficient conditions for a local minimum, Jacobi and Weierstrass-type conditions March 6, 2013 Contents 1 Wea second variation 2 1.1 Formulas for variation........................
More informationConstructing an orthonormal set of eigenvectors for DFT matrix using Gramians and determinants
Constructing an orthonormal set of eigenvectors for DFT matrix using Gramians and determinants Vadim Zaliva, lord@crocodile.org July 17, 2012 Abstract The problem of constructing an orthogonal set of eigenvectors
More informationLecture 4. Tensor-Related Singular Value Decompositions. Charles F. Van Loan
From Matrix to Tensor: The Transition to Numerical Multilinear Algebra Lecture 4. Tensor-Related Singular Value Decompositions Charles F. Van Loan Cornell University The Gene Golub SIAM Summer School 2010
More informationWe wish to solve a system of N simultaneous linear algebraic equations for the N unknowns x 1, x 2,...,x N, that are expressed in the general form
Linear algebra This chapter discusses the solution of sets of linear algebraic equations and defines basic vector/matrix operations The focus is upon elimination methods such as Gaussian elimination, and
More informationMatrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A =
30 MATHEMATICS REVIEW G A.1.1 Matrices and Vectors Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = a 11 a 12... a 1N a 21 a 22... a 2N...... a M1 a M2... a MN A matrix can
More informationMatrix Algebra: Summary
May, 27 Appendix E Matrix Algebra: Summary ontents E. Vectors and Matrtices.......................... 2 E.. Notation.................................. 2 E..2 Special Types of Vectors.........................
More information15 Singular Value Decomposition
15 Singular Value Decomposition For any high-dimensional data analysis, one s first thought should often be: can I use an SVD? The singular value decomposition is an invaluable analysis tool for dealing
More informationON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH
ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH V. FABER, J. LIESEN, AND P. TICHÝ Abstract. Numerous algorithms in numerical linear algebra are based on the reduction of a given matrix
More informationOn the convergence of higher-order orthogonality iteration and its extension
On the convergence of higher-order orthogonality iteration and its extension Yangyang Xu IMA, University of Minnesota SIAM Conference LA15, Atlanta October 27, 2015 Best low-multilinear-rank approximation
More informationSingular Value Decomposition
Chapter 5 Singular Value Decomposition We now reach an important Chapter in this course concerned with the Singular Value Decomposition of a matrix A. SVD, as it is commonly referred to, is one of the
More informationthe tensor rank is equal tor. The vectorsf (r)
EXTENSION OF THE SEMI-ALGEBRAIC FRAMEWORK FOR APPROXIMATE CP DECOMPOSITIONS VIA NON-SYMMETRIC SIMULTANEOUS MATRIX DIAGONALIZATION Kristina Naskovska, Martin Haardt Ilmenau University of Technology Communications
More informationMath Linear Algebra
Math 220 - Linear Algebra (Summer 208) Solutions to Homework #7 Exercise 6..20 (a) TRUE. u v v u = 0 is equivalent to u v = v u. The latter identity is true due to the commutative property of the inner
More informationAN ALTERNATING MINIMIZATION ALGORITHM FOR NON-NEGATIVE MATRIX APPROXIMATION
AN ALTERNATING MINIMIZATION ALGORITHM FOR NON-NEGATIVE MATRIX APPROXIMATION JOEL A. TROPP Abstract. Matrix approximation problems with non-negativity constraints arise during the analysis of high-dimensional
More information1 Number Systems and Errors 1
Contents 1 Number Systems and Errors 1 1.1 Introduction................................ 1 1.2 Number Representation and Base of Numbers............. 1 1.2.1 Normalized Floating-point Representation...........
More informationSimple Examples on Rectangular Domains
84 Chapter 5 Simple Examples on Rectangular Domains In this chapter we consider simple elliptic boundary value problems in rectangular domains in R 2 or R 3 ; our prototype example is the Poisson equation
More informationMATH 350: Introduction to Computational Mathematics
MATH 350: Introduction to Computational Mathematics Chapter V: Least Squares Problems Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Spring 2011 fasshauer@iit.edu MATH
More informationMatrix Mathematics. Theory, Facts, and Formulas with Application to Linear Systems Theory. Dennis S. Bernstein
Matrix Mathematics Theory, Facts, and Formulas with Application to Linear Systems Theory Dennis S. Bernstein PRINCETON UNIVERSITY PRESS PRINCETON AND OXFORD Contents Special Symbols xv Conventions, Notation,
More informationNumerical Methods for Inverse Kinematics
Numerical Methods for Inverse Kinematics Niels Joubert, UC Berkeley, CS184 2008-11-25 Inverse Kinematics is used to pose models by specifying endpoints of segments rather than individual joint angles.
More information