Weighted Singular Value Decomposition for Folded Matrices

SÜHA TUNA
İstanbul Technical University, Informatics Institute, Maslak, 34469, İstanbul, TÜRKİYE (TURKEY)

N.A. BAYKARA
Marmara University, Mathematics Department, Göztepe Campus, İstanbul, TÜRKİYE (TURKEY)

METİN DEMİRALP
İstanbul Technical University, Informatics Institute, Maslak, 34469, İstanbul, TÜRKİYE (TURKEY)

Abstract: Multilinear arrays have attracted increasing attention, especially over the last two decades. Many science and engineering areas, such as signal processing, computer vision, and neuroscience, deal with data of this type, even though the data are often transformed to vectors or matrices in order to use linear algebraic tools efficiently. One of the main goals has been to decompose a given multilinear array into a sum of products of arrays with fewer indices. This has been accomplished, although uniqueness cannot be achieved because of certain structural features. Nevertheless, the form developed principally by Lathauwer is based on unfolding and folding of multilinear arrays and then using standard linear algebra. Some reductive array decomposition methods have also been developed quite recently by our group members without using any folding or unfolding of arrays. This work tries to extend that recently proposed idea, without expecting any reduction in the resulting structure under consecutive applications of the method, and it does not exclude the possibility of using folding and unfolding.

Key Words: Multilinear Arrays, Singular Value Decomposition, Reductive Array Decomposition, Folded Vectors, Folded Matrices, Folded Arrays.

1 Introduction

An algebraic entity indexed by more than two integers is usually called a multilinear array. Such arrays can therefore be considered extensions of one index arrays (vectors) or two index arrays (matrices), which are the most basic and therefore standard objects of linear algebra. One of the most important issues is the decomposition problem: a vector or a matrix is to be decomposed into simpler components of the same type. The expression of a vector in terms of the elements of a basis set is the simplest instance of this. We consider the Cartesian space in which our vector lies and seek a basis set that uniquely represents any vector in this space as a linear combination of the basis set elements. The linear combination coefficients are peculiar to the vector under consideration and are unique as long as the basis set is complete (that is, it suffices to represent any vector in the space). The basis set elements need to be vectors of the same type as the one under consideration. If we denote the given n-element vector we want to decompose by a and the basis set elements by u_i, which are also n-element vectors, then we can write

    a = \sum_{i=1}^{n} a_i u_i    (1)

where n may be any positive integer (it can even grow unboundedly) and the scalar coefficients a_i can be determined from algebraic equations constructed through elementwise comparison of both sides. The most natural basis set for (1) is composed of the standard unit vectors, the ith of which has its only nonzero element, with value 1, at the ith position. If this set is used in (1), then a_i becomes the ith element of a. The most important facilitating feature of this set is its mutual orthonormality. We could equally well use any other basis set whose elements are mutually orthonormal in (1); then the general coefficient a_i would be the inner product of u_i with a.
This draws attention to the construction of the basis set: being able to compute the linear combination coefficients of the representation in an easier way brings more efficiency in its utilization. Although the above representation contains a finite number of terms, its truncation may gain great importance for approximating the original vector, especially when the number of elements n grows unboundedly. If we use an orthonormal basis set, then the absolute value of each coefficient a_i becomes the norm of the corresponding summand. If the basis set elements are ordered so that the sum in (1) is descending in the absolute values of the a_i, then truncation after the first few terms becomes meaningful, since we retain the terms contributing most dominantly to the norm of a.
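As a minimal illustration of (1) and of this truncation idea, the following numpy sketch (illustrative only, with arbitrary example data, not taken from the paper) expands a vector in an orthonormal basis obtained from a QR factorization and then truncates the expansion after the dominant terms.

    import numpy as np

    n = 6
    rng = np.random.default_rng(0)
    a = rng.standard_normal(n)

    # Any orthonormal basis works; here we take the Q factor of a random matrix.
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    U = [Q[:, i] for i in range(n)]                # basis vectors u_i

    coeffs = np.array([u @ a for u in U])          # a_i = (u_i, a), as in (1)

    # Reorder so that |a_i| is descending, then truncate after k terms.
    order = np.argsort(-np.abs(coeffs))
    k = 3
    a_trunc = sum(coeffs[i] * U[i] for i in order[:k])

    print(np.allclose(sum(c * u for c, u in zip(coeffs, U)), a))   # full sum recovers a
    print(np.linalg.norm(a - a_trunc))                             # truncation error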

The situation is not different for the case of two index arrays, that is, matrices. Similar to what we have done for vectors above, this time a given matrix will be decomposed over a basis set whose elements have the same structure, that is, they are matrices, so that the terms contribute dominantly to the norm of the original array and the representation can be truncated at a certain number of terms. If we denote the given matrix by A, which is assumed to be an m × n array, and the basis matrices of the same type by U_{ij}, then we can write

    A = \sum_{i=1}^{m} \sum_{j=1}^{n} a_{ij} U_{ij}    (2)

where we have preferred to use a double sum even though we could use just a single sum running from 1 to mn inclusive. This form, in fact, reflects the two index array nature of the given matrix. If we were to use the abovementioned single sum, then we would need to utilize one index arrays, which may be considered unfolded counterparts of the relevant two index arrays. The a_{ij} coefficients in (2) can be determined from algebraic equations constructed by elementwise comparison of both sides; how this is done depends on how the U_{ij} basis elements are chosen. First, they must be linearly independent and should span the space where the given matrix A resides. The most natural construction is to choose the U_{ij} as the matrices whose only nonzero element, with value 1, resides at the intersection of the ith row and jth column. If we denote the ith and jth standard unit vectors of the Cartesian spaces of m and n element vectors by e_i^{(m)} and e_j^{(n)} respectively, then we can write

    U_{ij} = e_i^{(m)} (e_j^{(n)})^T,    i = 1, 2, ..., m,    j = 1, 2, ..., n    (3)

where, as mentioned above, e_i^{(m)} and e_j^{(n)} are the m and n element vectors whose only nonzero elements, with value 1, are located at the ith and jth positions respectively. This choice of basis matrices in (3) implies that the coefficient a_{ij} becomes the element of A at the intersection of the ith row and jth column. The matrices in (3) are outer products and therefore their ranks are just 1. The m × n type matrices form a linear vector space in which it is possible to define geometrical entities like distance, norm, and angle. We can define the following inner product for any pair of m × n type matrices A_1 and A_2

    (A_1, A_2) \equiv \mathrm{Tr}(A_1^T A_2)    (4)

which leads to the following Frobenius norm definition for any matrix A

    \|A\| \equiv \sqrt{\mathrm{Tr}(A^T A)}    (5)

We can easily show that the U_{ij} matrices form an orthonormal basis set under this inner product. As we immediately notice, mutual orthonormality appears again in the foreground. We do not need to use the outer products composed of the Cartesian unit vectors: as long as mutual orthonormality holds we can use any basis set with the same efficiency, since the a_{ij} coefficients are determined as follows

    a_{ij} = (U_{ij}, A),    1 \le i \le m,    1 \le j \le n    (6)

Because of the above discussion we can call the representation in (2) an Orthonormal Decomposition whenever the U_{ij} form an orthonormal basis set. One of the most widely used orthonormal decompositions is the Spectral Decomposition of symmetric real matrices (or, equivalently, Hermitian complex matrices). It can be written for an n × n symmetric real matrix A as follows

    A = \sum_{i=1}^{n} \alpha_i a_i a_i^T    (7)

where the \alpha_i stand for the eigenvalues while the a_i symbolize the corresponding orthonormal eigenvectors. In the case of Hermitian complex matrices, a_i^T should be replaced by the Hermitian conjugate.
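The following short numpy sketch (illustrative only, with arbitrary example data and assuming a real symmetric A) checks the spectral decomposition (7) and the coefficient formula (6) under the Frobenius inner product (4).

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((4, 4))
    A = A + A.T                                   # make A symmetric

    # Spectral decomposition (7): A = sum_i alpha_i a_i a_i^T.
    alphas, vecs = np.linalg.eigh(A)
    A_rebuilt = sum(alphas[i] * np.outer(vecs[:, i], vecs[:, i]) for i in range(4))
    print(np.allclose(A, A_rebuilt))

    # Coefficient formula (6) with the natural basis U_ij = e_i e_j^T of (3):
    # (U_ij, A) = Tr(U_ij^T A) = A_ij, i.e. the coefficient is just the entry of A.
    i, j = 1, 2
    U_ij = np.outer(np.eye(4)[i], np.eye(4)[j])
    print(np.isclose(np.trace(U_ij.T @ A), A[i, j]))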
Even though the decomposition in (7) is restricted to symmetric or Hermitian matrices, it is possible to construct its extended form for nonsymmetric matrices by using left and right eigenvectors, as long as the target matrix is square and all of its eigenvalues have equal algebraic and geometric multiplicities. In that case a_i^T should be replaced by the left eigenvector of the relevant eigenvalue, while the corresponding right eigenvector is written in place of a_i. If the algebraic and geometric multiplicities of an eigenvalue differ for a given matrix, then the above analysis fails to remain valid; the formulae must be revised by adding certain terms with a stairwise structure between nearest neighbour eigenpairs, and the additive structure of the Jordan canonical form emerges from that analysis. We do not intend to go into more detail, since these issues are beyond the goal of this work. The second section of the paper is devoted to the singular value decomposition of matrices, with some remarks on reductive array decomposition.

The third section covers the definitions and some important aspects of folvecs, folmats and folarrs. The fourth section gives the singular value decomposition of folmats, and the last section finalizes the paper with concluding remarks.

2 Singular Value and Reductive Array Decompositions

We cannot directly construct a spectral decomposition for nonsquare matrices, since a direct eigenvalue problem cannot be defined. This is because a rectangular matrix of m × n type (m ≠ n) cannot transform a vector from a space into the same space. If we denote the m × n matrix under consideration by A, then A maps from the Cartesian space K^n, the space of n-element vectors, to the Cartesian space K^m, the space of m-element vectors, while the transpose A^T maps from K^m to K^n. This brings the idea of a two way mapping between K^m and K^n. We can try to find the vectors and scalars satisfying the following equations

    A u = \sigma v,    A^T v = \sigma u,    u \in K^n,    v \in K^m    (8)

which can be converted to the following ones

    A^T A u = \sigma^2 u,  u \in K^n;    A A^T v = \sigma^2 v,  v \in K^m    (9)

These are algebraic eigenvalue problems in the n and m dimensional Cartesian spaces. Since the relevant matrices are symmetric, the eigenvalues are all real and, beyond that, nonnegative. This guarantees real values of the scalar \sigma. Because of the squaring, there are two possibilities for each \sigma value, positive and negative with the same absolute value; the general tendency is to choose the positive ones. The corresponding u eigenvectors are mutually orthogonal and can be normalized to form an orthonormal basis set. The same remains valid for the v vectors. For historical reasons the \sigma values are called Singular Values, while the vectors u and v are named Right Singular Vectors and Left Singular Vectors respectively. The equations in (9) urge us to write the following decomposition for the matrix A

    A = \sum_{i=1}^{\min(m,n)} \sigma_i v_i u_i^T    (10)

This can be confirmed as follows: (1) postmultiplication of (10) by any one of the right singular vectors, say u_j, produces A u_j = \sigma_j v_j, confirming the first equation in (8); (2) premultiplication of (10) by the transpose of any left singular vector, say v_j^T, produces v_j^T A = \sigma_j u_j^T, which, after transposing, confirms the second equation in (8). Equation (10) is called the Singular Value Decomposition (SVD). It is a widely used concept in data processing: first the data are put into the form of a rectangular matrix, then that matrix is decomposed via SVD. The terms in the SVD are ordered by descending singular values, which are the positive square roots of the eigenvalues of the multiplicatively symmetrized original matrix (A^T A or A A^T). Due to the unit norm of the outer products appearing in the SVD, each singular value measures the relevant term's contribution to the norm of the sum, and therefore to the norm of the original matrix. By inspecting the singular values it is possible to truncate the SVD after a certain number of leading terms. This is known as Principal Component Analysis and is used to obtain efficient approximations to the related data matrices. Compression algorithms in computer science and dominant network analysis in neuroscientific applications are good examples of this. The multilinear counterpart of the SVD is based on the unfolding of a multilinear array.
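The following numpy sketch (illustrative only, with arbitrary example data) checks (8) and (10) and shows the principal-component style truncation mentioned above.

    import numpy as np

    rng = np.random.default_rng(2)
    m, n = 5, 3
    A = rng.standard_normal((m, n))

    # Columns of V are left singular vectors v_i, rows of Ut are u_i^T.
    V, sigma, Ut = np.linalg.svd(A, full_matrices=False)

    # Eq. (10): A = sum_{i=1}^{min(m,n)} sigma_i v_i u_i^T.
    A_rebuilt = sum(sigma[i] * np.outer(V[:, i], Ut[i, :]) for i in range(min(m, n)))
    print(np.allclose(A, A_rebuilt))

    # Eq. (8): A u_j = sigma_j v_j and A^T v_j = sigma_j u_j.
    j = 0
    print(np.allclose(A @ Ut[j, :], sigma[j] * V[:, j]))
    print(np.allclose(A.T @ V[:, j], sigma[j] * Ut[j, :]))

    # Truncation after the k dominant terms (principal component style approximation).
    k = 1
    A_k = sum(sigma[i] * np.outer(V[:, i], Ut[i, :]) for i in range(k))
    print(np.linalg.norm(A - A_k))   # Frobenius error: square root of the discarded sigma_i^2 sum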
The basic idea is again to decompose the target array into multilinear outer products, each of which has product type elements whose factors depend on single but different indices. The algebra behind this logic is not so simple to grasp or visualize; however, it works well in applications even though the decomposition is not unique. The sophisticated and rigorous structure of the multilinear SVD has urged scientists to develop rather easier decomposition methods. To this end, High Dimensional Model Representation [5], Enhanced Multivariance Product Representation [6], and Reductive Multilinear Array Decomposition (RMAD) [9] can be mentioned immediately. The last of these has been developed by members of our research group. To explain how this approach works we can consider a multilinear array A_{i_1 ... i_N} and write the following decomposition formula

    A_{i_1 ... i_N} = \sum_{j=1}^{m} \sigma^{(j)} b^{(j)}_{i_1} c^{(j)}_{i_2 ... i_N}    (11)

which is a linear combination of binary outer products, in contrast to N-factor outer products of vectors. The outer products here again have a vector as one factor, but the other factor is a multilinear array with (N-1) indices, one less than the index number of the original array. This index number reduction in the multilinear factor of the outer product gives the adjective Reductive to the name of the decomposition method.

The terms of this decomposition are determined by a weighted or unweighted Euclidean distance minimization between the original array and the approximant truncated from (11). The details of this recently proposed approach will be discussed elsewhere.

3 Folvecs, Folmats and Folarrs

The folding and unfolding of an array enable us to use the basic tools of linear algebra. To explain how, first consider a vector a and denote its ith element by a_i. If n, the number of elements, is even, then we can write the upper half of the vector as the first column of a two column matrix, while the second column of this matrix is formed by the lower half of the vector. This procedure is in fact a reordering, and it produces a matrix, which is a two index array, from the vector, which is a one index array. We can call this action Folding because of its pictorial appearance. We could reverse this action, starting from the two column matrix and going back to a vector; the reverse action can be called Unfolding, again because of its pictorial appearance. If we denote the two column matrix produced from the vector a by B and its elements by b_{ij}, then we can write

    b_{ij} = \begin{cases} a_i, & j = 1, \; 1 \le i \le n/2 \\ a_{i+n/2}, & j = 2, \; 1 \le i \le n/2 \end{cases}    (12)

which explains the basic points of the folding procedure. The above example is not the only possibility for constructing a matrix from a vector. If n is divisible by a positive integer m, then the resulting matrix after folding will be of (n/m) × m type. The divisibility requirement on the number of elements of the vector to be folded can be relaxed, either by discarding the residual elements or by padding the vector with zeros to obtain divisibility. Discarding means data loss, while zero padding may introduce some unusual values. The folding exemplified above can be realized not only from a vector to a matrix but also from a vector to a multilinear array, by appropriately locating the vector elements in the entries of a multilinear array. In this sense a multilinear array can be considered a folded vector. To proceed towards our aim we call a multilinear array a folded vector, or briefly a folvec, by retaining the first three letters of each word in the name. The number of indices in a folvec can be called the folding order. This urges us to use the name N-fold vector equivalently with the name folvec. The folding order is not the only item needed for the unique definition of a folvec: the domain of each index must also be specified. The equality of two folvecs is meaningful only when the folding orders and the index domains are the same for both folvecs. As is valid for vectors of the same type, all requirements for forming a linear vector space are satisfied by folvecs of the same type. In other words, same type folvecs form a linear vector space whose zero element is the folvec composed of only zero elements. To proceed in more detail it is better to precisely define the type of a folvec. For vectors, the type is a single entity, the number of elements in the vector. For matrices there are two numbers, the number of rows and the number of columns. For an N-fold folvec there are N type elements, namely the numbers of values taken individually by the indices. Hence we can write down the following definition of the type of a folvec whose indices i_1, ..., i_N vary between 1 and n_1, ..., n_N respectively

    t \equiv (n_1, ..., n_N)    (13)

which is in fact an N-tuple.
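Folding and unfolding of this kind are pure index reorderings, so they can be illustrated with simple reshapes; the following numpy sketch (illustrative only; the ordering convention is chosen so that the first column holds the upper half of the vector, as in (12)) shows the two column case and a 3-fold folvec.

    import numpy as np

    n = 8
    a = np.arange(1, n + 1, dtype=float)          # a_1, ..., a_n

    # Fold into two columns as in (12): B[i, 0] = a_i, B[i, 1] = a_{i + n/2}.
    B = a.reshape((2, n // 2)).T
    print(B)

    # Unfolding reverses the reordering and recovers the original vector.
    print(np.allclose(B.T.reshape(n), a))

    # A folvec of type t = (2, 2, 2): the same data viewed as a 3-index array.
    folvec = a.reshape((2, 2, 2))
    print(folvec.shape)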
We are going to denote the linear vector space of t type folvecs by F_t. The inner product of two folvecs a and b is defined as

    (a, b) = \sum_{i_1=1}^{n_1} ... \sum_{i_N=1}^{n_N} a_{i_1 ... i_N} b_{i_1 ... i_N}    (14)

which yields the following norm

    \|a\| \equiv (a, a)^{1/2} = \left[ \sum_{i_1=1}^{n_1} ... \sum_{i_N=1}^{n_N} a_{i_1 ... i_N}^2 \right]^{1/2}    (15)

As we do amongst Cartesian spaces, it is possible to map from one folvec space to another by using arrays. We call these entities Folded Matrices, or briefly Folmats, by taking the first three letters of the words. The folmats mapping from a folvec space to itself will be called Square Folmats. These must have an even number of indices, the second half of which act on the folvec in the domain while the first half determine the resulting folvec. This urges us to distinguish the indices into two groups: right and left indices. The left indices correspond to the row number of a matrix, while the right indices are the folmat counterparts of the column numbers. We are going to use a semicolon to separate the left and right indices. If we consider a folmat from a folvec space to itself, denoted by the capital letter A, then its elements can be given through the general indexed structure A_{i_1 ... i_N ; j_1 ... j_N}.
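The inner product (14) and norm (15) are just the ordinary Euclidean ones applied to all entries of the folvecs; a brief illustrative numpy check (with arbitrary example data):

    import numpy as np

    rng = np.random.default_rng(3)
    t = (2, 3, 4)                                 # folvec type, an N-tuple as in (13)
    a = rng.standard_normal(t)
    b = rng.standard_normal(t)

    inner = np.sum(a * b)                         # (a, b), eq. (14)
    norm_a = np.sqrt(np.sum(a * a))               # ||a||, eq. (15)

    # Both coincide with the Euclidean quantities of the unfolded (raveled) vectors.
    print(np.isclose(inner, a.ravel() @ b.ravel()))
    print(np.isclose(norm_a, np.linalg.norm(a.ravel())))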

Now the mapping from F_t to F_t can be expressed in both closed and explicit forms as follows

    b = A a,    a, b \in F_t,
    b_{i_1 ... i_N} = \sum_{j_1=1}^{n_1} ... \sum_{j_N=1}^{n_N} A_{i_1 ... i_N ; j_1 ... j_N} a_{j_1 ... j_N},    1 \le i_k \le n_k,    1 \le k \le N.    (16)

As we notice immediately, these are quite straightforward extensions of the vector and matrix concepts of ordinary linear algebra. What we have used to distinguish folmats from folvecs is the semicolon. However, we can use semicolons to separate the indices into subgroups not only as left and right but also into more than two directions. We call the multiindex arrays having more than one semicolon amongst their indices Folded Arrays, or simply Folarrs, even though we do not intend to focus on them any further.

4 Singular Value Decomposition for Folmats

We can easily define the transpose of a folmat by analogy with matrices, although there are certain difficulties in defining a meaningful transpose for a general multilinear array. If we consider a folmat A mapping from a folvec space F_t of type t = (n_1, ..., n_N) to another folvec space F_{\bar{t}} of type \bar{t} = (\bar{n}_1, ..., \bar{n}_N), then the transpose of the folmat A is denoted by A^T and satisfies the following elementwise equalities

    (A^T)_{i_1 ... i_N ; j_1 ... j_N} \equiv A_{j_1 ... j_N ; i_1 ... i_N},    1 \le i_k \le n_k,  1 \le k \le N,    1 \le j_l \le \bar{n}_l,  1 \le l \le N    (17)

A^T maps in the reverse direction between F_t and F_{\bar{t}}, that is, from F_{\bar{t}} to F_t. This, together with the contents of the previous section, enables us to define the eigenvalue problem of a square folmat. The algebra to this end can be converted to the ordinary routines of linear algebra by unfolding the relevant folmat to an ordinary matrix while the folvecs are unfolded to ordinary vectors. The rest is the well known algebraic eigenvalue problem, and everything remains exactly the same as in the unfolded case. However, we can call the folded forms of the eigenvectors Eigenfolvecs to distinguish them from their unfolded forms. All these definitions permit us to construct spectral decompositions and singular value decompositions as we do in the ordinary eigenvalue problems of linear algebra. For the singular value decomposition of the folmat A we can write

    A u = \sigma v,    A^T v = \sigma u,    u \in F_t,    v \in F_{\bar{t}}    (18)

which can be converted to the following equations

    A^T A u = \sigma^2 u,  u \in F_t;    A A^T v = \sigma^2 v,  v \in F_{\bar{t}}    (19)

These express eigenvalue problems of symmetric, square, nonnegative definite folmats. They can be processed by unfolding the folded items first and then solving the resulting ordinary eigenvalue problems. The positive square roots of the eigenvalues then become the singular values, while the normalized eigenvectors correspond to the singular vectors. Folding these singular vectors to recover the original forms of the vectors u and v produces the singular folvecs. This is the conceptual structure of the SVD for folmats.

5 Concluding Remarks

This work introduces certain multilinear entities to facilitate the use of ordinary linear algebraic tools in the construction of a singular value decomposition for a certain type of multilinear array (which we call a folmat), mapping from one space of multilinear arrays of a given type to another space of multilinear arrays of some other type. We do not attempt to decompose the target array into products of unit norm arrays containing more than two factors. As we know from the reductive multilinear array decomposition, terms composed of binary products of arrays allow us to use all tools of ordinary linear algebra after an unfolding procedure.
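As an illustration of this unfold, solve, and fold back idea, the following numpy sketch (illustrative only; the index grouping and C-ordered reshaping conventions are example choices rather than a prescribed implementation) builds a random folmat mapping F_t with t = (2, 3) into F_{\bar{t}} with \bar{t} = (4, 2), unfolds it into an ordinary 8 × 6 matrix, computes an ordinary SVD, and folds the singular vectors back into singular folvecs.

    import numpy as np

    rng = np.random.default_rng(4)
    t_in, t_out = (2, 3), (4, 2)                  # domain type t and range type t-bar
    A = rng.standard_normal(t_out + t_in)         # folmat elements A_{i1 i2 ; j1 j2}

    # Unfold: left index group -> rows, right index group -> columns.
    M = A.reshape((np.prod(t_out), np.prod(t_in)))

    # Ordinary SVD of the unfolded matrix, cf. eqs. (18)-(19).
    V, sigma, Ut = np.linalg.svd(M, full_matrices=False)

    # Fold the singular vectors back: singular folvecs u in F_t and v in F_{t-bar}.
    u_folvecs = [Ut[i, :].reshape(t_in) for i in range(len(sigma))]
    v_folvecs = [V[:, i].reshape(t_out) for i in range(len(sigma))]

    # Check A u = sigma v for the first singular triple, using the action (16).
    Au = np.tensordot(A, u_folvecs[0], axes=([2, 3], [0, 1]))
    print(np.allclose(Au, sigma[0] * v_folvecs[0]))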
Having the structures in ordinary algebraic form, we can go back to the case of folded items by folding them into the original folded structures. A multilinear array can always be considered as a folmat by grouping its indices into left and right indices. This grouping is not unique, and each different grouping defines a different type of folmat between different type folvec spaces. This brings an uncertainty which leaves us a choice in finding the better decomposition. In other words, the decompositions are not unique in this sense, and some of them may be preferable for our needs.

References:

[1] L. E. Leurgans, T. Ross and R. B. Abel, A Decomposition for Three-way Arrays, SIAM J. Matrix Anal. Appl., 14, 1993.
[2] J. B. Kruskal, Rank, Decomposition and Uniqueness for 3-way and N-way Arrays, in Multiway Data Analysis, R. Coppi and S. Bolasco, eds., North Holland, Amsterdam, 1989, pp.718.

[3] M. Marcus, Finite Dimensional Multilinear Algebra, Dekker, New York, 1975.
[4] L. D. Lathauwer, B. D. Moor and J. Vandewalle, Multilinear Singular Value Decomposition, SIAM J. Matrix Anal. Appl., 21(4), 2000.
[5] M. Demiralp, High Dimensional Model Representation and Its Application Varieties, The 4th Int. Conf. on Tools for Mathematical Modelling, St. Petersburg, Russia, 2003.
[6] A. Okan, N. A. Baykara and M. Demiralp, Weight Optimization in Enhanced Multivariance Product Representation (EMPR) Method, ICNAAM 2010, Rhodes, Greece, 2010.
[7] M. Demiralp and E. Demiralp, An Orthonormal Decomposition Method for Multidimensional Matrices, AIP Proceedings of ICNAAM 2009, Crete, Greece, September 2009.
[8] E. Demiralp, Application of Reductive Decomposition Method for Multilinear Arrays (RDMMA) to Animations, WSEAS MMACTEE 2009, 2009.
[9] M. Demiralp and E. Demiralp, Reductive Multilinear Array Decomposition Based Support Functions in Enhanced Multivariance Product Representation (EMPR), International Conf. on Appl. Comp. Sci., Malta.
[10] F. B. Hildebrand, Introduction to Numerical Analysis, Dover Publications, Inc., 1987.
[11] B. Hunt, R. Limpson, J. Rosenberg with K. Coombes, J. Osborn and G. Stuck, A Guide to MATLAB for Beginners and Experienced Users, Cambridge University Press, 2000.
