Efficient and Accurate Rectangular Window Subspace Tracking
Timothy M. Toolan and Donald W. Tufts
Dept. of Electrical Engineering, University of Rhode Island, Kingston, RI, USA

Abstract—In this paper, we describe a rectangular window subspace tracking algorithm, which tracks the $r$ largest singular values and corresponding left singular vectors of a sequence of $n \times c$ matrices in $O(nr^2)$ time. This algorithm is designed to track rapidly changing subspaces. It uses a rectangular window to include a finite number of approximately stationary data columns. This algorithm is based on the Improved Fast Adaptive Subspace Tracking (IFAST) algorithm of Toolan and Tufts, but reforms the $(r+2)$th-order eigendecomposition with an alternative method that takes advantage of matrix structure. This matrix is a special rank-six modification of a diagonal matrix, so its eigendecomposition can be determined with only a single $O(r^3)$ matrix product to rotate its eigenvectors, while all other computation is $O(r^2)$. Methods for implementing this algorithm in a numerically stable way are also discussed.

I. INTRODUCTION

Let us assume that we have a data source which produces a new length-$n$ column vector at regular intervals. This situation applies when we have an $n$-element sensor array and take snapshots at regularly spaced time intervals; it applies when we analyze a digital image by panning across either the columns or rows; and it applies to a single sensor which takes regularly spaced samples in time, then creates a vector from the last $n$ samples. The goal of subspace tracking is to apply a sliding window to our data to create a sequence of overlapping matrices, then track the $r$ principal singular values and left singular vectors of that windowed matrix, or an orthogonal set of columns which span that $r$-dimensional principal subspace.
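For concreteness, the sliding-window setup described above can be sketched as follows. This is a minimal illustration, not code from the paper; the snapshot length and window length are arbitrary values chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, c = 6, 10          # snapshot length and window length (illustrative)
stream = [rng.standard_normal(n) for _ in range(15)]   # x_1, x_2, ...

# Sliding rectangular window: at each time t >= c, the windowed matrix
# holds the last c snapshots as its columns.
windows = [np.column_stack(stream[t - c:t]) for t in range(c, len(stream) + 1)]

# Each window is n x c, and successive windows overlap in all but one column.
assert windows[0].shape == (n, c)
assert np.array_equal(windows[0][:, 1:], windows[1][:, :-1])
```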
The large overlap between successive windowed matrices allows one to reduce computation by determining the new subspace as an update of the subspace determined from the previous matrix. Because the singular value decomposition (SVD) is a full matrix operation, these update methods are generally approximations. One reason we are often interested only in the left singular vectors is that they are also the eigenvectors of the sample correlation matrix. Most of the earlier methods of subspace tracking are based on applying an exponential window to the data [1]–[4], but recently there have been some methods which apply a rectangular window [5]–[8]. The rectangular window methods allow inclusion of a finite amount of data, and can give better performance when the data is highly non-stationary, or there are abrupt changes in the data.

II. SLIDING RECTANGULARLY WINDOWED DATA

Letting $x_i$ be the $i$th column vector that we received, a length-$c$ rectangularly windowed matrix is formed by creating a matrix from the last $c$ column vectors. At time $t$, we can write our previous matrix, $M$, and current matrix, $\widetilde{M}$, as

$$M = [\, x_{t-c} \;\; x_{t-c+1} \;\; \cdots \;\; x_{t-1} \,], \qquad (1)$$
$$\widetilde{M} = [\, x_{t-c+1} \;\; x_{t-c+2} \;\; \cdots \;\; x_{t} \,]. \qquad (2)$$

Note that $M$ and $\widetilde{M}$ share all but one column. We can write the current matrix in terms of the previous matrix as

$$\widetilde{M} = \left( M - x_{t-c}\, e_1^H \right) P + x_t\, e_c^H, \qquad (3)$$

where $e_i$ is the $i$th length-$c$ canonical vector (the $i$th column of a $c \times c$ identity matrix) and $P = [\, e_2, e_3, \ldots, e_c, e_1 \,]$. Since the left singular vectors of a matrix are the eigenvectors of that matrix times its conjugate transpose, then from (3) we can write

$$\widetilde{M}\widetilde{M}^H = MM^H - x_{t-c}\, x_{t-c}^H + x_t\, x_t^H, \qquad (4)$$

which makes it clear that $\widetilde{M}\widetilde{M}^H$ is a rank-two perturbation of $MM^H$. The problem of updating the eigendecomposition of a rank-one perturbation of a symmetric matrix has been well studied, both theoretically and numerically (see [9] and references therein). A rank-two perturbation of this form has been much less studied, even though it is equivalent to two sequential rank-one perturbations.
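The rank-two update of the sample correlation matrix in (4) is easy to verify numerically. The sketch below uses random illustrative data (not from the paper) to confirm that the new correlation matrix equals the old one minus the outer product of the discarded column plus the outer product of the added column.

```python
import numpy as np

rng = np.random.default_rng(0)
n, c = 8, 12

# c+1 snapshots x_{t-c} ... x_t, so M and M~ overlap in c-1 columns.
X = rng.standard_normal((n, c + 1)) + 1j * rng.standard_normal((n, c + 1))
M_prev = X[:, :c]      # [x_{t-c} ... x_{t-1}]
M_curr = X[:, 1:]      # [x_{t-c+1} ... x_t]

# Rank-two update of the sample correlation matrix, as in eq. (4):
#   M~ M~^H = M M^H - x_{t-c} x_{t-c}^H + x_t x_t^H
R_prev = M_prev @ M_prev.conj().T
x_old, x_new = X[:, 0], X[:, -1]
R_updated = R_prev - np.outer(x_old, x_old.conj()) + np.outer(x_new, x_new.conj())

assert np.allclose(R_updated, M_curr @ M_curr.conj().T)
```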
III. DEFINING THE PROBLEM

We start with two sequential $n \times c$ rectangularly windowed complex matrices $M$ and $\widetilde{M}$, along with approximations to the $r$ largest singular values and corresponding left singular vectors of $M$, which we will call $\Sigma \in \mathbb{R}^{r \times r}$ and $U \in \mathbb{C}^{n \times r}$ respectively. Our goal is to determine a good approximation of the $r$ (or possibly $r+1$) largest singular values and corresponding left singular vectors of $\widetilde{M}$ by updating $U$ and $\Sigma$. From (1) and (2), we can see that $M$ and $\widetilde{M}$ share all but one column, and from (4) we can see that the ordering of the columns has no effect on the left singular vectors; therefore the column space spanned by $U$ should be close to the column space spanned by the $r$ dominant left singular vectors of the $n \times (c-1)$ matrix of shared columns, $[\, x_{t-c+1} \;\; x_{t-c+2} \;\; \cdots \;\; x_{t-1} \,]$. The subspace that is not included in $U$ which may have a strong influence on the
dominant left singular vectors of $\widetilde{M}$ is the part of both $x_{t-c}$ and $x_t$ that is orthogonal to $U$. If we use a Gram–Schmidt-like method [10] to augment $U$ by this subspace, and call this augmentation $Q \in \mathbb{C}^{n \times 2}$, then the subspace spanned by $[\, U \;\; Q \,]$ should be close to the subspace spanned by the dominant left singular vectors of $\widetilde{M}$. In [8], a detailed analytical analysis is presented, which shows why the subspace $[\, U \;\; Q \,]$ is a good approximation to the subspace spanned by the dominant left singular vectors of $\widetilde{M}$, and that it is not trivial to find an $(r+2)$-dimensional subspace that will give a better approximation.

The idea behind the improved fast adaptive subspace tracking (IFAST) algorithm [8], [11] is to approximate the matrix $\widetilde{M}$ by the rank-$(r+2)$ matrix $\widehat{M} = [\, U \;\; Q \,][\, U \;\; Q \,]^H \widetilde{M}$, then determine the singular values and left singular vectors of $\widehat{M}$ in an efficient way. Approximating $\widetilde{M}$ by $\widehat{M}$ is the only approximation involved in the algorithm. Determining the nonzero singular values and corresponding left singular vectors of $\widehat{M}$ is equivalent to applying the Rayleigh–Ritz procedure [12] to $\widetilde{M}\widetilde{M}^H$ using the subspace $[\, U \;\; Q \,]$. This means that the SVD of $\widehat{M}$ is the optimal approximation to the SVD of $\widetilde{M}$, when limited to the subspace spanned by the columns of $[\, U \;\; Q \,]$ [12].

Table I is intended to clearly show how IFAST works, and will produce the exact singular values and left singular vectors of $\widehat{M}$. Table I will also produce the same result as the algorithm presented in this paper, as well as that in [8], [11]. The only difference between Table I and the IFAST algorithm in [8], [11] is that the $O(ncr)$ computation in step 2 is performed as an equivalent $O(nc)$ computation. The contribution of this paper is to replace the eigendecomposition in step 3 with a single matrix product of two $(r+2) \times (r+2)$ matrices, with all other computation being $O(r^2)$. This is approximately four times faster than a conventional EVD [10].
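The Rayleigh–Ritz interpretation above can be checked numerically: the eigenvalues of $F = [U\ Q]^H \widetilde{M}\widetilde{M}^H [U\ Q]$ are exactly the squared nonzero singular values of $\widehat{M}$. The sketch below is an illustrative implementation of Table I using dense routines (the sizes are arbitrary, and $U$ is computed exactly from the previous matrix, whereas in a tracker it would come from the previous iteration):

```python
import numpy as np

rng = np.random.default_rng(1)
n, c, r = 8, 12, 3    # illustrative sizes

X = rng.standard_normal((n, c + 1)) + 1j * rng.standard_normal((n, c + 1))
M = X[:, :c]                  # previous matrix
M_new = X[:, 1:]              # current matrix, shares c-1 columns with M
x_old, x_new = X[:, 0], X[:, -1]

# U: r dominant left singular vectors of M (exact here, tracked in practice).
U = np.linalg.svd(M)[0][:, :r]

# Step 1: Gram-Schmidt augmentation of U with the discarded and added columns.
q1 = x_old - U @ (U.conj().T @ x_old); q1 /= np.linalg.norm(q1)
q2 = x_new - U @ (U.conj().T @ x_new) - q1 * (q1.conj() @ x_new)
q2 /= np.linalg.norm(q2)
B = np.column_stack([U, q1, q2])          # [U Q], n x (r+2)

# Steps 2-4: Rayleigh-Ritz on M_new M_new^H using the subspace [U Q].
F = B.conj().T @ (M_new @ (M_new.conj().T @ B))
evals, Uf = np.linalg.eigh(F)
ritz_svals = np.sqrt(np.maximum(evals[::-1], 0.0))   # descending order

# These are exactly the nonzero singular values of M_hat = [U Q][U Q]^H M_new.
M_hat = B @ (B.conj().T @ M_new)
assert np.allclose(ritz_svals, np.linalg.svd(M_hat, compute_uv=False)[:r + 2])
```

The corresponding Ritz vectors are the columns of `B @ Uf`, matching step 4 of Table I.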
Because the efficient computation of the EVD is done using secular equations [13], we will call the version of the algorithm presented in this paper the Improved Secular Fast Adaptive Subspace Tracking (ISFAST) algorithm. The only computation in this algorithm that is not the equivalent of a matrix times a vector is the product of two $(r+2) \times (r+2)$ matrices, and the product of an $n \times (r+2)$ matrix with an $(r+2) \times (r+2)$ matrix.

IV. THE DETAILS OF $\Sigma$, $U$, AND $Q$

Although $\Sigma$ and $U$ are approximations, there are a few properties that they must have: $\Sigma$ must be diagonal and nonnegative, the columns of $U$ must form an orthonormal set (i.e., $U^H U = I$), and $U^H M M^H U$ must equal $\Sigma^2$. These conditions will always be satisfied if $U$ and $\Sigma$ were generated by a previous iteration of this algorithm [8], or from the SVD of $M$. All these constraints really say is that $U$ should be rotated so that $U^H M M^H U$ is diagonal. For any matrix $U_x \in \mathbb{C}^{n \times r}$ whose columns form an orthonormal set, we can generate a $U$ and $\Sigma$ that satisfy these constraints by taking the EVD of $U_x^H M M^H U_x = U_y \Sigma_y^2 U_y^H$, then setting $\Sigma = \Sigma_y$ and $U = U_x U_y$.

TABLE I
THEORETICAL IFAST ALGORITHM

Step | Calculation
1) $q_1 = \dfrac{(I - UU^H)\, x_{t-c}}{\left\| (I - UU^H)\, x_{t-c} \right\|}$, $\quad q_2 = \dfrac{(I - UU^H - q_1 q_1^H)\, x_t}{\left\| (I - UU^H - q_1 q_1^H)\, x_t \right\|}$, $\quad Q = [\, q_1 \;\; q_2 \,]$
2) $F = [\, U \;\; Q \,]^H\, \widetilde{M}\widetilde{M}^H\, [\, U \;\; Q \,]$
3) $U_f \Sigma_f^2 U_f^H = F$
4) $\widetilde{U} = [\, U \;\; Q \,]\, U_f$, $\quad \widetilde{\Sigma} = \Sigma_f$

In the last section, we mentioned that we will create the matrix $Q$ using a Gram–Schmidt-like method [10] to augment $U$ with $x_{t-c}$ (the column we are discarding from $M$) and $x_t$ (the column we are adding to $\widetilde{M}$). We can augment $U$ with $x_{t-c}$ followed by $x_t$, or $x_t$ followed by $x_{t-c}$, as both will produce a $Q$ which spans the same column space. In fact, we can rotate $Q$ by any unitary matrix, and it will not change the column space spanned by $Q$. We will take advantage of this property to make later computation easier. We start by creating the matrix $\bar{Q} = [\, \bar{q}_1 \;\; \bar{q}_2 \,]$ using Gram–Schmidt augmentation of $U$,

$$\bar{q}_1 = \frac{(I - UU^H)\, x_{t-c}}{\left\| (I - UU^H)\, x_{t-c} \right\|}, \qquad \bar{q}_2 = \frac{(I - UU^H - \bar{q}_1\bar{q}_1^H)\, x_t}{\left\| (I - UU^H - \bar{q}_1\bar{q}_1^H)\, x_t \right\|}.$$
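The two properties relied on above, that $[\,U\;\bar{Q}\,]$ is orthonormal and that rotating $\bar{Q}$ by a unitary matrix leaves its column space unchanged, can be illustrated with a small numerical sketch (arbitrary sizes, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
n, r = 8, 3

# Hypothetical orthonormal U and the two augmentation vectors.
U = np.linalg.qr(rng.standard_normal((n, r)) + 1j * rng.standard_normal((n, r)))[0]
x_old = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x_new = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Gram-Schmidt augmentation of U with x_old, then x_new.
q1 = x_old - U @ (U.conj().T @ x_old); q1 /= np.linalg.norm(q1)
q2 = x_new - U @ (U.conj().T @ x_new) - q1 * (q1.conj() @ x_new)
q2 /= np.linalg.norm(q2)
Q_bar = np.column_stack([q1, q2])

# [U Q_bar] is orthonormal ...
B = np.column_stack([U, Q_bar])
assert np.allclose(B.conj().T @ B, np.eye(r + 2))

# ... and rotating Q_bar by any 2x2 unitary leaves its column space unchanged.
W = np.linalg.qr(rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2)))[0]
Q_rot = Q_bar @ W
P1 = Q_bar @ Q_bar.conj().T    # orthogonal projector onto span(Q_bar)
P2 = Q_rot @ Q_rot.conj().T
assert np.allclose(P1, P2)
```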
The two vectors $\bar{q}_1$ and $\bar{q}_2$ are an orthonormal set that spans the desired column space, but in general the Hermitian matrix $\widehat{T} = \bar{Q}^H MM^H \bar{Q}$ will not be diagonal; therefore we will rotate $\bar{Q}$ to form $Q$, such that $\widehat{\Sigma}^2 = Q^H MM^H Q$ is diagonal. In order to determine the necessary rotation, we can take the EVD of $\widehat{T}$, which we will write as

$$\widehat{T} = \begin{bmatrix} \bar{q}_1^H MM^H \bar{q}_1 & \bar{q}_1^H MM^H \bar{q}_2 \\ \bar{q}_2^H MM^H \bar{q}_1 & \bar{q}_2^H MM^H \bar{q}_2 \end{bmatrix} = \widehat{U}\, \widehat{\Sigma}^2\, \widehat{U}^H.$$

Because the formation of $\widehat{T}$ is required later in the algorithm, there is no additional computational load introduced by forming it now. The eigenvalues of a matrix are the roots of its characteristic equation, and since $\widehat{T}$ is a $2 \times 2$ matrix, we can use the quadratic equation to find the eigenvalues. Defining $b = (\hat{t}_{1,1} + \hat{t}_{2,2})/2$ and $c = \hat{t}_{1,1}\hat{t}_{2,2} - |\hat{t}_{1,2}|^2$, where $\hat{t}_{i,j}$ is the $(i,j)$th element of $\widehat{T}$, allows us to write the eigenvalues of $\widehat{T}$ as

$$\widehat{\Sigma}^2 = \begin{bmatrix} b + \sqrt{b^2 - c} & 0 \\ 0 & b - \sqrt{b^2 - c} \end{bmatrix}.$$

Now that we have the eigenvalues of $\widehat{T}$, we can determine its eigenvectors. We can use the definition of an eigenvector
[12], and write

$$\widehat{T} \begin{bmatrix} 1 \\ \beta \end{bmatrix} = \hat{\sigma}^2 \begin{bmatrix} 1 \\ \beta \end{bmatrix},$$

where $[\, 1, \; \beta \,]^T$ is an unnormalized eigenvector of $\widehat{T}$ corresponding to eigenvalue $\hat{\sigma}^2$. Solving for $\beta$, normalizing the eigenvector, then repeating for the second eigenvector, we get

$$\widehat{U} = \begin{bmatrix} \alpha_1 & \alpha_2 \\ \phi\,\alpha_2 & -\phi\,\alpha_1 \end{bmatrix},$$

where $\alpha_1 = 1\Big/\sqrt{1 + \left( (\hat{\sigma}_1^2 - \hat{t}_{1,1})/|\hat{t}_{1,2}| \right)^2}$, $\alpha_2 = \alpha_1\, (\hat{\sigma}_1^2 - \hat{t}_{1,1})/|\hat{t}_{1,2}|$, and $\phi = \hat{t}_{2,1}/|\hat{t}_{2,1}|$. This allows us to define $Q = \bar{Q}\widehat{U}$, which will have the property $Q^H MM^H Q = \widehat{\Sigma}^2$.

V. EFFICIENTLY CALCULATING THE SVD OF $\widehat{M}$

Determining the singular values and left singular vectors of $\widehat{M}$ is equivalent to determining the eigenvalues and eigenvectors of $\widehat{M}\widehat{M}^H$ [10]; therefore we will start by analyzing the eigendecomposition of

$$\widehat{M}\widehat{M}^H = [\, U \;\; Q \,][\, U \;\; Q \,]^H\, \widetilde{M}\widetilde{M}^H\, [\, U \;\; Q \,][\, U \;\; Q \,]^H.$$

If we define

$$F = [\, U \;\; Q \,]^H\, \widetilde{M}\widetilde{M}^H\, [\, U \;\; Q \,] = U_f \Sigma_f^2 U_f^H, \qquad (5)$$

then we can write $\widehat{M}\widehat{M}^H = \left( [\, U \;\; Q \,] U_f \right) \Sigma_f^2 \left( [\, U \;\; Q \,] U_f \right)^H$, and it becomes obvious that the nonzero eigenvalues of $\widehat{M}\widehat{M}^H$ will be the eigenvalues of $F$, and the corresponding eigenvectors of $\widehat{M}\widehat{M}^H$ will be $[\, U \;\; Q \,] U_f$. From (4), we know $\widetilde{M}\widetilde{M}^H = MM^H - x_{t-c}x_{t-c}^H + x_t x_t^H$, which we can substitute into (5) to get

$$F = [\, U \;\; Q \,]^H\, MM^H\, [\, U \;\; Q \,] - aa^H + bb^H, \qquad (6)$$

where the vectors $a$ and $b$ are defined as $a = [\, U \;\; Q \,]^H x_{t-c}$ and $b = [\, U \;\; Q \,]^H x_t$. Multiplying out the matrix blocks in the first term of (6) gives us

$$F = \begin{bmatrix} U^H MM^H U & U^H MM^H Q \\ Q^H MM^H U & Q^H MM^H Q \end{bmatrix} - aa^H + bb^H. \qquad (7)$$

In Section IV we made it clear that $U^H MM^H U = \Sigma^2$ and $Q^H MM^H Q = \widehat{\Sigma}^2$, both of which we already have. If we define $Z = U^H MM^H Q$ (remember $M^H Q = (M^H \bar{Q})\widehat{U}$, and we already computed $M^H \bar{Q}$ when constructing $\widehat{T}$), we can write (7) as

$$F = \begin{bmatrix} \Sigma^2 & Z \\ Z^H & \widehat{\Sigma}^2 \end{bmatrix} - aa^H + bb^H. \qquad (8)$$

The matrix $F$, as it is written in (8), is a rank-six modification of a diagonal matrix. The matrices $aa^H$ and $bb^H$ are each rank-one, and the matrix $\begin{bmatrix} 0 & Z \\ Z^H & 0 \end{bmatrix}$ is rank-four, which can be seen from (16) in the appendix. Because the eigendecomposition of certain low-rank modifications of diagonal matrices can be computed in $O(n^2)$ time, we will take advantage of this to compute the EVD of $F$ more efficiently. We will compute the EVD of $F$ in two steps. The first step is to compute the EVD of

$$\dot{G} = \begin{bmatrix} \Sigma^2 & Z \\ Z^H & \widehat{\Sigma}^2 \end{bmatrix} = \dot{U} \dot{\Sigma}^2 \dot{U}^H, \qquad (9)$$

which is a rank-four modification of a diagonal matrix.
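The low-rank structure claimed here can be checked directly. The sketch below (illustrative sizes, not from the paper) verifies that the bordered block is rank four, using the decomposition into four Hermitian rank-one terms given as eq. (16) in the appendix:

```python
import numpy as np

rng = np.random.default_rng(3)
r = 5
Z = rng.standard_normal((r, 2)) + 1j * rng.standard_normal((r, 2))

# The bordered block [[0, Z], [Z^H, 0]] is rank four (twice the columns of Z).
Zzz = np.block([[np.zeros((r, r)), Z],
                [Z.conj().T, np.zeros((2, 2))]])
assert np.linalg.matrix_rank(Zzz) == 4

# It equals a sum of four Hermitian rank-one terms, as in eq. (16):
#   sum_k (1/2)([z_k; e_k][z_k; e_k]^H - [z_k; -e_k][z_k; -e_k]^H)
acc = np.zeros_like(Zzz)
for k in range(2):
    e_k = np.zeros(2); e_k[k] = 1.0
    v = np.concatenate([Z[:, k], e_k])
    w = np.concatenate([Z[:, k], -e_k])
    acc += 0.5 * (np.outer(v, v.conj()) - np.outer(w, w.conj()))
assert np.allclose(acc, Zzz)
```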
Since $F = \dot{G} - aa^H + bb^H$, if we rotate $F$ by the eigenvectors of $\dot{G}$, we can define the matrix $\ddot{H} = \dot{U}^H F \dot{U}$. Therefore, defining $\dot{a} = \dot{U}^H a$ and $\dot{b} = \dot{U}^H b$, the second step is to compute the EVD of

$$\ddot{H} = \dot{\Sigma}^2 - \dot{a}\dot{a}^H + \dot{b}\dot{b}^H = \ddot{U} \ddot{\Sigma}^2 \ddot{U}^H,$$

which is a rank-two modification of a diagonal matrix. Finally, we can determine the EVD of $F$ as $\Sigma_f^2 = \ddot{\Sigma}^2$ and $U_f = \dot{U}\ddot{U}$, allowing us to write the singular values and left singular vectors of $\widehat{M}$ as $\widetilde{\Sigma} = \ddot{\Sigma}$ and $\widetilde{U} = [\, U \;\; Q \,]\, \dot{U}\ddot{U}$.

VI. COMPUTING THE EVD OF $\dot{G}$

The eigenvalues of $\dot{G}$ from (9) are the roots of its characteristic polynomial, $\dot{C}(\lambda) = \det[\dot{G} - \lambda I]$, which can also be written as

$$\dot{C}(\lambda) = \det \begin{bmatrix} \Sigma^2 - \lambda I & Z \\ Z^H & \widehat{\Sigma}^2 - \lambda I \end{bmatrix}. \qquad (10)$$

Using [8], the determinant of a bordered diagonal matrix in the form of (10) can be written as

$$\dot{C}(\lambda) = \left( \prod_{i=1}^{r} (\sigma_i^2 - \lambda) \right) \left( \dot{w}_1(\lambda)\,\dot{w}_2(\lambda) - |\dot{w}_x(\lambda)|^2 \right),$$

where

$$\dot{w}_i(\lambda) = \hat{\sigma}_i^2 - \lambda - \sum_{j=1}^{r} \frac{|z_{j,i}|^2}{\sigma_j^2 - \lambda}, \qquad \dot{w}_x(\lambda) = \sum_{j=1}^{r} \frac{\bar{z}_{j,1}\, z_{j,2}}{\sigma_j^2 - \lambda}, \qquad (11)$$

$\sigma_i^2$ is the $i$th diagonal element of $\Sigma^2$, $z_{j,i}$ is the $(j,i)$th element of $Z$, and $\hat{\sigma}_i^2$ is the $i$th diagonal element of $\widehat{\Sigma}^2$. Because the product term in front of $\dot{C}(\lambda)$ has no effect on its roots, we can cancel it to get

$$\dot{w}(\lambda) = \dot{w}_1(\lambda)\,\dot{w}_2(\lambda) - |\dot{w}_x(\lambda)|^2,$$

which is a rank-four version of the rank-two formula given in [14], and behaves similarly to the rank-two secular function from [8]. We can now get the $r+2$ eigenvalues of $\dot{G}$ by determining the roots of $\dot{w}(\lambda)$, which we will call $\dot{\sigma}_1^2$ through $\dot{\sigma}_{r+2}^2$.

If $u$ is the unnormalized $i$th eigenvector of $\dot{G}$, then from the definition of an eigenvector [12], we get $\dot{G}u = u\,\dot{\sigma}_i^2$. Solving for $u$ (the details of which are given in the appendix), we get

$$u = \begin{bmatrix} \left( \Sigma^2 - \dot{\sigma}_i^2 I \right)^{-1} \left( z_1 + z_2\, \dot{w}_1(\dot{\sigma}_i^2)\big/\dot{w}_x(\dot{\sigma}_i^2) \right) \\ -1 \\ -\dot{w}_1(\dot{\sigma}_i^2)\big/\dot{w}_x(\dot{\sigma}_i^2) \end{bmatrix}, \qquad (12)$$
where $z_1$ and $z_2$ are the first and second columns of $Z$, respectively. This allows us to write the normalized $i$th eigenvector of $\dot{G}$ as $\dot{u}_i = u/\|u\|$. To stably calculate the eigenvalues of $\dot{G}$, they should be calculated as two sequential updates using [14], combined with the techniques described at the end of the next section.

VII. COMPUTING THE EVD OF $\ddot{H}$

After determining the EVD of $\dot{G}$, we can determine the EVD of $\ddot{H}$. The eigenvalues of $\ddot{H} = \dot{\Sigma}^2 - \dot{a}\dot{a}^H + \dot{b}\dot{b}^H$ are the roots of the rank-two secular equation

$$\ddot{w}(\lambda) = \ddot{w}_a(\lambda)\,\ddot{w}_b(\lambda) + |\ddot{w}_{ab}(\lambda)|^2, \qquad (13)$$

which is a rank-two version of the secular function given in [13], and whose derivation is in [8]. The parts of (13) are defined as

$$\ddot{w}_a(\lambda) = 1 - \sum_{j=1}^{r+2} \frac{|\dot{a}_j|^2}{\dot{\sigma}_j^2 - \lambda}, \qquad \ddot{w}_b(\lambda) = 1 + \sum_{j=1}^{r+2} \frac{|\dot{b}_j|^2}{\dot{\sigma}_j^2 - \lambda}, \qquad \ddot{w}_{ab}(\lambda) = \sum_{j=1}^{r+2} \frac{\bar{\dot{a}}_j\, \dot{b}_j}{\dot{\sigma}_j^2 - \lambda},$$

where $\dot{a}_j$ and $\dot{b}_j$ are the $j$th elements of $\dot{a}$ and $\dot{b}$, respectively. The unnormalized $i$th eigenvector of $\ddot{H}$ is a linear combination of $\dot{a}$ and $\dot{b}$, and from [8] can be written as

$$\ddot{u} = \left( \dot{\Sigma}^2 - \ddot{\sigma}_i^2 I \right)^{-1} \left( \dot{a} + \frac{\ddot{w}_a(\ddot{\sigma}_i^2)}{\ddot{w}_{ab}(\ddot{\sigma}_i^2)}\, \dot{b} \right). \qquad (14)$$

This allows us to write the normalized $i$th eigenvector of $\ddot{H}$ as $\ddot{u}_i = \ddot{u}/\|\ddot{u}\|$. To stably calculate the eigenvalues of $\ddot{H}$, they should be calculated as two sequential updates using [15], with the stopping criterion from [16], and the first set of eigenvectors should be calculated using the method in [16], [17]. To avoid an $O(r^3)$ rotation, the eigenvectors of $\ddot{H}$ are calculated using the method in this paper. For closely spaced eigenvalues, there will be some loss of orthogonality, which should be able to be addressed using methods like [18], [19] and [20].

VIII. CONCLUSION

In this paper, we present a method for efficiently computing the small EVD of the IFAST algorithm. We call this modified version of IFAST the Improved Secular Fast Adaptive Subspace Tracking (ISFAST) algorithm. The steps of the ISFAST algorithm are presented in Table II, and produce the same result as Table I. The dominant step computationally is step 4-a, as it has one $O(r^3)$ matrix product and one $O(nr^2)$ matrix product.
All other steps in the algorithm only contain terms which are matrices times vectors, or their computational equivalent.

TABLE II
THE ISFAST ALGORITHM

Step | Calculation
1-a) $\bar{q}_1 = \dfrac{(I - UU^H)\, x_{t-c}}{\left\| (I - UU^H)\, x_{t-c} \right\|}$
1-b) $\bar{q}_2 = \dfrac{(I - UU^H - \bar{q}_1\bar{q}_1^H)\, x_t}{\left\| (I - UU^H - \bar{q}_1\bar{q}_1^H)\, x_t \right\|}$
1-c) $\widehat{T} = \bar{Q}^H MM^H \bar{Q}$
1-d) $\widehat{U}\,\widehat{\Sigma}^2\,\widehat{U}^H = \widehat{T}$
1-e) $Q = \bar{Q}\widehat{U}$
2-a) $Z = U^H MM^H Q$
2-b) $\dot{G} = \begin{bmatrix} \Sigma^2 & Z \\ Z^H & \widehat{\Sigma}^2 \end{bmatrix}$
2-c) $\dot{U}\,\dot{\Sigma}^2\,\dot{U}^H = \dot{G}$
3-a) $\dot{a} = \dot{U}^H [\, U \;\; Q \,]^H x_{t-c}$
3-b) $\dot{b} = \dot{U}^H [\, U \;\; Q \,]^H x_t$
3-c) $\ddot{H} = \dot{\Sigma}^2 - \dot{a}\dot{a}^H + \dot{b}\dot{b}^H$
3-d) $\ddot{U}\,\ddot{\Sigma}^2\,\ddot{U}^H = \ddot{H}$
4-a) $\widetilde{U} = [\, U \;\; Q \,]\, \dot{U}\ddot{U}$
4-b) $\widetilde{\Sigma} = \ddot{\Sigma}$

The computational improvement introduced by performing the calculations as presented in this paper is shown in Fig. 1 for a square complex matrix. The full decomposition represents determining the full set of singular values and left singular vectors of $\widetilde{M}$ using a conventional SVD [10]. The ISFAST algorithm and the IFAST algorithm both produce the singular values and left singular vectors of $\widehat{M}$. The FAST algorithm [5] is given for comparison, and is almost computationally equivalent to the steps in Table I. The secular update method uses the method from Section VII directly on $\widetilde{M}$, and will give the same results as the full decomposition.

Fig. 1. Computational requirements (approximate FLOPS) of various algorithms for a square complex matrix vs. subspace dimension $r$.

Plots of approximate FLOPS vs. subspace dimension for other square matrix dimensions look very similar, even for very large $n$. We have not mentioned how we determine the subspace dimension $r$, but the method in [21], which is based on the energy in the noise subspace, works well with this algorithm for determining $r$ at each iteration.
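The machinery of Sections V–VII can be sanity-checked numerically. In the sketch below (random illustrative data; dense eigensolvers stand in for the stable secular root-finders of [14]–[17]), the eigenvalues of $\dot{G}$ are verified to be roots of the rank-four secular function $\dot{w}$, the eigenvector formulas (12) and (14) are checked against a dense EVD, and the two-step route is verified to recover the eigenvalues of $F$:

```python
import numpy as np

rng = np.random.default_rng(4)
r = 4
sig2 = np.sort(rng.uniform(1.0, 10.0, r))[::-1]     # diagonal of Sigma^2
sig2_hat = rng.uniform(1.0, 10.0, 2)                # diagonal of Sigma_hat^2
Z = rng.standard_normal((r, 2)) + 1j * rng.standard_normal((r, 2))
a = rng.standard_normal(r + 2) + 1j * rng.standard_normal(r + 2)
b = rng.standard_normal(r + 2) + 1j * rng.standard_normal(r + 2)

G = np.block([[np.diag(sig2), Z], [Z.conj().T, np.diag(sig2_hat)]])
F = G - np.outer(a, a.conj()) + np.outer(b, b.conj())

# --- Section VI: eigenvalues of G are roots of the rank-four secular function.
def w1(l): return sig2_hat[0] - l - np.sum(np.abs(Z[:, 0])**2 / (sig2 - l))
def w2(l): return sig2_hat[1] - l - np.sum(np.abs(Z[:, 1])**2 / (sig2 - l))
def wx(l): return np.sum(Z[:, 0].conj() * Z[:, 1] / (sig2 - l))
def w_dot(l): return w1(l) * w2(l) - abs(wx(l))**2

g_evals, U_dot = np.linalg.eigh(G)
assert all(abs(w_dot(l)) < 1e-5 for l in g_evals)

# Eigenvector formula (12), checked for the largest eigenvalue of G.
l = g_evals[-1]
u = np.concatenate([(Z[:, 0] + Z[:, 1] * w1(l) / wx(l)) / (sig2 - l),
                    [-1.0, -w1(l) / wx(l)]])
u /= np.linalg.norm(u)
assert abs(abs(u.conj() @ U_dot[:, -1]) - 1.0) < 1e-8

# --- Section VII: rotate by U_dot, leaving a rank-two secular problem.
s2_dot = g_evals
a_dot, b_dot = U_dot.conj().T @ a, U_dot.conj().T @ b
H = np.diag(s2_dot) - np.outer(a_dot, a_dot.conj()) + np.outer(b_dot, b_dot.conj())
h_evals, U_ddot = np.linalg.eigh(H)

def w_a(l): return 1.0 - np.sum(np.abs(a_dot)**2 / (s2_dot - l))
def w_ab(l): return np.sum(a_dot.conj() * b_dot / (s2_dot - l))

# Eigenvector formula (14), checked for the largest eigenvalue of H.
l = h_evals[-1]
uu = (a_dot + (w_a(l) / w_ab(l)) * b_dot) / (s2_dot - l)
uu /= np.linalg.norm(uu)
assert abs(abs(uu.conj() @ U_ddot[:, -1]) - 1.0) < 1e-8

# --- The two-step route reproduces the eigenvalues of F.
assert np.allclose(np.sort(h_evals), np.linalg.eigvalsh(F))
```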
APPENDIX
THE EIGENVECTORS OF $\dot{G}$

In this appendix, we derive the formula for the unnormalized $i$th eigenvector of $\dot{G}$. If the vector $u$ is the $i$th eigenvector of $\dot{G}$ multiplied by some scale factor, then from the definition of an eigenvector [12], we get $\dot{G}u = u\,\dot{\sigma}_i^2$. Using (9), we can write

$$\left( \begin{bmatrix} \Sigma^2 & 0 \\ 0 & \widehat{\Sigma}^2 \end{bmatrix} + Z_{zz} \right) u = u\,\dot{\sigma}_i^2,$$

where $Z_{zz} = \begin{bmatrix} 0 & Z \\ Z^H & 0 \end{bmatrix}$. After rearranging some terms, we get $Du = -Z_{zz}u$, where $D = \begin{bmatrix} \Sigma^2 & 0 \\ 0 & \widehat{\Sigma}^2 \end{bmatrix} - \dot{\sigma}_i^2 I$. If we deflate the problem by removing unchanged eigenvalues as in [8], [15], then $D$ is invertible, and we can write

$$u = -D^{-1} Z_{zz}\, u. \qquad (15)$$

The matrix $Z_{zz}$ is a rank-four Hermitian matrix, and can be written as the sum of four Hermitian rank-one matrices,

$$Z_{zz} = \frac{1}{2}\begin{bmatrix} z_1 \\ 1 \\ 0 \end{bmatrix}\begin{bmatrix} z_1 \\ 1 \\ 0 \end{bmatrix}^H - \frac{1}{2}\begin{bmatrix} z_1 \\ -1 \\ 0 \end{bmatrix}\begin{bmatrix} z_1 \\ -1 \\ 0 \end{bmatrix}^H + \frac{1}{2}\begin{bmatrix} z_2 \\ 0 \\ 1 \end{bmatrix}\begin{bmatrix} z_2 \\ 0 \\ 1 \end{bmatrix}^H - \frac{1}{2}\begin{bmatrix} z_2 \\ 0 \\ -1 \end{bmatrix}\begin{bmatrix} z_2 \\ 0 \\ -1 \end{bmatrix}^H. \qquad (16)$$

If we partition the length-$(r+2)$ vector $u$ as $[\, \bar{u}^T \;\; u_{r+1} \;\; u_{r+2} \,]^T$, and substitute this partitioned $u$ along with (16) into (15), we get

$$u = -D^{-1} \begin{bmatrix} u_{r+1}\, z_1 + u_{r+2}\, z_2 \\ z_1^H \bar{u} \\ z_2^H \bar{u} \end{bmatrix}. \qquad (17)$$

If we partition the $(r+2) \times (r+2)$ diagonal matrix $D$ as

$$D = \begin{bmatrix} \bar{D} & & \\ & d_{r+1} & \\ & & d_{r+2} \end{bmatrix},$$

then from (17) we get the two equalities $u_{r+1} = -z_1^H \bar{u}/d_{r+1}$ and $u_{r+2} = -z_2^H \bar{u}/d_{r+2}$. Substituting for $u_{r+1}$ and $u_{r+2}$ in (17), we get

$$\bar{u} = \bar{D}^{-1} \left( z_1\, \frac{z_1^H \bar{u}}{d_{r+1}} + z_2\, \frac{z_2^H \bar{u}}{d_{r+2}} \right), \qquad (18)$$

where $z_1^H \bar{u}$ and $z_2^H \bar{u}$ are scalars. If we multiply both sides of the first $r$ elements of (18) by $z_1^H$ from the left, we get

$$z_1^H \bar{u} = z_1^H \bar{D}^{-1} z_1\, \frac{z_1^H \bar{u}}{d_{r+1}} + z_1^H \bar{D}^{-1} z_2\, \frac{z_2^H \bar{u}}{d_{r+2}}. \qquad (19)$$

Solving for $z_2^H \bar{u}$ in terms of $z_1^H \bar{u}$ gives us

$$z_2^H \bar{u} = z_1^H \bar{u}\; \frac{d_{r+2}}{d_{r+1}} \cdot \frac{d_{r+1} - z_1^H \bar{D}^{-1} z_1}{z_1^H \bar{D}^{-1} z_2}. \qquad (20)$$

From (11), we can see that $\dot{w}_1(\dot{\sigma}_i^2) = d_{r+1} - z_1^H \bar{D}^{-1} z_1$ and $\dot{w}_x(\dot{\sigma}_i^2) = z_1^H \bar{D}^{-1} z_2$. Using these identities, and plugging (20) into (17) and (18), gives us

$$u = \frac{z_1^H \bar{u}}{d_{r+1}} \begin{bmatrix} \bar{D}^{-1} \left( z_1 + z_2\, \dot{w}_1(\dot{\sigma}_i^2)\big/\dot{w}_x(\dot{\sigma}_i^2) \right) \\ -1 \\ -\dot{w}_1(\dot{\sigma}_i^2)\big/\dot{w}_x(\dot{\sigma}_i^2) \end{bmatrix}.$$

Because $u$ is unnormalized, we can discard the scalar $z_1^H \bar{u}/d_{r+1}$, leaving us with (12).

REFERENCES

[1] I. Karasalo, "Estimating the covariance matrix by signal subspace averaging," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-34, no. 1, pp. 8–12, Feb. 1986.
[2] B. Yang, "Projection approximation subspace tracking," IEEE Trans. Signal Processing, vol. 43, no. 1, pp. 95–107, 1995.
[3] D. J.
Rabideau, "Fast, rank adaptive subspace tracking and applications," IEEE Trans. Signal Processing, vol. 44, no. 9, pp. 2229–2244, 1996.
[4] P. Strobach, "Bi-iteration SVD subspace tracking algorithms," IEEE Trans. Signal Processing, vol. 45, no. 5, 1997.
[5] E. C. Real, D. W. Tufts, and J. W. Cooley, "Two algorithms for fast approximate subspace tracking," IEEE Trans. Signal Processing, vol. 47, no. 7, pp. 1936–1945, July 1999.
[6] D. J. Rabideau, "Subspace tracking with quasi-rectangular data windows," in Proc. SPIE - The International Society for Optical Engineering, vol. 5096, Apr. 2003.
[7] R. Badeau, G. Richard, and B. David, "Sliding window adaptive SVD algorithms," IEEE Trans. Signal Processing, vol. 52, no. 1, pp. 1–10, 2004.
[8] T. M. Toolan, "Advances in sliding window subspace tracking," Ph.D. dissertation, University of Rhode Island, 2005.
[9] J. W. Demmel, Applied Numerical Linear Algebra. Philadelphia, PA: Society for Industrial and Applied Mathematics, 1997.
[10] G. H. Golub and C. F. van Loan, Matrix Computations, 3rd ed. Baltimore, MD: Johns Hopkins Univ. Press, 1996.
[11] T. M. Toolan and D. W. Tufts, "Improved fast adaptive subspace tracking," in Proc. Thirteenth Annual Adaptive Sensor Array Processing Workshop (ASAP), June 2005.
[12] B. N. Parlett, The Symmetric Eigenvalue Problem. Englewood Cliffs, NJ: Prentice-Hall, 1980.
[13] G. H. Golub, "Some modified matrix eigenvalue problems," SIAM Rev., vol. 15, no. 2, pp. 318–334, Apr. 1973.
[14] D. R. Fuhrmann, "An algorithm for subspace computation with applications in signal processing," SIAM J. Matrix Anal. Appl., vol. 9, no. 2, 1988.
[15] J. R. Bunch, C. P. Nielsen, and D. C. Sorensen, "Rank-one modification of the symmetric eigenproblem," Numer. Math., vol. 31, pp. 31–48, 1978.
[16] M. Gu and S. C. Eisenstat, "A stable and efficient algorithm for the rank-one modification of the symmetric eigenproblem," SIAM J. Matrix Anal. Appl., vol. 15, no. 4, pp. 1266–1276, Oct. 1994.
[17] ——, "Downdating the singular value decomposition," SIAM J. Matrix Anal. Appl., vol. 16, no. 3, pp. 793–810, July 1995.
[18] D. C.
Sorensen and P. T. P. Tang, "On the orthogonality of eigenvectors computed by divide-and-conquer techniques," SIAM J. Numer. Anal., vol. 28, no. 6, Dec. 1991.
[19] R. D. DeGroat and R. A. Roberts, "Efficient, numerically stabilized rank-one eigenstructure updating," IEEE Trans. Acoust., Speech, Signal Processing, vol. 38, no. 2, pp. 301–316, Feb. 1990.
[20] M. Moonen, P. van Dooren, and J. Vandewalle, "A note on 'Efficient numerically stabilized rank-one eigenstructure updating'," IEEE Trans. Signal Processing, vol. 39, no. 8, Aug. 1991.
[21] T. M. Toolan and D. W. Tufts, "Detection and estimation in non-stationary environments," in Proc. IEEE Asilomar Conference on Signals, Systems & Computers, Nov. 2003.
More informationSolution of Linear Equations
Solution of Linear Equations (Com S 477/577 Notes) Yan-Bin Jia Sep 7, 07 We have discussed general methods for solving arbitrary equations, and looked at the special class of polynomial equations A subclass
More informationGlossary of Linear Algebra Terms. Prepared by Vince Zaccone For Campus Learning Assistance Services at UCSB
Glossary of Linear Algebra Terms Basis (for a subspace) A linearly independent set of vectors that spans the space Basic Variable A variable in a linear system that corresponds to a pivot column in the
More informationAN INVERSE EIGENVALUE PROBLEM AND AN ASSOCIATED APPROXIMATION PROBLEM FOR GENERALIZED K-CENTROHERMITIAN MATRICES
AN INVERSE EIGENVALUE PROBLEM AND AN ASSOCIATED APPROXIMATION PROBLEM FOR GENERALIZED K-CENTROHERMITIAN MATRICES ZHONGYUN LIU AND HEIKE FAßBENDER Abstract: A partially described inverse eigenvalue problem
More informationBackground Mathematics (2/2) 1. David Barber
Background Mathematics (2/2) 1 David Barber University College London Modified by Samson Cheung (sccheung@ieee.org) 1 These slides accompany the book Bayesian Reasoning and Machine Learning. The book and
More informationA New Look at the Power Method for Fast Subspace Tracking
Digital Signal Processing 9, 297 314 (1999) Article ID dspr.1999.0348, available online at http://www.idealibrary.com on A New Look at the Power Method for Fast Subspace Tracking Yingbo Hua,* Yong Xiang,*
More informationMath 18, Linear Algebra, Lecture C00, Spring 2017 Review and Practice Problems for Final Exam
Math 8, Linear Algebra, Lecture C, Spring 7 Review and Practice Problems for Final Exam. The augmentedmatrix of a linear system has been transformed by row operations into 5 4 8. Determine if the system
More informationECS231 Handout Subspace projection methods for Solving Large-Scale Eigenvalue Problems. Part I: Review of basic theory of eigenvalue problems
ECS231 Handout Subspace projection methods for Solving Large-Scale Eigenvalue Problems Part I: Review of basic theory of eigenvalue problems 1. Let A C n n. (a) A scalar λ is an eigenvalue of an n n A
More informationMATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2
MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS SYSTEMS OF EQUATIONS AND MATRICES Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a
More informationQR Factorization Based Blind Channel Identification and Equalization with Second-Order Statistics
60 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL 48, NO 1, JANUARY 2000 QR Factorization Based Blind Channel Identification and Equalization with Second-Order Statistics Xiaohua Li and H (Howard) Fan, Senior
More informationMaths for Signals and Systems Linear Algebra in Engineering
Maths for Signals and Systems Linear Algebra in Engineering Lectures 13 15, Tuesday 8 th and Friday 11 th November 016 DR TANIA STATHAKI READER (ASSOCIATE PROFFESOR) IN SIGNAL PROCESSING IMPERIAL COLLEGE
More informationMATH 20F: LINEAR ALGEBRA LECTURE B00 (T. KEMP)
MATH 20F: LINEAR ALGEBRA LECTURE B00 (T KEMP) Definition 01 If T (x) = Ax is a linear transformation from R n to R m then Nul (T ) = {x R n : T (x) = 0} = Nul (A) Ran (T ) = {Ax R m : x R n } = {b R m
More informationLinear Algebra: Matrix Eigenvalue Problems
CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given
More informationStudy Notes on Matrices & Determinants for GATE 2017
Study Notes on Matrices & Determinants for GATE 2017 Matrices and Determinates are undoubtedly one of the most scoring and high yielding topics in GATE. At least 3-4 questions are always anticipated from
More informationFinal Exam, Linear Algebra, Fall, 2003, W. Stephen Wilson
Final Exam, Linear Algebra, Fall, 2003, W. Stephen Wilson Name: TA Name and section: NO CALCULATORS, SHOW ALL WORK, NO OTHER PAPERS ON DESK. There is very little actual work to be done on this exam if
More informationLinear Algebra & Geometry why is linear algebra useful in computer vision?
Linear Algebra & Geometry why is linear algebra useful in computer vision? References: -Any book on linear algebra! -[HZ] chapters 2, 4 Some of the slides in this lecture are courtesy to Prof. Octavia
More informationANSWERS. E k E 2 E 1 A = B
MATH 7- Final Exam Spring ANSWERS Essay Questions points Define an Elementary Matrix Display the fundamental matrix multiply equation which summarizes a sequence of swap, combination and multiply operations,
More informationThe constrained stochastic matched filter subspace tracking
The constrained stochastic matched filter subspace tracking Maissa Chagmani, Bernard Xerri, Bruno Borloz, Claude Jauffret To cite this version: Maissa Chagmani, Bernard Xerri, Bruno Borloz, Claude Jauffret.
More informationMath Bootcamp An p-dimensional vector is p numbers put together. Written as. x 1 x =. x p
Math Bootcamp 2012 1 Review of matrix algebra 1.1 Vectors and rules of operations An p-dimensional vector is p numbers put together. Written as x 1 x =. x p. When p = 1, this represents a point in the
More information[Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty.]
Math 43 Review Notes [Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty Dot Product If v (v, v, v 3 and w (w, w, w 3, then the
More informationCOMP 558 lecture 18 Nov. 15, 2010
Least squares We have seen several least squares problems thus far, and we will see more in the upcoming lectures. For this reason it is good to have a more general picture of these problems and how to
More informationDiagonalizing Matrices
Diagonalizing Matrices Massoud Malek A A Let A = A k be an n n non-singular matrix and let B = A = [B, B,, B k,, B n ] Then A n A B = A A 0 0 A k [B, B,, B k,, B n ] = 0 0 = I n 0 A n Notice that A i B
More informationIr O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v )
Section 3.2 Theorem 3.6. Let A be an m n matrix of rank r. Then r m, r n, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix ( ) Ir O D = 1 O
More informationEECS 275 Matrix Computation
EECS 275 Matrix Computation Ming-Hsuan Yang Electrical Engineering and Computer Science University of California at Merced Merced, CA 95344 http://faculty.ucmerced.edu/mhyang Lecture 17 1 / 26 Overview
More informationDefinition (T -invariant subspace) Example. Example
Eigenvalues, Eigenvectors, Similarity, and Diagonalization We now turn our attention to linear transformations of the form T : V V. To better understand the effect of T on the vector space V, we begin
More informationIMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET
IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET This is a (not quite comprehensive) list of definitions and theorems given in Math 1553. Pay particular attention to the ones in red. Study Tip For each
More informationAPPLIED NUMERICAL LINEAR ALGEBRA
APPLIED NUMERICAL LINEAR ALGEBRA James W. Demmel University of California Berkeley, California Society for Industrial and Applied Mathematics Philadelphia Contents Preface 1 Introduction 1 1.1 Basic Notation
More informationELE/MCE 503 Linear Algebra Facts Fall 2018
ELE/MCE 503 Linear Algebra Facts Fall 2018 Fact N.1 A set of vectors is linearly independent if and only if none of the vectors in the set can be written as a linear combination of the others. Fact N.2
More informationMath 4A Notes. Written by Victoria Kala Last updated June 11, 2017
Math 4A Notes Written by Victoria Kala vtkala@math.ucsb.edu Last updated June 11, 2017 Systems of Linear Equations A linear equation is an equation that can be written in the form a 1 x 1 + a 2 x 2 +...
More informationReview of similarity transformation and Singular Value Decomposition
Review of similarity transformation and Singular Value Decomposition Nasser M Abbasi Applied Mathematics Department, California State University, Fullerton July 8 7 page compiled on June 9, 5 at 9:5pm
More informationA Cross-Associative Neural Network for SVD of Nonsquared Data Matrix in Signal Processing
IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 12, NO. 5, SEPTEMBER 2001 1215 A Cross-Associative Neural Network for SVD of Nonsquared Data Matrix in Signal Processing Da-Zheng Feng, Zheng Bao, Xian-Da Zhang
More informationUniversity of Colorado Denver Department of Mathematical and Statistical Sciences Applied Linear Algebra Ph.D. Preliminary Exam June 8, 2012
University of Colorado Denver Department of Mathematical and Statistical Sciences Applied Linear Algebra Ph.D. Preliminary Exam June 8, 2012 Name: Exam Rules: This is a closed book exam. Once the exam
More information22.3. Repeated Eigenvalues and Symmetric Matrices. Introduction. Prerequisites. Learning Outcomes
Repeated Eigenvalues and Symmetric Matrices. Introduction In this Section we further develop the theory of eigenvalues and eigenvectors in two distinct directions. Firstly we look at matrices where one
More informationMATRIX ALGEBRA. or x = (x 1,..., x n ) R n. y 1 y 2. x 2. x m. y m. y = cos θ 1 = x 1 L x. sin θ 1 = x 2. cos θ 2 = y 1 L y.
as Basics Vectors MATRIX ALGEBRA An array of n real numbers x, x,, x n is called a vector and it is written x = x x n or x = x,, x n R n prime operation=transposing a column to a row Basic vector operations
More informationData Mining Lecture 4: Covariance, EVD, PCA & SVD
Data Mining Lecture 4: Covariance, EVD, PCA & SVD Jo Houghton ECS Southampton February 25, 2019 1 / 28 Variance and Covariance - Expectation A random variable takes on different values due to chance The
More informationQuantum Computing Lecture 2. Review of Linear Algebra
Quantum Computing Lecture 2 Review of Linear Algebra Maris Ozols Linear algebra States of a quantum system form a vector space and their transformations are described by linear operators Vector spaces
More informationMIT Final Exam Solutions, Spring 2017
MIT 8.6 Final Exam Solutions, Spring 7 Problem : For some real matrix A, the following vectors form a basis for its column space and null space: C(A) = span,, N(A) = span,,. (a) What is the size m n of
More informationNumerical Methods for Solving Large Scale Eigenvalue Problems
Peter Arbenz Computer Science Department, ETH Zürich E-mail: arbenz@inf.ethz.ch arge scale eigenvalue problems, Lecture 2, February 28, 2018 1/46 Numerical Methods for Solving Large Scale Eigenvalue Problems
More informationMathematical Methods wk 2: Linear Operators
John Magorrian, magog@thphysoxacuk These are work-in-progress notes for the second-year course on mathematical methods The most up-to-date version is available from http://www-thphysphysicsoxacuk/people/johnmagorrian/mm
More informationA Brief Outline of Math 355
A Brief Outline of Math 355 Lecture 1 The geometry of linear equations; elimination with matrices A system of m linear equations with n unknowns can be thought of geometrically as m hyperplanes intersecting
More informationMath 408 Advanced Linear Algebra
Math 408 Advanced Linear Algebra Chi-Kwong Li Chapter 4 Hermitian and symmetric matrices Basic properties Theorem Let A M n. The following are equivalent. Remark (a) A is Hermitian, i.e., A = A. (b) x
More information1. What is the determinant of the following matrix? a 1 a 2 4a 3 2a 2 b 1 b 2 4b 3 2b c 1. = 4, then det
What is the determinant of the following matrix? 3 4 3 4 3 4 4 3 A 0 B 8 C 55 D 0 E 60 If det a a a 3 b b b 3 c c c 3 = 4, then det a a 4a 3 a b b 4b 3 b c c c 3 c = A 8 B 6 C 4 D E 3 Let A be an n n matrix
More informationLecture 2: Linear Algebra Review
EE 227A: Convex Optimization and Applications January 19 Lecture 2: Linear Algebra Review Lecturer: Mert Pilanci Reading assignment: Appendix C of BV. Sections 2-6 of the web textbook 1 2.1 Vectors 2.1.1
More informationGeneralized Triangular Transform Coding
Generalized Triangular Transform Coding Ching-Chih Weng, Chun-Yang Chen, and P. P. Vaidyanathan Dept. of Electrical Engineering, MC 136-93 California Institute of Technology, Pasadena, CA 91125, USA E-mail:
More informationSolving Linear Systems of Equations
November 6, 2013 Introduction The type of problems that we have to solve are: Solve the system: A x = B, where a 11 a 1N a 12 a 2N A =.. a 1N a NN x = x 1 x 2. x N B = b 1 b 2. b N To find A 1 (inverse
More informationbe a Householder matrix. Then prove the followings H = I 2 uut Hu = (I 2 uu u T u )u = u 2 uut u
MATH 434/534 Theoretical Assignment 7 Solution Chapter 7 (71) Let H = I 2uuT Hu = u (ii) Hv = v if = 0 be a Householder matrix Then prove the followings H = I 2 uut Hu = (I 2 uu )u = u 2 uut u = u 2u =
More informationq n. Q T Q = I. Projections Least Squares best fit solution to Ax = b. Gram-Schmidt process for getting an orthonormal basis from any basis.
Exam Review Material covered by the exam [ Orthogonal matrices Q = q 1... ] q n. Q T Q = I. Projections Least Squares best fit solution to Ax = b. Gram-Schmidt process for getting an orthonormal basis
More informationTHE RELATION BETWEEN THE QR AND LR ALGORITHMS
SIAM J. MATRIX ANAL. APPL. c 1998 Society for Industrial and Applied Mathematics Vol. 19, No. 2, pp. 551 555, April 1998 017 THE RELATION BETWEEN THE QR AND LR ALGORITHMS HONGGUO XU Abstract. For an Hermitian
More information22m:033 Notes: 7.1 Diagonalization of Symmetric Matrices
m:33 Notes: 7. Diagonalization of Symmetric Matrices Dennis Roseman University of Iowa Iowa City, IA http://www.math.uiowa.edu/ roseman May 3, Symmetric matrices Definition. A symmetric matrix is a matrix
More informationThe Eigenvalue Shift Technique and Its Eigenstructure Analysis of a Matrix
The Eigenvalue Shift Technique and Its Eigenstructure Analysis of a Matrix Chun-Yueh Chiang Center for General Education, National Formosa University, Huwei 632, Taiwan. Matthew M. Lin 2, Department of
More informationThis appendix provides a very basic introduction to linear algebra concepts.
APPENDIX Basic Linear Algebra Concepts This appendix provides a very basic introduction to linear algebra concepts. Some of these concepts are intentionally presented here in a somewhat simplified (not
More informationThis can be accomplished by left matrix multiplication as follows: I
1 Numerical Linear Algebra 11 The LU Factorization Recall from linear algebra that Gaussian elimination is a method for solving linear systems of the form Ax = b, where A R m n and bran(a) In this method
More information