
THE CORE PROBLEM WITHIN A LINEAR APPROXIMATION PROBLEM AX ≈ B WITH MULTIPLE RIGHT-HAND SIDES

IVETA HNĚTYNKOVÁ, MARTIN PLEŠINGER, AND ZDENĚK STRAKOŠ

Abstract. This paper focuses on total least squares (TLS) problems AX ≈ B with multiple right-hand sides. Existence and uniqueness of a TLS solution for such problems was analyzed in the paper I. Hnětynková, M. Plešinger, D. M. Sima, Z. Strakoš, and S. Van Huffel (2011). For TLS problems with single right-hand sides the paper C. C. Paige and Z. Strakoš (2006) showed how necessary and sufficient information for solving Ax ≈ b can be revealed from the original data through the so-called core problem concept. In this paper we present a theoretical study extending this concept to problems with multiple right-hand sides. The data reduction we present here is based on the singular value decomposition of the system matrix A. We show minimality of the reduced problem; in this sense the situation is analogous to the single right-hand side case. Some other properties of the core problem, however, cannot be extended to the case of multiple right-hand sides.

Key words. total least squares problem, multiple right-hand sides, core problem, linear approximation problem, error-in-variables modeling, orthogonal regression, singular value decomposition

AMS subject classifications. 15A06, 15A18, 15A21, 15A24, 65F20, 65F25

(This work has been supported by the GAČR grant No. P201/ S. The research of M. Plešinger has been partly supported by the ESF grants No. CZ.1.07/2.3.00/09.0155 and No. CZ.1.07/2.3.00/30.0065. The research of Z. Strakoš has been partly supported by the GAČR grant 201/09/0917. I. Hnětynková: Charles University in Prague, Faculty of Mathematics and Physics, Czech Republic, and Institute of Computer Science, AS CR (hnetynkova@cs.cas.cz). M. Plešinger: Technical University of Liberec, Department of Mathematics, Czech Republic, and Institute of Computer Science, AS CR (martin.plesinger@tul.cz). Z. Strakoš: Charles University in Prague, Faculty of Mathematics and Physics, Czech Republic (strakos@karlin.mff.cuni.cz).)

1. Introduction. Consider a linear approximation problem

    AX ≈ B,  or, equivalently,  [B A] [I_d; −X] ≈ 0,    (1.1)

where A ∈ R^{m×n}, X ∈ R^{n×d}, B ∈ R^{m×d}, without any further assumption on the positive integers m, n, d, and with A ≠ 0, B ≠ 0 (this eliminates the trivial case where it does not make sense to approximate B by a linear combination of the columns of A; see also (1.4)). The equivalent [B A] form in (1.1) has been chosen so that our transformations will take it to block form (as, for example, in (2.1) below for the d = 1 case) revealing the core problem most simply. We will focus on incompatible problems, i.e., R(B) ⊄ R(A). If R(B) ⊆ R(A), then the system AX = B can be solved using standard methods. Consider changes of the coordinate systems in R^m, R^n, and R^d represented by orthogonal transformations

    (P^T A Q)(Q^T X R) ≈ P^T B R,    (1.2)

where P^{−1} = P^T, Q^{−1} = Q^T, R^{−1} = R^T; or, equivalently,

    [B̂ Â] [I_d; −X̂] ≡ ( P^T [B A] [R 0; 0 Q] ) ( [R^T 0; 0 Q^T] [I_d; −X] R ) ≈ 0.    (1.3)
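Such orthogonal changes of coordinates alter neither the data nor any norm-based optimality measure. The following minimal NumPy sketch is our own illustration (not part of the paper); the helper rand_orth and all dimensions are arbitrary choices. It checks numerically that the residual of (1.1) and the residual of the transformed problem (1.2) have the same Frobenius norm, since B̂ − Â X̂ = P^T (B − A X) R.

```python
import numpy as np

def rand_orth(n, rng):
    """Random n x n orthogonal matrix (QR factor of a Gaussian matrix)."""
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return q

rng = np.random.default_rng(0)
m, n, d = 7, 4, 3
A, B, X = (rng.standard_normal(s) for s in [(m, n), (m, d), (n, d)])
P, Q, R = rand_orth(m, rng), rand_orth(n, rng), rand_orth(d, rng)

A_hat, B_hat, X_hat = P.T @ A @ Q, P.T @ B @ R, Q.T @ X @ R
# B_hat - A_hat @ X_hat = P.T @ (B - A @ X) @ R, so the two norms agree:
print(np.linalg.norm(B - A @ X), np.linalg.norm(B_hat - A_hat @ X_hat))
```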

We require that X solves (1.1) if and only if X̂ = Q^T X R solves (1.2), and call such problems orthogonally invariant. The total least squares (TLS) problem

    min_{X,E,G} ‖[G E]‖_F  subject to  (A + E) X = B + G    (1.4)

serves as an important example; see [12, 13, 7], [5, section 6], [18]. Mathematically equivalent problems have been independently investigated under the names orthogonal regression and errors-in-variables modeling; see [20, 21]. In [8] it is shown that even with d = 1 (which gives Ax ≈ b, where b is an m-vector) the TLS problem may not have a solution and, when the solution exists, it may not be unique; see also [6]. In order to resolve this difficulty, the classical book [19] introduces the so-called nongeneric solution. This book also extends the TLS theory to problems with multiple right-hand sides, i.e., for d > 1. The existence and uniqueness of a TLS solution with d > 1 is then discussed in full generality in the recent paper [10], giving a new classification of all possible cases.

The sequence of papers [12], [13], and [14] by C. C. Paige and Z. Strakoš investigates, using a unified framework, different least squares formulations for problems with d = 1. The last paper [14] introduced the so-called core problem that separates the necessary and sufficient information for solving the problem from the rest. It gives the necessary and sufficient condition for existence of the TLS solution, explains when the TLS solution exists and when it is unique, and clarifies the meaning of the nongeneric solution. For a brief summary see also the recent paper [10], which also shows that there is a class of problems for which the classical TLS approach described in [19, 22, 23] and [9, Chapter 6.3] (in particular the so-called classical TLS algorithm; see [19, Chapter 3.6.1, pp. 87–90], [10, Algorithm 1, p. 767]) is unable to find the existing TLS solutions.

The first steps in generalizing the core problem theory for d > 1 were done by Å. Björck in the series of talks [1], [2], [3], and also in the unpublished manuscript [4], and by D. M. Sima and S. Van Huffel in [16, 17]. In the theoretical study presented in this paper we further develop the data reduction suggested in [15] which gives the core problem; we investigate its properties and prove its minimality. We do not advocate a computational technique for solving the problem. This is a matter of further work and the results will be published elsewhere.

The organization of this paper is as follows. Section 2 recalls the core problem concept for a single right-hand side. Section 3 describes the data reduction for multiple right-hand sides, shows how to assemble the transformation matrices, and discusses basic properties of the reduced problem. Section 4 proves the minimality of the reduced problem, and thus justifies the definition of the core problem. Section 5 concludes the paper.

Throughout the paper, R(M) and N(M) denote the range and null space of a matrix M, respectively; I_l (or just I) denotes the l × l identity matrix; and 0_{l,ξ} (or just 0) denotes an l × ξ zero matrix. The matrices A, B, [B A], and X from (1.1) are called the system matrix, the right-hand sides (or the observation) matrix, the extended (or data) matrix, and the matrix of unknowns, respectively.
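For orientation, the generic case of the classical TLS algorithm mentioned above can be sketched in a few lines of NumPy. The function below is our own illustrative reduction of [19, Chapter 3.6.1] to its simplest situation: the smallest singular value of the extended matrix is simple and the last component of the corresponding right singular vector is nonzero. These are precisely the assumptions that fail in the nongeneric cases that the core problem concept clarifies. The ordering [A, b] of the extended matrix and the tolerance are our choices.

```python
import numpy as np

def tls_single_rhs(A, b):
    """Generic-case TLS solution of A x ~ b via the SVD of [A, b].

    Assumes the smallest singular value of [A, b] is simple and that the
    last component of the corresponding right singular vector is nonzero;
    otherwise the TLS solution may not exist or may not be unique.
    """
    n = A.shape[1]
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
    v = Vt[-1, :]               # right singular vector of the smallest singular value
    if abs(v[n]) < 1e-12:
        raise ValueError("nongeneric TLS problem: the classical formula breaks down")
    return -v[:n] / v[n]        # then (A + E) x = b + g with minimal ||[E, g]||_F

rng = np.random.default_rng(1)
A, b = rng.standard_normal((8, 3)), rng.standard_normal(8)
print(tls_single_rhs(A, b))
```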

2. Data reduction in the single right-hand side case. Consider the linear approximation problem (1.1) with d = 1. In [14] it was shown that there exist orthogonal matrices P, Q that transform the original problem into the block form

    P^T [b  A Q] = [ b_1  A_{11}  0 ; 0  0  A_{22} ],    Q^T x = [ x_1 ; x_2 ],    (2.1)

where b_1 and A_{11} are of minimal dimensions. Such a transformation can be obtained using the singular value decomposition (SVD) of the system matrix A,

    A = U Σ V^T,  U ∈ R^{m×m},  Σ ∈ R^{m×n},  V ∈ R^{n×n},    (2.2)

where U^{−1} = U^T, V^{−1} = V^T. Let A have k distinct nonzero singular values

    σ_1 > σ_2 > … > σ_k > 0,    (2.3)

and let their multiplicities be m_j, j = 1, …, k, with Σ_{j=1}^{k} m_j = r ≡ rank(A). Then

    Σ = diag(σ_1 I_{m_1}, …, σ_k I_{m_k}, 0_{m−r, n−r}).    (2.4)

Consider the partitioning U = [U_1, …, U_k, U_{k+1}], U_j ∈ R^{m×m_j}, j = 1, …, k, and U_{k+1} ∈ R^{m×m_{k+1}}, where m_{k+1} ≡ m − r is the dimension of the null space N(A^T). The columns of U_j represent an orthonormal basis of the j-th left singular vector subspace of A. Then

    U^T [b  A] [ 1  0 ; 0  V ] = [ U^T b   U^T A V ] = [ f  Σ ],    (2.5)

where

    f ≡ U^T b = [ f_1 ; … ; f_k ; f_{k+1} ],  f_j ≡ U_j^T b,  j = 1, …, k+1,  and  φ_j ≡ ‖f_j‖.    (2.6)

Note that φ_j = 0 if and only if b is orthogonal to the j-th left singular vector subspace. In order to be conformal with the multiple right-hand sides case, we keep (unlike in [14]) the zero and nonzero components f_j together until the last permutation. Let S_j ∈ R^{m_j×m_j}, S_j^{−1} = S_j^T, be a Householder reflection matrix such that

    S_j^T f_j = e_1 φ_j,  e_1 = [1, 0, …, 0]^T ∈ R^{m_j},  j = 1, …, k+1,    (2.7)

and let

    S ≡ diag(S_1, …, S_k),  S_L ≡ diag(S, S_{k+1}),  S_R ≡ diag(S, I_{n−r}).

Note that S_L^T Σ S_R = Σ. The orthogonal transformation

    (U S_L)^T [ b  A (V S_R) ] = [ S_L^T f   S_L^T Σ S_R ] = [ S_L^T f   Σ ]

maximizes the number of zero entries in the right-hand side vector

    S_L^T f = [ φ_1 e_1^T, …, φ_k e_1^T, φ_{k+1} e_1^T ]^T

(as mentioned above, we may have φ_j = 0 for some j).
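A reflection satisfying (2.7) is cheap to construct explicitly. The following sketch is our own (the degenerate-case tolerance is an arbitrary choice); it builds a symmetric orthogonal S with S f = ‖f‖ e_1, hence also S^T f = e_1 φ as required.

```python
import numpy as np

def householder_to_e1(f):
    """Symmetric orthogonal S with S @ f = norm(f) * e1, as required in (2.7).

    When f is already a nonnegative multiple of e1 the reflection degenerates,
    and the identity is returned instead (the tolerance is an arbitrary choice).
    """
    f = np.asarray(f, dtype=float)
    phi = np.linalg.norm(f)
    e1 = np.zeros(f.size)
    e1[0] = 1.0
    v = f - phi * e1                    # reflect f onto the positive e1 axis
    nv = np.linalg.norm(v)
    if nv < 1e-14 * max(phi, 1.0):
        return np.eye(f.size)
    v /= nv
    return np.eye(f.size) - 2.0 * np.outer(v, v)

f = np.array([3.0, 4.0, 0.0])
S = householder_to_e1(f)
print(S @ f)   # approximately [5, 0, 0]; S = S.T, so S.T @ f is the same
```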

Let b have nonzero components f_{j_l} in n̄ left singular vector subspaces corresponding to nonzero singular values σ_j, with indices j_1, …, j_n̄, 0 ≤ n̄ ≤ k. The component f_{k+1} (the component of b in the null space of A^T) is nonzero due to the fact that the problem is incompatible. Consider a row permutation Π_L of the matrix [ S_L^T f  Σ ] such that

    Π_L^T S_L^T f = [ b_1 ; 0 ],  b_1 ≡ [ φ_{j_1}, …, φ_{j_n̄}, φ_{k+1} ]^T = [ ‖f_{j_1}‖, …, ‖f_{j_n̄}‖, ‖f_{k+1}‖ ]^T ∈ R^{n̄+1},    (2.8)

i.e., all entries of b_1 are positive. Then there exists a column permutation Π_R of the matrix Π_L^T Σ such that

    Π_L^T Σ Π_R = [ A_{11}  0 ; 0  A_{22} ],

where the block A_{11} ∈ R^{(n̄+1)×n̄} is (with b ∉ R(A)) rectangular, containing at most one copy of each nonzero singular value σ_j on its diagonal and having a zero last row. All the other singular values are moved to the diagonal of the second block A_{22}, which can be of any shape, or nonexistent. Summarizing, we obtain

    P^T [ b  A Q ] = [ b_1  A_{11}  0 ; 0  0  A_{22} ],  P ≡ U S_L Π_L,  Q ≡ V S_R Π_R,

where [b_1 A_{11}] and A_{11} are of minimal dimensions; see [14]. The corresponding transformation and conformal splitting of the vector of unknowns is

    Q^T x = (V S_R Π_R)^T x = [ x_1 ; x_2 ].

The subproblem [b_1 A_{11}], or A_{11} x_1 ≈ b_1, contains the necessary and sufficient information for solving the original problem Ax ≈ b, and it is called the core problem. The solution of the second subproblem A_{22} x_2 ≈ 0, with the maximally dimensioned block A_{22} ∈ R^{(m−n̄−1)×(n−n̄)}, can be considered to be x_2 = 0 (see the discussion in [14]), giving

    x = Q [ x_1 ; x_2 ] = V S_R Π_R [ x_1 ; 0 ].    (2.9)

The core problem (in a different form) can also be revealed by the Golub–Kahan bidiagonalization; see [14, Section 3] and [11]. For d = 1 the core problem has a unique TLS solution, see [14]. Using (2.9), the core problem defines the minimum norm TLS solution if it exists, or the minimum norm nongeneric solution of (1.1); see [14] and [10, section 3.1]. Computationally (assuming exact arithmetic) the solution (2.9) is for d = 1 therefore identical to the output of the classical TLS algorithm given in [19, Chapter 3.6.1, pp. 87–90]; see also [10, Algorithm 1, p. 767]. Unlike in the classical TLS algorithm, the solution (2.9) is constructed in a straightforward way by separating the TLS-meaningful part A_{11} x_1 ≈ b_1 of the problem Ax ≈ b from the redundant and irrelevant part represented by A_{22} x_2 ≈ 0, x_2 ≡ 0. In the rest of this paper it will be shown how to generalize the SVD-based data reduction and obtain the core problem for d > 1, i.e., for the problem AX ≈ B.
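The whole single right-hand side reduction condenses into a short NumPy sketch returning b_1 and A_{11} directly in the form (2.8): cluster the singular values of A, set φ_j = ‖U_j^T b‖, keep one copy of each σ_j with nonzero φ_j, and append the zero last row carrying φ_{k+1}. This is our own illustration under the stated assumptions; the clustering and rank tolerances are arbitrary choices.

```python
import numpy as np

def core_problem_d1(A, b, tol=1e-10):
    """Sketch of the single right-hand side core reduction (2.1)-(2.8).

    Returns (b1, A11) with b1 > 0 entrywise and A11 of shape
    (len(b1), len(b1) - 1); assumes A != 0, b != 0 and an incompatible
    problem, i.e., a nonzero component of b in N(A^T).
    """
    U, s, _ = np.linalg.svd(A)
    r = int(np.sum(s > tol * s[0]))
    f = U.T @ b
    sig, phi, j = [], [], 0
    while j < r:                                   # cluster equal singular values
        i = j
        while j < r and abs(s[j] - s[i]) <= tol * s[0]:
            j += 1
        p = np.linalg.norm(f[i:j])                 # phi_j = ||U_j^T b||, see (2.6)
        if p > tol * np.linalg.norm(b):
            sig.append(s[i])                       # keep one copy of sigma_j
            phi.append(p)
    phi_null = np.linalg.norm(f[r:])               # phi_{k+1}, component of b in N(A^T)
    b1 = np.array(phi + [phi_null])
    A11 = np.vstack([np.diag(sig), np.zeros((1, len(sig)))])   # zero last row
    return b1, A11

rng = np.random.default_rng(3)
A, b = rng.standard_normal((6, 4)), rng.standard_normal(6)
b1, A11 = core_problem_d1(A, b)
print(b1.shape, A11.shape)   # generically (5,) and (5, 4) here
```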

3. Data reduction in the multiple right-hand side case. Consider the problem (1.1) with d > 1. In this section we construct orthogonal matrices P, Q, R that transform the original data matrix [B A] into the block form

    P^T [B A] [ R  0 ; 0  Q ] = [ P^T B R   P^T A Q ] = [ B_1  0  A_{11}  0 ; 0  0  0  A_{22} ],    (3.1)

where B_1 and A_{11} are of minimal dimensions (the proof of minimality will be given in section 4). The orthogonal transformation (3.1) is done in four successive steps: preprocessing of the right-hand side B (section 3.1), transformation of the system matrix A (SVD of A) (section 3.2), transformation of the right-hand side B (section 3.3), and the final permutation (section 3.4).

3.1. Preprocessing of the right-hand side. Let d̄ ≡ rank(B) ≤ min{m, d}. Consider the SVD of B in the form

    B = S Θ R^T,  S ∈ R^{m×d̄},  Θ ∈ R^{d̄×d},  R ∈ R^{d×d},    (3.2)

where S has mutually orthonormal columns, i.e., S^T S = I_d̄, Θ is of full row rank, and R is square, i.e., R^{−1} = R^T. (The dimensions illustrate the case m > d > d̄; other cases can be handled analogously.) We will see later that this R plays the role of the transformation matrix R in (3.1). If d̄ < d, then B contains linearly dependent columns representing redundant information that can be removed from the original problem (1.1). Multiplication of (1.1) from the right by R gives

    A (X R) ≈ B R,    (3.3)

where

    B R = S Θ ≡ [ C  0 ],  C ∈ R^{m×d̄},  and  X R ≡ [ Y  Y_0 ],  Y ∈ R^{n×d̄},  Y_0 ∈ R^{n×(d−d̄)};    (3.4)

if d̄ = d, then B R = C, X R = Y. With this notation,

    [ B R   A ] [ I_d ; −X R ] = [ C  0  A ] [ I_d̄  0 ; 0  I_{d−d̄} ; −Y  −Y_0 ] = [ C − A Y   −A Y_0 ].    (3.5)

The original problem (1.1) is in this way split into two subproblems

    A Y ≈ C  and  A Y_0 ≈ 0,    (3.6)

where the second problem is homogeneous. Following the arguments in [14], we consider the meaningful solution Y_0 ≡ 0. In this way, the approximation problem (1.1) reduces to

    A Y ≈ C,  or, equivalently,  [ C  A ] [ I_d̄ ; −Y ] ≈ 0,    (3.7)

where A ∈ R^{m×n}, Y ∈ R^{n×d̄}, and C ∈ R^{m×d̄} is of full column rank. From (3.2)–(3.4) it follows that the right-hand side matrix C has mutually orthogonal columns.
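A sketch of this preprocessing step in NumPy (ours; it assumes B ≠ 0, and the rank tolerance is an arbitrary choice): the full SVD of B delivers the square orthogonal R of (3.2), the numerical rank d̄, and the full column rank block C of (3.4) with mutually orthogonal columns.

```python
import numpy as np

def preprocess_rhs(B, tol=1e-10):
    """Sketch of section 3.1: full SVD B = S Theta R^T as in (3.2).

    Returns (C, R, d_bar), where R is the square orthogonal factor, d_bar
    the numerical rank of B, and C the full column rank block of
    B @ R = [C, 0] from (3.4). Assumes B != 0.
    """
    S, theta, Rt = np.linalg.svd(B)        # full SVD: Rt is the square d x d factor
    d_bar = int(np.sum(theta > tol * theta[0]))
    R = Rt.T
    C = S[:, :d_bar] * theta[:d_bar]       # equals (B @ R)[:, :d_bar]
    return C, R, d_bar
```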

3.2. Transformation of the system matrix. Consider the SVD of A given by (2.2) with (2.3) and (2.4). The problem (3.7) can then be transformed analogously to (2.5),

    (U^T A V)(V^T Y) = Σ Z ≈ F,    (3.8)

where Z ≡ V^T Y, F ≡ U^T C. Equivalently,

    [ F  Σ ] [ I_d̄ ; −Z ] ≈ 0.    (3.9)

The approximation problem Σ Z ≈ F has the full column rank right-hand side matrix F with mutually orthogonal columns, and the system matrix Σ in diagonal form.

3.3. Transformation of the right-hand side. Similarly to the single right-hand side case, we now transform the right-hand side matrix of (3.8) in order to get as many zero rows as possible. Consider a partitioning of F into block-rows with respect to the multiplicities of the singular values of the system matrix A, i.e.,

    F = [ F_1 ; … ; F_k ; F_{k+1} ],  F_j ∈ R^{m_j×d̄},  j = 1, …, k, k+1.

Let r_j ≡ rank(F_j) ≤ min{m_j, d̄}. Consider the SVD of F_j in the form

    F_j = S_j Θ_j W_j^T,  S_j ∈ R^{m_j×m_j},  Θ_j ∈ R^{m_j×r_j},  W_j ∈ R^{d̄×r_j},    (3.10)

where S_j is square, i.e., S_j^{−1} = S_j^T, Θ_j is of full column rank, and W_j has mutually orthonormal columns, i.e., W_j^T W_j = I_{r_j}, j = 1, …, k, k+1. (As above, the dimensions illustrate the case r_j < m_j < d̄.) The matrix S_j generalizes the role of the identically denoted matrix in section 2; see (2.7). Consider the block diagonal orthogonal matrices

    S ≡ diag(S_1, …, S_k),  S_L ≡ diag(S, S_{k+1}),  S_R ≡ diag(S, I_{n−r}),    (3.11)

where as in (2.4) r = rank(A), and recall that

    Σ = diag(σ_1 I_{m_1}, …, σ_k I_{m_k}, 0_{m−r,n−r}) ∈ R^{m×n},  σ_1 > σ_2 > … > σ_k > 0,

see (2.3), (2.4). Then Σ Z ≈ F from (3.8) can be transformed to

    (S_L^T Σ S_R)(S_R^T Z) ≈ S_L^T F,  with  S_L^T Σ S_R = Σ.    (3.12)

Equivalently, (3.9) becomes

    [ S_L^T F   Σ ] [ I_d̄ ; −S_R^T Z ] ≈ 0,

and the extended (data) matrix has the form

    [ S_L^T F   Σ ] = [ Θ_1 W_1^T          σ_1 I_{m_1}                   ]
                      [     ⋮                           ⋱                ]
                      [ Θ_k W_k^T                           σ_k I_{m_k} ]
                      [ Θ_{k+1} W_{k+1}^T                         0     ]  ∈ R^{m×(n+d̄)}.    (3.13)
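The block-row SVDs (3.10) can be sketched as follows (our own illustration; mults passes the multiplicities m_1, …, m_k, m_{k+1}, and the rank tolerance is an arbitrary choice). For each block-row F_j the sketch keeps the r_j nonzero rows of Θ_j W_j^T, i.e., the blocks denoted Φ_j in (3.14) below.

```python
import numpy as np

def blocks_phi(F, mults, tol=1e-10):
    """Sketch of (3.10): SVD each block-row F_j of F (block heights in mults)
    and keep the r_j = rank(F_j) nonzero rows of Theta_j W_j^T.

    Returns [Phi_1, ..., Phi_k, Phi_{k+1}]; each Phi_j has mutually
    orthogonal rows and full row rank (property (CP3) below).
    """
    phis, row = [], 0
    scale = np.linalg.norm(F) or 1.0
    for mj in mults:
        Fj = F[row:row + mj, :]
        row += mj
        _, theta, Wjt = np.linalg.svd(Fj, full_matrices=False)
        rj = int(np.sum(theta > tol * scale))
        phis.append(theta[:rj, None] * Wjt[:rj, :])   # nonzero rows of Theta_j W_j^T
    return phis
```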

If m_j > r_j, then the block S_j^T F_j = Θ_j W_j^T contains zero rows at the bottom; see (3.10). Denote

    Θ_j W_j^T ≡ [ Φ_j ; 0 ],  Φ_j ∈ R^{r_j×d̄},    (3.14)

where Φ_j is the block of nonzero rows (if r_j = 0, then the block Φ_j has no rows). For r_j = m_j we simply have Θ_j W_j^T ≡ Φ_j. It follows from (3.10) that Φ_j has mutually orthogonal rows. The matrix Φ_j generalizes the role of the number φ_j; see (2.6), (2.8).

3.4. Final permutation. Now the aim is to find a permutation of (3.13) that reveals the block diagonal structure (3.1). This can be done analogously to the single right-hand side case (see also [14, Section 2]), by moving the rows of (3.13) with zero blocks in Θ_j W_j^T (see (3.14)) to the bottom submatrix of the whole matrix, with subsequent moving of the corresponding columns, carrying the diagonal blocks σ_j I_{m_j−r_j} at the bottom, to the right:

    Π_L^T [ S_L^T F   Σ ] [ I_d̄  0 ; 0  Π_R ]
        = [ Φ_1        σ_1 I_{r_1}                                              ]
          [  ⋮                    ⋱                                             ]
          [ Φ_k                       σ_k I_{r_k}                               ]
          [ Φ_{k+1}                                0                            ]
          [ 0                                         σ_1 I_{m_1−r_1}          ]
          [ ⋮                                                     ⋱             ]
          [ 0                                             σ_k I_{m_k−r_k}   0  ]
        = [ B_1  A_{11}  0 ; 0  0  A_{22} ].    (3.15)

Here Π_L ∈ R^{m×m} is given by

    Π_L ≡ [ I_{r_1}                                                                ]
          [                  I_{m_1−r_1}                                           ]
          [     ⋱                            ⋱                                     ]
          [       I_{r_k}                         I_{m_k−r_k}                      ]
          [         I_{r_{k+1}}                                I_{m_{k+1}−r_{k+1}} ]    (3.16)

(the first group of block-columns, of total width Σ_j r_j, selects the rows carrying Φ_j; the second group selects the zero block-rows), and it permutes the block-rows starting with Φ_j up, while moving the block-rows starting with zero blocks down. Analogously, Π_R ∈ R^{n×n}, given by

    Π_R ≡ [ I_{r_1}                                                 ]
          [                  I_{m_1−r_1}                            ]
          [     ⋱                            ⋱                      ]
          [       I_{r_k}                         I_{m_k−r_k}       ]
          [                                                 I_{n−r} ]    (3.17)

rearranges the block-columns of the system matrix.
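In computations one would not form Π_L and Π_R explicitly: the effect of (3.15)–(3.17) is a gather of rows and columns. The following sketch (ours) assembles the blocks B_1, A_{11}, and A_{22} of (3.15) directly from the Φ_j and the distinct singular values; sigmas holds σ_1 > … > σ_k, mults the multiplicities m_1, …, m_k, m_{k+1}, and n is the number of columns of A.

```python
import numpy as np

def assemble_core(phis, sigmas, mults, n):
    """Sketch of the gather performed by (3.15)-(3.17): build B1, A11, A22.

    phis   : blocks [Phi_1, ..., Phi_k, Phi_{k+1}], Phi_j of shape (r_j, d_bar)
    sigmas : distinct nonzero singular values sigma_1 > ... > sigma_k of A
    mults  : their multiplicities m_1, ..., m_k plus m_{k+1} = dim N(A^T)
    n      : number of columns of A
    """
    rs = [p.shape[0] for p in phis]                   # r_1, ..., r_{k+1}
    m, m_bar, n_bar = sum(mults), sum(rs), sum(rs[:-1])
    B1 = np.vstack(phis)                              # stacked Phi_j, see (3.15)
    A11 = np.zeros((m_bar, n_bar))
    pos = 0
    for sigma, rj in zip(sigmas, rs[:-1]):            # the sigma_j I_{r_j} blocks
        A11[pos:pos + rj, pos:pos + rj] = sigma * np.eye(rj)
        pos += rj
    # A22 collects the leftover sigma_j I_{m_j - r_j} blocks, padded by zeros.
    left = [sigma * np.ones(mj - rj)
            for sigma, mj, rj in zip(sigmas, mults[:-1], rs[:-1])]
    diag22 = np.concatenate(left) if left else np.zeros(0)
    A22 = np.zeros((m - m_bar, n - n_bar))
    A22[:diag22.size, :diag22.size] = np.diag(diag22)
    return B1, A11, A22
```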

Note that if for some j we have r_j = m_j, then the block I_{m_j−r_j} and the corresponding block-rows and block-columns vanish; if r_j = 0, then the block I_{r_j} and the corresponding block-rows and block-columns vanish. Let us briefly summarize the whole transformation.

3.5. Summary of the transformation. Using the SVD B = S Θ R^T defined in (3.2), the original approximation problem

    AX ≈ B,  A ∈ R^{m×n},  X ∈ R^{n×d},  B ∈ R^{m×d},

is transformed to

    A Y ≈ C,  A ∈ R^{m×n},  Y ∈ R^{n×d̄},  C ∈ R^{m×d̄},

with the full column rank right-hand side. Then, using the SVD A = U Σ V^T defined in (2.2)–(2.4), the problem is further transformed to

    Σ Z ≈ F,  Σ ∈ R^{m×n},  Z ∈ R^{n×d̄},  F ∈ R^{m×d̄},

with the diagonal system matrix Σ. Using the singular value decompositions F_j = S_j Θ_j W_j^T of the block-rows of F, see (3.10), and the orthogonal matrix S_L given by (3.11), the right-hand side matrix gets the structure with the full row rank block-rows Φ_j and the zero block-rows, while the diagonal system matrix Σ stays unchanged. Finally, the permutation matrices Π_L, Π_R given by (3.16) and (3.17) are used to collect the full row rank blocks, and to transform the system matrix to the block diagonal form with two diagonal (in general rectangular) blocks, see (3.15).

We give a quick summary of the mathematical transformations, each preceded by the relevant equation numbers:

    (1.1), (3.2):         AX ≈ B,  B = S Θ R^T ≡ [ C  0 ] R^T,
    (2.2), (3.4)–(3.8):   (U^T A V)(V^T X R) ≈ U^T B R,
                          i.e.,  Σ (V^T X R) ≈ [ F  0 ],
    (3.10)–(3.14):        (S_L^T U^T A V S_R)(S_R^T V^T X R) ≈ S_L^T U^T B R,
                          i.e.,  Σ (S_R^T V^T X R) ≈ [ S_L^T F  0 ],
    (3.15)–(3.17):        (Π_L^T S_L^T U^T A V S_R Π_R)(Π_R^T S_R^T V^T X R) ≈ Π_L^T S_L^T U^T B R,
                          i.e.,  (Π_L^T Σ Π_R)(Π_R^T S_R^T V^T X R) ≈ [ Π_L^T S_L^T F  0 ],
                          i.e.,  diag(A_{11}, A_{22}) (Π_R^T S_R^T V^T X R) ≈ [ diag(B_1, 0)  0 ],

so that the full transformations of A, X, and B can be summarized as

    (P^T A Q)(Q^T X R) ≈ P^T B R,  P ≡ U S_L Π_L,  Q ≡ V S_R Π_R.    (3.18)

Clearly P^{−1} = P^T, Q^{−1} = Q^T, R^{−1} = R^T, and

    P^T [ B  A ] [ R  0 ; 0  Q ] = [ B_1  0  A_{11}  0 ; 0  0  0  A_{22} ],    (3.19)

with the column-blocks of widths d̄, d − d̄, n̄, n − n̄, and the row-blocks of heights m̄, m − m̄, respectively.
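Chaining the three sketches given earlier (preprocess_rhs, blocks_phi, and assemble_core, which must be in scope) with a grouping of the singular values of A yields an end-to-end illustration of this summary. It is our own sketch, not a robust implementation: in finite precision the clustering of singular values into the groups σ_j I_{m_j} is a delicate decision, and the tolerance below is an arbitrary choice.

```python
import numpy as np

def core_problem(A, B, tol=1e-10):
    """Sketch of (3.18)-(3.19): reduce A X ~ B to the subproblem (B1, A11).

    Relies on preprocess_rhs, blocks_phi, and assemble_core from the
    sketches above; assumes A != 0 and B != 0 as in (1.1).
    """
    C, R, d_bar = preprocess_rhs(B, tol)          # section 3.1
    U, s, Vt = np.linalg.svd(A)                   # section 3.2, see (2.2)
    r = int(np.sum(s > tol * s[0]))
    sigmas, mults, j = [], [], 0
    while j < r:                                  # distinct sigma_j, multiplicities m_j
        i = j
        while j < r and abs(s[j] - s[i]) <= tol * s[0]:
            j += 1
        sigmas.append(s[i])
        mults.append(j - i)
    mults.append(A.shape[0] - r)                  # m_{k+1} = dim N(A^T)
    F = U.T @ C                                   # section 3.3, see (3.8)
    phis = blocks_phi(F, mults, tol)              # the blocks Phi_j of (3.14)
    return assemble_core(phis, sigmas, mults, A.shape[1])   # section 3.4

rng = np.random.default_rng(2)
A, B = rng.standard_normal((9, 5)), rng.standard_normal((9, 3))
B1, A11, A22 = core_problem(A, B)
# (CP1)-(CP2) below: A11 and B1 should have full column rank.
print(A11.shape, np.linalg.matrix_rank(A11), B1.shape, np.linalg.matrix_rank(B1))
```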

The transformed extended matrix (3.19) is of the form (3.1), where from (3.15)

    [ B_1  A_{11} ] = [ Φ_1          σ_1 I_{r_1}                  ]
                      [  ⋮                       ⋱                ]
                      [ Φ_k                          σ_k I_{r_k}  ]
                      [ Φ_{k+1}                           0       ]  ∈ R^{m̄×(n̄+d̄)},    (3.20)

and m̄ ≡ Σ_{j=1}^{k+1} r_j, n̄ ≡ Σ_{j=1}^{k} r_j, d̄ ≡ rank(B). The block A_{22} has the form

    A_{22} ≡ diag(σ_1 I_{m_1−r_1}, …, σ_k I_{m_k−r_k}, 0_{m−r−r_{k+1}, n−r}).    (3.21)

Thus the original problem AX ≈ B is transformed into the block form

    [ A_{11}  0 ; 0  A_{22} ] (Q^T X R) ≈ [ B_1  0 ; 0  0 ];

compare with (1.2). Using a conformal partitioning of the matrix of unknowns

    Q^T X R ≡ [ X_1  X_1′ ; X_2  X_2′ ],    (3.22)

with the row-blocks of heights n̄ and n − n̄ and the column-blocks of widths d̄ and d − d̄, where Q [X_1; X_2] = Y, Q [X_1′; X_2′] = Y_0, and [Y Y_0] = X R, see (3.4), the block diagonal structure of the system matrix and of the right-hand sides matrix allows us to split the original problem AX ≈ B into (generally four) subproblems

    A_{11} X_1 ≈ B_1,  and  A_{22} X_2 ≈ 0,  A_{11} X_1′ ≈ 0,  A_{22} X_2′ ≈ 0.

The last three subproblems are homogeneous and we consider, following the arguments in [14], X_2 ≡ 0, X_1′ ≡ 0, X_2′ ≡ 0. Only the subproblem

    A_{11} X_1 ≈ B_1,  or, equivalently,  [ B_1  A_{11} ] [ I_d̄ ; −X_1 ] ≈ 0,    (3.23)

where A_{11} ∈ R^{m̄×n̄}, X_1 ∈ R^{n̄×d̄}, B_1 ∈ R^{m̄×d̄}, has to be solved. If its solution is X_1, the solution X of the original problem AX ≈ B is then

    X ≡ Q [ X_1  0 ; 0  0 ] R^T.    (3.24)

The dimensions of the reduced problem satisfy

    max{n̄, d̄} ≤ m̄ = n̄ + r_{k+1} ≤ n̄ + d̄.

Note that d̄ can be smaller than, equal to, or even larger than n̄. From the construction we immediately have the following properties:

(CP1) The matrix A_{11} ∈ R^{m̄×n̄} is of full column rank equal to n̄ ≤ m̄.
(CP2) The matrix B_1 ∈ R^{m̄×d̄} is of full column rank equal to d̄ ≤ m̄.
(CP3) The matrices Φ_j ∈ R^{r_j×d̄} are of full row rank equal to r_j ≤ d̄, j = 1, …, k+1.

Remark 3.1. Note that instead of the SVD preprocessing (3.2) of the right-hand side B (section 3.1), one can use an LQ decomposition (producing a matrix with an m × d̄ lower triangular block in column echelon form and an orthogonal matrix), or another decomposition giving a full column rank matrix multiplied by an orthogonal matrix from the right. Similarly, instead of the singular value decompositions (3.10) of F_j (section 3.3), one can use QR decomposition(s) (producing a matrix with an r_j × d̄ upper triangular block in row echelon form and an orthogonal matrix), or another decomposition(s) giving a full row rank matrix multiplied by an orthogonal matrix from the left. Such modifications lead, in general, to a different subproblem [B̃_1 Ã_{11}] with the same dimensions as (3.20) and satisfying (CP1)–(CP3). Moreover, there exist orthogonal matrices P̃^{−1} = P̃^T, Q̃^{−1} = Q̃^T, R̃^{−1} = R̃^T such that

    P̃^T [ B_1  A_{11} ] [ R̃  0 ; 0  Q̃ ] = [ B̃_1  Ã_{11} ].    (3.25)

Properties (CP1)–(CP3) are invariant with respect to any orthogonal transformation of the form (3.25).

4. The Core Problem. Let [B_1 A_{11}] be the subproblem of the given problem [B A] obtained by the transformation (3.18)–(3.19). The subproblem [B_1 A_{11}], B_1 ∈ R^{m̄×d̄}, A_{11} ∈ R^{m̄×n̄}, has the properties (CP1)–(CP3). Consider now arbitrary orthogonal transformations of the form (1.2) of the original problem, analogous to (3.18)–(3.19), such that

    P̂^T [ B  A ] [ R̂  0 ; 0  Q̂ ] = [ B̂_1  0  Â_{11}  0 ; 0  0  0  Â_{22} ],    (4.1)

where P̂^{−1} = P̂^T, Q̂^{−1} = Q̂^T, R̂^{−1} = R̂^T, and B̂_1 ∈ R^{m̂×d̂}, Â_{11} ∈ R^{m̂×n̂}. Substituting for [B A] from (3.19) gives

    (P̂^T P) [ B_1  0  A_{11}  0 ; 0  0  0  A_{22} ] [ R^T R̂  0 ; 0  Q^T Q̂ ] = [ B̂_1  0  Â_{11}  0 ; 0  0  0  Â_{22} ].    (4.2)

We will show that the subproblem [B_1 A_{11}] on the left side of (4.2) has minimal dimensions over all possible subproblems [B̂_1 Â_{11}] on the right-hand side, i.e., d̄ ≤ d̂, m̄ ≤ m̂, and n̄ ≤ n̂. In analogy with the single right-hand side case, this justifies calling [B_1 A_{11}] a core problem within [B A].

4.1. Proof of minimality. The proof consists of five successive steps presented, for easy orientation, in separate subsections. The first gives d̄ ≤ d̂. The second step describes the structure of the orthogonal matrix R^T R̂. The proof of the inequality m̄ ≤ m̂ is then based on the projections of B onto the left singular subspaces of A. The fourth step describes the structure of the orthogonal matrix P̂^T P, which is needed for finalizing the proof by showing n̄ ≤ n̂.

4.1.1. Number of columns of the reduced observation matrix. In (4.1), B̂_1 ∈ R^{m̂×d̂}, and P̂^T B R̂ = [ B̂_1  0 ; 0  0 ] has rank(B̂_1) = rank(B) = d̄, so that d̄ ≤ d̂.

4.1.2. Structure of the matrix R^T R̂. Consider the partitioning

    R^T R̂ = [ R_{11}  R_{12} ; R_{21}  R_{22} ],  R_{11} ∈ R^{d̄×d̂},  R_{22} ∈ R^{(d−d̄)×(d−d̂)}.

Then (4.2) gives

    (P̂^T P) [ B_1 ; 0 ] (R^T R̂) = (P̂^T P) [ B_1 R_{11}   B_1 R_{12} ; 0  0 ] = [ B̂_1  0 ; 0  0 ],

with the column-blocks of widths d̂ and d − d̂. Because B_1 is of full column rank, R_{12} = 0. Consequently,

    R^T R̂ = [ R_{11}  0 ; R_{21}  R_{22} ],    (4.3)

and therefore R_{11} has d̄ orthonormal rows.

4.1.3. Number of rows of the reduced data matrix. The number m̄ = Σ_{j=1}^{k+1} r_j of rows of [B_1 A_{11}] is the sum of the dimensions of the intersections of the range of B with the individual left singular subspaces of A, and these dimensions are invariant with respect to orthogonal transformations of the form (3.18)–(3.19); see also (3.20). The desired inequality m̄ ≤ m̂ will be shown by comparing three different forms of the SVD of the system matrix A. Recall that A = U Σ V^T (see (2.2)–(2.4)) is the standard SVD of A. Now consider the singular value decompositions

    Â_{11} = Û_1 Σ̂_1 V̂_1^T,  Â_{22} = Û_2 Σ̂_2 V̂_2^T,

with square orthogonal matrices Û_1, Û_2, V̂_1, and V̂_2, and diagonal matrices Σ̂_1, Σ̂_2. Then, using (4.1),

    A = P̂ [ Û_1  0 ; 0  Û_2 ] [ Σ̂_1  0 ; 0  Σ̂_2 ] [ V̂_1  0 ; 0  V̂_2 ]^T Q̂^T    (4.4)

represents another SVD of the system matrix A. Furthermore, A_{11}, A_{22} in (3.20), (3.21) are diagonal matrices. The transformation (3.18)–(3.19) gives

    A = P [ A_{11}  0 ; 0  A_{22} ] Q^T,  where  P = U S_L Π_L,    (4.5)

which also represents an SVD of A. Because the singular values (and their multiplicities) of A are unique, for some permutation matrices Π̂_L, Π̂_R (analogous to Π_L, Π_R in (3.16), (3.17))

    Σ = Π̂_L [ Σ̂_1  0 ; 0  Σ̂_2 ] Π̂_R^T = Π_L [ A_{11}  0 ; 0  A_{22} ] Π_R^T.

The left singular vector subspaces of A can be expressed, using (4.4), (4.5) and some orthogonal matrices

    Ŝ_L = diag(Ŝ_1, …, Ŝ_k, Ŝ_{k+1}),  Ŝ_j ∈ R^{m_j×m_j},  j = 1, …, k, k+1,    (4.6)

as

    U = P̂ [ Û_1  0 ; 0  Û_2 ] Π̂_L^T Ŝ_L^T = P Π_L^T S_L^T.    (4.7)

Using (4.1) and (3.19), the projections of B onto the left singular subspaces of A are

    U^T B = Ŝ_L Π̂_L [ Û_1^T B̂_1  0 ; 0  0 ] R̂^T = S_L Π_L [ B_1  0 ; 0  0 ] R^T,

and, with (4.3),

    [ Û_1^T B̂_1 ; 0 ] = Π̂_L^T Ŝ_L^T S_L Π_L [ B_1 R_{11} ; 0 ].    (4.8)

The matrix R_{11} ∈ R^{d̄×d̂}, d̄ ≤ d̂, has linearly independent rows; see (4.3) and section 4.1.1. Now every row of B_1 is nonzero (see (3.20) and (CP3)), so any row of B_1 multiplied by R_{11} on the right must also be nonzero, and the matrix B_1 R_{11} has m̄ nonzero rows. The matrix Π_L permutes rows (see (3.16)), and the matrices S_L, Ŝ_L (see (3.11), (4.6)) are orthogonal block diagonal matrices such that (see (3.15), (3.14))

    Ŝ_L^T S_L Π_L [ B_1 R_{11} ; 0 ] = [ Ŝ_1^T S_1 [ Φ_1 R_{11} ; 0 ]             ]  } m_1 rows
                                       [     ⋮                                    ]
                                       [ Ŝ_k^T S_k [ Φ_k R_{11} ; 0 ]             ]  } m_k rows
                                       [ Ŝ_{k+1}^T S_{k+1} [ Φ_{k+1} R_{11} ; 0 ] ]  } m_{k+1} rows,    (4.9)

where each block of m_j rows corresponds to one singular value σ_j (of the matrix Σ) with multiplicity m_j. Because Φ_j R_{11} has all r_j rows nonzero (since the rows of Φ_j are linearly independent, the rows of Φ_j R_{11} are, in fact, also linearly independent, and therefore each Φ_j R_{11} is of full row rank) and the Ŝ_j^T S_j are orthogonal matrices, the block matrix on the right of (4.9) has at least m̄ nonzero rows, see (3.20). The permutation matrix Π̂_L^T then moves all the nonzero rows (and possibly also some zero rows) of (4.9) to the top block Û_1^T B̂_1 ∈ R^{m̂×d̂} of the leftmost matrix in (4.8). Thus Û_1^T B̂_1 has at least m̄ nonzero rows (and possibly some zero rows). Because Û_1 is square, B̂_1 ∈ R^{m̂×d̂} has at least m̄ rows, so that m̄ ≤ m̂.

4.1.4. Structure of the matrix P̂^T P. Consider the partitioning of the permutation matrices Π_L in (3.16), with m̄ = Σ_{j=1}^{k+1} r_j as in (3.20), and Π̂_L in (4.8), where Û_1 is m̂ × m̂,

    Π_L = [ Π_1  Π_2 ],  Π_1 ∈ R^{m×m̄},  Π̂_L = [ Π̂_1  Π̂_2 ],  Π̂_1 ∈ R^{m×m̂}.

Then (4.8) can be rewritten as

    [ Û_1^T B̂_1 ; 0 ] = [ Π̂_1^T Ŝ_L^T S_L Π_1   Π̂_1^T Ŝ_L^T S_L Π_2 ; Π̂_2^T Ŝ_L^T S_L Π_1   Π̂_2^T Ŝ_L^T S_L Π_2 ] [ B_1 R_{11} ; 0 ],

yielding the condition

    ( Π̂_2^T Ŝ_L^T S_L Π_1 ) ( B_1 R_{11} ) = 0.    (4.10)

Using the structure of the matrices Π_L, S_L (see (3.11), (3.16)), and analogously for Π̂_L and Ŝ_L (see section 4.1.3),

    Π̂_2^T Ŝ_L^T S_L Π_1 = diag( [ 0  I_{m_1−r̂_1} ] Ŝ_1^T S_1 [ I_{r_1} ; 0 ], …, [ 0  I_{m_{k+1}−r̂_{k+1}} ] Ŝ_{k+1}^T S_{k+1} [ I_{r_{k+1}} ; 0 ] ),

and thus, with (3.20), the condition (4.10) is split into k + 1 block parts

    ( [ 0  I_{m_j−r̂_j} ] Ŝ_j^T S_j [ I_{r_j} ; 0 ] ) (Φ_j R_{11}) = 0,  j = 1, …, k, k+1.

Because the Φ_j R_{11} are of full row rank, this gives, for all indices j,

    [ 0  I_{m_j−r̂_j} ] Ŝ_j^T S_j [ I_{r_j} ; 0 ] = 0,

and thus also Π̂_2^T Ŝ_L^T S_L Π_1 = 0. Using (4.7) we finally obtain

    P^T P̂ = Π_L^T S_L^T Ŝ_L Π̂_L [ Û_1^T  0 ; 0  Û_2^T ]
          = [ Π_1^T S_L^T Ŝ_L Π̂_1 Û_1^T                 0                 ;
              Π_2^T S_L^T Ŝ_L Π̂_1 Û_1^T   Π_2^T S_L^T Ŝ_L Π̂_2 Û_2^T ]
          ≡ [ P_{11}  0 ; P_{21}  P_{22} ],    (4.11)

with the row-blocks of heights m̄ and m − m̄ and the column-blocks of widths m̂ and m − m̂, and therefore P_{11} has m̄ orthonormal rows.

4.1.5. Number of columns of the reduced system matrix. The relation (4.2) gives, using the structure of P^T P̂ in (4.11),

    [ P_{11}^T  P_{21}^T ; 0  P_{22}^T ] [ A_{11}  0 ; 0  A_{22} ] (Q^T Q̂) = [ Â_{11}  0 ; 0  Â_{22} ],

and thus, from the first block-row,

    [ P_{11}^T A_{11}   P_{21}^T A_{22} ] (Q^T Q̂) = [ Â_{11}  0 ].

Because P_{11}^T has orthonormal columns and Q^T Q̂ is a square orthogonal matrix, A_{11} ∈ R^{m̄×n̄} in (3.20) has rank n̄, and Â_{11} ∈ R^{m̂×n̂} (see (4.1)),

    n̄ = rank(A_{11}) = rank(P_{11}^T A_{11}) ≤ rank([ Â_{11}  0 ]) = rank(Â_{11}) ≤ n̂,

completing our proof.

5. Summary and concluding remarks. We formulate the minimality property as a theorem.

Theorem 5.1 (minimality). Consider the problem AX ≈ B (1.1) and its subproblem A_{11} X_1 ≈ B_1 (3.23), where A_{11} ∈ R^{m̄×n̄} and B_1 ∈ R^{m̄×d̄} are obtained by the transformation (3.19) described in section 3. The subproblem A_{11} X_1 ≈ B_1 has minimal dimensions over all subproblems Â_{11} X̂_1 ≈ B̂_1, Â_{11} ∈ R^{m̂×n̂}, B̂_1 ∈ R^{m̂×d̂}, obtained by an orthogonal transformation of the form (4.1), i.e., d̄ ≤ d̂, m̄ ≤ m̂, and n̄ ≤ n̂.

This motivates the following definition of the core problem within (1.1).

Definition 5.2 (core problem). The subproblem A_{11} X_1 ≈ B_1 is a core problem within the approximation problem AX ≈ B if [B_1 A_{11}] is minimally dimensioned and A_{22} maximally dimensioned subject to (3.1), i.e., subject to the orthogonal transformations of the form

    P^T [ B  A ] [ R  0 ; 0  Q ] = [ P^T B R   P^T A Q ] = [ B_1  0  A_{11}  0 ; 0  0  0  A_{22} ].

The core problem obtained by the data reduction based on the SVD described in section 3 is called the core problem in the SVD form.

This extends the core problem definition within the single right-hand side problems formulated by C. C. Paige and Z. Strakoš in [14]. The data reduction presented here is based on the singular value decomposition of the system matrix A. The original paper [14] presents two ways of determining the core problem. The second is based on the Golub–Kahan iterative bidiagonalization. This can also be generalized to problems with multiple right-hand sides. It leads to a band algorithm which was for this purpose proposed by Å. Björck; see [1, 2, 3], and also the unpublished manuscript [4]. The core problem concept is a useful tool in understanding TLS problems, which was the original motivation in [14]. The data reduction and core problem based on the generalization of the Golub–Kahan iterative bidiagonalization is under investigation. The results will be presented in the near future.

Acknowledgments. The authors thank Diana M. Sima for valuable discussions about TLS problems. The numerous comments and suggestions of two anonymous referees led to significant improvements of the text.

REFERENCES

[1] Å. Björck, Bidiagonal decomposition and least squares, presentation, Canberra, 2005.
[2] Å. Björck, A band-Lanczos generalization of bidiagonal decomposition, presentation, Conference in Honor of G. Dahlquist, Stockholm, 2006.
[3] Å. Björck, A band-Lanczos algorithm for least squares and total least squares problems, in Book of Abstracts of the 4th Total Least Squares and Errors-in-Variables Modeling Workshop, Katholieke Universiteit Leuven, Leuven, 2006.
[4] Å. Björck, Block bidiagonal decomposition and least squares problems with multiple right-hand sides, unpublished manuscript.
[5] G. H. Golub, Some modified matrix eigenvalue problems, SIAM Review, 15 (1973).
[6] G. H. Golub, A. Hoffman, and G. W. Stewart, A generalization of the Eckart-Young-Mirsky matrix approximation theorem, Linear Algebra Appl., 88/89 (1987).
[7] G. H. Golub and C. Reinsch, Singular value decomposition and least squares solutions, Numerische Mathematik, 14 (1970), pp. 403–420.

[8] G. H. Golub and C. F. Van Loan, An analysis of the total least squares problem, SIAM J. Numer. Anal., 17 (1980).
[9] G. H. Golub and C. F. Van Loan, Matrix Computations, fourth ed., The Johns Hopkins University Press, Baltimore and London, 2013.
[10] I. Hnětynková, M. Plešinger, D. M. Sima, Z. Strakoš, and S. Van Huffel, The total least squares problem in AX ≈ B: A new classification with the relationship to the classical works, SIAM J. Matrix Anal. Appl., 32 (2011).
[11] I. Hnětynková and Z. Strakoš, Lanczos tridiagonalization and core problems, Linear Algebra Appl., 421 (2007).
[12] C. C. Paige and Z. Strakoš, Scaled total least squares fundamentals, Numerische Mathematik, 91 (2002).
[13] C. C. Paige and Z. Strakoš, Unifying least squares, total least squares and data least squares, in Total Least Squares and Errors-in-Variables Modeling, S. Van Huffel and P. Lemmerling, eds., Kluwer Academic Publishers, Dordrecht, 2002.
[14] C. C. Paige and Z. Strakoš, Core problems in linear algebraic systems, SIAM J. Matrix Anal. Appl., 27 (2006).
[15] M. Plešinger, The Total Least Squares Problem and Reduction of Data in AX ≈ B, Ph.D. thesis, Technical University of Liberec, Liberec, Czech Republic, 2008.
[16] D. M. Sima, Regularization techniques in model fitting and parameter estimation, Ph.D. thesis, Dept. of Electrical Engineering, Katholieke Universiteit Leuven, 2006.
[17] D. M. Sima and S. Van Huffel, Core problems in AX ≈ B, Technical Report, Dept. of Electrical Engineering, Katholieke Universiteit Leuven, 2006.
[18] A. van der Sluis, Stability of the solutions of linear least squares problems, Numerische Mathematik, 23 (1975).
[19] S. Van Huffel and J. Vandewalle, The Total Least Squares Problem: Computational Aspects and Analysis, SIAM Publications, Philadelphia, PA, 1991.
[20] S. Van Huffel, ed., Recent Advances in Total Least Squares Techniques and Errors-in-Variables Modeling, Proceedings of the Second Int. Workshop on TLS and EIV, SIAM Publications, Philadelphia, 1997.
[21] S. Van Huffel and P. Lemmerling, eds., Total Least Squares and Errors-in-Variables Modeling: Analysis, Algorithms and Applications, Kluwer Academic Publishers, Dordrecht, 2002.
[22] M. Wei, The analysis for the total least squares problem with more than one solution, SIAM J. Matrix Anal. Appl., 13 (1992).
[23] M. Wei, Algebraic relations between the total least squares and least squares problems with more than one solution, Numer. Math., 62 (1992).


SECTION 3.3. PROBLEM 22. The null space of a matrix A is: N(A) = {X : AX = 0}. Here are the calculations of AX for X = a,b,c,d, and e. = SECTION 3.3. PROBLEM. The null space of a matrix A is: N(A) {X : AX }. Here are the calculations of AX for X a,b,c,d, and e. Aa [ ][ ] 3 3 [ ][ ] Ac 3 3 [ ] 3 3 [ ] 4+4 6+6 Ae [ ], Ab [ ][ ] 3 3 3 [ ]

More information

Problem set 5: SVD, Orthogonal projections, etc.

Problem set 5: SVD, Orthogonal projections, etc. Problem set 5: SVD, Orthogonal projections, etc. February 21, 2017 1 SVD 1. Work out again the SVD theorem done in the class: If A is a real m n matrix then here exist orthogonal matrices such that where

More information

SUMMARY OF MATH 1600

SUMMARY OF MATH 1600 SUMMARY OF MATH 1600 Note: The following list is intended as a study guide for the final exam. It is a continuation of the study guide for the midterm. It does not claim to be a comprehensive list. You

More information

Linear Algebra Methods for Data Mining

Linear Algebra Methods for Data Mining Linear Algebra Methods for Data Mining Saara Hyvönen, Saara.Hyvonen@cs.helsinki.fi Spring 2007 1. Basic Linear Algebra Linear Algebra Methods for Data Mining, Spring 2007, University of Helsinki Example

More information

Fundamentals of Engineering Analysis (650163)

Fundamentals of Engineering Analysis (650163) Philadelphia University Faculty of Engineering Communications and Electronics Engineering Fundamentals of Engineering Analysis (6563) Part Dr. Omar R Daoud Matrices: Introduction DEFINITION A matrix is

More information

Third Midterm Exam Name: Practice Problems November 11, Find a basis for the subspace spanned by the following vectors.

Third Midterm Exam Name: Practice Problems November 11, Find a basis for the subspace spanned by the following vectors. Math 7 Treibergs Third Midterm Exam Name: Practice Problems November, Find a basis for the subspace spanned by the following vectors,,, We put the vectors in as columns Then row reduce and choose the pivot

More information

YORK UNIVERSITY. Faculty of Science Department of Mathematics and Statistics MATH M Test #1. July 11, 2013 Solutions

YORK UNIVERSITY. Faculty of Science Department of Mathematics and Statistics MATH M Test #1. July 11, 2013 Solutions YORK UNIVERSITY Faculty of Science Department of Mathematics and Statistics MATH 222 3. M Test # July, 23 Solutions. For each statement indicate whether it is always TRUE or sometimes FALSE. Note: For

More information

Chapter 2: Matrix Algebra

Chapter 2: Matrix Algebra Chapter 2: Matrix Algebra (Last Updated: October 12, 2016) These notes are derived primarily from Linear Algebra and its applications by David Lay (4ed). Write A = 1. Matrix operations [a 1 a n. Then entry

More information

THE RELATION BETWEEN THE QR AND LR ALGORITHMS

THE RELATION BETWEEN THE QR AND LR ALGORITHMS SIAM J. MATRIX ANAL. APPL. c 1998 Society for Industrial and Applied Mathematics Vol. 19, No. 2, pp. 551 555, April 1998 017 THE RELATION BETWEEN THE QR AND LR ALGORITHMS HONGGUO XU Abstract. For an Hermitian

More information

Topics. Vectors (column matrices): Vector addition and scalar multiplication The matrix of a linear function y Ax The elements of a matrix A : A ij

Topics. Vectors (column matrices): Vector addition and scalar multiplication The matrix of a linear function y Ax The elements of a matrix A : A ij Topics Vectors (column matrices): Vector addition and scalar multiplication The matrix of a linear function y Ax The elements of a matrix A : A ij or a ij lives in row i and column j Definition of a matrix

More information

ELA THE MINIMUM-NORM LEAST-SQUARES SOLUTION OF A LINEAR SYSTEM AND SYMMETRIC RANK-ONE UPDATES

ELA THE MINIMUM-NORM LEAST-SQUARES SOLUTION OF A LINEAR SYSTEM AND SYMMETRIC RANK-ONE UPDATES Volume 22, pp. 480-489, May 20 THE MINIMUM-NORM LEAST-SQUARES SOLUTION OF A LINEAR SYSTEM AND SYMMETRIC RANK-ONE UPDATES XUZHOU CHEN AND JUN JI Abstract. In this paper, we study the Moore-Penrose inverse

More information

The Singular Value Decomposition

The Singular Value Decomposition The Singular Value Decomposition Philippe B. Laval KSU Fall 2015 Philippe B. Laval (KSU) SVD Fall 2015 1 / 13 Review of Key Concepts We review some key definitions and results about matrices that will

More information

Yimin Wei a,b,,1, Xiezhang Li c,2, Fanbin Bu d, Fuzhen Zhang e. Abstract

Yimin Wei a,b,,1, Xiezhang Li c,2, Fanbin Bu d, Fuzhen Zhang e. Abstract Linear Algebra and its Applications 49 (006) 765 77 wwwelseviercom/locate/laa Relative perturbation bounds for the eigenvalues of diagonalizable and singular matrices Application of perturbation theory

More information

REORTHOGONALIZATION FOR THE GOLUB KAHAN LANCZOS BIDIAGONAL REDUCTION: PART I SINGULAR VALUES

REORTHOGONALIZATION FOR THE GOLUB KAHAN LANCZOS BIDIAGONAL REDUCTION: PART I SINGULAR VALUES REORTHOGONALIZATION FOR THE GOLUB KAHAN LANCZOS BIDIAGONAL REDUCTION: PART I SINGULAR VALUES JESSE L. BARLOW Department of Computer Science and Engineering, The Pennsylvania State University, University

More information

EECS 275 Matrix Computation

EECS 275 Matrix Computation EECS 275 Matrix Computation Ming-Hsuan Yang Electrical Engineering and Computer Science University of California at Merced Merced, CA 95344 http://faculty.ucmerced.edu/mhyang Lecture 17 1 / 26 Overview

More information

A Note on Simple Nonzero Finite Generalized Singular Values

A Note on Simple Nonzero Finite Generalized Singular Values A Note on Simple Nonzero Finite Generalized Singular Values Wei Ma Zheng-Jian Bai December 21 212 Abstract In this paper we study the sensitivity and second order perturbation expansions of simple nonzero

More information