Journal of Computational Mathematics, Vol. 21, No. 2, 2003.

LEAST-SQUARES SOLUTION OF AXB = D OVER SYMMETRIC POSITIVE SEMIDEFINITE MATRICES X *)

Anping Liao
(Department of Mathematics, Hunan Normal University, Changsha 410081, China)
(Department of Mathematics and Information Science, Changsha University, Changsha 410003, China)

Zhongzhi Bai 1)
(State Key Laboratory of Scientific/Engineering Computing, Institute of Computational Mathematics and Scientific/Engineering Computing, Academy of Mathematics and System Sciences, Chinese Academy of Sciences, P.O. Box 2719, Beijing 100080, China)

Abstract

The least-squares solution of AXB = D with respect to symmetric positive semidefinite matrices X is considered. By making use of the generalized singular value decomposition, we derive general analytic formulas, and present necessary and sufficient conditions guaranteeing the existence of the solution. By applying MATLAB 5.2, we give some numerical examples to show the feasibility and accuracy of this construction technique in finite precision arithmetic.

Key words: Least-squares solution, Matrix equation, Symmetric positive semidefinite matrix, Generalized singular value decomposition.

1. Introduction

Denote by R^{n×m} the set of all real n×m matrices, I_k the identity matrix in R^{k×k}, OR^{n×n} the set of all orthogonal matrices in R^{n×n}, SR^{n×n} the set of all symmetric matrices in R^{n×n}, and SR_0^{n×n} (SR_+^{n×n}) the set of all symmetric positive semidefinite (definite) matrices in R^{n×n}. The notation A ≥ 0 (A > 0) means that the matrix A is symmetric positive semidefinite (definite), A^+ represents the Moore-Penrose generalized inverse of the matrix A, and ||·|| denotes the Frobenius norm. We use both O and 0 to denote the zero matrix.

The purpose of this paper is to study the least-squares problem of the matrix equation AXB = D with respect to X ∈ SR_0^{n×n}, i.e.,

Problem I. For given matrices A ∈ R^{m×n}, B ∈ R^{n×p}, D ∈ R^{m×p}, find a matrix X̂ ∈ SR_0^{n×n} such that

    ||A X̂ B − D|| = min_{X ∈ SR_0^{n×n}} ||A X B − D||.

Allwright [1] first investigated a special case of Problem I, i.e.,

Problem A. For given matrices B, D ∈ R^{n×m}, find a matrix X̂ ∈ SR_0^{n×n} such that

    ||X̂ B − D|| = min_{X ∈ SR_0^{n×n}} ||X B − D||.

The solution of Problem A can be used as an estimate of the inverse Hessian of a nonlinear differentiable function f: R^n → R^1, which is to be minimized with respect to a parameter

*) Received August, 2001. Subsidized by The Special Funds For Major State Basic Research Project G.
1) E-mail address: bzz@lsec.cc.ac.cn.
vector x ∈ R^n by a quasi-Newton-type algorithm. Allwright, Woodgate [2,3] and Liao [4] gave some necessary and sufficient conditions for the existence of a solution of Problem A, as well as explicit formulas of the solution for some special cases. Dai and Lancaster [5] studied in detail another special case of Problem I, i.e.,

Problem B. For given matrices A ∈ R^{n×m}, D ∈ R^{m×m}, find a matrix X̂ ∈ SR_0^{n×n} such that

    ||A^T X̂ A − D|| = min_{X ∈ SR_0^{n×n}} ||A^T X A − D||.

An inverse problem [6,7] arising in structural modification of the dynamic behaviour of a structure calls for the solution of Problem B. Dai and Lancaster [5] successfully solved Problem B by using the singular value decomposition (SVD). Obviously, Problem I is a nontrivial generalization of Problems A and B, and the approaches adopted for solving Problems A and B in [1-5] are not suitable for Problem I. In this paper, by applying the generalized singular value decomposition (GSVD), we present necessary and sufficient conditions for the existence of the solution of Problem I, and give an analytic expression for it as well.

2. Solutions of Problem I

We first study the solution of Problem I when the matrices A, B, D ∈ R^{n×n} and rank(A) = rank(B) = n.

Lemma 2.1 [1]. If rank(B) = n, then Problem A has a unique solution.

Theorem 2.1. If A, B, D ∈ R^{n×n} and rank(A) = rank(B) = n, then Problem I has a unique solution.

Proof. Because A, B ∈ R^{n×n} and rank(A) = rank(B) = n, we easily know that rank(B̄) = n and

    ||A X B − D|| = ||(A X A^T)(A^{-T} B) − D|| = ||X̄ B̄ − D||,    (2.1)

where B̄ = A^{-T} B and X̄ = A X A^T. Now, it is obvious that X̄ ≥ 0 if and only if X ≥ 0. Therefore, by Lemma 2.1 and (2.1), Problem I has a unique solution.
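The substitution used in the proof of Theorem 2.1 is easy to check numerically. The following sketch (our own illustration with random data, not from the paper) verifies that ||AXB − D|| = ||X̄B̄ − D|| for X̄ = A X A^T and B̄ = A^{-T} B, and that X̄ inherits positive semidefiniteness from X:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Random nonsingular A, B and a random symmetric positive semidefinite X.
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))
X = C @ C.T                      # symmetric positive semidefinite by construction
D = rng.standard_normal((n, n))

# The substitution from the proof: X_bar = A X A^T, B_bar = A^{-T} B.
X_bar = A @ X @ A.T
B_bar = np.linalg.solve(A.T, B)  # computes A^{-T} B without forming an inverse

# X_bar B_bar = A X A^T A^{-T} B = A X B, so the two residuals coincide.
r1 = np.linalg.norm(A @ X @ B - D, "fro")
r2 = np.linalg.norm(X_bar @ B_bar - D, "fro")
assert np.isclose(r1, r2)

# X_bar = (A C)(A C)^T is again symmetric positive semidefinite.
assert np.all(np.linalg.eigvalsh(X_bar) >= -1e-10)
```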
To study the solvability of Problem I in the general case, we decompose the given matrix pair [A^T, B] by the GSVD [8] as follows:

    A^T = M Σ_A U^T,    B = M Σ_B V^T,    (2.2)

where M is an n×n nonsingular matrix, and

    Σ_A = [ I_A  O    O ;        Σ_B = [ O  O    O   ;
            O    S_A  O ;                O  S_B  O   ;
            O    O    O ;                O  O    I_B ;
            O    O    O ],               O  O    O   ],

with row blocks of sizes r, s, k−r−s, n−k in both, column blocks of Σ_A of sizes r, s, m−r−s, and column blocks of Σ_B of sizes p−k+r, s, k−r−s, where

    k = rank([A^T, B]),   r = k − rank(B),   s = rank(A^T) + rank(B) − k;

I_A and I_B are identity matrices, the O's are zero matrices, and

    S_A = diag(α_1, α_2, ..., α_s),   S_B = diag(β_1, β_2, ..., β_s),
    1 > α_1 ≥ α_2 ≥ ... ≥ α_s > 0,   0 < β_1 ≤ β_2 ≤ ... ≤ β_s < 1,   α_i^2 + β_i^2 = 1  (i = 1, 2, ..., s),
    U = (U_1, U_2, U_3) ∈ OR^{m×m} with column blocks of sizes r, s, m−r−s,
    V = (V_1, V_2, V_3) ∈ OR^{p×p} with column blocks of sizes p−k+r, s, k−r−s.
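The integers k, r and s in (2.2) can be computed directly from matrix ranks. A small numpy sketch (full-rank random data of our own choosing) confirms that the four block sizes r, s, k−r−s, n−k are nonnegative and partition n:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, p = 6, 5, 4
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, p))

rank = np.linalg.matrix_rank
k = rank(np.hstack([A.T, B]))     # k = rank([A^T, B])
r = k - rank(B)                   # r = k - rank(B)
s = rank(A.T) + rank(B) - k       # s = rank(A^T) + rank(B) - k

# The four row-block sizes r, s, k-r-s, n-k are nonnegative and sum to n.
assert r >= 0 and s >= 0 and k - r - s >= 0 and n - k >= 0
assert r + s + (k - r - s) + (n - k) == n
print(k, r, s)   # with this seed and full-rank data: k = 5, r = 1, s = 4
```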
Theorem 2.2. Let A ∈ R^{m×n}, B ∈ R^{n×p}, and D ∈ R^{m×p}. Assume that the matrix pair [A^T, B] has the GSVD (2.2). Denote by D_ij the matrix U_i^T D V_j (i, j = 1, 2, 3). Then Problem I has a solution if and only if

    rank(X̂_22) = rank(S_B^{-1} D_12^T, X̂_22, S_A^{-1} D_23),    (2.3)

where X̂_22 is the unique minimizer of ||S_A X_22 S_B − D_22|| with respect to X_22 ∈ SR_0^{s×s}. Moreover, if Problem I has a solution X, then this solution has the following expression:

    X = M^{-T} [ Y, Y Z; (Y Z)^T, Z^T Y Z + G_3 ] M^{-1},    (2.4)

where Z ∈ R^{k×(n−k)} is arbitrary and

    Y = [ X_11, D_12 S_B^{-1}, D_13;
          S_B^{-1} D_12^T, X̂_22, S_A^{-1} D_23;
          D_13^T, D_23^T S_A^{-1}, X_33 ],    (2.5)

    X_11 = D_12 S_B^{-1} X̂_22^+ S_B^{-1} D_12^T + G_1,    (2.6)

    X_33 = D_23^T S_A^{-1} X̂_22^+ S_A^{-1} D_23
           + (D_13 − D_12 S_B^{-1} X̂_22^+ S_A^{-1} D_23)^T G_1^+ (D_13 − D_12 S_B^{-1} X̂_22^+ S_A^{-1} D_23) + G_2,    (2.7)

with arbitrary G_1 ∈ SR_0^{r×r}, G_2 ∈ SR_0^{(k−r−s)×(k−r−s)}, G_3 ∈ SR_0^{(n−k)×(n−k)}, where G_1 satisfies the condition

    rank(G_1) = rank(G_1, D_13 − D_12 S_B^{-1} X̂_22^+ S_A^{-1} D_23).

The following lemmas are necessary for proving Theorem 2.2.

Lemma 2.2. Suppose that A ∈ R^{m×n}, B ∈ R^{n×p} and D ∈ R^{m×p}, and that there exists a minimizer X̂ of Problem I. Then

    ||A X̂ B − D|| = min_{X ∈ SR_0^{n×n}} ||A X B − D|| = inf_{X ∈ SR_+^{n×n}} ||A X B − D||.    (2.8)

Proof. It is obvious that

    ||A X̂ B − D|| = min_{X ∈ SR_0^{n×n}} ||A X B − D|| ≤ inf_{X ∈ SR_+^{n×n}} ||A X B − D||.

On the other hand, if AB = O, then X̂ + εI > 0 for any ε > 0, and

    ||A(X̂ + εI)B − D|| ≤ ||A X̂ B − D|| + ε||AB|| < ||A X̂ B − D|| + ε;

if AB ≠ O, then by letting δ = (2||AB||)^{-1} ε for any ε > 0, we have X̂ + δI > 0, and

    ||A(X̂ + δI)B − D|| ≤ ||A X̂ B − D|| + δ||AB|| < ||A X̂ B − D|| + ε.

That is to say,

    inf_{X ∈ SR_+^{n×n}} ||A X B − D|| ≤ ||A X̂ B − D|| = min_{X ∈ SR_0^{n×n}} ||A X B − D||,

and it follows straightforwardly that (2.8) holds.

Lemma 2.3 (See [9], Theorem 1). Let Q = (Q_ij)_{2×2} be a 2×2 real block symmetric matrix, with Q_11 and Q_22 square submatrices. Then Q is a symmetric positive semidefinite matrix if and only if

    Q_11 ≥ 0,   Q_22 − Q_12^T Q_11^+ Q_12 ≥ 0   and   rank(Q_11) = rank(Q_11, Q_12).

Lemma 2.3 directly results in the following Lemmas 2.4 and 2.5.

Lemma 2.4. Let Q = (Q_ij)_{2×2} ∈ SR^{n×n} be a 2×2 real block symmetric matrix, with Q_11 ∈ SR^{r×r} the known submatrix, and Q_12 and Q_22 two unknown submatrices. Then there
exist matrices Q_12 and Q_22 such that Q ≥ 0 if and only if Q_11 ≥ 0. Furthermore, all submatrices Q_12 and Q_22 such that Q ≥ 0 can be expressed as

    Q_12 = Q_11 Z,   Q_22 = Z^T Q_11 Z + G,

where Z ∈ R^{r×(n−r)} and G ∈ SR_0^{(n−r)×(n−r)} are arbitrary.

Lemma 2.5. Let Q = (Q_ij)_{3×3} ∈ SR^{n×n} be a 3×3 real block symmetric matrix, with Q_11 ∈ R^{r×r} and Q_33 ∈ R^{s×s} two unknown submatrices, and the other blocks the known submatrices. Then there exist Q_11 and Q_33 such that Q ≥ 0 if and only if

    Q_22 ≥ 0   and   rank(Q_22) = rank(Q_21, Q_22, Q_23).

Furthermore, all submatrices Q_11 and Q_33 such that Q ≥ 0 can be expressed as

    Q_11 = Q_12 Q_22^+ Q_21 + G_1,
    Q_33 = Q_32 Q_22^+ Q_23 + (Q_13 − Q_12 Q_22^+ Q_23)^T G_1^+ (Q_13 − Q_12 Q_22^+ Q_23) + G_2,    (2.9)

where G_1 ∈ SR_0^{r×r} and G_2 ∈ SR_0^{s×s} are arbitrary and G_1 satisfies

    rank(G_1) = rank(G_1, Q_13 − Q_12 Q_22^+ Q_23).

Proof. It follows from Lemma 2.3, after simultaneously permuting the first two block rows and columns, that

    [ Q_11 Q_12 Q_13; Q_21 Q_22 Q_23; Q_31 Q_32 Q_33 ] ≥ 0
    ⟺ [ Q_22 Q_21 Q_23; Q_12 Q_11 Q_13; Q_32 Q_31 Q_33 ] ≥ 0
    ⟺ Q_22 ≥ 0,   rank(Q_22) = rank(Q_21, Q_22, Q_23)   and
       [ Q_11 Q_13; Q_31 Q_33 ] − [ Q_12; Q_32 ] Q_22^+ [ Q_21, Q_23 ] ≥ 0.    (2.10)

By Lemma 2.3 we know that there exist submatrices Q_11 and Q_33 such that (2.10) holds, and that all submatrices Q_11 and Q_33 such that (2.10) holds can be expressed by (2.9) when Q_22 ≥ 0. This completes the proof of this lemma.

Proof of Theorem 2.2. For any X ∈ SR^{n×n}, partition the matrix M^T X M as

    M^T X M = [ X_11  X_12  X_13  X_14;
                X_12^T  X_22  X_23  X_24;
                X_13^T  X_23^T  X_33  X_34;
                X_14^T  X_24^T  X_34^T  X_44 ],    (2.11)

with block rows and columns of sizes r, s, k−r−s, n−k. It follows from (2.2) and (2.11) that

    ||A X B − D||^2 = ||Σ_A^T (M^T X M) Σ_B − U^T D V||^2
    = || [ −D_11, X_12 S_B − D_12, X_13 − D_13;
           −D_21, S_A X_22 S_B − D_22, S_A X_23 − D_23;
           −D_31, −D_32, −D_33 ] ||^2
    = ||S_A X_22 S_B − D_22||^2 + ||X_12 S_B − D_12||^2 + ||S_A X_23 − D_23||^2 + ||X_13 − D_13||^2 + C_*,    (2.12)

where

    C_* = ||D_11||^2 + ||D_21||^2 + ||D_31||^2 + ||D_32||^2 + ||D_33||^2.    (2.13)
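Lemma 2.3 and the completion formula of Lemma 2.4 can be exercised numerically. In the sketch below (block sizes and random data are our own illustration), a randomly built positive semidefinite matrix passes Albert's three conditions, and the completion Q_12 = Q_11 Z, Q_22 = Z^T Q_11 Z + G is verified to be positive semidefinite:

```python
import numpy as np

rng = np.random.default_rng(2)
r, t = 3, 2                  # illustrative block sizes
F = rng.standard_normal((r + t, r + t))
Q = F @ F.T                  # a random symmetric positive semidefinite test matrix

Q11, Q12 = Q[:r, :r], Q[:r, r:]
Q22 = Q[r:, r:]

# Albert's conditions (Lemma 2.3): Q11 >= 0, the generalized Schur
# complement Q22 - Q12^T Q11^+ Q12 >= 0, and rank(Q11) = rank(Q11, Q12).
c1 = np.all(np.linalg.eigvalsh(Q11) >= -1e-10)
S = Q22 - Q12.T @ np.linalg.pinv(Q11) @ Q12
c2 = np.all(np.linalg.eigvalsh((S + S.T) / 2) >= -1e-10)
c3 = np.linalg.matrix_rank(Q11) == np.linalg.matrix_rank(np.hstack([Q11, Q12]))
assert c1 and c2 and c3

# Lemma 2.4's completion: for Q11 >= 0, choosing Q12 = Q11 Z and
# Q22 = Z^T Q11 Z + G with G >= 0 always yields a PSD block matrix,
# since Qc = [I; Z^T] Q11 [I, Z] + diag(0, G).
Z = rng.standard_normal((r, t))
Gh = rng.standard_normal((t, t))
G = Gh @ Gh.T
Qc = np.block([[Q11, Q11 @ Z], [Z.T @ Q11, Z.T @ Q11 @ Z + G]])
assert np.all(np.linalg.eigvalsh((Qc + Qc.T) / 2) >= -1e-10)
```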
From Theorem 2.1 we know that there exists a unique matrix X̂_22 which minimizes ||S_A X_22 S_B − D_22|| with respect to X_22 ∈ SR_0^{s×s}. Obviously, if there exist submatrices X_11, X_33, X_14, X_24, X_34 and X_44 such that the following matrix X:

    X = M^{-T} [ X_11, D_12 S_B^{-1}, D_13, X_14;
                 S_B^{-1} D_12^T, X̂_22, S_A^{-1} D_23, X_24;
                 D_13^T, D_23^T S_A^{-1}, X_33, X_34;
                 X_14^T, X_24^T, X_34^T, X_44 ] M^{-1}    (2.14)

is a symmetric positive semidefinite matrix, then by (2.12) we know that the matrix X is certainly a solution of Problem I, where the sizes of the submatrices X_11, X_33, X_14, X_24, X_34 and X_44 are the same as in (2.11). Thus, if (2.3) holds, i.e., rank(X̂_22) = rank(S_B^{-1} D_12^T, X̂_22, S_A^{-1} D_23), then by Lemma 2.5 we know that there exist submatrices X_11 and X_33 such that

    [ X_11, D_12 S_B^{-1}, D_13;
      S_B^{-1} D_12^T, X̂_22, S_A^{-1} D_23;
      D_13^T, D_23^T S_A^{-1}, X_33 ] ≥ 0,    (2.15)

and that all submatrices X_11 and X_33 such that (2.15) holds can be expressed as in (2.6) and (2.7). Therefore, by Lemma 2.4 we know that Problem I has a solution when rank(X̂_22) = rank(S_B^{-1} D_12^T, X̂_22, S_A^{-1} D_23), and in this case the solution X of Problem I can be expressed by (2.4).

Conversely, we only need to show that there is no minimum point of Problem I when

    rank(X̂_22) ≠ rank(S_B^{-1} D_12^T, X̂_22, S_A^{-1} D_23).

Suppose that this rank equality fails and that there is a minimum point, say X̄ ∈ SR_0^{n×n}. Then by partitioning the matrix M^T X̄ M into

    M^T X̄ M = [ X̄_11  X̄_12  X̄_13  X̄_14;
                X̄_12^T  X̄_22  X̄_23  X̄_24;
                X̄_13^T  X̄_23^T  X̄_33  X̄_34;
                X̄_14^T  X̄_24^T  X̄_34^T  X̄_44 ],    (2.16)

with block rows and columns of sizes r, s, k−r−s, n−k, we obtain from Lemma 2.5 that X̄_22 ≥ 0 and rank(X̄_22) = rank(X̄_12^T, X̄_22, X̄_23). If

    (X̄_12^T, X̄_23) = (S_B^{-1} D_12^T, S_A^{-1} D_23),

then X̄_22 ≠ X̂_22; otherwise we would have

    rank(X̂_22) = rank(X̄_22) = rank(X̄_12^T, X̄_22, X̄_23) = rank(S_B^{-1} D_12^T, X̂_22, S_A^{-1} D_23),

which obviously contradicts the initial assumption. Hence, in both cases, namely

    (X̄_12^T, X̄_23) = (S_B^{-1} D_12^T, S_A^{-1} D_23)  with X̄_22 ≠ X̂_22,   and   (X̄_12^T, X̄_23) ≠ (S_B^{-1} D_12^T, S_A^{-1} D_23),
it follows from (2.12), (2.16), (2.13), (2.14) and Lemma 2.2 that

    ||A X̄ B − D||^2 = ||S_A X̄_22 S_B − D_22||^2 + ||X̄_12 S_B − D_12||^2 + ||S_A X̄_23 − D_23||^2 + ||X̄_13 − D_13||^2 + C_*
    > ||S_A X̂_22 S_B − D_22||^2 + C_*
    = min_{X_22 ∈ SR_0^{s×s}} ||S_A X_22 S_B − D_22||^2 + C_*
    = inf_{X_22 ∈ SR_+^{s×s}} ||S_A X_22 S_B − D_22||^2 + C_*
    = inf_{X ∈ SR_+^{n×n}} ||A X B − D||^2 = min_{X ∈ SR_0^{n×n}} ||A X B − D||^2.

This contradicts the optimality of X̄, and therefore contradicts the existence of a minimum point of Problem I when rank(X̂_22) ≠ rank(S_B^{-1} D_12^T, X̂_22, S_A^{-1} D_23). This completes the proof.

Theorems 2.1 and 2.2 directly result in the following Corollary 2.1.

Corollary 2.1. Let A ∈ R^{m×n}, B ∈ R^{n×p} and D ∈ R^{m×p}. Assume that the matrix pair [A^T, B] has the GSVD (2.2). Denote by D_ij the matrix U_i^T D V_j (i, j = 1, 2, 3). Then Problem I has a unique solution if and only if

    rank(A) = rank(B) = n.

Moreover, if Problem I has the unique solution X̂, then this solution has the expression

    X̂ = M^{-T} X̂_22 M^{-1},

where X̂_22 is the unique minimizer of ||S_A X_22 S_B − D_22|| with respect to X_22 ∈ SR_0^{s×s}.

3. Numerical Examples

Without loss of generality, we take numerical examples only for the case rank(A) = rank(B) = n. Based on Corollary 2.1, we formulate the following algorithm to find the solution X̂ of Problem I.

Algorithm I. Let A ∈ R^{m×n}, B ∈ R^{n×p} and D ∈ R^{m×p} satisfy rank(A) = rank(B) = n. Then Problem I can be solved in the following steps:

Step 1. Compute the GSVD of the matrix pair [A^T, B] as in (2.2), and determine the nonsingular matrix M, the two orthogonal matrices U = (U_2, U_3) and V = (V_1, V_2), with column blocks of sizes n, m−n and p−n, n respectively, and the two diagonal matrices S_A and S_B.

Step 2. Compute D_22 = U_2^T D V_2.

Step 3. Find X̂_22 such that ||S_A X̂_22 S_B − D_22|| = min_{X_22 ∈ SR_0^{n×n}} ||S_A X_22 S_B − D_22|| by the following steps:

Step 3.1. Let X_22 = R^T R, where R is a real n×n upper triangular matrix.

Step 3.2. Apply the function leastsq in MATLAB 5.2 (in the Optimization Toolbox) to find an n×n upper triangular matrix R̂ such that

    ||S_A R̂^T R̂ S_B − D_22|| = min_{R ∈ UR^{n×n}} ||S_A R^T R S_B − D_22||,

where UR^{n×n} represents the set of all real n×n upper triangular matrices.

Step 3.3. Compute X̂_22 = R̂^T R̂.

Step 4. Compute X̂ = M^{-T} X̂_22 M^{-1}.

Example 1. Let A, B and D be given matrices with n = 4 (numerical entries omitted).
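The inner minimization of Step 3 parametrizes X_22 = R^T R so that positive semidefiniteness holds automatically. The paper uses MATLAB 5.2's leastsq; a rough modern counterpart (our own sketch with illustrative S_A, S_B and D_22, using scipy.optimize.least_squares in place of leastsq) is:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
n = 4

# Illustrative data: diagonal S_A, S_B with alpha_i^2 + beta_i^2 = 1,
# and an arbitrary target block D22 (not the paper's example data).
alpha = np.sort(rng.uniform(0.1, 0.9, n))[::-1]
S_A = np.diag(alpha)
S_B = np.diag(np.sqrt(1.0 - alpha**2))
D22 = rng.standard_normal((n, n))

iu = np.triu_indices(n)          # parametrize an upper triangular R (Step 3.1)

def residual(v):
    R = np.zeros((n, n))
    R[iu] = v
    # X22 = R^T R is symmetric positive semidefinite for any R.
    return (S_A @ (R.T @ R) @ S_B - D22).ravel()

sol = least_squares(residual, x0=np.ones(iu[0].size))   # Step 3.2
R_hat = np.zeros((n, n))
R_hat[iu] = sol.x
X22_hat = R_hat.T @ R_hat                               # Step 3.3

# The computed block is symmetric positive semidefinite by construction.
assert np.allclose(X22_hat, X22_hat.T)
assert np.all(np.linalg.eigvalsh(X22_hat) >= -1e-8)
```

As in the paper, this only sketches the inner subproblem; Steps 1, 2 and 4 additionally require a GSVD routine, which scipy does not expose directly.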
By Algorithm I we can obtain the diagonal matrices S_A and S_B, the block D_22, the nonsingular matrix M, and the minimizer X̂_22; by Step 4 we then obtain the solution X̂ of Problem I, with residual error ||A X̂ B − D|| = 5.…. (The numerical values are omitted.)

Example 2 [4]. Let A, B and D be the data of the example in [4], with n = 3 (numerical entries omitted). By Algorithm I we can obtain the solution X̂ of Problem I.

Example 3. Let X̂ represent the solution of Problem I, and λ_min(X̂) the smallest eigenvalue of X̂. Denote by hilb(n) the n-th order Hilbert matrix, pascal(n) the n-th order Pascal matrix, magic(n) the n-th order magic matrix, toeplitz(1:n) the n-th order Toeplitz matrix whose first row is (1, 2, ..., n), and hankel(1:n) the n-th order Hankel matrix whose first row is (1, 2, ..., n). For example,

    hilb(4) = [ 1  1/2  1/3  1/4;  1/2  1/3  1/4  1/5;  1/3  1/4  1/5  1/6;  1/4  1/5  1/6  1/7 ],
    pascal(4) = [ 1  1  1  1;  1  2  3  4;  1  3  6  10;  1  4  10  20 ],
    magic(4) = [ 16  2  3  13;  5  11  10  8;  9  7  6  12;  4  14  15  1 ],
    toeplitz(1:4) = [ 1  2  3  4;  2  1  2  3;  3  2  1  2;  4  3  2  1 ],
    hankel(1:4) = [ 1  2  3  4;  2  3  4  0;  3  4  0  0;  4  0  0  0 ].

We ran a variety of numerical experiments with Algorithm I for the above classes of matrices of different dimensions. Some numerical results are listed in the following table (numerical values of n beyond the first row, of the residual ||A X̂ B − D|| and of λ_min(X̂) are omitted):

    n    A              B              D              ||A X̂ B − D||   λ_min(X̂)
    5    pascal(n)      hilb(n)        magic(n)       …                …
    …    toeplitz(1:n)  pascal(n)      magic(n)       …                …
    …    hankel(1:n)    pascal(n)      hilb(n)        …                …
    …    pascal(n)      hankel(1:n)    hilb(n)        …                …
    …    toeplitz(1:n)  hankel(1:n)    pascal(n)      …                …
    …    pascal(n)      toeplitz(1:n)  hilb(n)        …                …
    …    toeplitz(1:n)  hankel(1:n)    hilb(n)        …                …
    …    toeplitz(1:n)  hankel(1:n)    hilb(n)        …                …
    …    toeplitz(1:n)  hankel(1:n)    hilb(n)        …                …
    …    hankel(1:n)    toeplitz(1:n)  hilb(n)        …                …
    …    hankel(1:n)    toeplitz(1:n)  pascal(n)      …                …
    …    hankel(1:n)    toeplitz(1:n)  hilb(n)        …                …

From Examples 1-3, we clearly see that Algorithm I is feasible and accurate for solving Problem I.

Acknowledgements. Part of the first author's work was supported by the State Key Laboratory of Scientific/Engineering Computing, Chinese Academy of Sciences.

References

[1] J.C. Allwright, Positive semidefinite matrices: characterization via conical hulls and least-squares solution of a matrix equation, SIAM J. Control Optim., 26 (1988).
[2] J.C. Allwright, K.G. Woodgate, Errata and addendum to: Positive semidefinite matrices: characterization via conical hulls and least-squares solution of a matrix equation, SIAM J. Control Optim., 28 (1990).
[3] K.G. Woodgate, Least-squares solution of F = PG over positive semidefinite matrices P, Linear Algebra Appl., 245 (1996).
[4] A.P. Liao, On the least squares problem of a matrix equation, J. Comput. Math., 17 (1999).
[5] H. Dai, P. Lancaster, Linear matrix equations from an inverse problem of vibration theory, Linear Algebra Appl., 246 (1996).
[6] A. Berman, Mass matrix correction using an incomplete set of measured modes, AIAA J., 17 (1979).
[7] A. Berman, E.J. Nagy, Improvement of a large analytical model using test data, AIAA J., 21 (1983).
[8] C.C. Paige, M.A. Saunders, Towards a generalized singular value decomposition, SIAM J. Numer. Anal., 18 (1981).
[9] A. Albert, Conditions for positive and nonnegative definiteness in terms of pseudoinverses, SIAM J. Appl. Math., 17 (1969).
More informationLinear Algebra. Shan-Hung Wu. Department of Computer Science, National Tsing Hua University, Taiwan. Large-Scale ML, Fall 2016
Linear Algebra Shan-Hung Wu shwu@cs.nthu.edu.tw Department of Computer Science, National Tsing Hua University, Taiwan Large-Scale ML, Fall 2016 Shan-Hung Wu (CS, NTHU) Linear Algebra Large-Scale ML, Fall
More informationDisplacement rank of the Drazin inverse
Available online at www.sciencedirect.com Journal of Computational and Applied Mathematics 167 (2004) 147 161 www.elsevier.com/locate/cam Displacement rank of the Drazin inverse Huaian Diao a, Yimin Wei
More informationMathematical Optimisation, Chpt 2: Linear Equations and inequalities
Mathematical Optimisation, Chpt 2: Linear Equations and inequalities Peter J.C. Dickinson p.j.c.dickinson@utwente.nl http://dickinson.website version: 12/02/18 Monday 5th February 2018 Peter J.C. Dickinson
More information7. Symmetric Matrices and Quadratic Forms
Linear Algebra 7. Symmetric Matrices and Quadratic Forms CSIE NCU 1 7. Symmetric Matrices and Quadratic Forms 7.1 Diagonalization of symmetric matrices 2 7.2 Quadratic forms.. 9 7.4 The singular value
More informationParallel Singular Value Decomposition. Jiaxing Tan
Parallel Singular Value Decomposition Jiaxing Tan Outline What is SVD? How to calculate SVD? How to parallelize SVD? Future Work What is SVD? Matrix Decomposition Eigen Decomposition A (non-zero) vector
More informationOn the Berlekamp/Massey Algorithm and Counting Singular Hankel Matrices over a Finite Field
On the Berlekamp/Massey Algorithm and Counting Singular Hankel Matrices over a Finite Field Matthew T Comer Dept of Mathematics, North Carolina State University Raleigh, North Carolina, 27695-8205 USA
More informationA NEW EFFECTIVE PRECONDITIONED METHOD FOR L-MATRICES
Journal of Mathematical Sciences: Advances and Applications Volume, Number 2, 2008, Pages 3-322 A NEW EFFECTIVE PRECONDITIONED METHOD FOR L-MATRICES Department of Mathematics Taiyuan Normal University
More informationAn Iterative Method for the Least-Squares Minimum-Norm Symmetric Solution
Copyright 2011 Tech Science Press CMES, vol.77, no.3, pp.173-182, 2011 An Iterative Method for the Least-Squares Minimum-Norm Symmetric Solution Minghui Wang 1, Musheng Wei 2 and Shanrui Hu 1 Abstract:
More information2 Computing complex square roots of a real matrix
On computing complex square roots of real matrices Zhongyun Liu a,, Yulin Zhang b, Jorge Santos c and Rui Ralha b a School of Math., Changsha University of Science & Technology, Hunan, 410076, China b
More informationReview of Basic Concepts in Linear Algebra
Review of Basic Concepts in Linear Algebra Grady B Wright Department of Mathematics Boise State University September 7, 2017 Math 565 Linear Algebra Review September 7, 2017 1 / 40 Numerical Linear Algebra
More informationMath/Phys/Engr 428, Math 529/Phys 528 Numerical Methods - Summer Homework 3 Due: Tuesday, July 3, 2018
Math/Phys/Engr 428, Math 529/Phys 528 Numerical Methods - Summer 28. (Vector and Matrix Norms) Homework 3 Due: Tuesday, July 3, 28 Show that the l vector norm satisfies the three properties (a) x for x
More informationMATH 5720: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 2018
MATH 57: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 18 1 Global and Local Optima Let a function f : S R be defined on a set S R n Definition 1 (minimizers and maximizers) (i) x S
More information2nd Symposium on System, Structure and Control, Oaxaca, 2004
263 2nd Symposium on System, Structure and Control, Oaxaca, 2004 A PROJECTIVE ALGORITHM FOR STATIC OUTPUT FEEDBACK STABILIZATION Kaiyang Yang, Robert Orsi and John B. Moore Department of Systems Engineering,
More informationTHE PERTURBATION BOUND FOR THE SPECTRAL RADIUS OF A NON-NEGATIVE TENSOR
THE PERTURBATION BOUND FOR THE SPECTRAL RADIUS OF A NON-NEGATIVE TENSOR WEN LI AND MICHAEL K. NG Abstract. In this paper, we study the perturbation bound for the spectral radius of an m th - order n-dimensional
More informationStrang-type Preconditioners for Solving System of Delay Dierential Equations by Boundary Value Methods Raymond H. Chan Xiao-Qing Jin y Zheng-Jian Bai
Strang-type Preconditioners for Solving System of Delay Dierential Equations by oundary Value Methods Raymond H. han iao-qing Jin y Zheng-Jian ai z ugust 27, 22 bstract In this paper, we survey some of
More informationA lower bound for the Laplacian eigenvalues of a graph proof of a conjecture by Guo
A lower bound for the Laplacian eigenvalues of a graph proof of a conjecture by Guo A. E. Brouwer & W. H. Haemers 2008-02-28 Abstract We show that if µ j is the j-th largest Laplacian eigenvalue, and d
More information1 Last time: least-squares problems
MATH Linear algebra (Fall 07) Lecture Last time: least-squares problems Definition. If A is an m n matrix and b R m, then a least-squares solution to the linear system Ax = b is a vector x R n such that
More informationLEAST SQUARES SOLUTION TRICKS
LEAST SQUARES SOLUTION TRICKS VESA KAARNIOJA, JESSE RAILO AND SAMULI SILTANEN Abstract This handout is for the course Applications of matrix computations at the University of Helsinki in Spring 2018 We
More informationON SUM OF SQUARES DECOMPOSITION FOR A BIQUADRATIC MATRIX FUNCTION
Annales Univ. Sci. Budapest., Sect. Comp. 33 (2010) 273-284 ON SUM OF SQUARES DECOMPOSITION FOR A BIQUADRATIC MATRIX FUNCTION L. László (Budapest, Hungary) Dedicated to Professor Ferenc Schipp on his 70th
More informationFRIEDRICH-ALEXANDER-UNIVERSITÄT ERLANGEN-NÜRNBERG. Lehrstuhl für Informatik 10 (Systemsimulation)
FRIEDRICH-ALEXANDER-UNIVERSITÄT ERLANGEN-NÜRNBERG INSTITUT FÜR INFORMATIK (MATHEMATISCHE MASCHINEN UND DATENVERARBEITUNG) Lehrstuhl für Informatik 0 (Systemsimulation) On a regularization technique for
More informationProduct of P-polynomial association schemes
Product of P-polynomial association schemes Ziqing Xiang Shanghai Jiao Tong University Nov 19, 2013 1 / 20 P-polynomial association scheme Definition 1 P-polynomial association scheme A = (X, {A i } 0
More informationMiniversal deformations of pairs of skew-symmetric matrices under congruence. Andrii Dmytryshyn
Miniversal deformations of pairs of skew-symmetric matrices under congruence by Andrii Dmytryshyn UMINF-16/16 UMEÅ UNIVERSITY DEPARTMENT OF COMPUTING SCIENCE SE-901 87 UMEÅ SWEDEN Miniversal deformations
More informationStrictly nonnegative tensors and nonnegative tensor partition
SCIENCE CHINA Mathematics. ARTICLES. January 2012 Vol. 55 No. 1: 1 XX doi: Strictly nonnegative tensors and nonnegative tensor partition Hu Shenglong 1, Huang Zheng-Hai 2,3, & Qi Liqun 4 1 Department of
More informationLecture 8 : Eigenvalues and Eigenvectors
CPS290: Algorithmic Foundations of Data Science February 24, 2017 Lecture 8 : Eigenvalues and Eigenvectors Lecturer: Kamesh Munagala Scribe: Kamesh Munagala Hermitian Matrices It is simpler to begin with
More informationOn Spectral Factorization and Riccati Equations for Time-Varying Systems in Discrete Time
On Spectral Factorization and Riccati Equations for Time-Varying Systems in Discrete Time Alle-Jan van der Veen and Michel Verhaegen Delft University of Technology Department of Electrical Engineering
More informationIntroduction to Numerical Linear Algebra II
Introduction to Numerical Linear Algebra II Petros Drineas These slides were prepared by Ilse Ipsen for the 2015 Gene Golub SIAM Summer School on RandNLA 1 / 49 Overview We will cover this material in
More informationON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH
ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH V. FABER, J. LIESEN, AND P. TICHÝ Abstract. Numerous algorithms in numerical linear algebra are based on the reduction of a given matrix
More informationLecture 6. Numerical methods. Approximation of functions
Lecture 6 Numerical methods Approximation of functions Lecture 6 OUTLINE 1. Approximation and interpolation 2. Least-square method basis functions design matrix residual weighted least squares normal equation
More informationGaussian Elimination without/with Pivoting and Cholesky Decomposition
Gaussian Elimination without/with Pivoting and Cholesky Decomposition Gaussian Elimination WITHOUT pivoting Notation: For a matrix A R n n we define for k {,,n} the leading principal submatrix a a k A
More informationIterative Solution of a Matrix Riccati Equation Arising in Stochastic Control
Iterative Solution of a Matrix Riccati Equation Arising in Stochastic Control Chun-Hua Guo Dedicated to Peter Lancaster on the occasion of his 70th birthday We consider iterative methods for finding the
More informationlinearly indepedent eigenvectors as the multiplicity of the root, but in general there may be no more than one. For further discussion, assume matrice
3. Eigenvalues and Eigenvectors, Spectral Representation 3.. Eigenvalues and Eigenvectors A vector ' is eigenvector of a matrix K, if K' is parallel to ' and ' 6, i.e., K' k' k is the eigenvalue. If is
More informationImproved Newton s method with exact line searches to solve quadratic matrix equation
Journal of Computational and Applied Mathematics 222 (2008) 645 654 wwwelseviercom/locate/cam Improved Newton s method with exact line searches to solve quadratic matrix equation Jian-hui Long, Xi-yan
More informationMatrices. Chapter What is a Matrix? We review the basic matrix operations. An array of numbers a a 1n A = a m1...
Chapter Matrices We review the basic matrix operations What is a Matrix? An array of numbers a a n A = a m a mn with m rows and n columns is a m n matrix Element a ij in located in position (i, j The elements
More informationBasic Calculus Review
Basic Calculus Review Lorenzo Rosasco ISML Mod. 2 - Machine Learning Vector Spaces Functionals and Operators (Matrices) Vector Space A vector space is a set V with binary operations +: V V V and : R V
More information