RANK REDUCTION AND BORDERED INVERSION
Mathematical Notes, Miskolc, Vol. 2, No. 2, (2001), pp. 117-126

Aurél Galántai
Institute of Mathematics, University of Miskolc
3515 Miskolc-Egyetemváros, Hungary

[Received October 8, 2001]

Abstract. We clarify the connection between the rank reduction algorithm and the bordered inversion method and their related sequences.

Mathematical Subject Classification: 15A09, 15A21, 15A23, 65F30, 65F99.
Keywords: Rank reduction, bordered inversion, inverse factorization, conjugation

1. Introduction

The rank reduction algorithm and the bordered inversion method play important roles in numerical linear algebra. The rank reduction algorithm, while keeping the size of the matrix, decreases its rank to zero. The bordered inversion method, starting from a small leading principal submatrix, increases the size and rank of the submatrix to obtain an inverse. The formulae of the two methods show certain formal similarities. In this paper we are looking for a deeper connection between the two methods. Using unexplored observations of Egerváry [4], we determine three kinds of relationship between the two algorithms and their associated sequences. These results are essentially the following. The rank reduction operation gives the reverse bordered inversion formula. Using the inverse LDU decomposition algorithm included in the bordered inversion method, we can produce the same full rank factorization as the rank reduction algorithm. Finally, the bordered inversion formula leads to new multiplicative canonical forms of the rank reduction algorithm.

2. Preliminary results

We need the following results. For proofs and other details, see Ouellette [12].

Lemma 1 (Guttman-1) Let
$$A = \begin{bmatrix} E & F \\ G & H \end{bmatrix} \in \mathbb{R}^{m \times n}, \quad E \in \mathbb{R}^{k \times k}. \tag{2.1}$$
If $E$ is nonsingular, then
$$\operatorname{rank}(A) = \operatorname{rank}(E) + \operatorname{rank}\left(H - G E^{-1} F\right). \tag{2.2}$$
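For a quick numerical illustration of (2.2), here is a minimal NumPy sketch; it is not part of the paper, and the random test matrix, block sizes and seed are assumptions of the example only.

```python
import numpy as np

rng = np.random.default_rng(0)
k, m, n = 3, 6, 7
E = rng.standard_normal((k, k))        # leading block, assumed nonsingular
F = rng.standard_normal((k, n - k))
G = rng.standard_normal((m - k, k))
H = rng.standard_normal((m - k, n - k))

A = np.block([[E, F], [G, H]])
schur = H - G @ np.linalg.solve(E, F)  # H - G E^{-1} F

print(np.linalg.matrix_rank(A),
      np.linalg.matrix_rank(E) + np.linalg.matrix_rank(schur))  # equal values
```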
Corollary 2 If $A$ and $E$ are nonsingular, then the Schur complement $S = (A/E) = H - G E^{-1} F$ is also nonsingular.

Lemma 3 (Guttman-2) Let
$$A = \begin{bmatrix} E & F \\ G & H \end{bmatrix} \in \mathbb{R}^{m \times n}, \quad H \in \mathbb{R}^{j \times j}. \tag{2.3}$$
If $H$ is nonsingular, then
$$\operatorname{rank}(A) = \operatorname{rank}(H) + \operatorname{rank}\left(E - F H^{-1} G\right). \tag{2.4}$$

Corollary 4 If $A$ and $H$ are nonsingular, then the Schur complement $T = (A/H) = E - F H^{-1} G$ is also nonsingular.

3. The inverse of partitioned matrices

Here we recall the most important results concerning the inverse of partitioned matrices. For proofs and related results see Ouellette [12]. Let
$$A = \begin{bmatrix} E & F \\ G & H \end{bmatrix} \in \mathbb{R}^{n \times n}, \quad E \in \mathbb{R}^{k \times k} \tag{3.1}$$
and $E$ be nonsingular.

Theorem 5 (Banachiewicz-Frazer-Duncan-Collar) If the nonsingular matrix $A \in \mathbb{R}^{n \times n}$ is partitioned in the form (3.1), where $E$ has an inverse, then $A^{-1}$ can be expressed in the partitioned form
$$A^{-1} = \begin{bmatrix} E^{-1} + E^{-1} F S^{-1} G E^{-1} & -E^{-1} F S^{-1} \\ -S^{-1} G E^{-1} & S^{-1} \end{bmatrix}. \tag{3.2}$$

Remark 6 Provided that $E$ is invertible, $A$ is invertible if and only if $S$ is invertible.

Remark 7 Formula (3.2) can be written in the following form
$$A^{-1} = \begin{bmatrix} E^{-1} & 0 \\ 0 & 0 \end{bmatrix} + \begin{bmatrix} -E^{-1} F \\ I \end{bmatrix} S^{-1} \begin{bmatrix} -G E^{-1} & I \end{bmatrix}. \tag{3.3}$$
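Formula (3.2) is easy to verify numerically. The following sketch is not from the paper; the test matrix and block size are assumptions, chosen only so that $E$ and $S$ are safely nonsingular.

```python
import numpy as np

rng = np.random.default_rng(1)
k, n = 2, 5
A = rng.standard_normal((n, n)) + n * np.eye(n)    # well-conditioned test matrix
E, F = A[:k, :k], A[:k, k:]
G, H = A[k:, :k], A[k:, k:]

Einv = np.linalg.inv(E)
S = H - G @ Einv @ F                               # Schur complement (A/E)
Sinv = np.linalg.inv(S)

Ainv_block = np.block([
    [Einv + Einv @ F @ Sinv @ G @ Einv, -Einv @ F @ Sinv],
    [-Sinv @ G @ Einv,                   Sinv],
])
print(np.allclose(Ainv_block, np.linalg.inv(A)))   # True
```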
4. The bordered inversion method

Formula (3.2) is the basis for the bordered inversion or escalator method of Frazer, Duncan and Collar [6]. For $k = 1, \ldots, m$ let
$$E_{k+1} = \begin{bmatrix} E_k & F_k \\ G_k & H_k \end{bmatrix} \in \mathbb{R}^{n_{k+1} \times n_{k+1}}, \quad F_k, G_k^T \in \mathbb{R}^{n_k \times p_k}, \quad H_k \in \mathbb{R}^{p_k \times p_k}.$$
Hence $n_{k+1} = n_k + p_k$ and $E_{m+1} = E$. If $E_k^{-1}$ and $S_k = H_k - G_k E_k^{-1} F_k$ are nonsingular, then $E_{k+1}$ is also nonsingular and by Theorem 5 we can calculate $E_{k+1}^{-1}$.

The block bordered inversion algorithm
1. Determine $E_1^{-1}$.
2. For $k = 1, \ldots, m$ set
$$S_k = H_k - G_k E_k^{-1} F_k$$
and
$$E_{k+1}^{-1} = \begin{bmatrix} E_k^{-1} + E_k^{-1} F_k S_k^{-1} G_k E_k^{-1} & -E_k^{-1} F_k S_k^{-1} \\ -S_k^{-1} G_k E_k^{-1} & S_k^{-1} \end{bmatrix}. \tag{4.1}$$
end

It is clear that $E_{m+1}^{-1} = E^{-1}$ provided that $S_k$ is nonsingular for $k = 1, \ldots, m$. If $S_k$ is singular for an index $k$, then the Guttman lemma implies that $E_{k+1}$ is singular.

Theorem 8 (Guttman) Let $E$ be nonsingular. The block bordered inversion method gives the inverse of $E$ if and only if all $E_k$ are nonsingular for $k = 1, \ldots, m$.

Guttman suggested several block variants of the bordered inversion method. For details, see Guttman [10] and Ouellette [12]. We also note that Gergely [9] observed the equivalence of the bordered inversion method with the Gauss-Jordan inversion.
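The algorithm translates almost directly into code. The sketch below is not from the paper; the function name, the NumPy setting and the choice of border sizes $p_k$ are assumptions of the example. It enlarges the inverse of the leading principal submatrix block by block via (4.1).

```python
import numpy as np

def bordered_inversion(E, block_sizes):
    """Build E^{-1} over the leading principal submatrices E_1, E_2, ..."""
    n1 = block_sizes[0]
    Einv = np.linalg.inv(E[:n1, :n1])          # step 1: invert E_1 directly
    nk = n1
    for p in block_sizes[1:]:
        Fk = E[:nk, nk:nk + p]                  # border blocks of E_{k+1}
        Gk = E[nk:nk + p, :nk]
        Hk = E[nk:nk + p, nk:nk + p]
        Sk = Hk - Gk @ Einv @ Fk                # Schur complement, must be nonsingular
        Skinv = np.linalg.inv(Sk)
        Einv = np.block([                       # formula (4.1)
            [Einv + Einv @ Fk @ Skinv @ Gk @ Einv, -Einv @ Fk @ Skinv],
            [-Skinv @ Gk @ Einv,                    Skinv],
        ])
        nk += p
    return Einv

rng = np.random.default_rng(2)
E = rng.standard_normal((6, 6)) + 6 * np.eye(6)
print(np.allclose(bordered_inversion(E, [2, 2, 2]), np.linalg.inv(E)))  # True
```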
5. The rank reduction algorithm

The rank reduction procedure of Egerváry is based on the following result.

Theorem 9 (Egerváry-Guttman-Wedderburn) Let $H, S \in \mathbb{R}^{m \times n}$ and $\operatorname{rank}(H) \geq \operatorname{rank}(S) = k$. Furthermore let $U R^{-1} V^T$ ($U \in \mathbb{R}^{m \times k}$, $R \in \mathbb{R}^{k \times k}$, $V \in \mathbb{R}^{n \times k}$) be a full rank factorization of $S$. Then
$$\operatorname{rank}(H - S) = \operatorname{rank}(H) - \operatorname{rank}(S) \tag{5.1}$$
if and only if
$$U = H X, \quad V^T = Y^T H, \quad Y^T H X = R \tag{5.2}$$
for some matrices $X \in \mathbb{R}^{n \times k}$ and $Y \in \mathbb{R}^{m \times k}$.

Remark 10 We can write that
$$\operatorname{rank}(H - S) = \operatorname{rank}(H) - \operatorname{rank}(S) \iff S = H X \left(Y^T H X\right)^{-1} Y^T H.$$

For the proof of the theorem see Egerváry [4], Guttman [11], Cline and Funderlic [3], Ouellette [12] and Galántai [7].

The rank reduction algorithm, which is important in full rank factorizations and conjugation, has the following form. Let $A_1 = A$, $X_i \in \mathbb{R}^{n \times l_i}$, $Y_i \in \mathbb{R}^{m \times l_i}$, $l_i \geq 1$ and $Y_i^T A_i X_i$ be nonsingular for $i = 1, 2, \ldots, k$.

The rank reduction procedure
$$A_{i+1} = A_i - A_i X_i \left(Y_i^T A_i X_i\right)^{-1} Y_i^T A_i \quad (i = 1, 2, \ldots, k), \tag{5.3}$$
where $\operatorname{rank}(A) \geq \sum_{i=1}^{k} l_i$. The algorithm is said to be breakdown free if all $Y_i^T A_i X_i$ are nonsingular for $i = 1, 2, \ldots, k$. Assume that
$$\operatorname{rank}(A) = \sum_{i=1}^{k} l_i.$$
In this case, $A_{k+1} = 0$. Let
$$X = [X_1, \ldots, X_k], \quad Y = [Y_1, \ldots, Y_k]$$
and
$$X^{(i)} = [X_1, \ldots, X_i], \quad Y^{(i)} = [Y_1, \ldots, Y_i].$$
Then the following theorem is true.

Theorem 11 The rank reduction procedure can be carried out breakdown free if and only if $Y^T A X = \left[Y_i^T A X_j\right]_{i,j=1}^{k}$ is block strongly nonsingular¹. In this event the rank reduction procedure has the canonical form
$$A_{i+1} = A - A X^{(i)} \left(Y^{(i)T} A X^{(i)}\right)^{-1} Y^{(i)T} A \quad (i = 1, \ldots, k). \tag{5.4}$$

¹ It means that the matrix has a block LU decomposition without pivoting.

For proof, see Galántai [7], [8]. For $i = k$ the relation
$$A = A X \left(Y^T A X\right)^{-1} Y^T A$$
holds, where $Z = X \left(Y^T A X\right)^{-1} Y^T$ is a reflexive inverse of $A$. It is also easy to see that algorithm (5.3) leads to the full rank factorization
$$A = Q D^{-1} P^T, \tag{5.5}$$
where
$$P = \left[A_1^T Y_1, \ldots, A_k^T Y_k\right], \quad Q = \left[A_1 X_1, \ldots, A_k X_k\right]$$
and
$$D = \operatorname{diag}\left(Y_1^T A_1 X_1, \ldots, Y_k^T A_k X_k\right).$$
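As an illustration of (5.3) and (5.5), here is a minimal NumPy sketch, not from the paper; the single-column steps $l_i = 1$, the choice $X = Y = I$ and the function name are assumptions of the example. It checks that $A_{k+1} = 0$ and $A = Q D^{-1} P^T$ for a breakdown-free run.

```python
import numpy as np

def rank_reduction(A, X, Y):
    """Run (5.3) with one column of X, Y per step; return A_{k+1}, P, Q, D."""
    Ai = A.copy()
    P_cols, Q_cols, d = [], [], []
    for i in range(X.shape[1]):
        x, y = X[:, i:i+1], Y[:, i:i+1]
        r = y.T @ Ai @ x                       # 1x1 pivot, must be nonzero
        P_cols.append(Ai.T @ y)                # columns A_i^T Y_i of P
        Q_cols.append(Ai @ x)                  # columns A_i X_i of Q
        d.append(r[0, 0])
        Ai = Ai - (Ai @ x) @ np.linalg.inv(r) @ (y.T @ Ai)
    return Ai, np.hstack(P_cols), np.hstack(Q_cols), np.diag(d)

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5)) + 5 * np.eye(5)     # full rank, breakdown free for X = Y = I
A_final, P, Q, D = rank_reduction(A, np.eye(5), np.eye(5))
print(np.allclose(A_final, 0))                      # A_{k+1} = 0
print(np.allclose(Q @ np.linalg.inv(D) @ P.T, A))   # A = Q D^{-1} P^T
```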
If a square matrix $B$ has a block LDU decomposition with respect to a given partition, then this unique factorization will be denoted by $B = L_B D_B U_B$. Then we can prove

Theorem 12 Let $Y^T A X$ be block strongly nonsingular. The components of the full rank factorization (5.5) are
$$P = A^T Y L_{Y^T A X}^{-T}, \quad Q = A X U_{Y^T A X}^{-1}, \quad D = D_{Y^T A X}, \tag{5.6}$$
and $A$ can be expressed in the form
$$A = A X U_{Y^T A X}^{-1} D_{Y^T A X}^{-1} L_{Y^T A X}^{-1} Y^T A. \tag{5.7}$$

For proof, see Galántai [7], [8].

6. Rank reduction and bordered inversion I

We show a surprisingly simple connection between the rank reduction operation and the bordered inversion.

Theorem 13 Assume that
$$A = \begin{bmatrix} E & F \\ G & H \end{bmatrix} \in \mathbb{R}^{n \times n}, \quad E \in \mathbb{R}^{k \times k} \tag{6.1}$$
and
$$J = \begin{bmatrix} 0 \\ I_{n-k} \end{bmatrix} \in \mathbb{R}^{n \times (n-k)}.$$
Then
$$A - A J \left(J^T A J\right)^{-1} J^T A = \begin{bmatrix} (A/H) & 0 \\ 0 & 0 \end{bmatrix},$$
provided that $H$ is invertible².

Proof. By simple calculation we have
$$A J \left(J^T A J\right)^{-1} J^T A = \begin{bmatrix} F \\ H \end{bmatrix} H^{-1} \begin{bmatrix} G & H \end{bmatrix} = \begin{bmatrix} F H^{-1} G & F \\ G & H \end{bmatrix}$$
and
$$A - A J \left(J^T A J\right)^{-1} J^T A = \begin{bmatrix} E - F H^{-1} G & 0 \\ 0 & 0 \end{bmatrix},$$
which was to be proved.

² This condition is in agreement with the strong block nonsingularity of $Y^T A X$. In fact, $X_1 = Y_1 = J$ and $Y_1^T A X_1 = H$.
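Theorem 13 can be checked directly in a few lines. The sketch below is not from the paper; the test matrix and block size are assumptions of the example.

```python
import numpy as np

rng = np.random.default_rng(4)
k, n = 2, 5
A = rng.standard_normal((n, n)) + n * np.eye(n)
E, F, G, H = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]

J = np.vstack([np.zeros((k, n - k)), np.eye(n - k)])      # J = [0; I_{n-k}]
step = A - A @ J @ np.linalg.inv(J.T @ A @ J) @ J.T @ A   # one rank reduction step

expected = np.zeros((n, n))
expected[:k, :k] = E - F @ np.linalg.inv(H) @ G           # Schur complement (A/H)
print(np.allclose(step, expected))                        # True
```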
Remark 14 If $J$ is replaced by $\hat{J} = \begin{bmatrix} I_k \\ 0 \end{bmatrix}$, then we have
$$A - A \hat{J} \left(\hat{J}^T A \hat{J}\right)^{-1} \hat{J}^T A = \begin{bmatrix} 0 & 0 \\ 0 & (A/E) \end{bmatrix}, \tag{6.2}$$
provided that $E$ is nonsingular.

Remark 15 If $A$ and $H$ are replaced by $A^{-1}$ and $S^{-1}$, respectively, then we have $\left(A^{-1}/S^{-1}\right) = E^{-1}$. This means that
$$A^{-1} - A^{-1} J \left(J^T A^{-1} J\right)^{-1} J^T A^{-1} = \begin{bmatrix} E^{-1} & 0 \\ 0 & 0 \end{bmatrix}. \tag{6.3}$$

We just obtained that one rank reduction step on the matrix $A^{-1}$ results in the inverse of the leading principal submatrix $E$. Relation (6.3) gives a direct connection between the rank reduction and the bordered inversion. It was first discovered by Egerváry [4] in the scalar case ($H \in \mathbb{R}$), who developed it for solving difference equations with modified boundary conditions [5]. The same idea appears in Brezinski et al. [2], who give credit to Duncan and call it reverse bordered inversion. Their variant of identity (6.3) is obtained in the following way. Let
$$A^{-1} = \begin{bmatrix} \hat{E} & \hat{F} \\ \hat{G} & \hat{H} \end{bmatrix}.$$
Then
$$\hat{E} = E^{-1} + E^{-1} F S^{-1} G E^{-1}, \quad \hat{F} = -E^{-1} F S^{-1}, \quad \hat{G} = -S^{-1} G E^{-1}, \quad \hat{H} = S^{-1}.$$
As $\hat{F} \hat{H}^{-1} = -E^{-1} F$ and $\hat{F} \hat{H}^{-1} \hat{G} = E^{-1} F S^{-1} G E^{-1}$ we have
$$E^{-1} = \hat{E} - E^{-1} F S^{-1} G E^{-1} = \hat{E} - \hat{F} \hat{H}^{-1} \hat{G}, \tag{6.4}$$
which is exactly the reverse bordered inversion formula of Brezinski et al. [2].
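The reverse bordered inversion formula (6.4) reads off $E^{-1}$ from a partition of $A^{-1}$. A minimal numerical check, not from the paper (the test matrix and block size are assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
k, n = 2, 5
A = rng.standard_normal((n, n)) + n * np.eye(n)
Ainv = np.linalg.inv(A)

Ehat, Fhat = Ainv[:k, :k], Ainv[:k, k:]
Ghat, Hhat = Ainv[k:, :k], Ainv[k:, k:]

E_inv = Ehat - Fhat @ np.linalg.inv(Hhat) @ Ghat              # formula (6.4)
print(np.allclose(E_inv, np.linalg.inv(A[:k, :k])))           # True
```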
7. The bordered inversion method and triangular factorization

We associate sequences with the bordered inversion algorithm which produce the inverse LDU factorizations of the leading principal submatrices $E_k$. It is based on the following observation. Let $E = LDU$ be the unique block LDU decomposition of $E$. If $E$ is not partitioned, then we just take $L = U = I$ and $D = E$. If $A$ has the form (3.1), then the inverse formula (3.3) can be written as
$$A^{-1} = \begin{bmatrix} U^{-1} & -E^{-1} F \\ 0 & I \end{bmatrix} \begin{bmatrix} D^{-1} & 0 \\ 0 & S^{-1} \end{bmatrix} \begin{bmatrix} L^{-1} & 0 \\ -G E^{-1} & I \end{bmatrix} = U_{A^{-1}} D_{A^{-1}} L_{A^{-1}},$$
which is the inverse block LDU decomposition of $A$. Thus we can define the following sequences related to the bordered inversion algorithm of Section 4. Let $E_k = L_k D_k U_k$ denote the unique LDU decomposition of $E_k$ for $k = 1, \ldots, m$.

Inverse LDU decomposition algorithm
For $k = 1, \ldots, m$ set
$$L_{k+1}^{-1} = \begin{bmatrix} L_k^{-1} & 0 \\ -G_k E_k^{-1} & I \end{bmatrix}, \quad D_{k+1}^{-1} = \begin{bmatrix} D_k^{-1} & 0 \\ 0 & S_k^{-1} \end{bmatrix}, \quad U_{k+1}^{-1} = \begin{bmatrix} U_k^{-1} & -E_k^{-1} F_k \\ 0 & I \end{bmatrix}$$
end

It is clear that $E_{k+1}^{-1} = U_{k+1}^{-1} D_{k+1}^{-1} L_{k+1}^{-1}$ and
$$E^{-1} = U_{m+1}^{-1} D_{m+1}^{-1} L_{m+1}^{-1}. \tag{7.1}$$

The above algorithm, which requires no extra operations, is based on a remark by Egerváry [4], according to which the bordered inversion (or escalator) method furnishes the triangular factorisation of the inverse. The associated procedure can be carried out if and only if the bordered inversion can be. Hence the necessary and sufficient condition for performing the algorithm without breakdown is the block strong nonsingularity of $E$.
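As an illustration, the following sketch runs the inverse LDU sequences alongside the bordered inversion data and checks (7.1). It is not from the paper; the 1x1 border blocks, the function name and the test matrix are assumptions of the example.

```python
import numpy as np

def inverse_ldu(E):
    """Accumulate L_k^{-1}, D_k^{-1}, U_k^{-1} with 1x1 border blocks."""
    n = E.shape[0]
    Linv = Uinv = np.eye(1)
    Dinv = np.array([[1.0 / E[0, 0]]])                 # E_1 is 1x1: L = U = I, D = E_1
    for k in range(1, n):
        Ekinv = Uinv @ Dinv @ Linv                     # inverse of the leading k x k block
        Fk, Gk, Hk = E[:k, k:k+1], E[k:k+1, :k], E[k:k+1, k:k+1]
        Skinv = np.linalg.inv(Hk - Gk @ Ekinv @ Fk)    # 1x1 Schur complement
        Linv = np.block([[Linv, np.zeros((k, 1))], [-Gk @ Ekinv, np.eye(1)]])
        Dinv = np.block([[Dinv, np.zeros((k, 1))], [np.zeros((1, k)), Skinv]])
        Uinv = np.block([[Uinv, -Ekinv @ Fk], [np.zeros((1, k)), np.eye(1)]])
    return Linv, Dinv, Uinv

rng = np.random.default_rng(6)
E = rng.standard_normal((5, 5)) + 5 * np.eye(5)
Linv, Dinv, Uinv = inverse_ldu(E)
print(np.allclose(Uinv @ Dinv @ Linv, np.linalg.inv(E)))   # E^{-1} = U^{-1} D^{-1} L^{-1}
```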
8. Rank reduction and bordered inversion II

Here we determine a new relationship between rank reduction and bordered inversion. More precisely, we show that certain sequences related to the rank reduction and the bordered inversion algorithms, respectively, can be tied together. We recall the facts that

1. The associated sequences of the bordered inversion method produce $U_E^{-1}$, $D_E^{-1}$ and $L_E^{-1}$.
2. The rank reduction method produces the full rank factorization
$$A = Q D^{-1} P^T = A X U_{Y^T A X}^{-1} D_{Y^T A X}^{-1} L_{Y^T A X}^{-1} Y^T A.$$

If $X = B^T V$, then the pair $(P, V)$ is block $B$-conjugate (see Galántai [7], [8]). This conjugation property plays an important role in the ABS methods (see Abaffy-Spedicato [1]). Having the two observations above, we can easily derive the following

Conjugation algorithm
1. Apply the inverse LDU decomposition algorithm to the matrix $Y^T A X$ and get $L_{Y^T A X}^{-1}$.
2. Set $P = A^T Y L_{Y^T A X}^{-T}$.
end

Thus we have a new connection between rank reduction and bordered inversion. It is also clear that, similarly, we can have $Q$ and $D$. Finally we mention that Stewart [13] gave a conjugation algorithm which is based on LU factorization.
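The sketch below illustrates the conjugation algorithm; it is not from the paper, and the choices $l_i = 1$, $X = Y = I$ and the unpivoted Doolittle factorization used to obtain $L_{Y^T A X}^{-1}$ are assumptions of the example. Consistent with Theorem 12, the computed $P$ agrees with the factor $[A_1^T Y_1, \ldots, A_k^T Y_k]$ accumulated by the rank reduction procedure.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)
X = Y = np.eye(n)

# Step 1: unit lower triangular factor L of Y^T A X (Doolittle, no pivoting).
B = Y.T @ A @ X
L, U = np.eye(n), B.copy()
for j in range(n - 1):
    L[j+1:, j] = U[j+1:, j] / U[j, j]
    U[j+1:, :] -= np.outer(L[j+1:, j], U[j, :])
Linv = np.linalg.inv(L)

# Step 2: P = A^T Y L^{-T}.
P = A.T @ Y @ Linv.T

# Compare with the columns A_i^T Y_i produced by the rank reduction procedure (5.3).
Ai, P_cols = A.copy(), []
for i in range(n):
    e = np.eye(n)[:, i:i+1]
    P_cols.append(Ai.T @ e)
    Ai = Ai - (Ai @ e) @ np.linalg.inv(e.T @ Ai @ e) @ (e.T @ Ai)
print(np.allclose(P, np.hstack(P_cols)))    # True
```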
9. Rank reduction and the reverse bordered inversion

We deal with the last connection found, which is, in fact, an application of identity (6.3) to obtain new canonical forms for $A_{i+1}$. We start with formula (5.4), which tells us that
$$A_{i+1} = A - A X^{(i)} \left(Y^{(i)T} A X^{(i)}\right)^{-1} Y^{(i)T} A \quad (i = 1, \ldots, k).$$
Let $I^{(j)}$ be the first $j$ columns of the identity matrix $I_n$. Furthermore let $\omega_i = \sum_{j=1}^{i} l_j$. Then $X^{(i)} = X I^{(\omega_i)}$ and $Y^{(i)} = Y I^{(\omega_i)}$ and we can write
$$A_{i+1} = A X \left[\left(Y^T A X\right)^{-1} - I^{(\omega_i)} \left(Y^{(i)T} A X^{(i)}\right)^{-1} I^{(\omega_i)T}\right] Y^T A.$$
As
$$I^{(\omega_i)} \left(Y^{(i)T} A X^{(i)}\right)^{-1} I^{(\omega_i)T} = \begin{bmatrix} \left(Y^{(i)T} A X^{(i)}\right)^{-1} & 0 \\ 0 & 0 \end{bmatrix}$$
we have the following

Theorem 16 $A_{i+1}$ has the following canonical form
$$A_{i+1} = A X \left( \left(Y^T A X\right)^{-1} - \begin{bmatrix} \left(Y^{(i)T} A X^{(i)}\right)^{-1} & 0 \\ 0 & 0 \end{bmatrix} \right) Y^T A \tag{9.1}$$
under the conditions of Theorem 11.

We can observe that in the parentheses we have a variant of the identity (6.3) with $Y^T A X$ instead of $A$. Thus
$$\left(Y^T A X\right)^{-1} - \begin{bmatrix} \left(Y^{(i)T} A X^{(i)}\right)^{-1} & 0 \\ 0 & 0 \end{bmatrix} = \left(Y^T A X\right)^{-1} J \left[J^T \left(Y^T A X\right)^{-1} J\right]^{-1} J^T \left(Y^T A X\right)^{-1},$$
where
$$J = \begin{bmatrix} 0 \\ I_{n - \omega_i} \end{bmatrix}. \tag{9.2}$$
Hence by simple calculations we have

Theorem 17 $A_{i+1}$ has the following canonical form
$$A_{i+1} = \left[A X \left(Y^T A X\right)^{-1} J\right] \left[J^T \left(Y^T A X\right)^{-1} J\right]^{-1} \left[J^T \left(Y^T A X\right)^{-1} Y^T A\right] \tag{9.3}$$
under the conditions of Theorem 11.

Assuming that
$$\left(Y^T A X\right)^{-1} = \begin{bmatrix} \hat{E} & \hat{F} \\ \hat{G} & \hat{H} \end{bmatrix}$$
we can obtain a third form of $A_{i+1}$.

Theorem 18 $A_{i+1}$ has the following canonical form
$$A_{i+1} = A X \begin{bmatrix} \hat{F} \\ \hat{H} \end{bmatrix} \hat{H}^{-1} \begin{bmatrix} \hat{G} & \hat{H} \end{bmatrix} Y^T A \tag{9.4}$$
under the conditions of Theorem 11.

These new multiplicative canonical forms indicate different properties of the rank reduction algorithm. The first canonical form indicates the decrease of $A_{i+1}$. The second form shows the projector properties of the rank reduction matrix $A_{i+1}$. The last canonical form shows the structure of $A_{i+1}$ in terms of inverse quantities. These three new forms certainly give a better understanding of rank reduction and may lead to further new results.
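The canonical forms can be compared numerically. The sketch below is not from the paper; the single-column steps $l_i = 1$ and $X = Y = I$ (so that $\omega_i = i$), the step index and the test matrix are assumptions of the example. It checks that (5.4), (9.1) and (9.3) produce the same $A_{i+1}$.

```python
import numpy as np

rng = np.random.default_rng(8)
n, i = 5, 2
A = rng.standard_normal((n, n)) + n * np.eye(n)
X = Y = np.eye(n)
B = Y.T @ A @ X                       # here simply A
Binv = np.linalg.inv(B)

Xi, Yi = X[:, :i], Y[:, :i]           # X^{(i)}, Y^{(i)}
A_next = A - A @ Xi @ np.linalg.inv(Yi.T @ A @ Xi) @ Yi.T @ A        # (5.4)

blk = np.zeros((n, n))
blk[:i, :i] = np.linalg.inv(Yi.T @ A @ Xi)
form1 = A @ X @ (Binv - blk) @ Y.T @ A                               # (9.1)

J = np.vstack([np.zeros((i, n - i)), np.eye(n - i)])                 # (9.2)
M = np.linalg.inv(J.T @ Binv @ J)
form2 = (A @ X @ Binv @ J) @ M @ (J.T @ Binv @ Y.T @ A)              # (9.3)

print(np.allclose(form1, A_next), np.allclose(form2, A_next))        # True True
```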
Acknowledgement: This work was supported by the CNR International short-term mobility program, Year 2001.

REFERENCES

[1] Abaffy, J. and Spedicato, E.: ABS Projection Algorithms: Mathematical Techniques for Linear and Nonlinear Algebraic Equations, Ellis Horwood, Chichester, 1989.
[2] Brezinski, C., Morandi Cecchi, M. and Redivo-Zaglia, M.: The reverse bordering method, SIAM J. Matrix Anal. Appl., 15, (1994).
[3] Cline, R.E. and Funderlic, R.E.: The rank of a difference of matrices and associated generalized inverses, Lin. Alg. Appl., 24, (1979).
[4] Egerváry, E.: On rank-diminishing operations and their applications to the solution of linear equations, ZAMP, XI, (1960).
[5] Egerváry, J.: Über eine Methode zur numerischen Lösung der Poissonschen Differenzengleichung für beliebige Gebiete, Acta Mathematica Academiae Scientiarum Hungaricae, 11, (1960).
[6] Frazer, R.A., Duncan, W.J. and Collar, A.R.: Elementary Matrices, Cambridge University Press, 1938.
[7] Galántai, A.: Rank reduction and conjugation, Mathematical Notes, Miskolc, 1(1), (2000).
[8] Galántai, A.: Rank reduction, factorization and conjugation, Linear and Multilinear Algebra, 49, (2001).
[9] Gergely, J.: Mátrixinvertáló módszerekről (On matrix inversion methods, in Hungarian), Alkalmazott Matematikai Lapok, 2, (1976).
[10] Guttman, L.: Enlargement methods for computing the inverse matrix, Ann. Math. Statist., 17, (1946).
[11] Guttman, L.: A necessary and sufficient formula for matrix factoring, Psychometrika, 22, (1957).
[12] Ouellette, D.V.: Schur complements and statistics, Lin. Alg. Appl., 36, (1981).
[13] Stewart, G.W.: Conjugate direction methods for solving systems of linear equations, Numerische Mathematik, 21, (1973).