Proceedings of the Third International DERIVE/TI-92 Conference


The Use of a Computer Algebra System in Modern Matrix Algebra

Karsten Schmidt
Faculty of Economics and Management Science, University of Applied Sciences, Schmalkalden, Germany
kschmidt@wi.fh-schmalkalden.de

1 Introduction

The concept of generalized inverses of matrices was not developed until the 20th century (cf. References). Whilst the inverse $A^{-1}$ of a matrix $A$ only exists if $A$ is square and nonsingular, the generalized inverse (g-inverse) $A^-$ exists for every matrix $A$. Any matrix $A^-$ satisfying the condition

$A A^- A = A$    (1)

is a generalized inverse of $A$. If $A$ is square and nonsingular, we have $A^- = A^{-1}$, i.e. the generalized inverse is unique. Otherwise, the number of generalized inverses of a matrix is infinite. Therefore, a special generalized inverse, the Moore-Penrose inverse (MP inverse) $A^+$, has attracted greater attention. This matrix satisfies condition (1), i.e. $A A^+ A = A$, and in addition

$A^+ A A^+ = A^+$    (2)
$(A^+ A)' = A^+ A$    (3)
$(A A^+)' = A A^+$    (4)

assuring its uniqueness.

Until recently, these matrices did not play a principal role in first- and second-year foundation courses in mathematics and statistics in such areas as economics and management science. But with the availability of powerful computers in the classroom, it has become possible to apply these modern concepts, for example to the solution of systems of linear equations or to the linear regression model.

In this paper we demonstrate how DERIVE can be used to teach the concepts of generalized inverses and the Moore-Penrose inverse. In Section 2 we introduce an algorithm for the computation of a generalized inverse of a matrix, and in Section 3 an algorithm for the computation of the unique Moore-Penrose inverse is presented. Both algorithms are illustrated by examples. In Section 4 we show how the g-inverse can be used to check whether a system of linear equations $Ax = b$ has solutions and to provide the general solution. Examples demonstrate how this method works for different matrices $A$, both regular and singular.
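The four defining conditions are easy to verify numerically. As a side note, the following minimal NumPy sketch (the matrix is an illustrative one of our own, not taken from the paper) checks conditions (1) to (4) for the pseudoinverse returned by numpy.linalg.pinv:

    import numpy as np

    # Illustrative 3x2 matrix; any matrix will do.
    A = np.array([[1.0, 2.0],
                  [0.0, 1.0],
                  [1.0, 3.0]])

    Aplus = np.linalg.pinv(A)   # Moore-Penrose inverse

    print(np.allclose(A @ Aplus @ A, A))          # (1) A A+ A = A
    print(np.allclose(Aplus @ A @ Aplus, Aplus))  # (2) A+ A A+ = A+
    print(np.allclose((Aplus @ A).T, Aplus @ A))  # (3) A+ A symmetric
    print(np.allclose((A @ Aplus).T, A @ Aplus))  # (4) A A+ symmetric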

2 Computation of a generalized inverse

We now introduce an algorithm for the computation of a generalized inverse $A^-$ (of order $n \times m$) of any matrix $A$ (of order $m \times n$). This algorithm is based on the well-known Gauss algorithm, which is also frequently applied to calculate the inverse $A^{-1}$ of a regular matrix $A$. It comprises four steps:

Step 1: We concatenate the identity matrix $I_m$ to the right of $A$:

$[\,A \mid I_m\,]$

Step 2: By successively performing elementary row operations on the matrix $[\,A \mid I_m\,]$, i.e. by multiplying it from the left with elementary matrices $Z_i$, we transform $A$ into its Hermite normal form $H$:

$Z_k \cdots Z_2 Z_1 [\,A \mid I_m\,] = [\,ZA \mid Z\,] = [\,H \mid Z\,]$, where $Z = Z_k \cdots Z_2 Z_1$.

Step 3: If the resulting matrix $H = ZA$ is not already of the form

$R = \begin{bmatrix} I_r & K \\ 0 & 0 \end{bmatrix}$    (5)

where $r = \operatorname{rank}(H) = \operatorname{rank}(A)$, we transform it into this form by interchanging columns. This is equivalent to multiplying $H$ from the right with a permutation matrix $P$ of order $n \times n$, which is equal to the identity matrix with interchanged columns (i.e. if $H$ is already of the form (5), we have $P = I$):

$HP = ZAP = R$

Step 4: Having determined $Z$ and $P$, we can calculate a g-inverse of $A$ by

$A^- = P \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix}_{n \times m} Z$

where $r = \operatorname{rank}(A)$.
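The same four steps can be sketched outside DERIVE. The following Python/SymPy fragment is a minimal illustration of the algorithm (the function name g_inverse and the use of SymPy's rref for Steps 1 and 2 are our own choices, not the paper's code):

    from sympy import Matrix, eye, zeros

    def g_inverse(A):
        # Steps 1 and 2: row-reduce [A | I]; the left block is H = Z A, the right block is Z.
        A = Matrix(A)
        m, n = A.shape
        HZ, pivots = A.row_join(eye(m)).rref()
        H, Z = HZ[:, :n], HZ[:, n:]
        pivcols = [p for p in pivots if p < n]        # pivot columns of A
        r = len(pivcols)                              # r = rank(A)
        # Step 3: column permutation P so that H P starts with I_r.
        perm = pivcols + [j for j in range(n) if j not in pivcols]
        P = zeros(n, n)
        for new, old in enumerate(perm):
            P[old, new] = 1
        # Step 4: A^- = P [I_r 0; 0 0]_{n x m} Z.
        E = zeros(n, m)
        E[:r, :r] = eye(r)
        return P * E * Z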

We illustrate this algorithm by calculating a g-inverse of a small example matrix $A$; a copy of the corresponding DERIVE expressions is given in Appendix A. The DERIVE function ROW_REDUCE performs the first two steps simultaneously. In this example the resulting $H$ is not of the form (5), so we have to use a permutation matrix $P$ different from $I$. Finally, we compute a generalized inverse $A^- = P \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix} Z$ and show that condition (1) is indeed satisfied.
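With the g_inverse sketch above, the same kind of check can be reproduced on an illustrative rank-deficient matrix of our own (not the matrix of Appendix A):

    A = Matrix([[1, 2, 3],
                [2, 4, 6],
                [1, 0, 1]])      # a singular 3x3 matrix of rank 2
    G = g_inverse(A)
    print(A * G * A == A)        # True: A G A = A, so G is a g-inverse of A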

3 Computation of the Moore-Penrose inverse

In this section we introduce an algorithm for the computation of the Moore-Penrose inverse $A^+$ (of order $n \times m$) of any matrix $A$ (of order $m \times n$). This iterative algorithm, known as the Greville algorithm, leads to the unique MP inverse in a finite number of iterations. Since the Moore-Penrose inverse $A^+$ is also a generalized inverse of $A$, this algorithm provides another method to calculate a generalized inverse $A^-$.

We start with a simple formula to calculate the MP inverse if $A = a$ is a (column) vector:

$a^+ = (a'a)^{-1} a'$ if $a \neq 0$, and $a^+ = 0'$ if $a = 0$.    (6)

We now consider the column notation of $A$,

$A = [\,a_1 \; a_2 \; \cdots \; a_n\,]$

and denote the submatrix that comprises the first $k$ columns of $A$ by $A_k = [\,a_1 \; a_2 \; \cdots \; a_k\,]$. Hence

$A_k = [\,A_{k-1} \; a_k\,]$

Moreover, we define the following vectors for $j = 2, \ldots, n$:

$d_j = a_j' (A_{j-1}^+)' A_{j-1}^+$
$c_j = (I - A_{j-1} A_{j-1}^+)\, a_j$
$b_j = c_j^+ + (1 - c_j^+ c_j)(1 + d_j a_j)^{-1} d_j$

Note that $d_j$ is a row vector, $c_j$ a column vector (and hence $c_j^+$ a row vector) and $b_j$ a row vector. Then we have

$A_j^+ = \begin{bmatrix} A_{j-1}^+ - A_{j-1}^+ a_j b_j \\ b_j \end{bmatrix}$    (7)

Since $A_1 = a_1$ is a matrix which has only one column, its MP inverse is easily calculated by (6). Using (7) we can then iteratively calculate $A_2^+, A_3^+, \ldots, A_n^+ = A^+$.

This algorithm is easily implemented on a computer with a matrix programming language such as GAUSS. An example of a procedure for the calculation of the Moore-Penrose inverse can be found in Schmidt/Trenkler (1998, p. 3). However, in this paper we provide a solution in DERIVE, where for the sake of simplicity we confine ourselves to matrices $A$ with $\min(m, n) \leq 2$, i.e. vectors, and matrices which have either only two rows or only two columns.
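For reference, a general NumPy sketch of the recursion (6)-(7), without the two-row/two-column restriction of the DERIVE solution below, might look as follows (the function names and the numerical tolerance used to decide whether $c_j = 0$ are our own choices):

    import numpy as np

    def mpi_vector(a, tol=1e-12):
        # MP inverse of a column vector, formula (6); tiny vectors are treated as zero.
        a = a.reshape(-1, 1)
        s = (a.T @ a).item()
        return np.zeros((1, a.shape[0])) if s <= tol else a.T / s

    def greville_pinv(A):
        # Moore-Penrose inverse via the Greville recursion (7).
        A = np.asarray(A, dtype=float)
        m, n = A.shape
        Aplus = mpi_vector(A[:, 0])                     # A_1^+ from (6)
        for j in range(1, n):
            Aj_1 = A[:, :j]                             # A_{j-1}
            aj = A[:, j].reshape(-1, 1)                 # a_j
            d = aj.T @ Aplus.T @ Aplus                  # d_j (row vector)
            c = (np.eye(m) - Aj_1 @ Aplus) @ aj         # c_j (column vector)
            cplus = mpi_vector(c)                       # c_j^+ (row vector)
            b = cplus + (1 - (cplus @ c).item()) * d / (1 + (d @ aj).item())
            Aplus = np.vstack([Aplus - Aplus @ aj @ b, b])   # recursion (7)
        return Aplus

    # Quick check against NumPy's built-in pseudoinverse.
    A = np.array([[1.0, 2.0, 3.0], [2.0, 4.0, 6.0], [1.0, 0.0, 1.0]])
    print(np.allclose(greville_pinv(A), np.linalg.pinv(A)))    # True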

The set of functions below could be used as a utility file. After being loaded, the function MPI calculates the MP inverse of the matrix passed as its parameter, or terminates with an error message if $\min(m, n) > 2$. We use the MPI function to calculate the MP inverse of the matrix we already used in the previous section to illustrate the calculation of a g-inverse. A copy of the corresponding expressions, including the check showing that conditions (1) to (4) are satisfied, is given in Appendix B. Note that conditions (3) and (4) require both $A^+ A$ and $A A^+$ to be symmetric.

MPIV(a) := IF(a` · a = 0, 0 · a`, a` / ELEMENT(a` · a, 1, 1))
A1(a) := DELETE_ELEMENT(a`, 2)`
A2(a) := DELETE_ELEMENT(a`, 1)`
DT(a2, aplus) := a2` · aplus` · aplus
C(a1, aplus, a2) := (IDENTITY_MATRIX(DIMENSION(a1)) - a1 · aplus) · a2
BT(c, dt, a2) := MPIV(c) + (1 - MPIV(c) · c) · dt / (1 + dt · a2)
APLUS(aplus, a2, bt) := APPEND(aplus - aplus · a2 · bt, bt)
MPI2(a) := APLUS(MPIV(A1(a)), A2(a), BT(C(A1(a), MPIV(A1(a)), A2(a)), DT(A2(a), MPIV(A1(a))), A2(a)))
MPI(a) := IF(MIN(DIMENSION(a), DIMENSION(a`)) > 2, "Error: MIN(m,n) > 2", IF(MIN(DIMENSION(a), DIMENSION(a`)) = 1, IF(DIMENSION(a`) = 1, MPIV(a), MPIV(a`)`), IF(DIMENSION(a`) > 2, MPI2(a`)`, MPI2(a))))

4 Application to systems of linear equations

We consider a system of linear equations

$A x = b$, with $A$ of order $m \times n$, $x$ of order $n \times 1$ and $b$ of order $m \times 1$.    (8)

The g-inverse of $A$ can be applied to such a system to check whether it is consistent, i.e. to investigate whether it has solutions, and, if it is consistent, to provide the general solution, which may consist of either one unique solution or an infinite number of solutions. System (8) is consistent if and only if

$A A^- b = b$    (9)

for any generalized inverse $A^-$.

If $Ax = b$ is consistent, its general solution is given by

$x = A^- b + (I - A^- A)\, z$    (10)

where $z$ (of order $n \times 1$) is an arbitrary vector.

Note that in applying (9) and (10), any g-inverse $A^-$ will do. Therefore, we can use the Moore-Penrose inverse $A^+$ as well. Furthermore, since the vector $z$ in (10) is arbitrary, we can choose $z = 0$. Consequently, one (possibly unique) solution of $Ax = b$ is always given by $x = A^- b$.

The following function could be used as a utility file. After being loaded, the function SOLVESLE solves a system of linear equations $Ax = b$, where the matrix $A$ and the vector $b$ are passed as parameters, or displays a message if a solution does not exist.

z := [z1, z2]
SOLVESLE(a, b) := IF(a · MPI(a) · b = b, MPI(a) · b + (IDENTITY_MATRIX(DIMENSION(a`)) - MPI(a) · a) · z, "A solution does not exist!")

Finally, we analyze the consistency of three systems of linear equations and calculate solutions where possible. Copies of the corresponding DERIVE expressions are given in Appendix C.

The first system

$A x = b$    (11)

consists of a regular matrix $A$ and a vector $b$, both given in Appendix C. By checking condition (9) using the MP inverse of $A$, we find that system (11) is consistent: $A A^+ b = b$. Note that in this case $A$ is a regular matrix, hence $A^+ = A^{-1}$ and $A A^+ = I$. The general solution is provided by (10); clearly, system (11) has the unique solution $x = A^+ b + (I - A^+ A)\, z = A^{-1} b$.
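A compact NumPy counterpart of SOLVESLE, based on (9) and (10) with the Moore-Penrose inverse (the function name and the return convention are our own choices), could look like this:

    import numpy as np

    def solve_sle(A, b):
        # Solve A x = b via (9) and (10); returns a particular solution and a
        # nullspace projector, or None if the system is inconsistent.
        A = np.asarray(A, dtype=float)
        b = np.asarray(b, dtype=float).reshape(-1)
        Aplus = np.linalg.pinv(A)
        if not np.allclose(A @ Aplus @ b, b):      # consistency check (9)
            return None                            # "A solution does not exist!"
        x0 = Aplus @ b                             # particular solution (z = 0)
        N = np.eye(A.shape[1]) - Aplus @ A         # general solution: x0 + N z, z arbitrary
        return x0, N

For a regular matrix $A$ the projector $N$ is the zero matrix, so $x_0$ is the unique solution; for a consistent singular system, $N$ is nonzero and parameterizes the infinitely many solutions.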

The second system of linear equations

$A x = b$    (12)

consists of a singular matrix $A$ and a vector $b$, again given in Appendix C. When we check condition (9) we find that system (12) is also consistent: $A A^+ b = b$. Note that in this case $A$ is a singular matrix; hence $A^{-1}$ does not exist and $A A^+ \neq I$. The general solution is provided by (10); obviously, system (12) has an infinite number of solutions

$x = A^+ b + (I - A^+ A)\, z$,

one for each choice of the arbitrary vector $z$. For example, by choosing $z = 0$ we get the particular solution $x = A^+ b$.

The third system of linear equations

$A x = b$    (13)

uses the same singular matrix $A$ together with a different vector $b$. By checking condition (9), we find that system (13) is inconsistent: $A A^+ b \neq b$, i.e. a solution does not exist.
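With the solve_sle sketch above, all three situations can be reproduced on small illustrative systems of our own (these are not the systems of Appendix C):

    A_reg  = np.array([[2.0, 1.0], [1.0, 3.0]])    # regular matrix
    A_sing = np.array([[1.0, 2.0], [2.0, 4.0]])    # singular matrix

    print(solve_sle(A_reg,  [3.0, 7.0]))   # unique solution, zero projector
    print(solve_sle(A_sing, [1.0, 2.0]))   # consistent: infinitely many solutions
    print(solve_sle(A_sing, [1.0, 3.0]))   # inconsistent: returns None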

References

Ben-Israel, A. & T.N.E. Greville (1974), Generalized Inverses: Theory and Applications, New York (Wiley).

Greville, T.N.E. (1960), Some Applications of the Pseudoinverse of a Matrix, SIAM Review 2, 15-22.

Moore, E.H. (1920), On the Reciprocal of the General Algebraic Matrix (Abstract), Bulletin of the American Mathematical Society 26, 394-395.

Penrose, R. (1955), A Generalized Inverse for Matrices, Proceedings of the Cambridge Philosophical Society 51, 406-413.

Rao, C.R. (1962), A Note on a Generalized Inverse of a Matrix with Applications to Problems in Mathematical Statistics, Journal of the Royal Statistical Society B 24, 152-158.

Rao, C.R. & S.K. Mitra (1971), Generalized Inverse of Matrices and Its Applications, New York (Wiley).

Schmidt, K. & G. Trenkler (1998), Moderne Matrix-Algebra, Berlin (Springer).

Appendix A

DERIVE expressions for the example of Section 2: the matrix a is defined; ROW_REDUCE(a, IDENTITY_MATRIX(DIMENSION(a))) performs Steps 1 and 2; the matrices h, z and p are assigned; the product h · p is verified to be of the form (5); the g-inverse p · [I_r 0; 0 0] · z is computed; and the product of a, this g-inverse and a is shown to equal a, confirming condition (1).

Appendix B

DERIVE expressions for the example of Section 3: the same matrix a is defined; aplus := MPI(a) computes its Moore-Penrose inverse; and the products a · aplus · a, aplus · a · aplus, aplus · a and a · aplus are displayed, confirming that conditions (1) to (4) are satisfied.

Appendix C

DERIVE expressions for the three systems of Section 4: for the first system, a and b are defined and SOLVESLE(a, b) returns the unique solution; for the second system, a singular matrix a and a vector b are defined and SOLVESLE(a, b) returns the general solution, which depends on the arbitrary components of z.

For the third system, only the vector b is redefined and SOLVESLE(a, b) returns the message "A solution does not exist!".
