A note on solutions of linear systems


Branko Malešević (a), Ivana Jovović (a), Milica Makragić (a), Biljana Radičić (b)

arXiv:1304.0789v2 [math.RA] 21 Jul 2013

(a) Faculty of Electrical Engineering, University of Belgrade, Bulevar kralja Aleksandra 73, 11000 Belgrade, Serbia
(b) Faculty of Civil Engineering, University of Belgrade, Bulevar kralja Aleksandra 73, 11000 Belgrade, Serbia

Abstract

In this paper we consider Rohde's general form of the {1}-inverse of a matrix A. The necessary and sufficient condition for the consistency of a linear system Ax = c is presented. We are also concerned with the minimal number of free parameters in Penrose's formula x = A^(1)c + (I - A^(1)A)y for obtaining the general solution of the linear system. These results are applied to finding the general solution of various homogeneous and non-homogeneous linear systems, as well as of different types of matrix equations.

Keywords: Generalized inverses, linear systems, matrix equations

1. Introduction

In this paper we consider a non-homogeneous linear system in n variables

    Ax = c,                                                   (1)

where A is an m x n matrix over the field C of rank a, and c is an m x 1 matrix over C. The set of all m x n matrices over the complex field C is denoted by C^{m x n}, m, n in N. The set of all m x n matrices over C of rank a is denoted by C_a^{m x n}. For simplicity of notation, we write A_i (A^j) for the i-th row (the j-th column) of a matrix A in C^{m x n}.

Email addresses: Branko Malešević <malesevic@etf.rs>, Ivana Jovović <ivana@etf.rs>, Milica Makragić <milica.makragic@etf.rs>, Biljana Radičić <biljana.radicic@yahoo.com>

Preprint submitted to ISRN Algebra, April 29, 2013

Any matrix X satisfying the equality AXA = A is called a {1}-inverse of A and is denoted by A^(1). The set of all {1}-inverses of the matrix A is denoted by A{1}. It can be shown that A{1} is not empty. If the n x n matrix A is invertible, then the equation AXA = A has exactly one solution A^{-1}, so the only {1}-inverse of the matrix A is its inverse A^{-1}, i.e. A{1} = {A^{-1}}. Otherwise, a {1}-inverse of the matrix A is not uniquely determined. For more information about {1}-inverses and various other generalized inverses we recommend A. Ben-Israel and T.N.E. Greville [1] and S.L. Campbell and C.D. Meyer [2].

For each matrix A in C_a^{m x n} there are regular matrices P in C^{n x n} and Q in C^{m x m} such that

    QAP = E_a = [ I_a 0 ]
                [ 0   0 ],                                    (2)

where I_a is the a x a identity matrix. It can easily be seen that every {1}-inverse of the matrix A can be represented in the form

    A^(1) = P [ I_a U ] Q,                                    (3)
              [ V   W ]

where U = [u_ij], V = [v_ij] and W = [w_ij] are arbitrary matrices of the corresponding dimensions a x (m-a), (n-a) x a and (n-a) x (m-a), with mutually independent entries; see C.A. Rohde [8] and V. Perić [7].

We will generalize the results of N.S. Urquhart [9]. Firstly, we explore the minimal number of free parameters in Penrose's formula x = A^(1)c + (I - A^(1)A)y for obtaining the general solution of the system (1). Then, we consider relations among the elements of A^(1) needed to obtain the general solution of the system (1) in the form x = A^(1)c, for c ≠ 0. This construction has previously been used by B. Malešević and B. Radičić [3] (see also [4] and [5]). At the end of the paper we give an application of these results to the matrix equation AXB = C.

2. The main result

In this section we indicate how the technique of a {1}-inverse may be used to obtain the necessary and sufficient condition for the existence of a general solution of a non-homogeneous linear system.
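Rohde's representation (3) can be checked numerically. The sketch below (an illustration, not part of the paper) uses the concrete matrices A, Q, P of Example 2.4, for which QAP = E_a with a = rank A = 2; since m = a = 2 here, the blocks U and W are empty and only V = [v_11 v_12] remains free.

```python
import numpy as np

# Every X = P [[I_a], [V]] Q is a {1}-inverse of A, i.e. A X A = A.
A = np.array([[1., 2., 3.],
              [4., 5., 6.]])
Q = np.array([[ 1., 0.],
              [-4., 1.]])
P = np.array([[1.,  2/3,  1.],
              [0., -1/3, -2.],
              [0.,  0.,   1.]])
a = np.linalg.matrix_rank(A)            # a = 2
E = np.zeros((2, 3)); E[:a, :a] = np.eye(a)
assert np.allclose(Q @ A @ P, E)        # the decomposition (2) holds

rng = np.random.default_rng(0)
for _ in range(5):                      # arbitrary V gives a valid {1}-inverse
    V = rng.standard_normal((1, a))
    X = P @ np.vstack([np.eye(a), V]) @ Q
    assert np.allclose(A @ X @ A, A)
print("all Rohde {1}-inverses satisfy AXA = A")
```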

Lemma 2.1. The non-homogeneous linear system (1) has a solution if and only if the last m-a coordinates of the vector c' = Qc are zeros, where Q in C^{m x m} is a regular matrix such that (2) holds.

Proof: The statement follows immediately from the Kronecker-Capelli theorem. We provide a new proof by using a {1}-inverse of the system matrix A. The system (1) has a solution if and only if c = AA^(1)c; see R. Penrose [6]. Since A^(1) is described by equation (3), it follows that

    AA^(1) = A P [ I_a U ] Q = Q^{-1} [ I_a U ] Q.
                 [ V   W ]            [ 0   0 ]

Hence, writing c' = Qc = [ c'_a ; c'_{m-a} ], we have the following equivalences:

    c = AA^(1)c  <=>  (I - AA^(1))c = 0
                 <=>  Q^{-1} [ 0  -U      ] Qc = 0
                             [ 0  I_{m-a} ]
                 <=>  [ 0  -U      ] [ c'_a     ] = 0
                      [ 0  I_{m-a} ] [ c'_{m-a} ]
                 <=>  -U c'_{m-a} = 0  and  c'_{m-a} = 0.

Furthermore, we conclude c = AA^(1)c <=> c'_{m-a} = 0.

Theorem 2.2. The vector x = A^(1)c + (I - A^(1)A)y, where y in C^{n x 1} is an arbitrary column, is the general solution of the system (1) if and only if the {1}-inverse A^(1) of the system matrix A has the form (3) for arbitrary matrices U and W, and the rows of the matrix V(c'_a - y'_a) + y'_{n-a} are n-a free parameters, where Qc = c' = [ c'_a ; 0 ] and P^{-1}y = y' = [ y'_a ; y'_{n-a} ].

Proof: Since the {1}-inverse A^(1) of the matrix A has the form (3), the vector x = A^(1)c + (I - A^(1)A)y can be represented in the form

    x = P [ I_a U ] Qc + ( I - P [ I_a U ] QA ) y
          [ V   W ]              [ V   W ]

      = P [ I_a U ] c' + ( I - P [ I_a U ] QAP P^{-1} ) y.
          [ V   W ]              [ V   W ]
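The consistency test of Lemma 2.1 is easy to run in NumPy. A hedged sketch, using A = [[1, 2], [2, 4]] (rank a = 1) with the regular matrix Q = [[1, 0], [-2, 1]] that brings A to the form (2); the right-hand sides below are chosen for illustration.

```python
import numpy as np

# Lemma 2.1: Ax = c is consistent iff the last m - a coordinates of
# c' = Qc vanish.
A = np.array([[1., 2.],
              [2., 4.]])
Q = np.array([[ 1., 0.],
              [-2., 1.]])
a = np.linalg.matrix_rank(A)            # a = 1

def consistent(c):
    """True iff the last m - a coordinates of Qc are (numerically) zero."""
    return np.allclose((Q @ c)[a:], 0.0)

print(consistent(np.array([1., 2.])))   # c in the column space of A
print(consistent(np.array([1., 3.])))   # c outside the column space
```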

According to Lemma 2.1 and (2), we have

    x = P [ I_a U ] [ c'_a ] + ( I - P [ I_a U ] [ I_a 0 ] P^{-1} ) y.
          [ V   W ] [ 0    ]           [ V   W ] [ 0   0 ]

Furthermore, we obtain

    x = P [ c'_a   ] + ( I - P [ I_a 0 ] P^{-1} ) y
          [ V c'_a ]           [ V   0 ]

      = P [ c'_a   ] + P ( I - [ I_a 0 ] ) P^{-1} y
          [ V c'_a ]           [ V   0 ]

      = P [ c'_a   ] + P [ 0   0       ] [ y'_a     ]
          [ V c'_a ]     [ -V  I_{n-a} ] [ y'_{n-a} ],

where y' = P^{-1}y. We now conclude

    x = P [ c'_a                  ] = P [ c'_a                    ]
          [ V c'_a - V y'_a + y'_{n-a} ]    [ V(c'_a - y'_a) + y'_{n-a} ].

Therefore, since the matrix P is regular, we deduce that x is the general solution of the system (1) if and only if the rows of the matrix V(c'_a - y'_a) + y'_{n-a} are n-a free parameters.

Corollary 2.3. The vector x = (I - A^(1)A)y, where y in C^{n x 1} is an arbitrary column, is the general solution of the homogeneous linear system Ax = 0, A in C^{m x n}, if and only if the {1}-inverse A^(1) of the system matrix A has the form (3) for arbitrary matrices U and W, and the rows of the matrix -V y'_a + y'_{n-a} are n-a free parameters, where P^{-1}y = y' = [ y'_a ; y'_{n-a} ].

Example 2.4. Consider the homogeneous linear system

    x_1 + 2x_2 + 3x_3 = 0
    4x_1 + 5x_2 + 6x_3 = 0.                                   (4)
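Penrose's formula from Theorem 2.2 can be exercised numerically. A sketch (illustrative, with one arbitrary choice of the free block V): for the consistent system with A, Q, P as in Example 2.4 and right-hand side c = (7, 8), every choice of y yields a solution.

```python
import numpy as np

# x = A1 c + (I - A1 A) y solves Ax = c for every y, where A1 is any
# {1}-inverse of A built from Rohde's form (3).
A = np.array([[1., 2., 3.], [4., 5., 6.]])
c = np.array([7., 8.])
Q = np.array([[1., 0.], [-4., 1.]])
P = np.array([[1., 2/3, 1.], [0., -1/3, -2.], [0., 0., 1.]])
V = np.array([[0.3, -1.7]])                 # one arbitrary choice of V
A1 = P @ np.vstack([np.eye(2), V]) @ Q      # a {1}-inverse of A

rng = np.random.default_rng(1)
for _ in range(5):
    y = rng.standard_normal(3)
    x = A1 @ c + (np.eye(3) - A1 @ A) @ y   # Penrose's formula
    assert np.allclose(A @ x, c)
print("every choice of y yields a solution of Ax = c")
```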

The system matrix is

    A = [ 1 2 3 ]
        [ 4 5 6 ].

For the regular matrices

    Q = [  1 0 ]        P = [ 1  2/3   1 ]
        [ -4 1 ]  and       [ 0 -1/3  -2 ]
                            [ 0  0     1 ]

equality (2) holds. Rohde's general {1}-inverse A^(1) of the system matrix A is of the form

    A^(1) = P [ 1    0    ] Q.
              [ 0    1    ]
              [ v_11 v_12 ]

According to Corollary 2.3, the general solution of the system (4) is of the form

    x = P [ 0     0     0 ] P^{-1} [ y_1 ]
          [ 0     0     0 ]        [ y_2 ]
          [ -v_11 -v_12 1 ]        [ y_3 ],

where

    P^{-1} = [ 1  2  3 ]
             [ 0 -3 -6 ]
             [ 0  0  1 ].

Therefore, we obtain

    x = P [ 0 ]
          [ 0 ]
          [ -v_11 y_1 - (2v_11 - 3v_12) y_2 - (3v_11 - 6v_12 - 1) y_3 ].

If we take α = -v_11 y_1 - (2v_11 - 3v_12) y_2 - (3v_11 - 6v_12 - 1) y_3 as a parameter, we get the general solution

    x = P [ 0 ] = [  α  ] = α [  1 ]
          [ 0 ]   [ -2α ]     [ -2 ]
          [ α ]   [  α  ]     [  1 ].
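Example 2.4 is easy to confirm numerically; the following illustrative check verifies that (1, -2, 1) spans the null space of A, so the general solution really is a one-parameter family.

```python
import numpy as np

# The null space of A = [[1,2,3],[4,5,6]] is spanned by (1, -2, 1),
# so the general solution of (4) is x = alpha * (1, -2, 1).
A = np.array([[1., 2., 3.], [4., 5., 6.]])
v = np.array([1., -2., 1.])
assert np.allclose(A @ v, 0.0)                      # v solves the system
assert A.shape[1] - np.linalg.matrix_rank(A) == 1   # n - a = 1 free parameter
print("general solution: alpha * (1, -2, 1)")
```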

Corollary 2.5. The vector x = A^(1)c is the general solution of the system (1) if and only if the {1}-inverse A^(1) of the system matrix A has the form (3) for arbitrary matrices U and W, and the rows of the matrix V c'_a are free parameters, where Qc = c' = [ c'_a ; 0 ].

Remark 2.6. A similar result can be found in the paper of B. Malešević and B. Radičić [3].

Example 2.7. Consider the non-homogeneous linear system

    x_1 + 2x_2 + 3x_3 = 7
    4x_1 + 5x_2 + 6x_3 = 8.                                   (5)

According to Corollary 2.5, the general solution of the system (5) is of the form

    x = P [ 1    0    ] Q [ 7 ] = P [ 7               ]
          [ 0    1    ]   [ 8 ]     [ -20             ]
          [ v_11 v_12 ]             [ 7v_11 - 20v_12 ].

If we take α = 7v_11 - 20v_12 as a parameter, we obtain the general solution of the system

    x = P [ 7   ] = [ -19/3 + α ]
          [ -20 ]   [ 20/3 - 2α ]
          [ α   ]   [ α         ].

We are now concerned with the matrix equation

    AX = C,                                                   (6)

where A in C^{m x n}, X in C^{n x k} and C in C^{m x k}.

Lemma 2.8. The matrix equation (6) has a solution if and only if the last m-a rows of the matrix C' = QC are zeros, where Q in C^{m x m} is a regular matrix such that (2) holds.
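The one-parameter family obtained in Example 2.7 can be verified directly; this illustrative check substitutes a few values of the parameter α into the system (5).

```python
import numpy as np

# x = (-19/3 + a, 20/3 - 2a, a) solves x1 + 2x2 + 3x3 = 7,
# 4x1 + 5x2 + 6x3 = 8 for every value of the parameter a.
A = np.array([[1., 2., 3.], [4., 5., 6.]])
c = np.array([7., 8.])
for alpha in (-1.0, 0.0, 2.5):
    x = np.array([-19/3 + alpha, 20/3 - 2*alpha, alpha])
    assert np.allclose(A @ x, c)
print("one-parameter family solves the system")
```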

Proof: If we write X = [ X^1 X^2 ... X^k ] and C = [ C^1 C^2 ... C^k ], then we can regard the matrix equation (6) as the system of matrix equations

    AX^1 = C^1, AX^2 = C^2, ..., AX^k = C^k.

By Lemma 2.1, each of the equations AX^i = C^i, 1 <= i <= k, has a solution if and only if the last m-a coordinates of the vector C'^i = QC^i are zeros. Thus, the previous system has a solution if and only if the last m-a rows of the matrix C' = QC are zeros, which establishes that the matrix equation (6) has a solution if and only if all entries of the last m-a rows of the matrix C' are zeros.

Theorem 2.9. The matrix X = A^(1)C + (I - A^(1)A)Y in C^{n x k}, where Y in C^{n x k} is an arbitrary matrix, is the general solution of the matrix equation (6) if and only if the {1}-inverse A^(1) of the system matrix A has the form (3) for arbitrary matrices U and W, and the entries of the matrix V(C'_a - Y'_a) + Y'_{n-a} are mutually independent free parameters, where QC = C' = [ C'_a ; 0 ] and P^{-1}Y = Y' = [ Y'_a ; Y'_{n-a} ].

Proof: Applying Theorem 2.2 to each system AX^i = C^i, 1 <= i <= k, we obtain that

    X^i = P [ C'^i_a                          ]
            [ V(C'^i_a - Y'^i_a) + Y'^i_{n-a} ]

is the general solution of the i-th system if and only if the rows of the matrix V(C'^i_a - Y'^i_a) + Y'^i_{n-a} are n-a free parameters. Assembling these individual solutions together, we get that

    X = P [ C'_a                      ]
          [ V(C'_a - Y'_a) + Y'_{n-a} ]

is the general solution of the matrix equation (6) if and only if the entries of the matrix V(C'_a - Y'_a) + Y'_{n-a} are (n-a)k mutually independent free parameters.

From now on we proceed with the study of the non-homogeneous linear system of the form

    xB = d,                                                   (7)

where B is an n x m matrix over the field C of rank b, and d is a 1 x m matrix over C. Let R in C^{n x n} and S in C^{m x m} be regular matrices such that

    RBS = E_b = [ I_b 0 ]
                [ 0   0 ].                                    (8)

A {1}-inverse of the matrix B can be represented in Rohde's form

    B^(1) = S [ I_b M ] R,                                    (9)
              [ N   K ]

where M = [m_ij], N = [n_ij] and K = [k_ij] are arbitrary matrices of the corresponding dimensions b x (n-b), (m-b) x b and (m-b) x (n-b), with mutually independent entries.

Lemma 2.10. The non-homogeneous linear system (7) has a solution if and only if the last m-b elements of the row d' = dS are zeros, where S in C^{m x m} is a regular matrix such that (8) holds.

Proof: Transposing the system (7), we obtain the system B^T x^T = d^T, and transposing the matrix equation (8), we obtain S^T B^T R^T = E_b^T. According to Lemma 2.1, the system B^T x^T = d^T has a solution if and only if the last m-b coordinates of the vector S^T d^T are zeros, i.e. if and only if the last m-b elements of the row d' = dS are zeros.

Theorem 2.11. The row x = dB^(1) + y(I - BB^(1)), where y in C^{1 x n} is an arbitrary row, is the general solution of the system (7) if and only if the {1}-inverse B^(1) of the system matrix B has the form (9) for arbitrary matrices N and K, and the columns of the matrix (d'_b - y'_b)M + y'_{n-b} are n-b free parameters, where dS = d' = [ d'_b 0 ] and yR^{-1} = y' = [ y'_b y'_{n-b} ].

Proof: The basic idea of the proof is to transpose the system (7) and to apply Theorem 2.2. The {1}-inverse of the matrix B^T is equal to the transpose of the {1}-inverse of the matrix B. Hence, we have

    (B^T)^(1) = (B^(1))^T = ( S [ I_b M ] R )^T = R^T [ I_b N^T ] S^T.
                                [ N   K ]             [ M^T K^T ]

We can now proceed analogously to the proof of Theorem 2.2 to obtain that

    x^T = R^T [ d'^T_b                            ]
              [ M^T(d'^T_b - y'^T_b) + y'^T_{n-b} ]

is the general solution of the system B^T x^T = d^T if and only if the rows of the matrix M^T(d'^T_b - y'^T_b) + y'^T_{n-b} are n-b free parameters. Therefore,

    x = [ d'_b   (d'_b - y'_b)M + y'_{n-b} ] R

is the general solution of the system (7) if and only if the columns of the matrix (d'_b - y'_b)M + y'_{n-b} are n-b free parameters.

Analogous corollaries hold for Theorem 2.11. We now deal with the matrix equation

    XB = D,                                                   (10)

where X in C^{k x n}, B in C^{n x m} and D in C^{k x m}.

Lemma 2.12. The matrix equation (10) has a solution if and only if the last m-b columns of the matrix D' = DS are zeros, where S in C^{m x m} is a regular matrix such that (8) holds.

Theorem 2.13. The matrix X = DB^(1) + Y(I - BB^(1)) in C^{k x n}, where Y in C^{k x n} is an arbitrary matrix, is the general solution of the matrix equation (10) if and only if the {1}-inverse B^(1) of the system matrix B has the form (9) for arbitrary matrices N and K, and the entries of the matrix (D'_b - Y'_b)M + Y'_{n-b} are mutually independent free parameters, where DS = D' = [ D'_b 0 ] and YR^{-1} = Y' = [ Y'_b Y'_{n-b} ].
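The solution formula of Theorem 2.13 can be illustrated numerically. In this sketch the Moore-Penrose inverse (NumPy's pinv) stands in for B^(1), since it is one particular {1}-inverse; B is a rank-one matrix and D is a row chosen in the row space of B so that the equation is consistent.

```python
import numpy as np

# X = D B1 + Y (I - B B1) solves XB = D for every Y, provided the
# equation is consistent and B1 is a {1}-inverse of B.
B = np.array([[ 1.,  2.,  1.],
              [-1., -2., -1.],
              [ 1.,  2.,  1.]])
D = np.array([[2., 4., 2.]])            # row in the row space of B
B1 = np.linalg.pinv(B)                  # pinv satisfies B B1 B = B
rng = np.random.default_rng(4)
for _ in range(5):
    Y = rng.standard_normal((1, 3))
    X = D @ B1 + Y @ (np.eye(3) - B @ B1)
    assert np.allclose(X @ B, D)
print("XB = D solved for every Y")
```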

3. An application

In this section we briefly sketch the properties of the general solution of the matrix equation

    AXB = C,                                                  (11)

where A in C^{m x n}, X in C^{n x k}, B in C^{k x l} and C in C^{m x l}. If we denote the matrix product XB by Y, then the matrix equation (11) becomes

    AY = C.                                                   (12)

According to Theorem 2.9, the general solution of the system (12) can be presented as the product of the matrix P and a matrix whose first a = rank(A) rows are the same as those of the matrix QC and whose last n-a rows have (n-a)l mutually independent free parameters as entries, where P and Q are regular matrices such that QAP = E_a. Thus, we now turn to the system of the form

    XB = D.                                                   (13)

By Theorem 2.13 we conclude that the general solution of the system (13) can be presented as the product of a matrix whose first b = rank(B) columns are equal to the first b columns of the matrix DS and whose remaining columns have mutually independent free parameters as entries, and the matrix R, for regular matrices R and S such that RBS = E_b. Therefore, the general solution of the system (11) is of the form

    X = P [ G_ab F ] R,
          [ H    L ]

where G_ab is the submatrix of the matrix QCS corresponding to the first a rows and the first b columns, and the entries of the matrices F, H and L are nk - ab free parameters. We illustrate this in the following example.

Example 3.1. We consider the matrix equation AXB = C, where

    A = [ 1 2 ]     B = [  1  2  1 ]          C = [ 1 2 1 ]
        [ 2 4 ],        [ -1 -2 -1 ]  and         [ 2 4 2 ].
                        [  1  2  1 ]

If we take Y = XB, we obtain the system AY = C.

It is easy to check that the matrix A is of rank a = 1 and that for the matrices

    Q = [  1 0 ]        P = [ 1 -2 ]
        [ -2 1 ]  and       [ 0  1 ]

the equality QAP = E_a holds. Based on Theorem 2.9, the equation AY = C can be rewritten as the system

    AY^1 = [ 1 ],  AY^2 = [ 2 ],  AY^3 = [ 1 ].
           [ 2 ]          [ 4 ]          [ 2 ]

Combining Theorem 2.2 with the equality

    [ c'^1 c'^2 c'^3 ] = Q [ 1 2 1 ] = [ 1 2 1 ]
                           [ 2 4 2 ]   [ 0 0 0 ]

yields

    Y^1 = P [ 1                     ],  Y^2 = P [ 2                     ],  Y^3 = P [ 1                     ],
            [ v_11(1 - z_11) + z_21 ]           [ v_11(2 - z_12) + z_22 ]           [ v_11(1 - z_13) + z_23 ]

for an arbitrary matrix Z = [ z_11 z_12 z_13 ; z_21 z_22 z_23 ]. Taking α = v_11(1 - z_11) + z_21, β = v_11(2 - z_12) + z_22 and γ = v_11(1 - z_13) + z_23 as parameters, the general solution of the system AY = C is

    Y = P [ 1 2 1 ]
          [ α β γ ].

From now on, we consider the system XB = D for

    D = P [ 1 2 1 ] = [ 1-2α 2-2β 1-2γ ]
          [ α β γ ]   [ α    β    γ    ].

There are regular matrices

    R = [  1 0 0 ]        S = [ 1 -2 -1 ]
        [  1 1 0 ]  and       [ 0  1  0 ]
        [ -1 0 1 ]            [ 0  0  1 ]

such that RBS = E_b holds. Since the rank of the matrix B is b = 1, according to Lemma 2.12 all entries of the last two columns of the matrix D' = DS must be zeros, i.e. we have γ = α and β = 2α. Hence, we get that the matrix D is of the form

    D = [ 1-2α 2-4α 1-2α ]
        [ α    2α   α    ].

Applying Theorem 2.13, we obtain

    X = [ 1-2α  (1-2α-t_11)m_11 + t_12  (1-2α-t_11)m_12 + t_13 ] R = [ 1-2α β_1 β_2 ] R,
        [ α     (α-t_21)m_11 + t_22     (α-t_21)m_12 + t_23    ]     [ α    γ_1 γ_2 ]

for an arbitrary matrix T = [ t_11 t_12 t_13 ; t_21 t_22 t_23 ]. Finally, the solution of the system AXB = C is

    X = [ 1-2α β_1 β_2 ] R.
        [ α    γ_1 γ_2 ]

Acknowledgment. Research is partially supported by the Ministry of Science and Education of the Republic of Serbia, Grant No. 174032.

References

[1] A. Ben-Israel and T.N.E. Greville, Generalized Inverses: Theory and Applications, Springer, New York, 2003.
[2] S.L. Campbell and C.D. Meyer, Generalized Inverses of Linear Transformations, SIAM Classics in Applied Mathematics, Philadelphia, 2009.
[3] B. Malešević and B. Radičić, Non-reproductive and reproductive solutions of some matrix equations, Proceedings of the International Conference Mathematical and Informational Technologies 2011, Vrnjačka Banja, 2011, pp. 246-251.
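The final family of solutions in Example 3.1 can be checked numerically; this illustrative sketch draws random values for the five parameters α, β_1, β_2, γ_1, γ_2 and verifies AXB = C each time.

```python
import numpy as np

# For every choice of the parameters, X = [[1-2a, b1, b2], [a, g1, g2]] R
# satisfies A X B = C, with A, B, C, R as in Example 3.1.
A = np.array([[1., 2.], [2., 4.]])
B = np.array([[ 1.,  2.,  1.],
              [-1., -2., -1.],
              [ 1.,  2.,  1.]])
C = np.array([[1., 2., 1.], [2., 4., 2.]])
R = np.array([[ 1., 0., 0.],
              [ 1., 1., 0.],
              [-1., 0., 1.]])
rng = np.random.default_rng(3)
for _ in range(5):
    al, b1, b2, g1, g2 = rng.standard_normal(5)
    X = np.array([[1 - 2*al, b1, b2], [al, g1, g2]]) @ R
    assert np.allclose(A @ X @ B, C)
print("AXB = C holds for all parameter choices")
```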

[4] B. Malešević and B. Radičić, Reproductive and non-reproductive solutions of the matrix equation AXB = C, arXiv:1108.4867.
[5] B. Malešević and B. Radičić, Some considerations of matrix equations using the concept of reproductivity, Kragujevac J. Math., Vol. 36 (2012), No. 1, pp. 151-161.
[6] R. Penrose, A generalized inverse for matrices, Math. Proc. Cambridge Philos. Soc., Vol. 51 (1955), pp. 406-413.
[7] V. Perić, Generalized reciprocals of matrices (in Serbo-Croatian), Matematika (Zagreb), Vol. 11 (1982), No. 1, pp. 40-57.
[8] C.A. Rohde, Contributions to the theory, computation and application of generalized inverses, doctoral dissertation, University of North Carolina at Raleigh, 1964.
[9] N.S. Urquhart, The nature of the lack of uniqueness of generalized inverse matrices, SIAM Review, Vol. 11 (1969), No. 2, pp. 268-271.