On the solving of matrix equation of Sylvester type


Computational Methods for Differential Equations
http://cmde.tabrizu.ac.ir
Vol. 7, No. 1, 2019, pp. 96-104

On the solving of matrix equation of Sylvester type

Fikret Ahmadali Aliev
Institute of Applied Mathematics, Baku State University, Baku, Azerbaijan.
E-mail: f_aliev@yahoo.com

Vladimir Boris Larin
Institute of Mechanics of the Academy of Sciences of Ukraine, Kiev, Ukraine.
E-mail: vblarin@gmail.com

Abstract. Solutions of two problems related to the matrix equation of Sylvester type are given. In the first problem, the procedures of linear matrix inequalities are used to construct the solution of this equation. In the second problem, when a matrix is given which is not a solution of this equation, it is required to find the solution of the original equation that most accurately approximates the given matrix. For this, an algorithm for constructing the general solution of the Sylvester matrix equation is used. The effectiveness of the proposed approaches is illustrated by examples.

Keywords. Matrix equation, Linear matrix inequalities (LMI), Sylvester matrix equation, Sylvester type matrix equation, Complex matrices.

2010 Mathematics Subject Classification. 15A23, 15A24, 15A39.

1. Introduction

In various problems of motion control, an important place is occupied by questions connected with the development of algorithms for solving matrix equations (see, for example, [7] and references therein). Algorithms for constructing the solution of Sylvester matrix equations continue to attract the attention of researchers [1-3, 6, 9]. Thus, in [9] two problems are considered for constructing a solution of a matrix equation of the Sylvester type:

    A X B + C X^T D = E.                                        (1.1)

In (1.1) the sought matrix X has dimension m x n; the superscript T hereinafter denotes transposition. In the first problem, it is required to find a solution of (1.1). The second problem is as follows.
Let X_f be the given matrix, which is not a solution

Received: 19 July 2018; Accepted: 9 December 2018. Corresponding author.

of (1.1). It is necessary to find the matrix X*, belonging to the set S_r of solutions of (1.1), which minimizes the following norm:

    ||X* - X_f||_F = min_{X in S_r} ||X - X_f||_F.              (1.2)

Hereinafter ||.||_F denotes the Frobenius norm (the Euclidean, or spherical, norm [8]). Below, algorithms for solving these problems are considered. To solve the first problem, an algorithm based on the procedures of linear matrix inequalities (LMI, [4]) is considered. For the second problem, the approach described in [1] is used.

2. General relations of LMI [4]

As noted in [4] (relations (2.3), (2.4)), the matrix inequality

    [ Q(x)    S(x) ]
    [ S(x)^T  R(x) ]  >  0,                                     (2.1)

where the matrices Q(x) = Q(x)^T, R(x) = R(x)^T, and S(x) depend linearly on x, is equivalent to the pair of matrix inequalities

    R(x) > 0,    Q(x) - S(x) R(x)^{-1} S(x)^T > 0.              (2.2)

Consider the LMI

    [ Z    T^T ]
    [ T    I   ]  >  0,    Z = Z^T,                             (2.3)

which, according to (2.1), (2.2), can be written in the form

    Z > T^T T.                                                  (2.4)

Hereinafter I is the identity matrix of the corresponding size. Relations (2.1)-(2.3) allow us to consider the standard LMI eigenvalue problem (relations (2.9), Section 2.2.2 of [4]), namely, the problem of minimizing a linear function c^T x (for example, c^T x = tr(Z), where tr(Z) is the trace of the matrix Z) under the conditions (2.3). To solve this problem, the mincx.m procedure of the MATLAB package [5] can be used.

3. The solution of equation (1.1)

We use the above relations for solving the first problem. Thus, it is necessary to find a matrix X satisfying equation (1.1). Let

    T = A X B + C X^T D - E
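The Schur-complement equivalence between (2.3) and (2.4) can be checked numerically. The following NumPy sketch (the matrices here are random illustrative data, not taken from the paper) builds a Z that strictly dominates T^T T and confirms that the block matrix of (2.3) is then positive definite, in agreement with (2.1)-(2.2):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random T, and a Z chosen so that Z > T^T T, as in (2.4).
T = rng.standard_normal((3, 2))
Z = T.T @ T + 0.1 * np.eye(2)

# The block matrix of (2.3): [[Z, T^T], [T, I]].
M = np.block([[Z, T.T], [T, np.eye(3)]])

# By the Schur complement (2.1)-(2.2), M > 0 iff I > 0 and Z - T^T T > 0.
eig_block = np.linalg.eigvalsh(M).min()
eig_schur = np.linalg.eigvalsh(Z - T.T @ T).min()
print(eig_block > 0, eig_schur > 0)
```

Both minimal eigenvalues come out positive, illustrating why minimizing tr(Z) subject to (2.3) drives T^T T, and hence T, toward zero.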

in (2.3). Using the mincx.m procedure of the MATLAB package, we minimize tr(Z) in inequality (2.4). For a sufficiently small value tr(Z) ~ 0, it can be assumed that T = 0; consequently, the corresponding value of X is a solution of (1.1). Let us illustrate the effectiveness of this algorithm for solving equation (1.1) by the following example.

Example 1. Suppose that in (1.1)

    A = [ 1  2  1 ]      B = [ 1  0 ]      C = [ 3  4 ]
        [ 2  4  2 ],         [ 0  1 ],         [ 5  6 ]
        [ 3  4  5 ]                            [ 7  8 ],

    D = [ 1  2 ]         E = [ 410   539 ]
        [ 3  6 ],            [ 646   849 ]
        [ 8  9 ]             [ 886  1163 ].

With these initial data, the exact solution of (1.1) is the matrix

    X_0 = [ 1  2 ]
          [ 3  4 ]
          [ 5  6 ].

Using the algorithm described above, we find the matrix X, to which correspond the following discrepancies:

    n_c = ||A X B + C X^T D - E||_F = 9.86e-13,
    n_x = ||X - X_0||_F = 1.48e-11.

Thus, the given estimates of the accuracy of the obtained solution of equation (1.1) show the effectiveness of using the LMI procedures in this problem. However, to solve the second problem considered in [9], it is expedient to use an approach based on the Kronecker product. As noted in [9], equation (1.1) can be represented as a system of linear algebraic equations:

    H vec(X) = vec(E).                                          (3.1)

In (3.1), H = B^T ⊗ A + (D^T ⊗ C) P(m, n), where the symbol ⊗ denotes the Kronecker product (A ⊗ B = (a_ij B), see [8]), and

    vec(X) = (x_11, x_21, ..., x_m1, x_12, x_22, ..., x_m2, ..., x_mn)^T ∈ R^{mn}.
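The data of Example 1 can be verified directly. The NumPy sketch below (a check of the stated data only, not of the LMI solver itself) confirms that X_0 satisfies (1.1) exactly:

```python
import numpy as np

# Data of Example 1 (B is the 2x2 identity); X0 is the exact solution.
A = np.array([[1, 2, 1], [2, 4, 2], [3, 4, 5]])
B = np.eye(2)
C = np.array([[3, 4], [5, 6], [7, 8]])
D = np.array([[1, 2], [3, 6], [8, 9]])
E = np.array([[410, 539], [646, 849], [886, 1163]])
X0 = np.array([[1, 2], [3, 4], [5, 6]])

# Residual of (1.1): A X B + C X^T D - E must vanish for X = X0.
residual = np.linalg.norm(A @ X0 @ B + C @ X0.T @ D - E)
print(residual)  # 0.0
```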

The matrix P(m, n) is defined as follows:

    P(m, n) = [ P_11^T  P_12^T  ...  P_1n^T ]
              [ P_21^T  P_22^T  ...  P_2n^T ]
              [  ...     ...    ...   ...   ]
              [ P_m1^T  P_m2^T  ...  P_mn^T ]  ∈ R^{mn x mn}.   (3.2)

In (3.2) the matrices P_ij, of dimension m x n, have the following structure: the element in position (i, j) is 1, all others are zeros. Denoting z = vec(X), b = vec(E), we rewrite system (3.1) as follows:

    H z = b.                                                    (3.3)

To construct the general solution of (3.3), one can use the approach of [1].

4. Algorithm for constructing the general solution of (1.1) [1]

Thus, the problem of constructing a general solution of equation (1.1) reduces to the problem of constructing a general solution of the linear algebraic equation (3.3). Consequently, the condition for the existence of a solution of (1.1) can be formulated as follows: for a solution of (1.1) to exist, the matrices H and [H b] must have the same rank [8] (to calculate the rank of a matrix, one can use the rank.m procedure). Let us perform the singular value decomposition of the matrix H (procedure svd.m):

    H = U S V^T.                                                (4.1)

In (4.1) U and V are orthogonal matrices, and S is a diagonal matrix whose first r (r is the rank of the matrix H) diagonal elements are nonzero. Consider the matrix U^T H = S V^T. Owing to the above structure of the matrix S, only the first r rows of the matrix U^T H are nonzero. Denote by A_g the matrix formed from the first r rows of U^T H. Multiplying both sides of equation (3.3) by the matrix U^T and keeping only the first r rows on both sides, we rewrite (3.3) as follows:

    A_g z = b_u.                                                (4.2)

Here the vector b_u is formed from the first r components of the vector U^T b. Note that the matrix A_g appearing in (4.2) has full rank. Therefore, to determine the general solution of (3.3), we can use the relations [1]:

    z = A_g^T (A_g A_g^T)^{-1} b_u + N ξ,    N = I - A_g^T (A_g A_g^T)^{-1} A_g.   (4.3)
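Relations (4.3) can be illustrated on a small full-row-rank system standing in for (4.2). The sketch below (random illustrative data, not from the paper) confirms that every member of the family (4.3) solves the system, and that the particular solution is orthogonal to the null-space term, which is why it has minimal norm:

```python
import numpy as np

rng = np.random.default_rng(1)

# A full-row-rank system A_g z = b_u (2 equations, 4 unknowns).
Ag = rng.standard_normal((2, 4))
bu = rng.standard_normal(2)

# Relations (4.3): minimum-norm particular solution plus null-space term.
Ag_pinv = Ag.T @ np.linalg.inv(Ag @ Ag.T)
N = np.eye(4) - Ag_pinv @ Ag

z0 = Ag_pinv @ bu
xi = rng.standard_normal(4)        # arbitrary free parameters
z = z0 + N @ xi                    # a member of the general solution

print(np.linalg.norm(Ag @ z - bu))   # ~0: z still solves the system
print(abs(z0 @ (N @ xi)))            # ~0: z0 is orthogonal to the null space
```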

Here the first term on the right-hand side defines the particular solution of (3.3) of minimal norm, and ξ is a vector of free parameters which determines the general solution of (3.3). Similarly to (4.1), let us perform the singular value decomposition of the matrix N:

    N = U_n S_n V_n^T.

Let the first q diagonal elements of the matrix S_n be nonzero. Consequently, the matrix N V_n = U_n S_n has only its first q columns nonzero. We denote the matrix consisting of the first q columns of N V_n by N_q (it defines the null space of the matrix A_g). We rewrite relation (4.3) as follows:

    z = A_g^T (A_g A_g^T)^{-1} b_u + N_q ξ_q,                   (4.4)

where the dimension of the vector of free parameters ξ_q is q. Determining the vector z according to (4.4) and using the reshape.m procedure, one can construct from the vector z the matrix X which defines the general solution of (1.1). Let us illustrate the procedure for constructing the general solution of (1.1) by the following example.

Example 2. Suppose that in (1.1)

    A = [ 1  2  1 ]      B = [ 1  0 ]      C = [ 1  2 ]
        [ 2  4  2 ],         [ 0  1 ],         [ 3  6 ]
        [ 3  4  5 ]                            [ 4  5 ],

    D = [ 1  2 ]         E = [  50   92 ]
        [ 3  6 ],            [ 138  260 ]
        [ 0  0 ]             [ 150  272 ].

Corresponding to these initial data, the matrix P(m, n), according to (3.2), has the form:

    P(m, n) = [ 1  0  0  0  0  0 ]
              [ 0  0  0  1  0  0 ]
              [ 0  1  0  0  0  0 ]
              [ 0  0  0  0  1  0 ]
              [ 0  0  1  0  0  0 ]
              [ 0  0  0  0  0  1 ].

The matrix H appearing in (3.1) has the following form:

    H = [ 2   5  1   2   6  0 ]
        [ 5  13  2   6  18  0 ]
        [ 7  16  5   5  15  0 ]
        [ 2   6  0   5  14  1 ]
        [ 6  18  0  14  40  2 ]
        [ 8  24  0  13  34  5 ].

Its rank is 4. The matrices A_g and A_g^T (A_g A_g^T)^{-1} appearing in (4.2), (4.3) and the vector b_u have the form:

    A_g = [ 12.9143  36.4667  2.2762  21.2417  59.2648  4.4604 ]
          [  3.8814   7.3687  4.2753   1.8256   4.8419  0.6350 ]
          [  0.3851   1.3498  2.5052   0.6525   1.1156  3.0731 ]
          [  0.0883   0.2489  0.5139   0.1737   0.0137  0.5074 ],

    b_u = [ 440.0004  20.5351  6.4192  5.3579 ]^T,

    A_g^T (A_g A_g^T)^{-1} = [ 0.0024  0.0338  0.0199  0.1421 ]
                             [ 0.0066  0.0642  0.0697  0.4004 ]
                             [ 0.0004  0.0372  0.1294  0.8266 ]
                             [ 0.0039  0.0159  0.0337  0.2794 ]
                             [ 0.0108  0.0422  0.0576  0.0221 ]
                             [ 0.0008  0.0055  0.1587  0.8162 ].

Further, we find that the matrix N_q appearing in (4.4) has the form

    N_q = [  0.9045   0      ]
          [ -0.3015   0      ]
          [ -0.3015   0      ]
          [  0        0.9045 ]
          [  0       -0.3015 ]
          [  0       -0.3015 ],                                 (4.5)

and, correspondingly, the dimension q of the vector of free parameters is equal to 2. To the minimal-norm solution z_0 of system (4.2), namely, to the solution at the zero value of the vector ξ_q in (4.4),

    z_0 = A_g^T (A_g A_g^T)^{-1} b_u = [2.3636  2.5455  4.5455  3.0909  3.6364  5.6364]^T,   (4.6)

there corresponds the following solution of equation (1.1):

    X_0 = [ 2.3636  3.0909 ]
          [ 2.5455  3.6364 ]
          [ 4.5455  5.6364 ].                                   (4.7)

If in (4.4) we select the following value of the vector ξ_q:

    ξ_q = [ -1.507  -1.206 ]^T,

then the matrix X has the form

    X = [ 1  2 ]
        [ 3  4 ]
        [ 5  6 ].                                               (4.8)

We note that both matrices (4.7) and (4.8) are solutions of equation (1.1).

5. The second problem of [9]

Thus, as noted in the Introduction, the second problem of [9] is as follows. Let a matrix X_f be given that does not belong to the solution set of equation (1.1). It is necessary to find the matrix X*, belonging to the set of solutions of equation (1.1), which most accurately approximates the matrix X_f (see relation (1.2)). Taking into account that for a matrix of dimension n x m (see (4.48) of [6])

    ||A||_F^2 = Σ_{i=1}^{n} Σ_{j=1}^{m} |a_ij|^2,

it can be argued that

    ||A||_F^2 = ||vec(A)||_2^2.

Thus, the problem is reduced to the approximation of the vector vec(X_f) by the vectors defined by (4.4), i.e., to the corresponding choice of the vector of free parameters. In other words, it is necessary to choose the elements of the vector ξ_q so as to minimize the discrepancy of the following linear system:

    N_q ξ_q = b_q,    b_q = vec(X_f) - A_g^T (A_g A_g^T)^{-1} b_u.   (5.1)

The corresponding solution of system (5.1) has the form (see (15.43) of [8]):

    ξ_q = (N_q^T N_q)^{-1} N_q^T b_q.                           (5.2)
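The computations of Example 2 can be reproduced with a short NumPy sketch. Here np.linalg.pinv stands in for the SVD-based construction of z_0 in (4.2)-(4.6) (it yields the same minimum-norm solution), the commutation function is an assumed helper implementing (3.2), and column-major (Fortran-order) reshaping mimics MATLAB's reshape.m:

```python
import numpy as np

def commutation(m, n):
    """P(m, n) of (3.2): P @ vec(X) = vec(X^T), with column-stacked vec."""
    P = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            P[i * n + j, j * m + i] = 1.0
    return P

# Data of Example 2.
A = np.array([[1, 2, 1], [2, 4, 2], [3, 4, 5]])
B = np.eye(2)
C = np.array([[1, 2], [3, 6], [4, 5]])
D = np.array([[1, 2], [3, 6], [0, 0]])
E = np.array([[50, 92], [138, 260], [150, 272]])
m, n = 3, 2

# H of (3.1) and the right-hand side b of (3.3).
H = np.kron(B.T, A) + np.kron(D.T, C) @ commutation(m, n)
b = E.flatten(order="F")                   # vec(E), column stacking

print(np.linalg.matrix_rank(H))            # 4, as stated in the text

# Minimum-norm solution z0 of (4.6) and the matrix X0 of (4.7).
z0 = np.linalg.pinv(H) @ b
X0 = z0.reshape((m, n), order="F")         # MATLAB-style reshape
print(np.round(X0, 4))

# The matrix X of (4.8) also lies in the solution set of (3.3).
X = np.array([[1, 2], [3, 4], [5, 6]])
print(np.linalg.norm(H @ X.flatten(order="F") - b))  # ~0
```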

Note that the solution of (5.2) is realized by the backslash (\) procedure of the MATLAB package. Thus, the expression for the matrix X*, which most accurately approximates the matrix X_f, has the form

    vec(X*) = A_g^T (A_g A_g^T)^{-1} b_u + N_q ξ_q = z_0 + N_q ξ_q,   (5.3)

where the vector ξ_q is defined by (5.2). Let us illustrate the algorithm described above for solving the second problem of [9] by the following example.

Example 3. In this example, the original data coincide with the original data of Example 2. It is assumed that

    X_f = [ 1  1 ]
          [ 1  1 ]
          [ 1  1 ].

This matrix does not satisfy equation (1.1). According to the results of Example 2, the matrix N_q is determined by expression (4.5). Taking into account (5.1), we have, according to (4.6), the following expression for the vector b_q:

    b_q = [-1.3636  -1.5455  -3.5455  -2.0909  -2.6364  -4.6364]^T.

Then, according to (5.2), we obtain the value of the vector ξ_q:

    ξ_q = [ 0.3015  0.3015 ]^T.

Using (5.3) and taking into account (5.2), we have:

    vec(X*) = [2.6364  2.4545  4.4545  3.3636  3.5455  5.5455]^T,

    X* = [ 2.6364  3.3636 ]
         [ 2.4545  3.5455 ]
         [ 4.4545  5.5455 ].

Note that the matrix X* satisfies equation (1.1) with an accuracy of order 10^{-13}.

6. Conclusion

Solutions of the problems considered in [9], connected with the solution of the Sylvester type matrix equation, are given. In the first problem, the procedures of linear matrix inequalities [4, 5] are used to construct the solution of this equation. In the second problem, the algorithm of [1] for constructing a general solution of the Sylvester matrix equation is used. The effectiveness of the proposed approaches is illustrated by examples.
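The second-problem algorithm of Section 5 can likewise be reproduced for Example 3 in NumPy. In this sketch the null-space basis is taken from the SVD of H (its columns may differ from N_q of (4.5) in sign, which does not affect X*), the commutation function is an assumed helper implementing (3.2), and np.linalg.lstsq plays the role of MATLAB's backslash in (5.2):

```python
import numpy as np

def commutation(m, n):
    """P(m, n) of (3.2): P @ vec(X) = vec(X^T), with column-stacked vec."""
    P = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            P[i * n + j, j * m + i] = 1.0
    return P

# Data of Examples 2 and 3.
A = np.array([[1, 2, 1], [2, 4, 2], [3, 4, 5]])
B = np.eye(2)
C = np.array([[1, 2], [3, 6], [4, 5]])
D = np.array([[1, 2], [3, 6], [0, 0]])
E = np.array([[50, 92], [138, 260], [150, 272]])
Xf = np.ones((3, 2))
m, n = 3, 2

H = np.kron(B.T, A) + np.kron(D.T, C) @ commutation(m, n)
b = E.flatten(order="F")

# Minimum-norm solution z0 and an orthonormal null-space basis Nq of H.
z0 = np.linalg.pinv(H) @ b
U, s, Vt = np.linalg.svd(H)
r = np.linalg.matrix_rank(H)
Nq = Vt[r:].T                              # columns span the null space

# (5.1)-(5.3): least-squares fit of the free parameters.
bq = Xf.flatten(order="F") - z0
xi_q, *_ = np.linalg.lstsq(Nq, bq, rcond=None)
X_star = (z0 + Nq @ xi_q).reshape((m, n), order="F")
print(np.round(X_star, 4))
print(np.linalg.norm(A @ X_star @ B + C @ X_star.T @ D - E))  # ~1e-13
```

The recovered X* matches the matrix reported in Example 3 and satisfies (1.1) to roundoff accuracy.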

References

[1] F. A. Aliev and V. B. Larin, On the construction of general solution of the generalized Sylvester equation, TWMS J. Appl. Eng. Math., 7 (2017), 1-6.
[2] F. A. Aliev, V. B. Larin, N. I. Velieva, and K. G. Gasimova, On periodic solution of generalized Sylvester matrix equations, Appl. Comput. Math., 16 (2017), 78-84.
[3] F. A. Aliev and V. B. Larin, A note about the solution of matrix Sylvester equation, TWMS J. Pure Appl. Math., 8 (2017), 251-255.
[4] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, SIAM, Philadelphia, 1994.
[5] P. Gahinet, A. Nemirovski, A. J. Laub, and M. Chilali, LMI Control Toolbox User's Guide, The MathWorks, Inc., 1995.
[6] V. B. Larin, About solving of Sylvester equation, Problems of Control and Inform., 1 (2009), 29-34.
[7] V. B. Larin, Solutions of matrix equations in mechanics and control problems, Appl. Mechanics, 45 (2009), 54-85.
[8] V. V. Voevodin and Yu. A. Kuznetsov, Matrices and Calculations, Nauka, Moscow, 1984.
[9] Yifen Ke and Changfeng Ma, The alternating direction methods for solving the Sylvester-type matrix equation AXB + CX^T D = E, Journal of Computational Mathematics, 35 (2017), 620-641.