Solvability of Linear Matrix Equations in a Symmetric Matrix Variable


Maurício C. de Oliveira and J. William Helton

(M. C. de Oliveira is with the Department of Mechanical and Aerospace Engineering, University of California San Diego, La Jolla, CA, USA, mauricio@ucsd.edu. J. W. Helton is with the Department of Mathematics, University of California San Diego, La Jolla, CA, USA, helton@math.ucsd.edu. Partly supported by NSF DMS grants and by the Ford Motor Company.)

Abstract— We study the solvability of generalized linear matrix equations of the Lyapunov type in which the number of terms involving products of the problem data with the matrix variable can be arbitrary. We show that, contrary to what happens with standard Lyapunov equations, which have only two terms, these generalized matrix equations can have unique solutions even though the associated matrix representation in terms of Kronecker products is singular. We show how a simple modification to the equation leads to a matrix representation that does not suffer from this deficiency.

I. INTRODUCTION

Consider linear equations of the form

    L(X) := Σ_{i=1}^{M} (A_i X B_i^T + B_i X A_i^T) = C = C^T    (1)

with real matrix coefficients A_i, B_i and a square real matrix unknown X. These and similar equations in several unknowns often occur when one deals with problems having matrix unknowns, e.g. [1], [2]. Observe that due to the symmetry of the right-hand side, solutions will always be symmetric, that is,

    X = X^T,    (2)

even if this constraint is not explicitly stated. Efficient numerical solution is little developed for M ≥ 2 and very highly developed when M = 1. There, a common tool (at least implicitly) is to convert (1) into a conventional linear system

    L vec(X) = vec(C)    (3)

where, using Kronecker products, the matrix L is

    L := Σ_{i=1}^{M} (A_i ⊗ B_i + B_i ⊗ A_i).    (4)

We call this the vec-representation of L. It translates the structure of (1) into a natural Kronecker product structure (4). A key property is: for M = 1, equation (1) has a solution and this solution is unique if and only if the mapping L (equivalently, the matrix L) is invertible. For M ≥ 2, invertibility of L and L can fail even when (1) has a solution and this solution is unique. The catch here is that a solution to (1) may exist and be unique while L is not invertible once the constraint (2) is ignored. A brute-force fix to this problem is to produce linear equations

    L_sym x_sym = c_sym    (5)

which do not have a nice (Kronecker product) structure and where x_sym is a vector of dimension

    k = m(m + 1)/2.    (6)

Note that for m > 1 we have k < m², with k ≈ m²/2 for large m. This is bad because sparsity patterns in the coefficient matrices are likely destroyed. One contribution of the present paper is to show that if L is not invertible but a square equation of the form (1) has a unique solution, then the modified problem

    Σ_{i=1}^{M} (A_i X B_i^T + B_i X A_i^T) + α(X − X^T) = C = C^T    (7)

will have this same solution, independent of the parameter α, and will be invertible for all but finitely many α; see Theorem 3.3. Thus we have a highly structured set of well-posed linear equations available for numerical techniques. A similar result in terms of the rank of L is produced when the coefficients A_i, B_i are not square. See Theorem 3.4 for the generalization of (7). Future work treats systems of equations in more variables and numerical algorithms for which our formulas are well suited. The interested reader is also referred to the work [3] for related results in the case of quadratic matrix equations and to the work [4] for a numerical method for solving coupled Sylvester equations.
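To make the vec-representation (3)–(4) concrete, the following minimal NumPy sketch (ours, not from the paper; the function names are our own) assembles the matrix L of (4) and solves a standard Lyapunov instance (M = 1, B_1 = I) through it:

```python
import numpy as np

def vec(X):
    # Stack the columns of X into one long vector (column-major order).
    return X.flatten(order="F")

def unvec(x, m):
    # Inverse of vec for a square m x m matrix.
    return x.reshape((m, m), order="F")

def build_L(As, Bs):
    # Vec-representation (4): L = sum_i (A_i kron B_i + B_i kron A_i),
    # so that L @ vec(X) = vec(sum_i A_i X B_i^T + B_i X A_i^T).
    m = As[0].shape[0]
    L = np.zeros((m * m, m * m))
    for A, B in zip(As, Bs):
        L += np.kron(A, B) + np.kron(B, A)
    return L

# A standard Lyapunov equation (M = 1, A_1 = A, B_1 = I): A X + X A^T = C.
rng = np.random.default_rng(0)
m = 4
A = -np.eye(m) + 0.1 * rng.standard_normal((m, m))   # well inside the stable region
C = rng.standard_normal((m, m))
C = C + C.T                                          # symmetric right-hand side
L = build_L([A], [np.eye(m)])
X = unvec(np.linalg.solve(L, vec(C)), m)
assert np.allclose(X, X.T)                           # solution is automatically symmetric
assert np.allclose(A @ X + X @ A.T, C)
```

For M = 1 this reproduces the classical correspondence between (1) and (3); for M ≥ 2 the same code builds L, but, as discussed next, L may be singular even when a unique symmetric solution exists.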
II. SOME PERSPECTIVE

In the above equation the A's and B's (the problem data) are real matrices of compatible dimensions and M ≥ 1 is an integer. We are especially interested in solving equations where M ≥ 2 is fixed and usually small, say M < 20, and where the coefficient matrices come from systems problems. One concrete example is the set of equations arising from interior-point algorithms applied to convex matrix optimization problems such as the ones described in [2]. In these problems the A's and B's can be highly structured, e.g. sparse, low rank, etc., and we would like to preserve some of this structure when solving equation (1). Subsequent work will treat systems of such matrix equations in several unknowns.

A particularly well studied instance of equation (1) is the Lyapunov equation

    A X + X A^T = C = C^T    (8)

in which M = 1. In this particular case, where all matrices are square of dimension m, the best known algorithms for computing the solution to the Lyapunov equation (8) will compute X in O(m³) operations (e.g. the Bartels–Stewart algorithm [5]). A closer look at these algorithms reveals that the matrices A and C are first transformed into a highly structured form, e.g. A is transformed into Ã in Schur form using orthogonal transformations, and a solution is computed after solving an equation of the form

    (I ⊗ Ã + Ã ⊗ I) vec(X̃) = vec(C̃).    (9)

Note that the above equation is never formed explicitly; its solution is computed on a column-by-column basis. It is also well known that a solution to equation (8) will exist and be unique if and only if a solution to (9) exists and is unique. We shall prove that this is the case for all equations of the form (1) with M = 1 in Lemma 4.6.

As we anticipated, the situation is more involved when M ≥ 2, where the matrix L in (4) may be rank-deficient even when the solution to equation (1) exists and is unique. The best illustration is a simple example.

Example 1: Let L(X) = A^T X + X A + B^T X B with M = m = 2. For the 2 × 2 data A and B of this example (entries lost in transcription), the 4 × 4 matrix L is singular. Nevertheless, if equations (5) are formed taking into account the symmetry X = X^T in (2), one can verify that L_sym is non-singular, hence that the solution to any equation where L is as in Example 1 with a symmetric right-hand side C = C^T will exist and be unique.

III. CROSS-SYMMETRIC LINEAR MAPPINGS

All results in the present paper can be seen as consequences of the following property of equation (1), which we now formalize.

Definition 1 (Cross-Symmetric Linear Mapping): Let L : R^{m×m} → R^{r×r} be a linear mapping. We say that the mapping L is cross-symmetric if

    L(X)^T = L(X^T) for all X.

Note that the definition is not restricted to linear mappings of the form (1) but applies to any linear mapping. Indeed, the proofs in this section do not make use of any structure in (1) other than its cross-symmetry. The main point is that cross-symmetric mappings are block-diagonalizable into symmetric and anti-symmetric components. The following is standard [6, p. 170].

Proposition 1: Let R^{m×m}, SR^{m×m} and AR^{m×m} denote the spaces of real square, real symmetric and real anti-symmetric matrices of dimension m. Then

    R^{m×m} = SR^{m×m} ⊕ AR^{m×m}.

The following lemma is equivalent to block-diagonalization for cross-symmetric mappings.

Lemma 3.1 (Block-diagonalization): Let L(X) : R^{m×m} → R^{r×r} be a linear cross-symmetric mapping. Then

    L(R^{m×m}) = L_sym(SR^{m×m}) ⊕ L_anti(AR^{m×m}),

where

    L_sym(X) := (1/2)(L(X) + L(X)^T),    (10)
    L_anti(X) := (1/2)(L(X) − L(X)^T).    (11)

Proof: Let X = X_sym + X_anti, where X_sym ∈ SR^{m×m} and X_anti ∈ AR^{m×m}. Then Y = L(X) ∈ R^{r×r} is such that Y = Y_sym + Y_anti, where

    Y_sym := L_sym(X) = L((1/2)[X + X^T]) = L(X_sym),
    Y_anti := L_anti(X) = L((1/2)[X − X^T]) = L(X_anti),

hence Y_sym ∈ SR^{r×r} and Y_anti ∈ AR^{r×r}.

As a consequence, equations involving cross-symmetric mappings can be split in two, as in the next lemma.

Lemma 3.2: Let C ∈ R^{r×r}, let L(X) : R^{m×m} → R^{r×r} be a linear cross-symmetric mapping, and consider the equation

    L(X) = C.    (12)

Equation (12) admits a solution X = X_sym + X_anti, where X_sym ∈ SR^{m×m} and X_anti ∈ AR^{m×m}, if and only if

    L_sym(X_sym) = (1/2)(C + C^T),    L_anti(X_anti) = (1/2)(C − C^T).

Proof: A proof of this lemma can be constructed by combining Proposition 1 with Lemma 3.1.

This means that one can characterize the existence of symmetric solutions by looking only at the symmetric component of the mapping. Indeed, it should be clear that if C = C^T, then a symmetric solution to an equation L(X) = C = C^T exists and is unique if and only if L_sym is nonsingular.
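Since the numerical data of Example 1 did not survive transcription, the sketch below uses hypothetical 2 × 2 matrices of our own choosing that exhibit the same phenomenon: the 4 × 4 vec-representation L is singular, while the representation of L restricted to the symmetric subspace has full rank.

```python
import numpy as np

# Hypothetical stand-ins for the lost Example 1 data (ours, not the paper's).
A = np.array([[1.0, 0.0],
              [0.0, 0.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])
m = 2
I = np.eye(m)

op = lambda X: A.T @ X + X @ A + B.T @ X @ B        # the mapping L(X)

# vec-representation, using vec(B X A^T) = (A kron B) vec(X):
L = np.kron(I, A.T) + np.kron(A.T, I) + np.kron(B.T, B.T)
print(np.linalg.matrix_rank(L))                     # 3 < 4: L is singular

# Represent L on a basis of the symmetric 2 x 2 matrices (dimension k = 3).
basis = [np.array([[1.0, 0.0], [0.0, 0.0]]),
         np.array([[0.0, 1.0], [1.0, 0.0]]),
         np.array([[0.0, 0.0], [0.0, 1.0]])]
coords = lambda S: np.array([S[0, 0], S[0, 1], S[1, 1]])
L_sym = np.column_stack([coords(op(E)) for E in basis])
print(np.linalg.matrix_rank(L_sym))                 # 3: full rank
```

Any symmetric right-hand side C = C^T therefore determines a unique symmetric solution even though the square system L vec(X) = vec(C) is singular.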
This explains why L can be singular while a symmetric solution exists and is unique. In particular, because of Lemma 3.1, we have

    rank(L) = rank(L_sym) + rank(L_anti)

from block-diagonalization. In the next paragraphs we will discuss how L can be modified so that it becomes nonsingular whenever L_sym is nonsingular. We tackle the square case first.

Theorem 3.3: Let L(X) : R^{m×m} → R^{m×m} be a cross-symmetric linear mapping. Define L_α : R^{m×m} → R^{m×m} where

    L_α(X) := L(X) + (α/2)(X − X^T).    (13)

Then

    L_α(R^{m×m}) = L_α(SR^{m×m}) ⊕ L_α(AR^{m×m})

and rank(L_α) ≤ p + rank(L_sym), where p = m(m − 1)/2. Furthermore, rank(L_α) = p + rank(L_sym) for all but finitely many α ∈ R.

Proof: The proof that L_α(R^{m×m}) is the direct sum of symmetric and anti-symmetric subspaces is similar to that of Lemma 3.1. With that in mind, all we need to prove is that L_α restricted to AR^{m×m} is full rank for all but finitely many α, that is,

    L_α(X_anti) = 0  ⟹  X_anti = 0

for X_anti ∈ AR^{m×m} and almost all α ∈ R. But L_α(X_anti) = 0 with X_anti ≠ 0 if and only if L(X_anti) = −α X_anti, that is, if −α is an eigenvalue of the mapping L with an anti-symmetric eigenvector X_anti. Hence if −α is not one of the finitely many eigenvalues of L we must have X_anti = 0.

Note that L_α is not cross-symmetric, but it is still block-diagonalizable into symmetric and anti-symmetric blocks. From here it is not hard to be convinced that the vec-representation of L_α will be nonsingular whenever L_sym is nonsingular, for almost any α. In the next section we will come up with explicit formulas for such a representation, but at this point we can already revisit Example 1.

Example 2: Let L(X) = A^T X + X A + B^T X B with data as in Example 1. We already know that L is singular but L_sym is not. For α = 2 the mapping L_α is not singular. Indeed, one can verify that its vec-representation L_α (entries lost in transcription) is non-singular.

Theorem 3.3 can be generalized in many ways. One could replace the anti-symmetric term X − X^T by a symmetric one X + X^T and write comparable statements for L_anti. One can also propose a version that works with non-square mappings, as in the next theorem.

Theorem 3.4: Let L(X) : R^{m×m} → R^{r×r} be a cross-symmetric linear mapping and let E, F ∈ R^{r×m} be full-rank matrices. Define L_α : R^{m×m} → R^{r×r} where

    L_α(X) := L(X) + (α/4) F (X − X^T) E^T + (α/4) E (X − X^T) F^T.    (14)

Then

    L_α(R^{m×m}) = L_α(SR^{m×m}) ⊕ L_α(AR^{m×m})

and rank(L_α) ≤ p + rank(L_sym), where p = s(s − 1)/2 and s = min{m, r}. Furthermore, rank(L_α) = p + rank(L_sym) for all but finitely many α ∈ R.

Proof: A proof of the above theorem is omitted; it can be constructed as the proof of Theorem 3.3. Alternatively, one can rely on the concrete representations and the argument to be presented in Section IV-D.

We use these results in order to solve symmetric equations of the form (1).

Corollary 3.5: Let L(X) : R^{m×m} → R^{r×r} be a cross-symmetric linear mapping. Define L_α : R^{m×m} → R^{r×r} as in (13) if m = r or as in (14) if m ≠ r. The equation L(X) = C = C^T admits a symmetric solution X̄ = X̄^T if and only if X̄ also solves L_α(X̄) = C = C^T. When m = r, if the symmetric solution X̄ is unique then L_α is nonsingular for all but finitely many α.

Proof: For any X = X^T we have L(X) = L_α(X). Then X̄ = X̄^T also solves L_α(X̄) = C = C^T. Uniqueness and non-singularity follow from Theorem 3.3.
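Anticipating the explicit formula derived in Section IV-D (L_α = L + (α/2)(I − K_m)), here is a quick numerical check of Theorem 3.3 in the spirit of Example 2, again with our hypothetical stand-in data from the previous sketch:

```python
import numpy as np

# Same hypothetical data as the previous sketch.
A = np.array([[1.0, 0.0], [0.0, 0.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
m = 2
I = np.eye(m)
L = np.kron(I, A.T) + np.kron(A.T, I) + np.kron(B.T, B.T)   # singular

# Commutation matrix K_m: K @ vec(X) = vec(X^T) (column-major vec).
K = np.zeros((m * m, m * m))
for i in range(m):
    for j in range(m):
        K[i * m + j, j * m + i] = 1.0

alpha = 2.0
L_alpha = L + (alpha / 2.0) * (np.eye(m * m) - K)
print(np.linalg.matrix_rank(L_alpha))                        # 4: nonsingular

# Solving through L_alpha recovers the symmetric solution of L(X) = C.
C = np.array([[2.0, 1.0], [1.0, 3.0]])                       # symmetric right-hand side
X = np.linalg.solve(L_alpha, C.flatten(order="F")).reshape((m, m), order="F")
assert np.allclose(X, X.T)
assert np.allclose(A.T @ X + X @ A + B.T @ X @ B, C)
```

The added term (α/2)(I − K_m) perturbs only the anti-symmetric directions, so the symmetric solution passes through unchanged while the singularity is removed.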
IV. CONCRETE RESULTS FOR vec-REPRESENTATIONS

The results presented in the previous section hold for abstract cross-symmetric linear mappings. In this section we give concrete representations, specifically vec-representations, for mappings of the particular form (1). In order to do that we first revisit some results on Kronecker products. The goal is to provide formulas for evaluating L, L_sym and L_anti in terms of Kronecker products. These formulas are useful for implementing numerical algorithms.

A. Kronecker Products and the vec Operator

We now review some basic properties of Kronecker products and the vec operator (see [7, Chapter 4] for much more). For a matrix X ∈ R^{m×n}, the operator vec : R^{m×n} → R^{mn} produces a vector vec(X) ∈ R^{mn} with the columns of X all stacked up in order. The Kronecker product of two matrices A ∈ R^{s×n} and B ∈ R^{r×m}, denoted A ⊗ B, is the block matrix C ∈ R^{rs×mn} whose (i, j) block entry is the matrix C_ij = a_ij B. The following lemma summarizes some useful properties of Kronecker products and the vec operator.

Lemma 4.1: Let X ∈ R^{m×n}, A ∈ R^{s×n}, B ∈ R^{r×m}, C ∈ R^{n×p} and D ∈ R^{m×q} be given. Then
(i) vec(B X A^T) = (A ⊗ B) vec(X);
(ii) (A ⊗ B)^T = (A^T ⊗ B^T);
(iii) (A ⊗ B)(C ⊗ D) = (AC ⊗ BD).

Consider X ∈ R^{m×m} and define K_m to be the unique permutation matrix K_m ∈ R^{m²×m²} such that

    vec(X^T) = K_m vec(X).

The following relations will be useful.

Lemma 4.2: Let X ∈ R^{m×m}, A, B ∈ R^{r×m}, and the unique permutation matrix K_m such that vec(X^T) = K_m vec(X) be given. Then
(i) K_m^T = K_m;
(ii) K_m^T K_m = K_m K_m^T = I;
(iii) (A ⊗ B) = K_r (B ⊗ A) K_m.

A proof of the above two lemmas can be found, for instance, in [7, Chapter 4]. The next lemma, on the properties of the matrices I + K_m and I − K_m, is easy to prove. See also [8].

Lemma 4.3: There exist full-rank matrices P_m ∈ R^{k×m²}, where k = m(m + 1)/2, and Q_m ∈ R^{p×m²}, where p = m(m − 1)/2, satisfying
(i) P_m vec X = P_m vec X^T;
(ii) (1/2) P_m (I + K_m) = P_m K_m = P_m;
(iii) P_m (I − K_m) = 0;
(iv) Q_m vec X = −Q_m vec X^T;
(v) (1/2) Q_m (I − K_m) = −Q_m K_m = Q_m;
(vi) Q_m (I + K_m) = 0.
Furthermore, Q_m P_m^T = 0.

Proof: By definition P_m vec X = P_m vec X^T = P_m K_m vec X, which holds for any X, hence P_m K_m = P_m. Consequently P_m = (1/2)(P_m + P_m K_m) = (1/2) P_m (I + K_m). This implies (ii). That such a P_m ∈ R^{k×m²} is full rank follows from the fact that P_m extracts the symmetric part of X, which lives in a subspace of dimension k. Indeed, (i) and (ii) imply that P_m vec X = P_m vec((1/2)(X + X^T)). Note that (ii) also implies that P_m − P_m K_m = 0, hence (iii). A similar proof works for Q_m in order to prove (iv)–(vi). Finally, multiplying (v) on the right by P_m^T and using (iii) leads to Q_m P_m^T = 0.

We can use the matrices P_m and Q_m to split R^{m×m} into two complementary subspaces. Note, however, that the matrices P_m and Q_m are not unique, even though K_m is unique! Indeed, in R^{2×2},

    vec [x_1 x_3; x_2 x_4] = [x_1; x_2; x_3; x_4],    K_2 = [1 0 0 0; 0 0 1 0; 0 1 0 0; 0 0 0 1],

while both

    P_2^1 (entries lost in transcription)   and   P_2^2 = [1 0 0 0; 0 1/2 1/2 0; 0 0 0 1]

satisfy all properties in Lemma 4.3. In many scenarios it is useful to settle on a concrete choice of P_m and Q_m. One can do this by defining symmetric and anti-symmetric versions of the vec operator: if X ∈ R^{m×m}, the symmetric vec operator svec : R^{m×m} → R^k, k = m(m + 1)/2, produces a vector svec(X) ∈ R^k with the columns of the lower triangular part of the symmetric matrix (1/2)(X + X^T) all stacked up in order. Likewise, the anti-symmetric vec operator kvec : R^{m×m} → R^p, p = m(m − 1)/2, produces a vector kvec(X) ∈ R^p with the columns of the strictly lower triangular part of the anti-symmetric matrix (1/2)(X − X^T) all stacked up in order. It is simple to verify that matrices P_m and Q_m satisfying Lemma 4.3 and

    svec X = P_m vec X,    kvec X = Q_m vec X

are now uniquely defined. In particular, P_2 = P_2^2.

B. vec-Representations of Cross-Symmetric Mappings

Let us begin by formalizing the notion of vec-representation, which applies not only to cross-symmetric linear mappings.

Definition 2 (vec-representation): Let L : R^{m×n} → R^{r×s} be a linear mapping. We say that the matrix L ∈ R^{rs×mn} is a vec-representation of the mapping L if

    L vec X = vec L(X)

for all X.

The following lemma explores some properties of vec-representations of cross-symmetric linear mappings, including a formula for computing the block-diagonalization of the vec-representation.

Lemma 4.4: Let L(X) : R^{m×m} → R^{r×r} be a linear cross-symmetric mapping and L ∈ R^{r²×m²} its vec-representation. Then

    K_r L = L K_m    (15)

and

    [P_r; Q_r] L [P_m^T  Q_m^T] = [L_sym 0; 0 L_anti],    (16)

where

    L_sym = P_r L P_m^T,    L_anti = Q_r L Q_m^T    (17)

and P_r, P_m, Q_r and Q_m are matrices satisfying Lemma 4.3.

Proof: Because L is cross-symmetric, L(X)^T = L(X^T). That is,

    K_r L vec(X) = L vec(X^T) = L K_m vec(X),

which should hold for all X, hence (15). From Lemma 4.2, multiplication of (15) by K_r on the left implies that L = K_r L K_m, so L = (1/2)(L + K_r L K_m). Then multiply out (16), use the above to show that P_r L Q_m^T = (1/2) P_r (L + K_r L K_m) Q_m^T, and use Lemma 4.3 to expand

    P_r (L + K_r L K_m) Q_m^T = P_r L (I + K_m) Q_m^T = 0.
A similar construction yields Q_r L P_m^T = 0. For particular mappings of the form (1) we now derive formulas based on Kronecker products.

Lemma 4.5: Consider the particular cross-symmetric linear mapping L : R^{m×m} → R^{r×r} given in (1). Define

    A := Σ_{i=1}^{M} A_i ⊗ B_i.    (18)

A vec-representation of L is

    L = A + K_r A K_m,

where the permutation matrices K_m and K_r are defined as in Lemma 4.2. Furthermore,

    L_sym = 2 P_r A P_m^T,    L_anti = 2 Q_r A Q_m^T.

Proof: Simply use Lemma 4.2 to write B_i ⊗ A_i = K_r (A_i ⊗ B_i) K_m. Then compute (17):

    L_sym = P_r (A + K_r A K_m) P_m^T = P_r A (I + K_m) P_m^T = 2 P_r A P_m^T

after using Lemma 4.3. A similar calculation provides the formula for L_anti.

C. The Case M = 1

Because of the block-diagonalization property of cross-symmetric linear mappings, a square cross-symmetric linear mapping L is non-singular if and only if both its symmetric component L_sym and its anti-symmetric component L_anti are non-singular. As anticipated in the introduction, in the case M ≥ 2 any combination of singular and non-singular L_sym and L_anti is possible, depending on the data. We illustrate such possibilities by first revisiting Example 1.

Example 3: Let L(X) = A^T X + X A + B^T X B with A and B as in Example 1. We already know that L_sym is non-singular but that L is singular. Using the formulas in the previous section we can compute L_anti = 0. In this example, L is singular because L_anti is singular.

On the other extreme, consider another simple example with the same mapping but different data.

Example 4: Let L(X) = A^T X + X A + B^T X B with M = 2 and different 2 × 2 data A and B (entries lost in transcription). Here L and L_sym are singular while L_anti is non-singular.

In the case M = 1 the structure is much more rigid, a fact that is implicitly used to solve Lyapunov equations. The following lemma holds when L is square and M = 1.

Lemma 4.6: Let L : R^{m×m} → R^{m×m} : L(X) = A X B^T + B X A^T. Then L is non-singular if and only if L_sym is non-singular.

Proof: All we need to prove is that L singular implies L_sym singular. It is easy to see that when either A or B is singular, L will also be singular. In this case, say A is singular, let x ≠ 0 be such that A x = 0. Then X = x x^T is symmetric and such that L(X) = 0, which implies that L_sym must also be singular. The case when neither A nor B is singular is more complicated. If A and B are non-singular, then L is non-singular if and only if the mapping X ↦ B^{-1} L(X) B^{-T} is non-singular. A vec-representation of this mapping is the matrix in (9) with Ã = B^{-1} A. It is then possible to use results on the eigenstructure of Kronecker sums (e.g. [7, Theorem 4.4.5]) to show that if (λ_i, x_i) is an eigenvalue-eigenvector pair of the real matrix Ã, then (µ_ij, z_ij), where

    µ_ij = λ_i + λ_j,    z_ij = x_j ⊗ x_i,

is an eigenvalue-eigenvector pair of the real matrix (I ⊗ Ã + Ã ⊗ I) for all i, j = 1, ..., m. This type of result is probably the one used in most proofs of solvability conditions for Sylvester and Lyapunov equations. Therefore, when A and B are non-singular, L will be singular if and only if µ_ij = λ_i + λ_j = 0 for some i and j. Note that because Ã is real, the real matrix Z_ij defined such that vec(Z_ij) = z_ij + z̄_ij ≠ 0, where the overline denotes complex conjugation, must satisfy L(Z_ij) = 0. But because for the same i and j we also have µ_ji = 0, there will be a Z_ji such that vec(Z_ji) = z_ji + z̄_ji ≠ 0 and L(Z_ji) = 0. Note that

    X_ij := Z_ij + Z_ji = x_i x_j^T + x_j x_i^T + x̄_i x̄_j^T + x̄_j x̄_i^T

is a symmetric real matrix for which L(X_ij) = 0, hence L_sym must also be singular, which completes the proof.
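The constructions of this section are easy to verify numerically. The sketch below (ours; the helper names are hypothetical) builds K_m and the svec/kvec choices of P_m and Q_m, then checks Lemma 4.3, the block-diagonalization (16), and the formulas of Lemma 4.5 on random data of the form (1) with m = r:

```python
import numpy as np

def commutation(m):
    # K_m with K_m @ vec(X) = vec(X^T) for column-major vec.
    K = np.zeros((m * m, m * m))
    for i in range(m):
        for j in range(m):
            K[i * m + j, j * m + i] = 1.0
    return K

def P_svec(m):
    # Rows extract svec(X): lower-triangular entries of (X + X^T)/2.
    rows = []
    for j in range(m):                 # column index
        for i in range(j, m):          # row index, lower triangle
            r = np.zeros(m * m)
            r[j * m + i] = 0.5
            r[i * m + j] += 0.5        # diagonal entries get a single 1.0
            rows.append(r)
    return np.array(rows)

def Q_kvec(m):
    # Rows extract kvec(X): strictly lower-triangular entries of (X - X^T)/2.
    rows = []
    for j in range(m):
        for i in range(j + 1, m):
            r = np.zeros(m * m)
            r[j * m + i] = 0.5
            r[i * m + j] = -0.5
            rows.append(r)
    return np.array(rows)

rng = np.random.default_rng(1)
m, M = 3, 2
P, Q, K = P_svec(m), Q_kvec(m), commutation(m)
assert np.allclose(P @ K, P) and np.allclose(Q @ K, -Q)      # Lemma 4.3 (ii), (v)
assert np.allclose(Q @ P.T, 0)                               # Lemma 4.3, last claim

As = [rng.standard_normal((m, m)) for _ in range(M)]
Bs = [rng.standard_normal((m, m)) for _ in range(M)]
Akron = sum(np.kron(Ai, Bi) for Ai, Bi in zip(As, Bs))       # (18)
L = Akron + K @ Akron @ K                                    # Lemma 4.5
L_sym = 2 * P @ Akron @ P.T
L_anti = 2 * Q @ Akron @ Q.T
assert np.allclose(P @ L @ Q.T, 0) and np.allclose(Q @ L @ P.T, 0)   # (16)
rank = np.linalg.matrix_rank
assert rank(L) == rank(L_sym) + rank(L_anti)
```

Under this svec/kvec choice the stacked transformation [P_m; Q_m] is square and invertible, which is what makes the rank bookkeeping above valid.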
D. vec-Representation of the Mapping L_α

We now turn our attention to the modified mappings L_α introduced in Theorems 3.3 and 3.4. Let us first deal with the square case, in which L_α is defined as in (13). The following simple facts are presented without proof. First note that for any linear mapping L and its associated vec-representation L we have

    L_α = L + (α/2)(I − K_m),

where K_m is a permutation matrix as in Lemma 4.2. Note that I − K_m is a sparse matrix with 2m(m − 1) nonzero entries. Hence the above modification should add few nonzero entries to L_α as compared with L. In particular, it will often introduce fewer nonzero entries than the projection P_m L P_m^T does. It is also not hard to show that when L is cross-symmetric, a concrete block-diagonal representation for L_α is given by

    [P_m; Q_m] L_α [P_m^T  Q_m^T] = [L_sym 0; 0 L_anti + α Q_m Q_m^T].

This makes clear how the extra term affects the rank of only the anti-symmetric part. Furthermore, since Q_m is a full-rank matrix, we have Q_m Q_m^T ≻ 0, i.e. positive definite.
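Continuing the sketch above (same commutation, P_svec and Q_kvec helpers and the same random data), a quick numerical check of this block-diagonal form in the square case:

```python
# Requires K, P, Q, L, L_sym, L_anti from the previous sketch.
alpha = 0.7
L_alpha = L + (alpha / 2.0) * (np.eye(m * m) - K)
T = np.vstack([P, Q])                     # stacks [P_m; Q_m]
block = T @ L_alpha @ T.T
k = P.shape[0]
assert np.allclose(block[:k, k:], 0)      # off-diagonal blocks vanish
assert np.allclose(block[k:, :k], 0)
assert np.allclose(block[:k, :k], L_sym)
assert np.allclose(block[k:, k:], L_anti + alpha * Q @ Q.T)
```

Only the anti-symmetric block is perturbed, by the positive definite matrix α Q_m Q_m^T, exactly as the display above asserts.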

When L is not square, L_α is defined as in (14) for some full-rank matrices E and F. Its vec-representation is similar,

    L_α = L + (α/4)(I − K_r) D (I − K_m),    D = E ⊗ F,

and the associated block-diagonal form is

    [P_r; Q_r] L_α [P_m^T  Q_m^T] = [L_sym 0; 0 L_anti + α Q_r D Q_m^T].

That L_anti + α Q_r D Q_m^T is full rank for almost all α comes then from the fact that Q_r D Q_m^T is full rank.

V. CONCLUSIONS, APPLICATIONS AND FUTURE WORK

In this paper we have studied the solvability of a single linear equation in a symmetric matrix variable of the form

    Σ_{i=1}^{M} (A_i X B_i^T + B_i X A_i^T) = C = C^T.

The results clarify the relationship between the above equation and the non-minimal vec-representation

    ( Σ_{i=1}^{M} R_i ⊗ L_i + L_i ⊗ R_i ) vec X = vec C.

We have shown that for M ≥ 2 the vec-representation can be rank-deficient even when a solution to the original problem exists and is unique. We have shown that if this is the case, for square equations, one can work with the vec-representation of the modified problem

    Σ_{i=1}^{M} (L_i X R_i^T + R_i X L_i^T) + α(X − X^T) = C = C^T,

which is guaranteed to have a unique solution whenever the original problem does, for some properly chosen scalar α. A slightly more complicated version of the above argument holds for non-square equations as well and is given in Theorem 3.4.

The results are directly applicable to the design of algorithms for solving matrix equations in which the data is highly structured, e.g. sparse, and for which construction of the associated minimal representation can destroy the underlying problem structure. We shall be working in the future with concrete instances arising from interior-point algorithms applied to convex matrix optimization problems such as the ones described in [2]. In such problems the equations are square, the number of terms (M) is approximately 20, and the variable dimension (m) can range from 10 to 100 in most practical problems.

Another interesting application of the present result is in the study of general linear equations of the form

    Σ_{i=1}^{M} (L_i X R_i^T + F_i X^T G_i^T) = C    (19)

in which the variable X is not symmetric. As with the problem discussed here, minimal representations of the above equation will often destroy the structure present in the problem data, this time because of the term in X^T. An idea we are currently investigating is how to apply the results of the present paper to the equation

    Σ_{i=1}^{M} [F_i  L_i] Z [R_i  G_i]^T = C,    (20)

where Z is a symmetric structured variable

    Z = [0 X^T; X 0].    (21)

The current results are very promising and will be reported in a future paper.

REFERENCES

[1] M. Konstantinov, V. Mehrmann, and P. Petkov, "On properties of Sylvester and Lyapunov operators," Linear Algebra and its Applications, vol. 312.
[2] J. F. Camino, J. W. Helton, and R. E. Skelton, "Solving matrix inequalities whose unknowns are matrices," SIAM Journal on Optimization, vol. 17, no. 1, pp. 1–36.
[3] V. A. Tsachouridis and B. Kouvaritakis, "The homogeneous projective transformation of general quadratic matrix equations," IMA Journal of Mathematical Control and Information, vol. 22.
[4] F. Ding and T. W. Chen, "Iterative least-squares solutions of coupled Sylvester matrix equations," Systems & Control Letters, vol. 54.
[5] R. H. Bartels and G. W. Stewart, "Algorithm: Solution of the matrix equation AX + XB = C," Communications of the ACM, vol. 15, no. 9.
[6] R. A. Horn and C. R. Johnson, Matrix Analysis. Cambridge, UK: Cambridge University Press.
[7] R. A. Horn and C. R. Johnson, Topics in Matrix Analysis. Cambridge, UK: Cambridge University Press.
[8] J. R. Magnus and H. Neudecker, "The commutation matrix: some properties and applications," Annals of Statistics, vol. 7, no. 2, 1979.
