Stability radii for real linear Hamiltonian systems with perturbed dissipation
Stability radii for real linear Hamiltonian systems with perturbed dissipation

Christian Mehl, Volker Mehrmann, Punit Sharma

August 17, 2016

Abstract. We study linear dissipative Hamiltonian (dH) systems with real constant coefficients that arise in energy based modeling of dynamical systems. In this paper we analyze when such a system is on the boundary of the region of asymptotic stability, i.e., when it has purely imaginary eigenvalues, or how much the dissipation term has to be perturbed to be on this boundary. For unstructured systems the explicit construction of the real distance to instability (real stability radius) has been a challenging problem. In this paper, we analyze this real distance under different structured perturbations to the dissipation term that preserve the dH structure, and we derive explicit formulas for this distance in terms of low rank perturbations. We also show (via numerical examples) that under real structured perturbations to the dissipation the asymptotic stability of a dH system is much more robust than under unstructured perturbations.

Keywords. Dissipative Hamiltonian system, port-Hamiltonian system, real distance to instability, real structured distance to instability, restricted real distance to instability.

AMS subject classification. 93D20, 93D09, 65F15, 15A21, 65L80, 65L05, 34A30.

1 Introduction

In this paper we study linear time-invariant systems with real coefficients. When a physical system is energy preserving, then a mathematical model should reflect this property, and this is characterized by the property of the system being Hamiltonian.
If energy dissipates from the system, then it becomes a dissipative Hamiltonian (dH) system, which in the linear time-invariant case can be expressed as

    ẋ = (J − R)Qx,   (1.1)

where the function x ↦ x^T Qx, with Q = Q^T ∈ R^{n,n} positive definite, describes the energy of the system, J = −J^T ∈ R^{n,n} is the structure matrix that describes the energy flux among energy storage elements, and R ∈ R^{n,n} with R = R^T ≥ 0 is the dissipation matrix that describes energy dissipation in the system. Dissipative Hamiltonian systems are special cases of port-Hamiltonian systems, which recently have received a lot of attention in energy based modeling, see, e.g., [3, 8, 19, 21, 22].

Institut für Mathematik, MA 4-5, TU Berlin, Str. des 17. Juni 136, D Berlin, FRG. {mehl,mehrmann,sharma}@math.tu-berlin.de. Supported by Einstein Stiftung Berlin through the Research Center Matheon "Mathematics for key technologies" in Berlin.
See also [25, 26, 27, 28, 29]. An important property of dH systems is that they are stable, which in the time-invariant case can be characterized by the property that all eigenvalues of A = (J − R)Q are contained in the closed left half complex plane and all eigenvalues on the imaginary axis are semisimple, see, e.g., [20] for a simple proof which immediately carries over to the real case discussed here. For general unstructured systems ẋ = Ax, if A has purely imaginary eigenvalues, then arbitrarily small perturbations (arising, e.g., from linearization errors, discretization errors, or data uncertainties) may move eigenvalues into the right half plane and thus make the system unstable. So stability of the system can only be guaranteed when the system has a reasonable distance to instability, see [2, 5, 11, 12, 14, 31] and the discussion below. When the coefficients of the system are real and it is clear that the perturbations are real as well, then the complex distance to instability is not the right measure, since typically the distance to instability under real perturbations is larger than the complex distance. A formula for the real distance under general perturbations was derived in the ground-breaking paper [23]. The computation of this real distance to instability (or real stability radius) is an important topic in many applications, see, e.g., [10, 17, 18, 24, 30, 32]. In this paper, we study the distance to instability under real (structure-preserving) perturbations for dH systems, and we restrict ourselves to the case that only the dissipation matrix R is perturbed, because this is the part of the model that is usually most uncertain, due to the fact that modeling damping or friction is extremely difficult, see [10] and Example 1.1 below. Furthermore, the analysis of perturbations in the matrices Q and J is not completely clear at this stage. Since dH systems are stable, it is clear that perturbations that preserve the dH structure also preserve the stability.
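The stability property just described is easy to observe numerically. The following NumPy sketch (an illustration only, not part of the paper; all helper names are ad hoc) generates a random triple (J, R, Q) with the dH structure and checks that all eigenvalues of (J − R)Q lie in the closed left half plane.

```python
import numpy as np

def random_dh_matrices(n, seed=0):
    """Generate (J, R, Q) with J = -J^T, R = R^T >= 0, Q = Q^T > 0."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, n)); J = A - A.T               # skew-symmetric structure matrix
    B = rng.standard_normal((n, n)); R = B @ B.T               # positive semidefinite dissipation
    C = rng.standard_normal((n, n)); Q = C @ C.T + n * np.eye(n)  # positive definite energy matrix
    return J, R, Q

def in_closed_left_half_plane(J, R, Q, tol=1e-8):
    """dH stability: every eigenvalue of (J - R)Q has Re(lambda) <= 0."""
    return bool(np.all(np.linalg.eigvals((J - R) @ Q).real <= tol))
```

For R = 0 the spectrum of JQ is purely imaginary, which is the boundary case discussed below.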
However, dH systems may not be asymptotically stable, since they may have purely imaginary eigenvalues, which happens, e.g., when the dissipation matrix R vanishes. So in the case of dH systems, we discuss when the system is robustly asymptotically stable, i.e., small real and structure-preserving perturbations ∆_R to the coefficient matrix R keep the system asymptotically stable, and we determine the smallest perturbations that move the system to the boundary of the set of asymptotically stable systems. Motivated by an application in the area of disk brake squeal, we consider restricted structure-preserving perturbations of the form ∆_R = B ∆ B^T, where B ∈ R^{n,r} is a so-called restriction matrix of full column rank. This restriction allows the consideration of perturbations that only affect selected parts of the matrix R. (We mention that perturbations of the form B ∆ B^T would be called structured perturbations by the convention of [13], but we prefer the term restricted here, because of the danger of confusion with the term structure-preserving for perturbations that preserve the dH structure.)

Example 1.1 The finite element analysis of disk brake squeal [10] leads to large scale differential equations of the form

    M q̈ + (D + G) q̇ + (K + N) q = f,

where M = M^T > 0 is the mass matrix, D = D^T ≥ 0 models material and friction induced damping, G = −G^T models gyroscopic effects, K = K^T > 0 models the stiffness, and N is a nonsymmetric matrix modeling circulatory effects. Here q̇ denotes the derivative of q with respect to time. An appropriate first order formulation is associated with the linear system
ż = (J − R)Qz, where

    J := [ −G , −(K + ½N) ; (K + ½N)^T , 0 ],   R := [ D , ½N ; ½N^T , 0 ],   Q := [ M , 0 ; 0 , K ]^{−1}.   (1.2)

This system is in general not a dH system, since for N ≠ 0 the matrix R is indefinite. But since brake squeal is associated with eigenvalues in the right half plane, it is an important question for which perturbations the system is stable. This instability can be associated with the restricted indefinite perturbation ∆_R := [ 0 , ½N ; ½N^T , 0 ] of the positive semidefinite dissipation matrix diag(D, 0), and this perturbation underlies further restrictions, since only a small part of the finite elements is associated with the brake pad, and thus the perturbations in N are restricted to the coefficients associated with these finite elements.

In the following, ‖ · ‖ denotes the spectral norm of a vector or a matrix, while ‖ · ‖_F denotes the Frobenius norm of a matrix. By Λ(A) we denote the spectrum of a matrix A ∈ R^{n,n}, where R^{n,r} is the set of real n × r matrices, with the special case R^n = R^{n,1}. For A = A^T ∈ R^{n,n}, we use the notation A ≥ 0 and A ≤ 0 if A is positive or negative semidefinite, respectively, and A > 0 if A is positive definite. The Moore-Penrose inverse of a matrix A ∈ R^{n,r} is denoted by A†, see, e.g., [9].

The paper is organized as follows. In Section 2 we study mapping theorems that are needed to construct the minimal perturbations. The stability radii for dH systems under perturbations in the dissipation matrix are studied in Section 3, and the results are illustrated with numerical examples in Section 4.

2 Mapping Theorems

As in the complex case that was treated in [20], the main tool in the computation of stability radii for real dH systems will be minimal norm solutions to structured mapping problems. To construct such mappings in the case of real matrices, and to deal with pairs of complex conjugate eigenvalues and their eigenvectors, we will need general mapping theorems for maps that map several vectors x₁, ..., x_m to vectors y₁, ..., y_m. We will first study the complex case and then use these results to construct corresponding mappings in the real case.
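Assuming the block forms reconstructed in (1.2), the first order formulation can be cross-checked against the second order equation: the eigenvalues of (J − R)Q must coincide with the eigenvalues of the quadratic polynomial λ²M + λ(D + G) + (K + N). A small NumPy sketch (illustrative only; the random data and variable names are ad hoc):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A0 = rng.standard_normal((n, n))
M = A0 @ A0.T + n * np.eye(n)                     # M = M^T > 0
D = np.diag(rng.uniform(0.5, 1.5, n))             # D = D^T >= 0
G0 = rng.standard_normal((n, n)); G = G0 - G0.T   # G = -G^T
K0 = rng.standard_normal((n, n)); K = K0 @ K0.T + n * np.eye(n)  # K = K^T > 0
N = 0.1 * rng.standard_normal((n, n))             # nonsymmetric circulatory term

Z = np.zeros((n, n))
J = np.block([[-G, -(K + 0.5 * N)], [(K + 0.5 * N).T, Z]])   # skew-symmetric
R = np.block([[D, 0.5 * N], [0.5 * N.T, Z]])                 # symmetric, indefinite for N != 0
Q = np.block([[np.linalg.inv(M), Z], [Z, np.linalg.inv(K)]]) # Q = diag(M, K)^{-1} > 0
A = (J - R) @ Q

# Companion linearization of lambda^2 M + lambda (D + G) + (K + N) = 0
Minv = np.linalg.inv(M)
comp = np.block([[-Minv @ (D + G), -Minv @ (K + N)], [np.eye(n), Z]])
```

Both matrices have the same characteristic polynomial (up to the constant factor det(M^{−1})), so their spectra agree.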
For given X ∈ C^{m,p}, Y ∈ C^{n,p}, Z ∈ C^{n,k}, and W ∈ C^{m,k}, the general mapping problem, see [16], asks under which conditions on X, Y, Z, and W the set

    S := { A ∈ C^{n,m} | AX = Y, A^*Z = W }

is non-empty, and if it is, how to construct the minimal spectral or Frobenius norm elements of S. To solve this mapping problem, we will need the following well-known result.

Theorem 2.1 Let A, B, C be given complex matrices of appropriate dimensions. Then for any positive number µ satisfying

    µ ≥ max( ‖[A ; B]‖, ‖[A , C]‖ ),   (2.1)

there exists a complex matrix D of appropriate dimensions such that

    ‖ [ A , C ; B , D ] ‖ ≤ µ.   (2.2)

Furthermore, any matrix D satisfying (2.2) is of the form

    D = −K A^* L + µ(I − KK^*)^{1/2} Z (I − L^*L)^{1/2},

where K = B(µ²I − A^*A)^{−1/2}, L = (µ²I − AA^*)^{−1/2}C, and Z is an arbitrary contraction, i.e., ‖Z‖ ≤ 1.

Using Theorem 2.1 we obtain the following solution to the general mapping problem.

Theorem 2.2 Let X ∈ C^{m,p}, Y ∈ C^{n,p}, Z ∈ C^{n,k}, and W ∈ C^{m,k}. Then the set

    S := { A ∈ C^{n,m} | AX = Y, A^*Z = W }

is non-empty if and only if X^*W = Y^*Z, Y X†X = Y, and W Z†Z = W. If this is the case, then

    S = { Y X† + (W Z†)^* − (W Z†)^* XX† + (I − ZZ†)R(I − XX†) | R ∈ C^{n,m} }.   (2.3)

Furthermore, we have the following minimal-norm solutions to the mapping problem AX = Y and A^*Z = W:

1) The matrix

    G := Y X† + (W Z†)^* − (W Z†)^* XX†   (2.4)

is the unique matrix from S of minimal Frobenius norm

    ‖G‖_F = ( ‖Y X†‖²_F + ‖W Z†‖²_F − trace((W Z†)(W Z†)^* XX†) )^{1/2} = inf_{A∈S} ‖A‖_F.

2) The minimal spectral norm of elements from S is given by

    µ := max{ ‖Y X†‖, ‖W Z†‖ } = inf_{A∈S} ‖A‖.   (2.5)

Moreover, assume that rank(X) = r₁ and rank(Z) = r₂. If X = UΣV^* and Z = Û Σ̂ V̂^* are singular value decompositions of X and Z, respectively, where U = [U₁, U₂] with U₁ ∈ C^{m,r₁} and Û = [Û₁, Û₂] with Û₁ ∈ C^{n,r₂}, then the infimum in (2.5) is attained by the matrix

    F := Y X† + (W Z†)^* − (W Z†)^* XX† + (I − ZZ†) Û₂ C U₂^* (I − XX†),   (2.6)

where

    C = −K(U₁^*(Y X†)^* Û₁)L + µ(I − KK^*)^{1/2} P (I − L^*L)^{1/2},
    K = (Û₂^* Y X† U₁)(µ²I − U₁^*(Y X†)^* ZZ†(Y X†)U₁)^{−1/2},
    L = (µ²I − Û₁^*(Y X†)(Y X†)^* Û₁)^{−1/2}(Û₁^*(W Z†)^* U₂),

and P is an arbitrary contraction.
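Theorem 2.1 can be illustrated numerically. The sketch below (an illustration under the parametrization stated above, with Z = 0 giving the "central" completion; all function names are hypothetical) builds a completion D and verifies the norm bound (2.2).

```python
import numpy as np

def _inv_sqrt(S):
    """Inverse square root of a Hermitian positive definite matrix."""
    w, V = np.linalg.eigh(S)
    return (V / np.sqrt(w)) @ V.conj().T

def _sqrt(S):
    """Square root of a Hermitian positive semidefinite matrix."""
    w, V = np.linalg.eigh(S)
    return (V * np.sqrt(np.maximum(w, 0.0))) @ V.conj().T

def completion(A, B, C, mu, Z=None):
    """Return D with || [[A, C], [B, D]] || <= mu, following Theorem 2.1."""
    p, q = A.shape
    r, s = B.shape[0], C.shape[1]
    K = B @ _inv_sqrt(mu**2 * np.eye(q) - A.conj().T @ A)   # a contraction by (2.1)
    L = _inv_sqrt(mu**2 * np.eye(p) - A @ A.conj().T) @ C   # a contraction by (2.1)
    if Z is None:
        Z = np.zeros((r, s))                                # central completion
    return (-K @ A.conj().T @ L
            + mu * _sqrt(np.eye(r) - K @ K.conj().T) @ Z
                 @ _sqrt(np.eye(s) - L.conj().T @ L))

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((2, 3))
C = rng.standard_normal((3, 2))
mu = 1.05 * max(np.linalg.norm(np.vstack([A, B]), 2),
                np.linalg.norm(np.hstack([A, C]), 2))
D = completion(A, B, C, mu)
block = np.block([[A, C], [B, D]])
```

Any contraction Z yields a valid completion; Z = 0 recovers the classical central solution D = −B(µ²I − A^*A)^{−1}A^*C.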
Proof. Suppose that S is non-empty. Then there exists A ∈ C^{n,m} such that AX = Y and A^*Z = W, which implies X^*W = X^*A^*Z = (AX)^*Z = Y^*Z. Using the properties of the Moore-Penrose pseudoinverse, we have Y X†X = AXX†X = AX = Y and W Z†Z = A^*ZZ†Z = A^*Z = W.

Conversely, suppose that X^*W = Y^*Z, Y X†X = Y, and W Z†Z = W. Then S is non-empty, since for any R ∈ C^{n,m} we have

    A = Y X† + (W Z†)^* − (W Z†)^* XX† + (I − ZZ†)R(I − XX†) ∈ S.

In particular, this proves the inclusion "⊇" in (2.3). To prove the other direction, let X = UΣV^* and Z = Û Σ̂ V̂^* be singular value decompositions of X and Z, respectively, partitioned as in 2), with Σ₁ ∈ R^{r₁,r₁} and Σ̂₁ ∈ R^{r₂,r₂} being the leading principal submatrices of Σ and Σ̂, respectively. Let A ∈ S and set

    Â := Û^* A U = [ A₁₁ , A₁₂ ; A₂₁ , A₂₂ ],   (2.7)

where A₁₁ ∈ C^{r₂,r₁}, A₁₂ ∈ C^{r₂,m−r₁}, A₂₁ ∈ C^{n−r₂,r₁}, and A₂₂ ∈ C^{n−r₂,m−r₁}. Then ‖A‖_F = ‖Â‖_F and ‖A‖ = ‖Â‖. Multiplying AX = Y by Û^* from the left and using U^*X = [Σ₁V₁^* ; 0], we obtain

    [ A₁₁Σ₁V₁^* ; A₂₁Σ₁V₁^* ] = Â U^*X = Û^*Y = [ Û₁^*Y ; Û₂^*Y ],

which implies that

    A₁₁ = Û₁^*Y V₁Σ₁^{−1} = Û₁^* Y X† U₁  and  A₂₁ = Û₂^*Y V₁Σ₁^{−1} = Û₂^* Y X† U₁.   (2.8)

Analogously, multiplying A^*Z = W from the left by U^* and using Û^*Z = [Σ̂₁V̂₁^* ; 0], we obtain

    [ A₁₁^*Σ̂₁V̂₁^* ; A₁₂^*Σ̂₁V̂₁^* ] = Â^* Û^*Z = U^*W = [ U₁^*W ; U₂^*W ],

which implies that

    A₁₁ = Σ̂₁^{−1}V̂₁^*W^*U₁ = Û₁^*(W Z†)^*U₁  and  A₁₂ = Σ̂₁^{−1}V̂₁^*W^*U₂ = Û₁^*(W Z†)^*U₂.   (2.9)

Equating the two expressions for A₁₁ from (2.8) and (2.9) again shows that X^*W = Y^*Z. Furthermore, we get

    Â = [ Û₁^* Y X† U₁ , Û₁^*(W Z†)^* U₂ ; Û₂^* Y X† U₁ , A₂₂ ].   (2.10)

Thus, using XX† = U₁U₁^*, ZZ† = Û₁Û₁^*, Y X† = Y X† U₁U₁^*, and (W Z†)^* = Û₁Û₁^*(W Z†)^*, we obtain

    A = Û Â U^* = Û₁Û₁^* Y X† U₁U₁^* + Û₂Û₂^* Y X† U₁U₁^* + Û₁Û₁^*(W Z†)^* U₂U₂^* + Û₂A₂₂U₂^*
      = Y X† + (W Z†)^* − (W Z†)^* XX† + (I − ZZ†)Û₂A₂₂U₂^*(I − XX†).   (2.11)
1) In view of (2.10), and using that X†U₂ = 0, Û₂^*(Z†)^* = 0, and Û₁^*(W Z†)^*U₁ = A₁₁ = Û₁^* Y X† U₁, we obtain

    ‖A‖²_F = ‖Â‖²_F = ‖Û₁^* Y X† U₁‖²_F + ‖Û₂^* Y X† U₁‖²_F + ‖Û₁^*(W Z†)^* U₂‖²_F + ‖A₂₂‖²_F
           = ‖Y X†‖²_F + ‖W Z†‖²_F − ‖Û₁^*(W Z†)^* U₁‖²_F + ‖A₂₂‖²_F
           = ‖Y X†‖²_F + ‖W Z†‖²_F − trace((W Z†)(W Z†)^* XX†) + ‖A₂₂‖²_F.

Thus, by setting A₂₂ = 0 in (2.11), we obtain the unique element of S that minimizes the Frobenius norm, giving

    inf_{A∈S} ‖A‖_F = ( ‖Y X†‖²_F + ‖W Z†‖²_F − trace((W Z†)(W Z†)^* XX†) )^{1/2}.

2) By the definition of µ, we have

    µ = max{ ‖Y X†‖, ‖W Z†‖ } = max{ ‖[A₁₁ ; A₂₁]‖, ‖[A₁₁ , A₁₂]‖ }.

It follows that for any A ∈ S we have ‖A‖ = ‖Â‖ ≥ µ, with Â as in (2.7). By Theorem 2.1 there exist matrices A ∈ S with ‖A‖ ≤ µ, i.e., inf_{A∈S} ‖A‖ = µ. Furthermore, by Theorem 2.1 this infimum is attained by a matrix A as in (2.11), where

    A₂₂ = −K(U₁^*(Y X†)^* Û₁)L + µ(I − KK^*)^{1/2} P (I − L^*L)^{1/2},
    K = (Û₂^* Y X† U₁)(µ²I − U₁^*(Y X†)^* ZZ†(Y X†)U₁)^{−1/2},
    L = (µ²I − Û₁^*(Y X†)(Y X†)^* Û₁)^{−1/2}(Û₁^*(W Z†)^* U₂),

and P is an arbitrary contraction. Hence the assertion follows by setting C = A₂₂.

The special case p = k = 1 of Theorem 2.2 was originally obtained in [15, Theorem 2]. The next result characterizes the Hermitian positive semidefinite matrices that map a given X ∈ C^{n,m} to a given Y ∈ C^{n,m}, and we include the solutions that are minimal with respect to the spectral or Frobenius norm. This generalizes [20, Theorem 2.3], which only covers the case m = 1.

Theorem 2.3 Let X, Y ∈ C^{n,m} be such that rank(Y) = m. Define

    S := { H ∈ C^{n,n} | H = H^* ≥ 0, HX = Y }.

Then there exists H ≥ 0 such that HX = Y if and only if X^*Y = Y^*X and X^*Y > 0. If this holds, then

    H₀ := Y(Y^*X)^{−1}Y^*   (2.12)

is well-defined and Hermitian positive semidefinite, and

    S = { H₀ + (I_n − XX†)K^*K(I_n − XX†) | K ∈ C^{n,n} }.   (2.13)

Furthermore, we have the following minimal norm solutions to the mapping problem HX = Y:
1) The matrix H₀ from (2.12) is the unique matrix from S with minimal Frobenius norm

    min{ ‖H‖_F | H ∈ S } = ‖Y(X^*Y)^{−1}Y^*‖_F.

2) The minimal spectral norm of elements from S is given by

    min{ ‖H‖ | H ∈ S } = ‖Y(X^*Y)^{−1}Y^*‖,

and the minimum is attained for the matrix H₀ from (2.12).

Proof. If H ∈ S, then X^*Y = X^*HX = (HX)^*X = Y^*X and X^*Y = X^*HX ≥ 0, since H = H^* ≥ 0. If X^*HX were singular, then there would exist a vector v ∈ C^m \ {0} such that v^*X^*HXv = 0 and, hence, Yv = HXv = 0 (as H ≥ 0), in contradiction to the assumption that Y is of full rank. Thus, X^*Y > 0. Conversely, let X^*Y = Y^*X > 0 (which implies that also (X^*Y)^{−1} > 0). Then H₀ in (2.12) is well-defined. Clearly H₀X = Y and H₀ = Y(Y^*X)^{−1}Y^* ≥ 0, which implies that H₀ ∈ S.

Let H = H₀ + (I − XX†)K^*K(I − XX†) ∈ C^{n,n} be as in (2.13) for some K ∈ C^{n,n}. Then clearly H = H^*, HX = Y, and also H ≥ 0, as it is the sum of two positive semidefinite matrices. This proves the inclusion "⊇" in (2.13). For the converse inclusion, let H ∈ S. Since H ≥ 0, we have that H = A^*A for some A ∈ C^{n,n}. Therefore, HX = Y implies that (A^*A)X = Y, and setting Z = AX, we have AX = Z and A^*Z = Y. Since rank(Y) = m, we necessarily also have rank(Z) = m and rank(X) = m. Therefore, by Theorem 2.2, A can be written as

    A = ZX† + (Y Z†)^* − (Y Z†)^* XX† + (I − ZZ†)C(I − XX†)   (2.14)

for some C ∈ C^{n,n}. Note that

    (Y Z†)^* XX† = (Z†)^*Y^* XX† = (Z†)^*Z^*ZX† = (ZZ†)^*ZX† = ZX†,   (2.15)

since (ZZ†)^* = ZZ† and Y^*X = Z^*AX = Z^*Z. By inserting (2.15) in (2.14), we obtain

    A = (Y Z†)^* + (I − ZZ†)C(I − XX†),

and thus

    H = A^*A = (Y Z†)(Y Z†)^* + (I − XX†)C^*(I − ZZ†)(Y Z†)^* + (Y Z†)(I − ZZ†)C(I − XX†)
             + (I − XX†)C^*(I − ZZ†)(I − ZZ†)C(I − XX†)
             = (Y Z†)(Y Z†)^* + (I − XX†)C^*(I − ZZ†)(I − ZZ†)C(I − XX†),   (2.16)

where the last equality follows since (Y Z†)(I − ZZ†) = Y(Z† − Z†ZZ†) = 0. Setting K = (I − ZZ†)C in (2.16) and using

    Z†(Z†)^* = ((Z^*Z)^{−1}Z^*)((Z^*Z)^{−1}Z^*)^* = (Z^*Z)^{−1} = (Y^*X)^{−1}
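A quick numerical sanity check of Theorem 2.3 (an illustrative sketch, not part of the paper): starting from an arbitrary positive semidefinite H₀' with H₀'X = Y, the matrix in (2.12) solves the same mapping problem, is positive semidefinite, and its spectral and Frobenius norms are never larger.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 6, 2
S0 = rng.standard_normal((n, n))
H0 = S0 @ S0.T                          # some symmetric psd map
X = rng.standard_normal((n, m))
Y = H0 @ X                              # so that H0 X = Y by construction

# Conditions of Theorem 2.3: X^* Y Hermitian and positive definite
Gram = X.T @ Y
H = Y @ np.linalg.inv(Y.T @ X) @ Y.T    # minimal-norm psd solution (2.12)
```

Since H is minimal among all positive semidefinite solutions, its norms cannot exceed those of the arbitrary solution H0.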
proves the inclusion "⊆" in (2.13).

1) If H ∈ S, then X^*Y = Y^*X > 0 and we obtain

    H = Y(Y^*X)^{−1}Y^* + (I − XX†)K^*K(I − XX†)   (2.17)

for some K ∈ C^{n,n}. By using the formula ‖BB^* + D^*D‖²_F = ‖BB^*‖²_F + 2‖DB‖²_F + ‖D^*D‖²_F for B = Y(Y^*X)^{−1/2} and D = K(I − XX†), we obtain

    ‖H‖²_F = ‖Y(Y^*X)^{−1}Y^*‖²_F + 2‖K(I − XX†)Y(Y^*X)^{−1/2}‖²_F + ‖(I − XX†)K^*K(I − XX†)‖²_F.

Hence, setting K = 0, we obtain H₀ = Y(Y^*X)^{−1}Y^* as the unique minimal Frobenius norm solution.

2) Let H ∈ S be of the form (2.17) for some K ∈ C^{n,n}. Since Y(Y^*X)^{−1}Y^* ≥ 0 and (I − XX†)K^*K(I − XX†) ≥ 0, we have

    Y(Y^*X)^{−1}Y^* ≤ Y(Y^*X)^{−1}Y^* + (I − XX†)K^*K(I − XX†).

This implies that

    ‖Y(Y^*X)^{−1}Y^*‖ ≤ inf_{K∈C^{n,n}} ‖Y(Y^*X)^{−1}Y^* + (I − XX†)K^*K(I − XX†)‖ = inf_{H∈S} ‖H‖.   (2.18)

One possible choice for obtaining equality in (2.18) is K = 0, which gives H₀ = Y(Y^*X)^{−1}Y^* ∈ S and ‖H₀‖ = ‖Y(Y^*X)^{−1}Y^*‖ = min_{H∈S} ‖H‖.

Having presented the complex versions of the mapping theorems, we now adapt these to the case of real perturbations. Here, "real" refers to the mappings, but not to the vectors that are mapped, because we need to apply the mapping theorems to eigenvectors, which may be complex even if the matrix under consideration is real.

Remark 2.4 If X ∈ C^{m,p}, Y ∈ C^{n,p}, Z ∈ C^{n,k}, W ∈ C^{m,k} are such that rank([X, X̄]) = 2p and rank([Z, Z̄]) = 2k (or, equivalently, rank([Re X, Im X]) = 2p and rank([Re Z, Im Z]) = 2k), then minimal norm solutions to the mapping problem AX = Y and A^*Z = W with A ∈ R^{n,m} can easily be obtained from Theorem 2.2. This follows from the observation that for real A with AX = Y and A^*Z = W we also have AX̄ = Ȳ and A^*Z̄ = W̄, and thus

    A[Re X, Im X] = [Re Y, Im Y]  and  A^T[Re Z, Im Z] = [Re W, Im W].

We can then apply Theorem 2.2 to the real matrices [Re X, Im X], [Re Y, Im Y], [Re Z, Im Z], and [Re W, Im W]. Indeed, whenever there exists a complex matrix A ∈ C^{n,m} satisfying these two identities, then there also exists a real one, because it is easily checked that the minimal norm solutions in Theorem 2.2 are then real.
(Here, we assume that for the case of the spectral norm the real singular value decompositions of [Re X, Im X] and [Re Z, Im Z] are taken, and that the contraction P is also chosen to be real.) A similar observation holds for the solution of the real version of the Hermitian positive semidefinite mapping problem in Theorem 2.3. Since the real version of Theorem 2.2 (and similarly of Theorem 2.3) is straightforward in view of Remark 2.4, we refrain from an explicit statement. The situation, however, changes considerably if the assumptions rank([Re X, Im X]) = 2p and rank([Re Z, Im Z]) = 2k as in
Remark 2.4 are dropped. In this case, it seems that a full characterization of real solutions to the mapping problems is highly challenging and very complicated. Therefore, we only consider the generalization of Theorem 2.3 to real mappings for the special case m = 1, which is in fact the case needed for the computation of the stability radii. We obtain the following two results.

Theorem 2.5 Let x, y ∈ C^n be such that rank([y, ȳ]) = 2. Then the set

    S := { H ∈ R^{n,n} | H = H^T ≥ 0, Hx = y }

is non-empty if and only if x^*y > |x^T y| (which includes the condition x^*y ∈ R). In this case let X := [Re x, Im x] and Y := [Re y, Im y]. Then the matrix

    H := Y(Y^T X)^{−1}Y^T   (2.19)

is well-defined and real symmetric positive semidefinite, and

    S = { H + (I − XX†)K(I − XX†) | K ∈ R^{n,n}, K^T = K ≥ 0 }.   (2.20)

1) The minimal Frobenius norm of an element of S is given by

    min{ ‖H̃‖_F | H̃ ∈ S } = ‖Y(Y^T X)^{−1}Y^T‖_F,

and this minimal norm is uniquely attained by the matrix H in (2.19).

2) The minimal spectral norm of an element of S is given by

    min{ ‖H̃‖ | H̃ ∈ S } = ‖Y(Y^T X)^{−1}Y^T‖,

and the matrix H in (2.19) attains this minimum.

Proof. Let H̃ ∈ S, i.e., H̃^T = H̃ ≥ 0 and H̃x = y. Since H̃ is real, we also have H̃X = Y. Thus, by Theorem 2.3, X and Y satisfy X^T Y = Y^T X and Y^T X > 0. The first condition is equivalent to (Re x)^T Im y = (Re y)^T Im x, which in turn is equivalent to x^*y ∈ R, and the second condition is easily seen to be equivalent to x^*y > |x^T y|. Conversely, if x^*y = y^*x and x^*y > |x^T y|, then Y^T X > 0 and hence H := Y(Y^T X)^{−1}Y^T is positive semidefinite. Moreover, we obviously have HX = Y and thus Hx = y. This implies that H ∈ S.

The inclusion "⊇" in (2.20) is straightforward. For the other inclusion let H̃ ∈ S, i.e., H̃^T = H̃ ≥ 0 and H̃x = y, and thus H̃X = Y. Then, by Theorem 2.3, there exists L ∈ C^{n,n} such that

    H̃ = H + (I − XX†)L^*L(I − XX†) = H + (I − XX†)( Re(L)^T Re(L) + Im(L)^T Im(L) )(I − XX†),
where for the last identity we made use of the fact that H̃ is real. Thus, by setting K = Re(L)^T Re(L) + Im(L)^T Im(L), we get the inclusion "⊆" in (2.20). The norm minimality in 1) and 2) follows immediately from Theorem 2.3, because any real map H̃ ∈ S also satisfies H̃X = Y.

Theorem 2.5 does not consider the case that y and ȳ are linearly dependent. In that case, we obtain the following result.

Theorem 2.6 Let x, y ∈ C^n, y ≠ 0, be such that rank([y, ȳ]) = 1. Then the set

    S = { H ∈ R^{n,n} | H = H^T ≥ 0, Hx = y }

is non-empty if and only if x^*y > 0. In that case

    H := yy^* / (x^*y)   (2.21)

is well-defined and real symmetric positive semidefinite. Furthermore, we have:

1) The minimal Frobenius norm of an element of S is given by

    min{ ‖H̃‖_F | H̃ ∈ S } = ‖y‖² / (x^*y),

and this minimal norm is uniquely attained by the matrix H in (2.21).

2) The minimal spectral norm of an element of S is given by

    min{ ‖H̃‖ | H̃ ∈ S } = ‖y‖² / (x^*y),

and the matrix H in (2.21) attains this minimum.

Proof. If H ∈ S, i.e., H^T = H ≥ 0 and Hx = y, then by Theorem 2.3 (for the case m = 1), we have x^*y > 0. Conversely, assume that x and y satisfy x^*y > 0. Since y and ȳ are linearly dependent, there exists a unimodular α ∈ C such that αy is real. But then also yy^* = (αy)(αy)^* is real, and hence the matrix H in (2.21) is well-defined and real. By Theorem 2.3, it is the unique element of S with minimal Frobenius norm and also an element of S of minimal spectral norm.

Remark 2.7 Note that results similar to Theorem 2.5 and Theorem 2.6 can also be obtained for real negative semidefinite maps. Indeed, for x, y ∈ C^n such that rank([y, ȳ]) = 2, there exists a real negative semidefinite matrix H ∈ R^{n,n} such that Hx = y if and only if x^*y = y^*x and −x^*y > |x^T y|. Furthermore, it follows immediately from Theorem 2.5, by replacing y with −y and H with −H, that a minimal solution in spectral and Frobenius norm is given by

    H = [Re y, Im y] ( [Re y, Im y]^T [Re x, Im x] )^{−1} [Re y, Im y]^T.

An analogous argument holds for Theorem 2.6.
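Theorem 2.6 admits a direct numerical check. In the sketch below (illustrative only), y is real (so y and ȳ are trivially linearly dependent) and x is constructed ad hoc so that x^*y is real and positive; the formula (2.21) then yields a real symmetric positive semidefinite rank-one map of the predicted norm.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
y = rng.standard_normal(n)
z = rng.standard_normal(n)
z -= (z @ y) / (y @ y) * y          # make z orthogonal to y
x = 2.0 * y + 1j * z                # then x^* y = 2 ||y||^2 > 0 is real

xy = (np.conj(x) @ y).real          # x^* y
H = np.outer(y, y) / xy             # H := y y^* / (x^* y)   (2.21)
```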
Therefore, we will refer to Theorem 2.5 and Theorem 2.6 also in the case that we are seeking solutions of the negative semidefinite mapping problem.

The minimal norm solutions for the real symmetric mapping problem with respect to both the spectral norm and the Frobenius norm are well known, see [1]. We do not restate this result in its full generality, but in terms of the following two theorems that are formulated in such a way that they allow a direct application in the remainder of this paper.
Theorem 2.8 Let x, y ∈ C^n \ {0} be such that rank([x, x̄]) = 2. Then

    S := { H ∈ R^{n,n} | H^T = H, Hx = y }

is non-empty if and only if x^*y = y^*x. Furthermore, define X := [Re x, Im x], Y := [Re y, Im y], and

    H := Y X† + (Y X†)^T (I − XX†).   (2.22)

1) The minimal Frobenius norm of an element of S is given by

    min_{H̃∈S} ‖H̃‖_F = ‖H‖_F = ( 2‖Y X†‖²_F − trace(Y X†(Y X†)^T XX†) )^{1/2},

and the minimum is uniquely attained by H in (2.22).

2) To characterize the minimal spectral norm, consider the singular value decomposition X = UΣV^T and let U = [U₁, U₂], where U₁ ∈ R^{n,2}. Then

    min_{H̃∈S} ‖H̃‖ = ‖Y X†‖,

and the minimum is attained by

    Ĥ = H − (I_n − XX†)K (U₁^T Y X† U₁) K^T (I_n − XX†),   (2.23)

where K = Y X† U₁ (µ²I₂ − U₁^T Y X† Y X† U₁)^{−1/2} and µ := ‖Y X†‖.

Proof. Observe that for H^T = H ∈ R^{n,n} the identity Hx = y is equivalent to HX = Y. Thus, the result follows immediately from [1], applied to the mapping problem HX = Y.

Theorem 2.9 Let x, y ∈ C^n \ {0} be such that rank([x, x̄]) = 1. Then

    S := { H ∈ R^{n,n} | H^T = H, Hx = y }

is non-empty if and only if x^*y = y^*x. Furthermore, we have:

1) The minimal Frobenius norm of an element of S is given by

    min_{H̃∈S} ‖H̃‖_F = ( 2‖y‖²/‖x‖² − (x^*y)²/‖x‖⁴ )^{1/2},

and the minimum is uniquely attained by the real matrix

    H := yx^*/‖x‖² + xy^*/‖x‖² − (x^*y) xx^*/‖x‖⁴.   (2.24)

(If x and y are linearly dependent, then H = yx^*/(x^*x).)

2) The minimal spectral norm of an element of S is given by

    min_{H̃∈S} ‖H̃‖ = ‖y‖/‖x‖,

and the minimum is attained by the real matrix

    Ĥ := [ y , (y^*y/x^*x) x ] ( [x, y]^*[x, y] )^{−1} [x, y]^*   (2.25)

if x and y are linearly independent, and by Ĥ := yx^*/(x^*x) otherwise.
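For real x and y (which the situation rank([x, x̄]) = 1 in particular covers), the matrices (2.24) and (2.25) can be formed and tested directly. The sketch below (an illustration under the formulas stated above) checks the mapping property, the symmetry, and the two minimal norm values.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5
x = rng.standard_normal(n)     # real x: rank([x, conj(x)]) = 1
y = rng.standard_normal(n)

a = x @ x                      # ||x||^2
b = x @ y                      # x^* y (real here)
H = (np.outer(y, x) + np.outer(x, y)) / a - b * np.outer(x, x) / a**2   # (2.24)
frob_min = np.sqrt(2 * (y @ y) / a - b**2 / a**2)

# Spectral-norm minimizer (2.25), x and y linearly independent
P = np.column_stack([x, y])
mu2 = (y @ y) / a
Hhat = np.column_stack([y, mu2 * x]) @ np.linalg.inv(P.T @ P) @ P.T
```

Note that the Frobenius minimizer H generally has spectral norm strictly larger than ‖y‖/‖x‖; the two optimizers differ.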
Proof. By [1] (see also [20, Theorem 2.1] and [16]), the matrices H and Ĥ are the minimal Frobenius resp. spectral norm solutions to the complex Hermitian mapping problem Hx = y. Thus, it only remains to show that H and Ĥ are real. Since x and x̄ are linearly dependent, there exists a unimodular α ∈ C such that αx is real. Since S is non-empty, y = H̃x for some real H̃, so αy = H̃(αx) is real as well, and thus xx^* = (αx)(αx)^* and yx^* = (αy)(αx)^* are real, which implies the realness of H. Analogously, Ĥ can be shown to be real.

Obviously, the minimal Frobenius or spectral norm solutions from Theorem 2.9 have either rank one or two. The following lemma characterizes the rank of the minimal Frobenius or spectral norm solutions from Theorem 2.8, as well as the number of their negative and positive eigenvalues, respectively.

Lemma 2.10 Let x, y ∈ C^n \ {0} be such that x^*y = y^*x and rank([x, x̄]) = 2. If H and Ĥ are defined as in (2.22) and (2.23), respectively, then rank(H), rank(Ĥ) ≤ 4, and both H and Ĥ have at most two negative eigenvalues and at most two positive eigenvalues.

Proof. Recall from Theorem 2.8 that X = [Re x, Im x], Y = [Re y, Im y], and consider the singular value decomposition X = UΣV^T = U[Σ₁ ; 0]V^T with U = [U₁, U₂] ∈ R^{n,n}, U₁ ∈ R^{n,2}, Σ₁ ∈ R^{2,2}, and V ∈ R^{2,2}. If we set Y = U[Y₁ ; Y₂]V^T, where Y₁ ∈ R^{2,2} and Y₂ ∈ R^{n−2,2}, then

    H = Y X† + (Y X†)^T(I − XX†) = U [ Y₁Σ₁^{−1} , Σ₁^{−1}Y₂^T ; Y₂Σ₁^{−1} , 0 ] U^T.   (2.26)

Thus, H is of rank at most four, and H also has an (n−2)-dimensional neutral subspace, i.e., a subspace V ⊆ R^n satisfying z^T H z̃ = 0 for all z, z̃ ∈ V (namely the span of the columns of U₂). This means that the restriction of H to its range still has a neutral subspace of dimension at least max{0, rank(H) − 2}, i.e., of dimension at least 2 if rank(H) = 4, at least 1 if rank(H) = 3, and 0 if rank(H) ≤ 2. Thus, it follows by applying [7] that H has at most two negative and at most two positive eigenvalues.

On the other hand, we have

    Ĥ = H − (I_n − XX†)K(U₁^T Y X† U₁)K^T(I_n − XX†),   (2.27)

where K = Y X† U₁ W, W := (µ²I₂ − U₁^T Y X† Y X† U₁)^{−1/2}, and µ := ‖Y X†‖. Then we obtain

    K = U [ Y₁Σ₁^{−1}W ; Y₂Σ₁^{−1}W ],

and hence, with Z := Y₁Σ₁^{−1} = Σ₁^{−1}Y₁^T and L := Y₂Σ₁^{−1},

    (I_n − XX†)K(U₁^T Y X† U₁)K^T(I_n − XX†) = U [ 0 , 0 ; 0 , LWZW L^T ] U^T.   (2.28)

Inserting (2.26) and (2.28) into (2.27), we get

    Ĥ = U [ Z , L^T ; L , −LWZW L^T ] U^T,

so that U^T Ĥ U and Z are real symmetric. Let Ũ ∈ R^{n−2,n−2} be invertible such that ŨL = [ L̃ ; 0 ] is in row echelon form, where L̃ = [ l₁₁ , l₁₂ ; 0 , l₂₂ ] ∈ R^{2,2}. Then the congruence transformation with diag(I₂, Ũ) turns U^T Ĥ U into a matrix that, after a permutation, has the block form diag(Z₁₁, 0), where

    Z₁₁ = [ Z , L̃^T ; L̃ , −L̃WZW L̃^T ] ∈ R^{4,4}.

This implies that rank(Ĥ) = rank(Z₁₁) ≤ 4. Furthermore, by Sylvester's law of inertia, Ĥ and diag(Z₁₁, 0) have the same number of negative eigenvalues. Thus, the assertion follows if we show that Z₁₁ has at most two negative eigenvalues and at most two positive eigenvalues.

For this, first suppose that L̃ is singular, i.e., l₂₂ = 0, which implies that rank(Z₁₁) ≤ 3. If rank(Z₁₁) < 3, then clearly Z₁₁ can have at most two negative and at most two positive eigenvalues. If rank(Z₁₁) = 3, then Z₁₁ is indefinite. Indeed, if Z is indefinite then so is Z₁₁; and if Z is positive (negative) semidefinite, then −L̃WZW L̃^T is negative (positive) semidefinite, so that Z₁₁ has two real symmetric matrices of opposite definiteness as diagonal blocks. This shows that Z₁₁ is indefinite, and thus it can have at most two negative eigenvalues and at most two positive eigenvalues.

If L̃ is invertible, then

    [ I₂ , 0 ; 0 , W^{−1}L̃^{−1} ] [ Z , L̃^T ; L̃ , −L̃WZW L̃^T ] [ I₂ , 0 ; 0 , L̃^{−T}W^{−1} ] = [ Z , W^{−1} ; W^{−1} , −Z ]

is a real symmetric Hamiltonian matrix. Therefore, by using the Hamiltonian spectral symmetry with respect to the imaginary axis, it follows that Z₁₁ has two positive and two negative eigenvalues.

In the next section we will discuss real stability radii under restricted perturbations, where the restrictions will be expressed with the help of a restriction matrix B ∈ R^{n,r}. To deal with those, we will need the following lemmas.

Lemma 2.11 ([20]) Let B ∈ R^{n,r} with rank(B) = r, let y ∈ C^r \ {0}, and let z ∈ C^n \ {0}. Then for all A ∈ R^{r,r} we have BAy = z if and only if Ay = B†z and BB†z = z.

Lemma 2.12 Let B ∈ R^{n,r} with rank(B) = r, let y ∈ C^r \ {0} and z ∈ C^n \ {0}.

1) If rank([z, z̄]) = 1, then there exists a positive semidefinite A = A^T ∈ R^{r,r} satisfying BAy = z if and only if BB†z = z and y^*B†z > 0.

2) If rank([z, z̄]) = 2, then there exists a positive semidefinite A = A^T ∈ R^{r,r} satisfying BAy = z if and only if BB†z = z and y^*B†z > |y^T B†z|.

Proof. Let A ∈ R^{r,r}; then by Lemma 2.11 we have that BAy = z if and only if

    BB†z = z and Ay = B†z.   (2.29)

If rank([z, z̄]) = 1, then by Theorem 2.6 the identity (2.29) is equivalent to BB†z = z and y^*B†z > 0. If rank([z, z̄]) = 2, then (2.29) is equivalent to

    BB†[z, z̄] = [z, z̄] and A[y, ȳ] = [B†z, B†z̄],   (2.30)

because A is real. Note that rank([B†z, B†z̄]) = 2, because otherwise there would exist α ∈ C² \ {0} such that [B†z, B†z̄]α = 0. But this would imply that [z, z̄]α = [BB†z, BB†z̄]α = 0, in contradiction to the fact that rank([z, z̄]) = 2. Thus, by Theorem 2.5, there exists 0 ≤ A ∈ R^{r,r} satisfying (2.30) if and only if BB†z = z and y^*B†z > |y^T B†z|.

The following is a version of Lemma 2.12 without semidefiniteness.

Lemma 2.13 Let B ∈ R^{n,r} with rank(B) = r, let y ∈ C^r \ {0} and z ∈ C^n \ {0}.
Then there exists a real symmetric matrix A ∈ R^{r,r} satisfying BAy = z if and only if BB†z = z and y^*B†z ∈ R.

Proof. The proof is analogous to that of Lemma 2.12, using Theorem 2.8 and Theorem 2.9 instead of Theorem 2.5 and Theorem 2.6.

In this section we have presented several mapping theorems, in particular for the real case. These will be used in the next section to determine formulas for the real stability radii of dH systems.
3 Real stability radii for dH systems

Consider a real linear time-invariant dissipative Hamiltonian (dH) system

    ẋ = (J − R)Qx,   (3.1)

where J, R, Q ∈ R^{n,n} are such that J^T = −J, R^T = R ≥ 0, and Q^T = Q > 0. For perturbations in the dissipation matrix we define the following real distances to instability.

Definition 3.1 Consider a real dH system of the form (3.1) and let B ∈ R^{n,r} and C ∈ R^{q,n} be given restriction matrices. Then for p ∈ {2, F} we define the following stability radii.

1) The stability radius r_{R,p}(R; B, C) of the system (3.1) with respect to real general perturbations ∆ to R under the restriction (B, C) is defined by

    r_{R,p}(R; B, C) := inf{ ‖∆‖_p | ∆ ∈ R^{r,q}, Λ((J − R)Q − (B∆C)Q) ∩ iR ≠ ∅ }.

2) The stability radius r^{Sd}_{R,p}(R; B) of the system (3.1) with respect to real structure-preserving semidefinite perturbations ∆ from the set

    S_d(R, B) := { ∆ ∈ R^{r,r} | ∆^T = ∆ ≤ 0 and R + B∆B^T ≥ 0 }   (3.2)

under the restriction (B, B^T) is defined by

    r^{Sd}_{R,p}(R; B) := inf{ ‖∆‖_p | ∆ ∈ S_d(R, B), Λ((J − R)Q − (B∆B^T)Q) ∩ iR ≠ ∅ }.

3) The stability radius r^{Si}_{R,p}(R; B) of the system (3.1) with respect to structure-preserving indefinite perturbations ∆ from the set

    S_i(R, B) := { ∆ ∈ R^{r,r} | ∆^T = ∆ and R + B∆B^T ≥ 0 }   (3.3)

under the restriction (B, B^T) is defined by

    r^{Si}_{R,p}(R; B) := inf{ ‖∆‖_p | ∆ ∈ S_i(R, B), Λ((J − R)Q − (B∆B^T)Q) ∩ iR ≠ ∅ }.

The formula for the stability radius r_{R,2}(R; B, C) is a direct consequence of the following well-known result.

Theorem 3.2 ([23]) For a given M ∈ C^{p,m}, define

    µ_R(M) := ( inf{ ‖∆‖ | ∆ ∈ R^{m,p}, det(I_m − ∆M) = 0 } )^{−1}.

Then

    µ_R(M) = inf_{γ∈(0,1]} σ₂( [ Re M , −γ Im M ; γ^{−1} Im M , Re M ] ),

where σ₂(A) denotes the second largest singular value of a matrix A. Furthermore, an optimal ∆ that attains the value of µ_R(M) can be chosen of rank at most two.

Applying this theorem to dH systems we obtain the following corollary.
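The formula of Theorem 3.2 suggests a straightforward grid-based (hence only approximate) evaluation of µ_R. The function below is a naive sketch for illustration, not an optimized or certified method; the grid resolution is an arbitrary choice.

```python
import numpy as np

def mu_real(M, grid=2000):
    """Approximate mu_R(M) via the second-singular-value formula of
    Theorem 3.2, sampling gamma on a grid in (0, 1]."""
    best = np.inf
    for g in np.linspace(1e-3, 1.0, grid):
        T = np.block([[M.real, -g * M.imag],
                      [M.imag / g, M.real]])
        s2 = np.linalg.svd(T, compute_uv=False)[1]   # second largest singular value
        best = min(best, s2)
    return best
```

For a real matrix M the infimand is constant in γ and µ_R(M) reduces to the largest singular value of M, while for M = [i] the infimum is 0 (no real ∆ can make 1 − ∆·i vanish).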
Corollary 3.3 Consider an asymptotically stable dH system of the form (3.1) and let B ∈ R^{n,r} and C ∈ R^{q,n} be given restriction matrices. Then

    r_{R,2}(R; B, C) = ( inf_{ω∈R} inf_{γ∈(0,1]} σ₂( [ Re M(ω) , −γ Im M(ω) ; γ^{−1} Im M(ω) , Re M(ω) ] ) )^{−1}   (3.4)

and

    r_{R,2}(R; B, C) ≤ r_{R,F}(R; B, C) ≤ √2 · r_{R,2}(R; B, C),   (3.5)

where M(ω) := CQ((J − R)Q − iωI_n)^{−1}B.

Proof. By definition, we have

    r_{R,2}(R; B, C) = inf{ ‖∆‖ | ∆ ∈ R^{r,q}, Λ((J − R)Q − (B∆C)Q) ∩ iR ≠ ∅ }
        = inf{ ‖∆‖ | ∆ ∈ R^{r,q}, det(iωI_n − (J − R)Q + B∆CQ) = 0 for some ω ∈ R }
        = inf{ ‖∆‖ | ∆ ∈ R^{r,q}, det(I_n + (iωI_n − (J − R)Q)^{−1}B∆CQ) = 0 for some ω ∈ R },

where the last equality follows since (J − R)Q is asymptotically stable, so that the inverse of (J − R)Q − iωI_n exists for all ω ∈ R. Thus, using det(I + ST) = det(I + TS) and (iωI_n − (J − R)Q)^{−1} = −((J − R)Q − iωI_n)^{−1}, we have

    r_{R,2}(R; B, C) = inf{ ‖∆‖ | ∆ ∈ R^{r,q}, det(I_r − ∆M(ω)) = 0 for some ω ∈ R }
        = inf_{ω∈R} ( µ_R(M(ω)) )^{−1},

where M(ω) := CQ((J − R)Q − iωI_n)^{−1}B. Therefore, (3.4) follows from Theorem 3.2, and if ∆ is an optimal perturbation of rank at most two with ‖∆‖ = r_{R,2}(R; B, C), then (3.5) follows from the definition of r_{R,F}(R; B, C) and the fact that

    r_{R,2}(R; B, C) ≤ r_{R,F}(R; B, C) ≤ ‖∆‖_F ≤ √(rank(∆)) · ‖∆‖ ≤ √2 · r_{R,2}(R; B, C).

To derive the real structured stability radii, we follow the strategy in [20] for the complex case and reformulate the problem of computing r^{Sd}_{R,p}(R; B) or r^{Si}_{R,p}(R; B) in terms of real structured mapping problems. The following lemma of [20] (expressed here for real matrices) gives a characterization when a dH system of the form (3.1) has eigenvalues on the imaginary axis.

Lemma 3.4 ([20, Lemma 3.1]) Let J, R, Q ∈ R^{n,n} be such that J^T = −J, R^T = R ≥ 0, and Q^T = Q > 0. Then (J − R)Q has an eigenvalue on the imaginary axis if and only if RQx = 0 for some eigenvector x of JQ. Furthermore, all purely imaginary eigenvalues of (J − R)Q are semisimple.

After these preliminaries we obtain the following formulas for the stability radii.

3.1 The stability radius r^{Sd}_{R,p}(R; B)

To derive formulas for the stability radius under real structure-preserving restricted and semidefinite perturbations we need the following two lemmas.
Lemma 3.5 Let Δ = Δ^T ∈ ℝ^{n,n} be positive semidefinite. Then |x^T Δ x| ≤ x^* Δ x for all x ∈ ℂ^n, and equality holds if and only if x and x̄ are linearly dependent.
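Before the proof, a quick numerical sanity check of this inequality (an illustration only, not part of the paper; the matrices below are randomly generated):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
Delta = A @ A.T                      # Delta = Delta^T positive semidefinite
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)

lhs = abs(x @ Delta @ x)             # |x^T Delta x|  (no conjugation)
rhs = (x.conj() @ Delta @ x).real    # x^* Delta x    (real and nonnegative)
assert lhs <= rhs

# Equality when x and its conjugate are linearly dependent, e.g. x real:
xr = rng.standard_normal(4).astype(complex)
assert np.isclose(abs(xr @ Delta @ xr), (xr.conj() @ Delta @ xr).real)
```

Note that `x @ Delta @ x` in NumPy does not conjugate, which is exactly the bilinear form x^T Δ x of the lemma.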
Proof. Let S ∈ ℝ^{n,n} be a symmetric positive semidefinite square root of Δ, i.e., S² = Δ. Then using the Cauchy–Schwarz inequality we obtain

|x^T Δ x| = |⟨Sx, Sx̄⟩| ≤ ‖Sx̄‖ · ‖Sx‖ = √(x^T Δ x̄) · √(x^* Δ x) = x^* Δ x,

because x^T Δ x̄ = conj(x^* Δ x) = x^* Δ x, as Δ is real and x^* Δ x is real. In particular, equality holds if and only if Sx and Sx̄ are linearly dependent, which is easily seen to be equivalent to the linear dependence of x and x̄. □

Lemma 3.6 Let R, W ∈ ℝ^{n,n} be such that R^T = R ≥ 0 and W^T = W > 0. If x ∈ ℂ^n is such that rank[x  x̄] = 2 and x^* W^T R W x > |x^T W^T R W x|, set

Δ_R := −RW[Re x  Im x] ( [Re x  Im x]^T W R W [Re x  Im x] )^{−1} [Re x  Im x]^T W R.

Then R + Δ_R is symmetric positive semidefinite.

Proof. Since R and W are real symmetric and x^* W^T R W x > |x^T W^T R W x|, the matrix [Re x  Im x]^T W R W [Re x  Im x] is real symmetric and positive definite and, therefore, Δ_R is well defined and we have Δ_R = Δ_R^T ≤ 0. We prove that R + Δ_R ≥ 0 by showing that all its eigenvalues are nonnegative. Since W is nonsingular, we have that Wx ≠ 0. Also, Δ_R is a real matrix of rank two satisfying

Δ_R W [Re x  Im x] = −RW [Re x  Im x].

This implies that

(R + Δ_R) W [Re x  Im x] = 0. (3.6)

Since rank[Re x  Im x] = rank[x  x̄] = 2 and since W is nonsingular, we have that W Re x and W Im x are linearly independent eigenvectors of R + Δ_R corresponding to the eigenvalue zero. Let λ₁, …, λ_n be the eigenvalues of R and let η₁, …, η_n be the eigenvalues of R + Δ_R, where both lists are arranged in nondecreasing order, i.e., 0 ≤ λ₁ ≤ ⋯ ≤ λ_n and η₁ ≤ ⋯ ≤ η_n. Since Δ_R is of rank two, by the Cauchy interlacing theorem [6],

λ_k ≤ η_{k+2} and η_k ≤ λ_{k+2} for k = 1, …, n − 2. (3.7)

This implies that 0 ≤ η₃ ≤ ⋯ ≤ η_n, and thus the assertion follows once we show that η₁ = 0 and η₂ = 0. If R is positive definite, then λ₁, …, λ_n satisfy 0 < λ₁ ≤ ⋯ ≤ λ_n and, therefore, 0 < η₃ ≤ ⋯ ≤ η_n. Therefore we must have η₁ = 0 and η₂ = 0 by (3.6). If R is positive semidefinite but singular, then let k be the dimension of the kernel of R. We then have k < n, because R ≠ 0.
Letting l be the dimension of the kernel of R + Δ_R, using (3.7) we have that k − 2 ≤ l ≤ k + 2, and we have η₁ = 0 and η₂ = 0 if we show that l = k + 2. Since W is nonsingular, the kernels of R and RW have the same dimension k. Let x₁, …, x_k be linearly independent eigenvectors of RW associated with the eigenvalue zero, i.e., we have RWx_i = 0 for all i = 1, …, k. Then Δ_R W x_i = 0 for all i = 1, …, k, and hence (R + Δ_R) W x_i = 0 for all i = 1, …, k. The linear independence of x₁, …, x_k together with the nonsingularity of W implies that Wx₁, …, Wx_k are linearly independent. By (3.6) we have

(R + Δ_R)W Re x = 0 and (R + Δ_R)W Im x = 0.
Moreover, the vectors W Re x, W Im x, Wx₁, …, Wx_k are linearly independent. Indeed, let α, β, α₁, …, α_k ∈ ℝ be such that

αW Re x + βW Im x + α₁Wx₁ + ⋯ + α_kWx_k = 0.

Then we have R(αW Re x + βW Im x) = 0, as RWx_i = 0 for all i = 1, …, k. This implies that α = 0 and β = 0, because RW Re x and RW Im x are linearly independent. The linear independence of Wx₁, …, Wx_k then implies that α_i = 0 for all i = 1, …, k, and hence W Re x, W Im x, Wx₁, …, Wx_k are linearly independent eigenvectors of R + Δ_R corresponding to the eigenvalue zero. Thus, the dimension of the kernel of R + Δ_R is at least k + 2 and hence we must have η₁ = 0 and η₂ = 0. □

Using these lemmas, we obtain a formula for the structured real stability radius of DH systems.

Theorem 3.7 Consider an asymptotically stable DH system of the form (3.1). Let B ∈ ℝ^{n,r} with rank(B) = r, and let p ∈ {2, F}. Furthermore, for j ∈ {1, 2} let Ω_j denote the set of all eigenvectors x of JQ such that (I_n − BB†)RQx = 0 and rank[RQx  RQx̄] = j, and let Ω := Ω₁ ∪ Ω₂. Then r^{S_d}_{ℝ,p}(R; B) is finite if and only if Ω is non-empty. If this is the case, then

r^{S_d}_{ℝ,p}(R; B) = min { inf_{x∈Ω₁} ‖(B†RQx)(B†RQx)^*‖_p / (x^* QRQx),  inf_{x∈Ω₂} ‖Y(Y^TX)^{−1}Y^T‖_p }, (3.8)

where X = B^TQ[Re x  Im x] and Y = B†RQ[Re x  Im x].

Proof. By definition, we have

r^{S_d}_{ℝ,p}(R; B) := inf { ‖Δ‖_p : Δ ∈ S_d(R, B), Λ((J − R)Q − (BΔB^T)Q) ∩ iℝ ≠ ∅ },

where S_d(R, B) = { Δ ∈ ℝ^{r,r} : Δ^T = Δ ≤ 0 and (R + BΔB^T) ≥ 0 }. By using Lemma 3.4, we obtain that

r^{S_d}_{ℝ,p}(R; B) = inf { ‖Δ‖_p : Δ ∈ S_d(R, B), (R + BΔB^T)Qx = 0 for some eigenvector x of JQ }
= inf { ‖Δ‖_p : Δ ∈ S_d(R, B), BΔB^TQx = −RQx for some eigenvector x of JQ }
= inf { ‖Δ‖_p : Δ ∈ S_d(R, B), ΔB^TQx = −B†RQx for some x ∈ Ω }, (3.9)

since by Lemma 2.11 we have BΔB^TQx = −RQx if and only if ΔB^TQx = −B†RQx and BB†RQx = RQx, and thus x ∈ Ω. From (3.9) and S_d(R, B) ⊆ { Δ ∈ ℝ^{r,r} : Δ^T = Δ ≤ 0 }, we obtain that

r^{S_d}_{ℝ,p}(R; B) ≥ inf { ‖Δ‖_p : Δ = Δ^T ∈ ℝ^{r,r}, Δ ≤ 0, ΔB^TQx = −B†RQx for some x ∈ Ω } =: ϱ. (3.10)

The infimum on the right-hand side of (3.10) is finite if and only if Ω is non-empty.
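As a numerical aside (not part of the paper), the rank-two construction of Lemma 3.6 is easy to verify for random data: with Z = [Re x, Im x], the matrix Δ_R = −RWZ(Z^T W R W Z)^{−1}Z^T W R is symmetric, (R + Δ_R)WZ = 0, and R + Δ_R remains positive semidefinite. The sketch below checks exactly this for a random instance.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
R = A @ A.T                            # R = R^T >= 0
H = rng.standard_normal((n, n))
W = H @ H.T + n * np.eye(n)            # W = W^T > 0
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
Z = np.column_stack([x.real, x.imag])  # [Re x, Im x], rank two generically

G = Z.T @ W @ R @ W @ Z                # 2x2, symmetric positive definite here
DeltaR = -R @ W @ Z @ np.linalg.solve(G, Z.T @ W @ R)

assert np.allclose(DeltaR, DeltaR.T)                    # symmetric, rank <= 2
assert np.allclose((R + DeltaR) @ W @ Z, 0, atol=1e-6)  # equation (3.6)
assert np.linalg.eigvalsh(R + DeltaR).min() > -1e-6     # R + Delta_R >= 0
```

A random complex x satisfies the hypothesis x^* WRW x > |x^T WRW x| with probability one, which is why no explicit check of that condition is included.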
The same will then also hold for r^{S_d}_{ℝ,p}(R; B) if we show equality in (3.10). To this end we will use the abbreviations

ϱ_j := inf { ‖Δ‖_p : Δ = Δ^T ∈ ℝ^{r,r}, Δ ≤ 0, ΔB^TQx = −B†RQx for some x ∈ Ω_j }
for j ∈ {1, 2}, i.e., we have ϱ = min{ϱ₁, ϱ₂}, and we consider two cases.

Case (1): ϱ = ϱ₁. If x ∈ Ω₁, then BB†RQx = RQx, and RQx and RQx̄ are linearly dependent. But then also B†RQx and B†RQx̄ are linearly dependent and hence, by Lemma 2.12, there exists Δ ∈ ℝ^{r,r} such that Δ ≤ 0 and ΔB^TQx = −B†RQx if and only if x^*QRQx > 0. This condition is satisfied for all x ∈ Ω₁. Indeed, since R is positive semidefinite, we find that x^*QRQx ≥ 0, and x^*QRQx = 0 would imply RQx = 0 and thus (J − R)Qx = JQx, which means that x is an eigenvector of (J − R)Q associated with an eigenvalue on the imaginary axis, in contradiction to the assumption that (J − R)Q only has eigenvalues in the open left half plane. Using minimal norm mappings from Theorem 2.6 we thus have

r^{S_d}_{ℝ,p}(R; B) ≥ ϱ₁ = inf { ‖Δ‖_p : Δ = Δ^T ∈ ℝ^{r,r}, Δ ≤ 0, ΔB^TQx = −B†RQx, x ∈ Ω₁ }
= inf_{x∈Ω₁} ‖(B†RQx)(B†RQx)^*‖_p / (x^*QBB†RQx)
= inf_{x∈Ω₁} ‖(B†RQx)(B†RQx)^*‖_p / (x^*QRQx). (3.11)

As the expression in (3.11) is invariant under scaling of x, it is sufficient to take the infimum over all x ∈ Ω₁ of norm one. Then a compactness argument shows that the infimum is actually attained for some x̂ ∈ Ω₁, and by [20, Theorem 4.2] the matrix

Δ̂ := −(B†RQx̂)(B†RQx̂)^* / (x̂^*QRQx̂)

is the unique (resp. a) complex matrix of minimal Frobenius (resp. spectral) norm satisfying Δ̂^T = Δ̂ ≤ 0 and Δ̂B^TQx̂ = −B†RQx̂. Also, by Theorem 2.6 the matrix Δ̂ is real, and by [20, Lemma 4.1] we have R + BΔ̂B^T ≥ 0. Thus Δ̂ ∈ S_d(R, B). But this means that r^{S_d}_{ℝ,p}(R; B) = ϱ₁ = ϱ.

Case (2): ϱ = ϱ₂. If x ∈ Ω₂, then BB†RQx = RQx, and Re RQx and Im RQx are linearly independent. But then also Re B†RQx and Im B†RQx are linearly independent, because otherwise also Re RQx = Re BB†RQx and Im RQx = Im BB†RQx would be linearly dependent. Thus, by Lemma 2.12 there exists Δ ∈ ℝ^{r,r} such that Δ ≤ 0 and ΔB^TQx = −B†RQx if and only if x^*QRQx > |x^TQRQx|. By Lemma 3.5 this condition is satisfied for all x ∈ Ω₂, because R, and thus QRQ, is positive semidefinite.
Using minimal norm mappings from Theorem 2.5 we then obtain

r^{S_d}_{ℝ,p}(R; B) ≥ ϱ₂ = inf { ‖Δ‖_p : Δ = Δ^T ∈ ℝ^{r,r}, Δ ≤ 0, ΔB^TQx = −B†RQx, x ∈ Ω₂ } (3.12)
= inf { ‖Y(Y^TX)^{−1}Y^T‖_p : X = B^TQ[Re x  Im x], Y = B†RQ[Re x  Im x], x ∈ Ω₂ }
= ‖Ỹ(Ỹ^TX̃)^{−1}Ỹ^T‖_p for some x̃ ∈ Ω₂,

where X̃ = B^TQ[Re x̃  Im x̃] and Ỹ = B†RQ[Re x̃  Im x̃]. Indeed, the matrix Y(Y^TX)^{−1}Y^T is invariant under scaling of x, so that it is sufficient to take the infimum over all x ∈ Ω₂ having norm one. Then a compactness argument shows that the infimum is actually a minimum and attained for some x̃ ∈ Ω₂. Then setting

Δ̂_R := −Ỹ(Ỹ^TX̃)^{−1}Ỹ^T,
we have equality in (3.12) if we show that R + BΔ̂_RB^T is positive semidefinite, because this would imply that Δ̂_R ∈ S_d(R, B). But this follows from Lemma 3.6 by noting that BB†RQ[Re x̃  Im x̃] = RQ[Re x̃  Im x̃]. Indeed, by the definition of Ω₂, the vectors Re RQx̃ and Im RQx̃ are linearly independent, and

R + BΔ̂_RB^T = R − BỸ(Ỹ^TX̃)^{−1}Ỹ^TB^T
= R − BB†RQ[Re x̃  Im x̃] ( [Re x̃  Im x̃]^TQRQ[Re x̃  Im x̃] )^{−1} [Re x̃  Im x̃]^TQR(B†)^TB^T
= R − RQ[Re x̃  Im x̃] ( [Re x̃  Im x̃]^TQRQ[Re x̃  Im x̃] )^{−1} [Re x̃  Im x̃]^TQR

is positive semidefinite by Lemma 3.6 (applied with W = Q). This proves that r^{S_d}_{ℝ,p}(R; B) = ϱ₂ = ϱ. □

Remark 3.8 In the case that R > 0 and J is invertible, the set Ω₁ from Theorem 3.7 is empty and hence Ω = Ω₂, because if R > 0 and if x is an eigenvector of JQ with rank[RQx  RQx̄] = 1, then x is necessarily an eigenvector associated with the eigenvalue zero of JQ. Indeed, if RQx and RQx̄ are linearly dependent, then x and x̄ are linearly dependent, because RQ is nonsingular as R > 0 and Q > 0. This is only possible if x is associated with a real eigenvalue, and since the eigenvalues of JQ are on the imaginary axis, this eigenvalue must be zero; but zero is not an eigenvalue of JQ, since J and Q are invertible.

Remark 3.9 We mention that in the case that R and J are invertible, r^{S_d}_{ℝ,p}(R; B) is also the minimal norm of a structure-preserving perturbation of rank at most two that moves an eigenvalue of (J − R)Q to the imaginary axis. To be more precise, if

S₂(R, B) := { Δ ∈ ℝ^{r,r} : Δ^T = Δ, rank(Δ) ≤ 2, and (R + BΔB^T) ≥ 0 } (3.13)

and

r^{S₂}_{ℝ,p}(R; B) := inf { ‖Δ‖_p : Δ ∈ S₂(R, B), Λ((J − R)Q − (BΔB^T)Q) ∩ iℝ ≠ ∅ },

then we have r^{S₂}_{ℝ,p}(R; B) = r^{S_d}_{ℝ,p}(R; B). Indeed, assume that Δ ∈ S₂(R, B) is such that (J − R)Q − (BΔB^T)Q has an eigenvalue on the imaginary axis. By Lemma 3.4 we then have (R + BΔB^T)Qx = 0 for some eigenvector x of JQ. Since R and J are invertible, it follows from Remark 3.8 that RQx and RQx̄ are linearly independent. Since Δ has rank at most two, it follows that the kernel of Δ has dimension at least r − 2. Thus, let w₃, …, w_r be linearly independent vectors from the kernel of Δ.
Then B^TQx, B^TQx̄, w₃, …, w_r form a basis of ℂ^r. Indeed, let α₁, …, α_r ∈ ℂ be such that

α₁B^TQx + α₂B^TQx̄ + α₃w₃ + ⋯ + α_rw_r = 0. (3.14)

Then multiplication with BΔ yields

0 = α₁BΔB^TQx + α₂BΔB^TQx̄ = −α₁RQx − α₂RQx̄,

and we obtain α₁ = α₂ = 0, because RQx and RQx̄ are linearly independent. But then (3.14) and the linear independence of w₃, …, w_r imply α₃ = ⋯ = α_r = 0. Thus, setting T := [B^TQx, B^TQx̄, w₃, …, w_r], we obtain that T is invertible and T^*ΔT = diag(D, 0), where

D = [ x^*QBΔB^TQx  x^*QBΔB^TQx̄ ; x^TQBΔB^TQx  x^TQBΔB^TQx̄ ] = −[ x^*QRQx  x^*QRQx̄ ; x^TQRQx  x^TQRQx̄ ].
Since by Lemma 3.5 we have x^*QRQx > |x^TQRQx|, it follows that −D is positive definite, which implies Δ ≤ 0 and hence Δ ∈ S_d(R, B). Therefore r^{S₂}_{ℝ,p}(R; B) ≥ r^{S_d}_{ℝ,p}(R; B). The reverse inequality is trivial, as minimal norm elements from S_d(R, B) have rank at most two.

In this subsection we have characterized the real structured restricted stability radius under positive semidefinite perturbations to the dissipation matrix R. In the next subsection we extend these results to indefinite perturbations that keep R + BΔB^T semidefinite.

3.2 The stability radius r^{S_i}_{ℝ,p}(R; B)

If the perturbation matrices Δ are allowed to be indefinite, then the perturbation analysis is more complicated. We start with the following lemma.

Lemma 3.10 Let 0 < R = R^T ∈ ℝ^{n,n} and Δ_R = Δ_R^T ∈ ℝ^{n,n} be such that Δ_R has at most two negative eigenvalues. If dim(Ker(R + Δ_R)) = 2, then R + Δ_R ≥ 0.

Proof. Let λ₁, …, λ_n be the eigenvalues of Δ_R. As Δ_R has at most two negative eigenvalues, we may assume that λ₃, λ₄, …, λ_n ≥ 0, and we have the spectral decomposition

Δ_R = Σ_{i=1}^{n} λ_i u_i u_i^T

with unit norm vectors u₁, …, u_n ∈ ℝ^n. Since

R̃ := R + Σ_{i=3}^{n} λ_i u_i u_i^T > 0

and λ₁u₁u₁^T + λ₂u₂u₂^T is of rank at most two, we can apply the Cauchy interlacing theorem and obtain that R + Δ_R = R̃ + λ₁u₁u₁^T + λ₂u₂u₂^T has at least n − 2 positive eigenvalues. But then, using the fact that dim(Ker(R + Δ_R)) = 2, we get R + Δ_R ≥ 0. □

Using this lemma, we obtain the following results on the stability radius r^{S_i}_{ℝ,p}(R; B).

Theorem 3.11 Consider an asymptotically stable DH system of the form (3.1). Let B ∈ ℝ^{n,r} with rank(B) = r and let p ∈ {2, F}. Furthermore, for j ∈ {1, 2} let Ω_j denote the set of all eigenvectors x of JQ such that BB†RQx = RQx and rank[B^TQx  B^TQx̄] = j, and let Ω = Ω₁ ∪ Ω₂.

1) If R > 0, then r^{S_i}_{ℝ,p}(R; B) is finite if and only if Ω is non-empty.
In that case we have

r^{S_i}_{ℝ,2}(R; B) = min { inf_{x∈Ω₁} ‖B†RQx‖₂ / ‖B^TQx‖₂,  inf_{x∈Ω₂} ‖YX†‖₂ } (3.15)

and

r^{S_i}_{ℝ,F}(R; B) = min { inf_{x∈Ω₁} ‖B†RQx‖₂ / ‖B^TQx‖₂,  inf_{x∈Ω₂} √( 2‖YX†‖_F² − trace( YX†(YX†)^T XX† ) ) }, (3.16)

where X = [Re B^TQx  Im B^TQx] and Y = [Re B†RQx  Im B†RQx] for x ∈ Ω₂.
2) If R ≥ 0 is singular and if r^{S_i}_{ℝ,p}(R; B) is finite, then Ω is non-empty and we have

r^{S_i}_{ℝ,2}(R; B) ≥ min { inf_{x∈Ω₁} ‖B†RQx‖₂ / ‖B^TQx‖₂,  inf_{x∈Ω₂} ‖YX†‖₂ } (3.17)

and

r^{S_i}_{ℝ,F}(R; B) ≥ min { inf_{x∈Ω₁} ‖B†RQx‖₂ / ‖B^TQx‖₂,  inf_{x∈Ω₂} √( 2‖YX†‖_F² − trace( YX†(YX†)^T XX† ) ) }, (3.18)

where X = [Re B^TQx  Im B^TQx] and Y = [Re B†RQx  Im B†RQx] for x ∈ Ω₂.

Proof. By definition,

r^{S_i}_{ℝ,p}(R; B) = inf { ‖Δ‖_p : Δ ∈ S_i(R, B), Λ((J − R)Q − (BΔB^T)Q) ∩ iℝ ≠ ∅ },

where S_i(R, B) := { Δ = Δ^T ∈ ℝ^{r,r} : (R + BΔB^T) ≥ 0 }. Using Lemma 3.4 and Lemma 2.11 and following the lines of the proof of Theorem 3.7, we get

r^{S_i}_{ℝ,p}(R; B) = inf { ‖Δ‖_p : Δ ∈ S_i(R, B), ΔB^TQx = −B†RQx for some eigenvector x of JQ satisfying BB†RQx = RQx }.

Since all elements of S_i(R, B) are real symmetric, we obtain that

r^{S_i}_{ℝ,p}(R; B) ≥ inf { ‖Δ‖_p : Δ = Δ^T ∈ ℝ^{r,r}, ΔB^TQx = −B†RQx for some x ∈ Ω } =: ϱ^{(p)}. (3.19)

The infimum on the right-hand side of (3.19) is finite if Ω is non-empty, as by Lemma 2.13 for x ∈ Ω there exists Δ = Δ^T ∈ ℝ^{r,r} such that ΔB^TQx = −B†RQx if and only if x^*QBB†RQx ∈ ℝ. This condition is satisfied because BB†RQx = RQx and QRQ is real symmetric, so that x^*QBB†RQx = x^*QRQx ∈ ℝ. If r^{S_i}_{ℝ,p}(R; B) is finite, then Ω is non-empty, because otherwise the right-hand side of (3.19) would be infinite. To continue, we will use the abbreviations

ϱ^{(p)}_j := inf { ‖Δ‖_p : Δ = Δ^T ∈ ℝ^{r,r}, ΔB^TQx = −B†RQx for some x ∈ Ω_j }

for j ∈ {1, 2}, i.e., ϱ^{(p)} = min{ϱ^{(p)}₁, ϱ^{(p)}₂}, and we consider two cases.

Case (1): ϱ^{(p)} = ϱ^{(p)}₁. If x ∈ Ω₁, then BB†RQx = RQx, and B^TQx and B^TQx̄ are linearly dependent. Then, using mappings of minimal spectral resp. Frobenius norm (and again using compactness arguments), we obtain from Theorem 2.9 that

r^{S_i}_{ℝ,p}(R; B) ≥ ϱ^{(p)}₁ = inf { ‖Δ‖_p : Δ = Δ^T ∈ ℝ^{r,r}, ΔB^TQx = −B†RQx, x ∈ Ω₁ }
= inf_{x∈Ω₁} ‖B†RQx‖₂ / ‖B^TQx‖₂ = ‖B†RQx̂‖₂ / ‖B^TQx̂‖₂ for some x̂ ∈ Ω₁.

This proves 2) and the inequality in (3.15) and (3.16) in the case ϱ^{(p)} = ϱ^{(p)}₁.

Case (2): ϱ^{(p)} = ϱ^{(p)}₂. If x ∈ Ω₂, then BB†RQx = RQx, and B^TQx and B^TQx̄ are linearly independent. Using mappings of minimal spectral resp. Frobenius norm (and once more using compactness arguments), we obtain from Theorem 2.8 that

r^{S_i}_{ℝ,2}(R; B) ≥ ϱ^{(2)}₂ = inf { ‖Δ‖₂ : Δ = Δ^T ∈ ℝ^{r,r}, ΔB^TQx = −B†RQx, x ∈ Ω₂ }
= inf { ‖YX†‖₂ : X = [Re B^TQx  Im B^TQx], Y = [Re B†RQx  Im B†RQx], x ∈ Ω₂ }
= ‖ŶX̂†‖₂ for some x̂ ∈ Ω₂,

where X̂ = [Re B^TQx̂  Im B^TQx̂] and Ŷ = [Re B†RQx̂  Im B†RQx̂], and

( r^{S_i}_{ℝ,F}(R; B) )² ≥ ( ϱ^{(F)}₂ )² = inf { ‖Δ‖_F² : Δ = Δ^T ∈ ℝ^{r,r}, ΔB^TQx = −B†RQx, x ∈ Ω₂ }
= inf { 2‖YX†‖_F² − trace( YX†(YX†)^T XX† ) : X = [Re B^TQx  Im B^TQx], Y = [Re B†RQx  Im B†RQx], x ∈ Ω₂ }
= 2‖ỸX̃†‖_F² − trace( ỸX̃†(ỸX̃†)^T X̃X̃† ) for some x̃ ∈ Ω₂,

where X̃ = [Re B^TQx̃  Im B^TQx̃] and Ỹ = [Re B†RQx̃  Im B†RQx̃]. This proves 2) and the inequality in (3.15) and (3.16) in the case ϱ^{(p)} = ϱ^{(p)}₂.

In both cases (1) and (2), it remains to show that equality holds in (3.15) and (3.16) when R > 0. This would also prove that in the case R > 0 the non-emptiness of Ω implies the finiteness of r^{S_i}_{ℝ,F}(R; B). Thus, assume that R > 0 and let Δ̂ = Δ̂^T ∈ ℝ^{r,r} and Δ̃ = Δ̃^T ∈ ℝ^{r,r} be such that they satisfy

Δ̂B^TQx̂ = −B†RQx̂ and Δ̃B^TQx̃ = −B†RQx̃, (3.20)

and such that they are mappings of minimal spectral or Frobenius norm, respectively, as in Theorem 2.8 or Theorem 2.9, respectively. The proof is complete if we show that (R + BΔ̂B^T) ≥ 0 and (R + BΔ̃B^T) ≥ 0, because this would imply that Δ̂ and Δ̃ belong to the set S_i(R, B). In the case ϱ^{(p)} = ϱ^{(p)}₁ this follows exactly as in the proof of [20, Theorem 4.5], which is the corresponding result to Theorem 3.11 in the complex case. (In fact, in this case Δ̂ and Δ̃ coincide with the corresponding complex mappings of minimal norm.) In the case ϱ^{(p)} = ϱ^{(p)}₂, we obtain from Lemma 2.10 that the matrices Δ̂ and Δ̃ are of rank at most four with at most two negative eigenvalues. This implies that also the matrices BΔ̂B^T and BΔ̃B^T individually have at most two negative eigenvalues. Indeed, let B₁ ∈ ℝ^{n,n−r} be such that [B  B₁] ∈ ℝ^{n,n} is invertible; then we can write

BΔ̂B^T = [B  B₁] [ Δ̂  0 ; 0  0 ] [B  B₁]^T,

and by Sylvester's law of inertia also BΔ̂B^T has at most two negative eigenvalues. A similar argument proves the assertion for BΔ̃B^T.
Furthermore, using (3.20), we obtain

(R + BΔ̂B^T)Qx̂ = RQx̂ − BB†RQx̂ = RQx̂ − RQx̂ = 0

and

(R + BΔ̃B^T)Qx̃ = RQx̃ − BB†RQx̃ = RQx̃ − RQx̃ = 0,
since x̂, x̃ ∈ Ω, i.e., BB†RQx̂ = RQx̂ and BB†RQx̃ = RQx̃. Also, rank[x̂  x̂̄] = 2 and rank[x̃  x̃̄] = 2, respectively, imply that

dim( Ker(R + BΔ̂B^T) ) = 2 and dim( Ker(R + BΔ̃B^T) ) = 2.

Thus, Lemma 3.10 implies that (R + BΔ̂B^T) ≥ 0 and (R + BΔ̃B^T) ≥ 0. □

Remark 3.12 It follows from the proof of Theorem 3.11 that in the case of singular R ≥ 0, the inequalities in (3.17) and (3.18) are actually equalities if the minimal norm mappings Δ̂ and Δ̃ from (3.20) satisfy (R + BΔ̂B^T) ≥ 0 and (R + BΔ̃B^T) ≥ 0, respectively. In our numerical experiments this was always the case, so that we conjecture that in general equality holds in (3.17) and (3.18).

4 Numerical experiments

In this section, we present some numerical experiments to illustrate that the real structured stability radii are indeed larger than the real unstructured ones. To compute the distances, in all cases we used the function fminsearch in Matlab (R2009a) to solve the associated optimization problems. We computed the real stability radii r_{ℝ,2}(R; B, B^T), r^{S_d}_{ℝ,2}(R; B) and r^{S_i}_{ℝ,2}(R; B) with respect to real restricted perturbations to R, as obtained in Theorem 3.2, Theorem 3.7, and Theorem 3.11, respectively, and compared them to the corresponding complex distances r_{ℂ,2}(R; B, B^T), r^{S_d}_{ℂ,2}(R; B) and r^{S_i}_{ℂ,2}(R; B) as obtained in [20, Theorem 3.3], [20, Theorem 4.2] and [20, Theorem 4.5], respectively. We chose random matrices J, R, Q, B ∈ ℝ^{n,n} for different values of n ≤ 14 with J^T = −J, R^T = R ≥ 0 and B of full rank, such that (J − R)Q is asymptotically stable and all restricted stability radii were finite. The results in Table 4.1 illustrate that the stability radius r_{ℝ,2}(R; B, B^T) obtained under general restricted perturbations is significantly smaller than the real stability radii obtained under structure-preserving restricted perturbations, and they also illustrate that the real stability radii may be significantly larger than the corresponding complex stability radii.
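The random test setup described above can be reproduced along the following lines (a sketch in Python/NumPy rather than the authors' Matlab code; the construction is one way to guarantee the required structure, since nonsingular R and Q rule out purely imaginary eigenvalues of (J − R)Q by Lemma 3.4):

```python
import numpy as np

def random_dh_system(n, rng):
    """Random DH triple (J, R, Q): J skew-symmetric, R and Q symmetric
    positive definite; then (J - R)Q is automatically asymptotically stable."""
    S = rng.standard_normal((n, n))
    J = S - S.T                        # J^T = -J
    A = rng.standard_normal((n, n))
    R = A @ A.T + 0.1 * np.eye(n)      # shift keeps R nonsingular
    C = rng.standard_normal((n, n))
    Q = C @ C.T + 0.1 * np.eye(n)      # Q^T = Q > 0
    return J, R, Q

rng = np.random.default_rng(2)
J, R, Q = random_dh_system(6, rng)
print(np.linalg.eigvals((J - R) @ Q).real.max())  # negative: asymptotically stable
```

For a DH system, every eigenvalue λ of (J − R)Q with eigenvector v satisfies Re λ = −(v^*QRQv)/(v^*Qv) ≤ 0, so positive definiteness of R forces strict stability; this is why no stability check is needed after the construction.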
Table 4.1: Comparison between complex and real stability radii for random systems of size n (columns: n, r_{ℂ,2}(R; B, B^T), r_{ℝ,2}(R; B, B^T), r^{S_i}_{ℂ,2}(R; B), r^{S_i}_{ℝ,2}(R; B), r^{S_d}_{ℂ,2}(R; B), r^{S_d}_{ℝ,2}(R; B); numerical values omitted).

Example 4.1 As a second example, consider the lumped parameter mass-spring-damper dynamical system shown in Figure 4.1, see e.g. [32]. It has two point masses m₁ and m₂,
RELATIVE PERTURBATION THEORY FOR DIAGONALLY DOMINANT MATRICES MEGAN DAILEY, FROILÁN M. DOPICO, AND QIANG YE Abstract. In this paper, strong relative perturbation bounds are developed for a number of linear
More informationIntroduction to Numerical Linear Algebra II
Introduction to Numerical Linear Algebra II Petros Drineas These slides were prepared by Ilse Ipsen for the 2015 Gene Golub SIAM Summer School on RandNLA 1 / 49 Overview We will cover this material in
More informationAN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES
AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES JOEL A. TROPP Abstract. We present an elementary proof that the spectral radius of a matrix A may be obtained using the formula ρ(a) lim
More informationLecture Summaries for Linear Algebra M51A
These lecture summaries may also be viewed online by clicking the L icon at the top right of any lecture screen. Lecture Summaries for Linear Algebra M51A refers to the section in the textbook. Lecture
More informationOHSx XM511 Linear Algebra: Solutions to Online True/False Exercises
This document gives the solutions to all of the online exercises for OHSx XM511. The section ( ) numbers refer to the textbook. TYPE I are True/False. Answers are in square brackets [. Lecture 02 ( 1.1)
More informationG1110 & 852G1 Numerical Linear Algebra
The University of Sussex Department of Mathematics G & 85G Numerical Linear Algebra Lecture Notes Autumn Term Kerstin Hesse (w aw S w a w w (w aw H(wa = (w aw + w Figure : Geometric explanation of the
More informationMath Linear Algebra II. 1. Inner Products and Norms
Math 342 - Linear Algebra II Notes 1. Inner Products and Norms One knows from a basic introduction to vectors in R n Math 254 at OSU) that the length of a vector x = x 1 x 2... x n ) T R n, denoted x,
More informationOrthonormal Transformations and Least Squares
Orthonormal Transformations and Least Squares Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo October 30, 2009 Applications of Qx with Q T Q = I 1. solving
More informationStat 159/259: Linear Algebra Notes
Stat 159/259: Linear Algebra Notes Jarrod Millman November 16, 2015 Abstract These notes assume you ve taken a semester of undergraduate linear algebra. In particular, I assume you are familiar with the
More informationMatrix Theory. A.Holst, V.Ufnarovski
Matrix Theory AHolst, VUfnarovski 55 HINTS AND ANSWERS 9 55 Hints and answers There are two different approaches In the first one write A as a block of rows and note that in B = E ij A all rows different
More informationMAT Linear Algebra Collection of sample exams
MAT 342 - Linear Algebra Collection of sample exams A-x. (0 pts Give the precise definition of the row echelon form. 2. ( 0 pts After performing row reductions on the augmented matrix for a certain system
More informationUniqueness of the Solutions of Some Completion Problems
Uniqueness of the Solutions of Some Completion Problems Chi-Kwong Li and Tom Milligan Abstract We determine the conditions for uniqueness of the solutions of several completion problems including the positive
More informationSpectral inequalities and equalities involving products of matrices
Spectral inequalities and equalities involving products of matrices Chi-Kwong Li 1 Department of Mathematics, College of William & Mary, Williamsburg, Virginia 23187 (ckli@math.wm.edu) Yiu-Tung Poon Department
More informationAPPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.
APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product
More informationMATH 581D FINAL EXAM Autumn December 12, 2016
MATH 58D FINAL EXAM Autumn 206 December 2, 206 NAME: SIGNATURE: Instructions: there are 6 problems on the final. Aim for solving 4 problems, but do as much as you can. Partial credit will be given on all
More information1 Last time: least-squares problems
MATH Linear algebra (Fall 07) Lecture Last time: least-squares problems Definition. If A is an m n matrix and b R m, then a least-squares solution to the linear system Ax = b is a vector x R n such that
More informationScattered Data Interpolation with Polynomial Precision and Conditionally Positive Definite Functions
Chapter 3 Scattered Data Interpolation with Polynomial Precision and Conditionally Positive Definite Functions 3.1 Scattered Data Interpolation with Polynomial Precision Sometimes the assumption on the
More informationMATH 612 Computational methods for equation solving and function minimization Week # 2
MATH 612 Computational methods for equation solving and function minimization Week # 2 Instructor: Francisco-Javier Pancho Sayas Spring 2014 University of Delaware FJS MATH 612 1 / 38 Plan for this week
More informationMath Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88
Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant
More informationCheat Sheet for MATH461
Cheat Sheet for MATH46 Here is the stuff you really need to remember for the exams Linear systems Ax = b Problem: We consider a linear system of m equations for n unknowns x,,x n : For a given matrix A
More informationLarge Scale Data Analysis Using Deep Learning
Large Scale Data Analysis Using Deep Learning Linear Algebra U Kang Seoul National University U Kang 1 In This Lecture Overview of linear algebra (but, not a comprehensive survey) Focused on the subset
More informationA proof of the Jordan normal form theorem
A proof of the Jordan normal form theorem Jordan normal form theorem states that any matrix is similar to a blockdiagonal matrix with Jordan blocks on the diagonal. To prove it, we first reformulate it
More informationTopics in Applied Linear Algebra - Part II
Topics in Applied Linear Algebra - Part II April 23, 2013 Some Preliminary Remarks The purpose of these notes is to provide a guide through the material for the second part of the graduate module HM802
More informationANALYTICAL MATHEMATICS FOR APPLICATIONS 2018 LECTURE NOTES 3
ANALYTICAL MATHEMATICS FOR APPLICATIONS 2018 LECTURE NOTES 3 ISSUED 24 FEBRUARY 2018 1 Gaussian elimination Let A be an (m n)-matrix Consider the following row operations on A (1) Swap the positions any
More informationZeros and zero dynamics
CHAPTER 4 Zeros and zero dynamics 41 Zero dynamics for SISO systems Consider a linear system defined by a strictly proper scalar transfer function that does not have any common zero and pole: g(s) =α p(s)
More informationCanonical forms of structured matrices and pencils
Canonical forms of structured matrices and pencils Christian Mehl and Hongguo Xu Abstract This chapter provides a survey on the development of canonical forms for matrices and matrix pencils with symmetry
More informationBasic Elements of Linear Algebra
A Basic Review of Linear Algebra Nick West nickwest@stanfordedu September 16, 2010 Part I Basic Elements of Linear Algebra Although the subject of linear algebra is much broader than just vectors and matrices,
More informationApplied Mathematics 205. Unit II: Numerical Linear Algebra. Lecturer: Dr. David Knezevic
Applied Mathematics 205 Unit II: Numerical Linear Algebra Lecturer: Dr. David Knezevic Unit II: Numerical Linear Algebra Chapter II.3: QR Factorization, SVD 2 / 66 QR Factorization 3 / 66 QR Factorization
More informationA lower bound for the Laplacian eigenvalues of a graph proof of a conjecture by Guo
A lower bound for the Laplacian eigenvalues of a graph proof of a conjecture by Guo A. E. Brouwer & W. H. Haemers 2008-02-28 Abstract We show that if µ j is the j-th largest Laplacian eigenvalue, and d
More information6.241 Dynamic Systems and Control
6.241 Dynamic Systems and Control Lecture 24: H2 Synthesis Emilio Frazzoli Aeronautics and Astronautics Massachusetts Institute of Technology May 4, 2011 E. Frazzoli (MIT) Lecture 24: H 2 Synthesis May
More informationThis property turns out to be a general property of eigenvectors of a symmetric A that correspond to distinct eigenvalues as we shall see later.
34 To obtain an eigenvector x 2 0 2 for l 2 = 0, define: B 2 A - l 2 I 2 = È 1, 1, 1 Î 1-0 È 1, 0, 0 Î 1 = È 1, 1, 1 Î 1. To transform B 2 into an upper triangular matrix, subtract the first row of B 2
More informationLinear Algebra Lecture Notes-II
Linear Algebra Lecture Notes-II Vikas Bist Department of Mathematics Panjab University, Chandigarh-64 email: bistvikas@gmail.com Last revised on March 5, 8 This text is based on the lectures delivered
More informationLecture notes: Applied linear algebra Part 2. Version 1
Lecture notes: Applied linear algebra Part 2. Version 1 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 First, some exercises: xercise 0.1 (2 Points) Another least
More information12. Perturbed Matrices
MAT334 : Applied Linear Algebra Mike Newman, winter 208 2. Perturbed Matrices motivation We want to solve a system Ax = b in a context where A and b are not known exactly. There might be experimental errors,
More informationCharacterization of half-radial matrices
Characterization of half-radial matrices Iveta Hnětynková, Petr Tichý Faculty of Mathematics and Physics, Charles University, Sokolovská 83, Prague 8, Czech Republic Abstract Numerical radius r(a) is the
More informationReview problems for MA 54, Fall 2004.
Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on
More informationMATH 583A REVIEW SESSION #1
MATH 583A REVIEW SESSION #1 BOJAN DURICKOVIC 1. Vector Spaces Very quick review of the basic linear algebra concepts (see any linear algebra textbook): (finite dimensional) vector space (or linear space),
More informationACI-matrices all of whose completions have the same rank
ACI-matrices all of whose completions have the same rank Zejun Huang, Xingzhi Zhan Department of Mathematics East China Normal University Shanghai 200241, China Abstract We characterize the ACI-matrices
More informationMath Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012.
Math 5620 - Introduction to Numerical Analysis - Class Notes Fernando Guevara Vasquez Version 1990. Date: January 17, 2012. 3 Contents 1. Disclaimer 4 Chapter 1. Iterative methods for solving linear systems
More informationMatrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A =
30 MATHEMATICS REVIEW G A.1.1 Matrices and Vectors Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = a 11 a 12... a 1N a 21 a 22... a 2N...... a M1 a M2... a MN A matrix can
More informationOPTIMAL SCALING FOR P -NORMS AND COMPONENTWISE DISTANCE TO SINGULARITY
published in IMA Journal of Numerical Analysis (IMAJNA), Vol. 23, 1-9, 23. OPTIMAL SCALING FOR P -NORMS AND COMPONENTWISE DISTANCE TO SINGULARITY SIEGFRIED M. RUMP Abstract. In this note we give lower
More informationDiagonalisierung. Eigenwerte, Eigenvektoren, Mathematische Methoden der Physik I. Vorlesungsnotizen zu
Eigenwerte, Eigenvektoren, Diagonalisierung Vorlesungsnotizen zu Mathematische Methoden der Physik I J. Mark Heinzle Gravitational Physics, Faculty of Physics University of Vienna Version 5/5/2 2 version
More informationIn these chapter 2A notes write vectors in boldface to reduce the ambiguity of the notation.
1 2 Linear Systems In these chapter 2A notes write vectors in boldface to reduce the ambiguity of the notation 21 Matrix ODEs Let and is a scalar A linear function satisfies Linear superposition ) Linear
More informationGI07/COMPM012: Mathematical Programming and Research Methods (Part 2) 2. Least Squares and Principal Components Analysis. Massimiliano Pontil
GI07/COMPM012: Mathematical Programming and Research Methods (Part 2) 2. Least Squares and Principal Components Analysis Massimiliano Pontil 1 Today s plan SVD and principal component analysis (PCA) Connection
More informationEE731 Lecture Notes: Matrix Computations for Signal Processing
EE731 Lecture Notes: Matrix Computations for Signal Processing James P. Reilly c Department of Electrical and Computer Engineering McMaster University October 17, 005 Lecture 3 3 he Singular Value Decomposition
More information