Port-Hamiltonian Realizations of Linear Time Invariant Systems

Christopher Beattie¹, Volker Mehrmann², and Hongguo Xu³

December

Abstract

The question when a general linear time invariant control system is equivalent to a port-Hamiltonian system is answered. Several equivalent characterizations are derived which extend the characterizations of [38] to the general non-minimal case. An explicit construction of the transformation matrices is presented. The methods are applied in the stability analysis of disc brake squeal.

Keywords: port-Hamiltonian system, passivity, stability, system transformation, linear matrix inequality, Lyapunov inequality, even pencil, quadratic eigenvalue problem

AMS subject classification: 93A30, 93B17, 93B11

1 Introduction

The synthesis of system models that describe complex physical phenomena often involves the coupling of independently developed subsystems originating within different disciplines. Systematic approaches to coupling such diversely generated subsystems prudently follow a system-theoretic network paradigm that focusses on the transfer of energy, mass, and other conserved quantities among the subsystems. When the subsystem models themselves arise from variational principles, then the aggregate system typically has structural features that reflect underlying conservation laws, and often it may be characterized as a port-Hamiltonian (PH) system; see the cited literature for some major references. Although PH systems may be formulated in a more general framework, we will restrict ourselves to input-state-output PH systems, which have the form

    ẋ = (J − R) ∇_x H(x) + (F − P) u(t),
    y(t) = (F + P)^T ∇_x H(x) + (S + N) u(t).      (1)

¹ Department of Mathematics, Virginia Tech, Blacksburg, VA 24061, USA, beattie@vt.edu. Supported by Einstein Foundation Berlin through an Einstein Visiting Fellowship.
² Institut für Mathematik MA 4-5, TU Berlin, Str. des 17. Juni 136, D-10623 Berlin, FRG, mehrmann@math.tu-berlin.de. The authors gratefully acknowledge the support by the Deutsche Forschungsgemeinschaft (DFG) as part of the collaborative research center SFB 1029 "Substantial efficiency increase in gas turbines through direct use of coupled unsteady combustion and flow dynamics", project A02 "Development of a reduced order model of pulsed detonation combustors", and by the Einstein Center ECMath in Berlin.
³ Department of Mathematics, University of Kansas, Lawrence, KS 66045, USA, xu@math.ku.edu. Partially supported by the Alexander von Humboldt Foundation and by the Deutsche Forschungsgemeinschaft through the DFG Research Center Matheon "Mathematics for Key Technologies".

Here x ∈ R^n is the n-dimensional state vector; H : R^n → [0, ∞) is the Hamiltonian, a continuously differentiable scalar-valued function describing the distribution of internal energy among the energy storage elements of the system; J = −J^T ∈ R^{n×n} is the structure matrix describing the energy flux among energy storage elements within the system; R = R^T ∈ R^{n×n} is the dissipation matrix describing energy dissipation/loss in the system; F ± P ∈ R^{n×m} are the port matrices describing the manner in which energy enters and exits the system; and S + N, with S = S^T ∈ R^{m×m} and N = −N^T ∈ R^{m×m}, describes the direct feed-through of input to output. The matrices R, P, and S must satisfy

    K = [ R  P ;  P^T  S ] ≥ 0,      (2)

that is, K is symmetric positive semidefinite. This implies in particular that R and S are also positive semidefinite, R ≥ 0 and S ≥ 0.

Port-Hamiltonian systems generalize the classical notion of Hamiltonian systems, expressed in our notation as ẋ = J ∇_x H(x). The analog of the conservation of energy for Hamiltonian systems is, for PH systems (1), the dissipation inequality

    H(x(t_1)) − H(x(t_0)) ≤ ∫_{t_0}^{t_1} y(t)^T u(t) dt,      (3)

which has a natural interpretation as an assertion that the increase in internal energy of the system, as measured by H, cannot exceed the total work done on the system; H(x) is a storage function associated with the supply rate y(t)^T u(t). In the language of system theory, (3) constitutes the property that (1) is a passive system. One may verify with elementary manipulations that the inequality in (3) is an immediate consequence of the inequality in (2), and holds even when the coefficient matrices J, R, F, P, S, and N depend on x or explicitly on time t (see [27]), or indeed (with care taken to define suitable operator domains) when they are defined as linear operators acting on infinite-dimensional spaces. Notice that with a null input, u(t) = 0, the dissipation inequality asserts that H(x) is non-increasing along any unforced system trajectory. Thus H(x) defines a Lyapunov function for the unforced system, so PH systems are implicitly Lyapunov stable [19]. Similarly, H(x) is non-increasing along any system trajectory that produces a null output, y(t) = 0, so PH systems also have Lyapunov stable zero dynamics [9].

PH systems constitute a class of systems that is closed under power-conserving interconnection. This means that port-connected PH systems produce an aggregate system that must also be port-Hamiltonian; this aggregate system hence will be guaranteed to be both stable and passive. Modeling with PH systems thus represents physical properties in such a way as to facilitate automated modeling [23], while at the same time encoding physical properties in the structure of the equations. This framework also provides compelling motivation to identify and preserve PH structure when producing reduced order surrogate models.

Now consider a general linear time-invariant (LTI) system

    ẋ = A x + B u,
    y = C x + D u,      (4)

with A ∈ R^{n×n}, B ∈ R^{n×m}, C ∈ R^{m×n}, and D ∈ R^{m×m}. The system (4) is passive if there exists a continuously differentiable storage function H : R^n → [0, ∞) such that (3) holds for all admissible inputs u.
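The dissipation inequality (3) can be checked numerically for a small PH system with quadratic Hamiltonian H(x) = ½ x^T x by integrating the state together with the accumulated supply ∫ y^T u dt. A minimal sketch; the matrices J, R, F and the input u(t) = sin t are illustrative choices satisfying (2) with P = 0 and S = N = 0, not data from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative PH data: J skew-symmetric, R symmetric PSD, P = 0, S = N = 0,
# Hamiltonian H(x) = 0.5 * x^T x, so that (1) reads
#   xdot = (J - R) x + F u,   y = F^T x.
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
R = np.diag([0.1, 0.2])
F = np.array([[1.0], [0.0]])
u = lambda t: np.array([np.sin(t)])

def rhs(t, z):
    x = z[:2]                    # z[2] accumulates the supply integral y^T u
    y = F.T @ x
    return np.concatenate([(J - R) @ x + F @ u(t), y * u(t)])

x0 = np.array([1.0, -0.5])
sol = solve_ivp(rhs, (0.0, 10.0), np.concatenate([x0, [0.0]]), rtol=1e-9, atol=1e-12)
xT, work = sol.y[:2, -1], sol.y[2, -1]
H = lambda x: 0.5 * x @ x
# Dissipation inequality (3): the energy gain cannot exceed the work done.
assert H(xT) - H(x0) <= work + 1e-8
```

Because R > 0 here, the inequality typically holds with a strict margin; the small tolerance only guards against integration error.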

So the natural question arises: when is (4) equivalent to a port-Hamiltonian system? Here, specifically, we consider equivalence to LTI port-Hamiltonian systems taking the form

    ξ̇ = (J − R) Q ξ + (F − P) φ,
    η = (F + P)^T Q ξ + (S + N) φ,      (5)

with J = −J^T, R = R^T ≥ 0, Q = Q^T > 0, S = S^T, N = −N^T, where J, R, Q ∈ R^{n×n}, F, P ∈ R^{n×m}, S, N ∈ R^{m×m}, and K as defined in (2) is positive semidefinite.

The notion of system equivalence that we consider here engages three invertible transformations connecting (4) and (5), one on each of the input, output, and state space: u(t) = Ṽ φ(t), η(t) = V^T y(t), and x(t) = T ξ(t) (with Ṽ, V, T invertible). Within this context the supply rate associated with (4) is transformed as y(t)^T u(t) = η(t)^T V^{-1} Ṽ φ(t). We wish to constrain the permissible transformations characterizing system equivalence so as to be power conserving, that is, so that supply rates remain invariant: y(t)^T u(t) = η(t)^T φ(t). Thus we assume that Ṽ = V, and we say that (4) is equivalent to (5) if there exist invertible matrices V and T such that

    u(t) = V φ(t),   η(t) = V^T y(t),   and   x(t) = T ξ(t).      (6)

Since PH systems are structurally passive, our starting point is the following characterization of passivity, introduced in [39] for minimal linear time invariant systems. The system (4) is minimal if it is both controllable and observable. The system (4) (and more specifically the pair of matrices (A, B) with A ∈ R^{n×n}, B ∈ R^{n×m}) is controllable if rank [sI − A, B] = n for all s ∈ C. Similarly, the system (4) (and the pair (A, C) with A ∈ R^{n×n}, C ∈ R^{m×n}) is observable if

    rank [ sI − A ;  C ] = n   for all s ∈ C.

Theorem 1 ([39]). Assume that the LTI system (4) is minimal. The matrix inequality

    [ A^T Q + Q A    Q B − C^T ;  B^T Q − C    −(D + D^T) ] ≤ 0      (7)

has a solution Q = Q^T > 0 if and only if (4) is a passive system, in which case:

i) H(x) = ½ x^T Q x defines a storage function for (4) associated with the supply rate y^T u, satisfying (3).

ii) There exist maximum and minimum symmetric solutions to (7), 0 < Q_- ≤ Q_+, such that for all symmetric solutions Q to (7), Q_- ≤ Q ≤ Q_+.

This result yields an immediate consequence for PH realizations.

Corollary 2. Assume that the LTI system (4) is minimal. Then (4) has a PH realization if and only if it is passive. Moreover, if (4) is passive, then every equivalent system to (4) (as generated by transformations as in (6)) may also be directly expressed as a PH system.
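The proof of Corollary 2 below constructs a PH realization explicitly from a Cholesky factor T of a solution Q = T^T T of (7). A numerical sketch of that construction; the system matrices form an illustrative passive example (with C = B^T and Q = I solving (7)), not data from the paper:

```python
import numpy as np

def ph_realization(A, B, C, D, Q):
    """Given a solution Q = Q^T > 0 of the KYP inequality (7), build the
    PH matrices via the state transformation T with Q = T^T T (Cholesky)."""
    T = np.linalg.cholesky(Q).T          # Q = T^T T
    Ti = np.linalg.inv(T)
    At, Bt, Ct = T @ A @ Ti, T @ B, C @ Ti
    J = 0.5 * (At - At.T)                # skew-symmetric part
    R = -0.5 * (At + At.T)               # symmetric dissipation part
    F = 0.5 * (Bt + Ct.T)
    P = 0.5 * (Ct.T - Bt)
    S = 0.5 * (D + D.T)
    N = 0.5 * (D - D.T)
    return J, R, F, P, S, N

# Illustrative passive system: C = B^T, A stable diagonal, D = 1.
A = np.diag([-1.0, -2.0])
B = np.array([[1.0], [1.0]])
C = B.T.copy()
D = np.array([[1.0]])
J, R, F, P, S, N = ph_realization(A, B, C, D, np.eye(2))

assert np.allclose(J, -J.T) and np.allclose(R, R.T)
assert np.allclose(J - R, A)             # with Q = I, (J - R)Q reproduces A
assert np.allclose(F - P, B) and np.allclose((F + P).T, C)
K = np.block([[R, P], [P.T, S]])
assert np.all(np.linalg.eigvalsh(K) >= -1e-12)   # condition (2)
```

The final assertion checks exactly the constraint (2), which is what makes the transformed system a genuine PH system.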

Proof. If (4) has a PH realization then a fortiori it is passive. Conversely, if (4) is passive then (7) has a positive definite solution Q = Q^T = T^T T, written in terms of a Cholesky or square root factor T. Then we can define directly

    Q = I,
    J = ½ (T A T^{-1} − (T A T^{-1})^T),   R = −½ (T A T^{-1} + (T A T^{-1})^T),
    F = ½ (T B + (C T^{-1})^T),            P = ½ ((C T^{-1})^T − T B),      (8)
    S = ½ (D + D^T),                       N = ½ (D − D^T),

and (7) can be written in terms of these defined quantities as

    0 ≥ [ T^{-T}  0 ;  0  I ] [ A^T Q + Q A   Q B − C^T ;  B^T Q − C   −(D + D^T) ] [ T^{-1}  0 ;  0  I ] = −2 [ R  P ;  P^T  S ],

which verifies (2). Thus J, R, Q, F, P, S, and N as defined in (8) indeed determine a PH system. □

The matrix inequality (7) actually includes two properties: the Lyapunov inequality in the upper left block,

    A^T Q + Q A ≤ 0,      (9)

via Lyapunov's theorem [26] guarantees that the uncontrolled system ẋ = Ax is stable, i.e., A has all eigenvalues in the closed left half plane and those on the imaginary axis are semisimple, while passivity is encoded in the solvability of the full matrix inequality (7). If the inequality (9) is strict, then the system is asymptotically stable, i.e., all eigenvalues of A are in the open left half plane, and strictly passive, i.e., the dissipation inequality (3) is also strict; see [24] for a detailed analysis.

We will discuss the solution of these two types of matrix inequalities in the more general situation of non-minimal systems in Section 2. Based on these results, we will then construct explicit transformations from a general LTI system (minimal or not) to port-Hamiltonian form (5) in Section 3.

2 Lyapunov and Riccati inequalities

The solution of Lyapunov and Riccati inequalities, as they arise in the characterizations (9) and (7), is usually addressed via semidefinite programming, see [4]. We discuss the more explicit characterization via invariant subspaces.

2.1 Solution of Lyapunov inequalities

The stability of A is a necessary condition for (9) and (7), which require that T A T^{-1} + T^{-T} A^T T^T ≤ 0, or equivalently that the Lyapunov inequality (9) has a positive definite solution Q = T^T T. It is well known that the equality case in (9) always has a positive definite solution if A is stable, see [26]. In the following we recall, see e.g. [4], a characterization of the complete set of solutions of the inequality case.

If A is stable but not asymptotically stable, then, due to the fact that the eigenvalues on the imaginary axis are all semi-simple, using the real Jordan form of A (see e.g. [2]) there exists a nonsingular matrix M ∈ R^{n×n} such that

    M A M^{-1} = diag (A_1, α_2 J_2, …, α_r J_r),      (10)

where A_1 ∈ R^{n_1×n_1} is asymptotically stable, α_2, …, α_r are real and distinct, and

    J_j = [ 0  I_{n_j} ;  −I_{n_j}  0 ],   j = 2, …, r.

To characterize the solution set of (9) we make the ansatz

    Q = M^T diag (Q_1, Q̂_2, …, Q̂_r) M      (11)

and separately consider the determination of the block Q_1 and the other blocks. Let W(n_1) be the set of symmetric positive semidefinite matrices Θ_1 ∈ R^{n_1×n_1} with the property that Θ_1 x ≠ 0 for any eigenvector x of A_1. Then for any Θ_1 ∈ W(n_1) we define Q_1 to be the unique symmetric positive definite solution of the Lyapunov equation A_1^T Q_1 + Q_1 A_1 = −Θ_1, see [26]. The other matrices Q̂_j, j = 2, …, r, are chosen of the form

    Q̂_j = [ Y_j  Z_j ;  Z_j^T  Y_j ] > 0,

with Z_j = −Z_j^T when α_j ≠ 0, or an arbitrary Q̂_j > 0 when α_j = 0. We have the following characterization of the solution set of (9).

Lemma 3. Let A ∈ R^{n×n}. Then the Lyapunov inequality (9) has a symmetric positive definite solution Q ∈ R^{n×n} if and only if A is stable. If A is asymptotically stable, then the solution set is given by the set of all symmetric positive definite solutions of the Lyapunov equation A^T Q + Q A = −Θ, where Θ is any symmetric positive semidefinite matrix with the property that Θ x ≠ 0 for any eigenvector x of A, or in other words, (A, Θ) is observable. If A is stable but not asymptotically stable, then with the transformation (10) any solution of (9) has the form (11), solving the Lyapunov equation

    A^T Q + Q A = −M^T diag(Θ_1, 0, …, 0) M      (12)

with Θ_1 ∈ W(n_1).

Proof. Consider a transformation to the form (10) and set, for a symmetric matrix Q, M^{-T} Q M^{-1} = [Q_ij], partitioned accordingly. Form

    M^{-T} (A^T Q + Q A) M^{-1} =
    [ A_1^T Q_11 + Q_11 A_1            A_1^T Q_12 + α_2 Q_12 J_2       ⋯   A_1^T Q_1r + α_r Q_1r J_r ;
      α_2 J_2^T Q_12^T + Q_12^T A_1    α_2 (J_2^T Q_22 + Q_22 J_2)     ⋯   α_2 J_2^T Q_2r + α_r Q_2r J_r ;
      ⋮                                                               ⋱   ⋮ ;
      α_r J_r^T Q_1r^T + Q_1r^T A_1    α_r J_r^T Q_2r^T + α_2 Q_2r^T J_2   ⋯   α_r (J_r^T Q_rr + Q_rr J_r) ],

again partitioned accordingly. For any j = 2, …, r with α_j ≠ 0, partition

    Q_jj = [ Q_j11  Q_j12 ;  Q_j12^T  Q_j22 ]

according to the block structure in J_j, so that

    J_j^T Q_jj + Q_jj J_j = [ −(Q_j12 + Q_j12^T)   Q_j11 − Q_j22 ;  Q_j11 − Q_j22   Q_j12 + Q_j12^T ].
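(As a computational aside: the Lyapunov equations A_1^T Q_1 + Q_1 A_1 = −Θ_1 used in this construction can be solved directly with a Bartels–Stewart solver instead of via the Jordan form. A sketch on illustrative matrices, not the blocks of this proof:)

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0, 1.0],
              [ 0.0, -2.0]])     # asymptotically stable (eigenvalues -1, -2)
Theta = np.eye(2)                # any Theta >= 0 with (A, Theta) observable

# solve_continuous_lyapunov(a, q) solves a X + X a^T = q;
# with a = A^T this is the Lyapunov equation A^T Q + Q A = -Theta.
Q = solve_continuous_lyapunov(A.T, -Theta)

assert np.allclose(A.T @ Q + Q @ A, -Theta)
assert np.all(np.linalg.eigvalsh(Q) > 0)   # Q is symmetric positive definite
```

Positive definiteness of Q is guaranteed here by Lyapunov's theorem, since A is asymptotically stable and Θ is positive definite.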

Then α_j (J_j^T Q_jj + Q_jj J_j) ≤ 0 if and only if Q_j12 + Q_j12^T = 0 and Q_j11 = Q_j22. Thus any matrix of the form Q_jj = [ Y_j  Z_j ;  Z_j^T  Y_j ] > 0 with Z_j = −Z_j^T is a positive definite solution. If α_j = 0 then clearly any Q_jj > 0 is a solution. In both cases the resulting diagonal blocks Q_jj > 0 satisfy α_j (J_j^T Q_jj + Q_jj J_j) = 0. Thus, for i ≠ j the blocks Q_ij have to satisfy one of the two matrix equations

    A_1^T Q_1j + α_j Q_1j J_j = 0,   α_i J_i^T Q_ij + α_j Q_ij J_j = 0,

which implies that Q_ij = 0 for all i ≠ j, since the spectra of the respective matrix pairs (A_1, α_j J_j) and (α_i J_i, α_j J_j) are disjoint, and thus the corresponding Sylvester equations only have the solution 0, see [26].

For any positive semidefinite matrix Θ_1 satisfying Θ_1 x ≠ 0 for any eigenvector x of A_1, by Lyapunov's theorem [26] there exists a symmetric positive definite matrix Q_1 satisfying A_1^T Q_1 + Q_1 A_1 = −Θ_1. Thus the Lyapunov inequality (9) has a solution Q > 0, and it has the form (11), satisfying (12). The results for an asymptotically stable A can be easily derived as a special case, with all Q̂_2, …, Q̂_r void in (11).

Conversely, if the Lyapunov inequality has a solution Q > 0 and if x is an eigenvector of A, i.e., A x = λ x, then (λ̄ + λ) x* Q x ≤ 0. Since x* Q x > 0 it follows that λ̄ + λ ≤ 0, i.e., the real part of λ is non-positive. Suppose that A has a purely imaginary eigenvalue iα. Since a similarity transformation of A does not change the solvability of the Lyapunov inequality (9), we may assume that A is in (complex) Jordan canonical form, and we may consider each Jordan block separately. If A_k is a single Jordan block of size k × k, k > 1, with eigenvalue iα, then for any Hermitian matrix Q̂_k = [q_ij] we have that A_k* Q̂_k + Q̂_k A_k has the form

    [ 0           q_11             ⋯   q_{1,k−1} ;
      q̄_11        q_12 + q̄_12      ⋯   q_{1k} + q_{2,k−1} ;
      ⋮                            ⋱   ⋮ ;
      q̄_{1,k−1}   q̄_{1k} + q̄_{2,k−1}   ⋯   q_{k−1,k} + q̄_{k−1,k} ].

This matrix cannot be negative semidefinite: its (1,1) entry is zero, so negative semidefiniteness would force the entire first row to vanish, in particular q_11 = 0, which would contradict that Q̂_k is positive definite. Therefore A has to be stable, i.e., if it has purely imaginary eigenvalues, these eigenvalues must be semi-simple. □

Remark 1. The construction in Lemma 3 relies on the computation of the real Jordan form of A, which usually cannot be computed in finite precision arithmetic. For the numerical computation of the solution it is better to use the real Schur form.

2.2 Solution of Riccati inequalities

By Corollary 2 we can characterize (at least in the minimal case) the existence of a transformation to PH form via the existence of a symmetric positive definite matrix Q solving the

linear matrix inequality

    [ −A^T Q − Q A    C^T − Q B ;  C − B^T Q    D + D^T ] ≥ 0.      (13)

In this subsection we will discuss the explicit solution of this matrix inequality in the general (non-minimal) case. It is clear that for the inequality to hold, S := D + D^T has to be positive semidefinite. However, we will see below that we can restrict ourselves to the case that S := D + D^T is positive definite, so we only consider this case here; using Schur complements, (13) is then equivalent to the Riccati inequality

    (A − B S^{-1} C)^T Q + Q (A − B S^{-1} C) + Q B S^{-1} B^T Q + C^T S^{-1} C ≤ 0.      (14)

We first investigate the influence of the purely imaginary eigenvalues of A. Clearly, a necessary condition for (13) to be solvable is that Q satisfies A^T Q + Q A ≤ 0. Following Lemma 3, if A has the form (10), then Q must have the form (11) and A^T Q + Q A has the form (12). Written in compact form we get

    M A M^{-1} = [ A_1  0 ;  0  A_2 ],   M^{-T} Q M^{-1} = [ Q_1  0 ;  0  Q_2 ],

where

    A_2 = diag(α_2 J_2, …, α_r J_r) = −A_2^T,   Q_2 = diag(Q̂_2, …, Q̂_r),

satisfying A_2^T Q_2 + Q_2 A_2 = 0, and

    M^{-T} (A^T Q + Q A) M^{-1} = [ A_1^T Q_1 + Q_1 A_1   0 ;  0   0 ].

Setting

    M B = [ B_1 ;  B_2 ],   C M^{-1} = [ C_1  C_2 ],

and pre-multiplying M^{-T} and post-multiplying M^{-1} to the first block row and column of (13), respectively, one has

    [ −A_1^T Q_1 − Q_1 A_1    0    C_1^T − Q_1 B_1 ;
      0                       0    C_2^T − Q_2 B_2 ;
      C_1 − B_1^T Q_1    C_2 − B_2^T Q_2    S ] ≥ 0.

Therefore Q_2 must be positive definite, satisfying

    B_2^T Q_2 = C_2,   A_2^T Q_2 + Q_2 A_2 = 0,      (15)

and

    [ −A_1^T Q_1 − Q_1 A_1    C_1^T − Q_1 B_1 ;  C_1 − B_1^T Q_1    S ] ≥ 0,      (16)

or equivalently it has to satisfy the reduced Riccati inequality

    Ψ(Q_1) := (A_1 − B_1 S^{-1} C_1)^T Q_1 + Q_1 (A_1 − B_1 S^{-1} C_1) + Q_1 B_1 S^{-1} B_1^T Q_1 + C_1^T S^{-1} C_1 ≤ 0.      (17)

To study the solvability of (17), we note that A_1 is asymptotically stable. We claim that A_1 − B_1 S^{-1} C_1 is necessarily asymptotically stable as well. To show this, suppose that the

inequality (17) has a solution Q_1 > 0. Since Q_1 B_1 S^{-1} B_1^T Q_1 + C_1^T S^{-1} C_1 ≥ 0, it follows from Lemma 3 that A_1 − B_1 S^{-1} C_1 is stable, and there exists an invertible matrix M_1 such that M_1 (A_1 − B_1 S^{-1} C_1) M_1^{-1} has real Jordan form (10), M_1^{-T} Q_1 M_1^{-1} has the form (11), and following (12) we have

    M_1^{-T} (Q_1 B_1 S^{-1} B_1^T Q_1 + C_1^T S^{-1} C_1) M_1^{-1} = diag(Θ_1, 0, …, 0).

Then, due to the positive definiteness of S, it follows that C_1 M_1^{-1} = [C_11  0], and by making use of the block diagonal structure of M_1^{-T} Q_1 M_1^{-1} we also have M_1 B_1 = [B_11^T  0]^T. Thus it follows that M_1 A_1 M_1^{-1} = diag(A_11, α_2 J_2, …, α_r J_r). Since A_1 is asymptotically stable, all blocks α_j J_j must be void, which implies that A_1 − B_1 S^{-1} C_1 must be asymptotically stable as well.

In order to characterize the solution of the reduced Riccati inequality (17) we use the following lemmas. The first is the orthogonal version of the Kalman decomposition.

Lemma 4. Consider a general LTI system of the form (4). Then there exists a real orthogonal matrix U such that

    U^T A U = [ A_11  0  A_13 ;  A_21  A_22  A_23 ;  0  0  A_33 ] =: [ Ã_11  Ã_12 ;  0  Ã_22 ],
    U^T B = [ B_1 ;  B_2 ;  0 ] =: [ B̃_1 ;  0 ],      (18)
    C U = [ C_1  0  C_3 ] =: [ C̃_1  C̃_2 ],

where the pair (Ã_11, B̃_1) is controllable and the pair (A_11, C_1) is observable.

Proof. Applying first the controllability and then the observability staircase forms of [37] to the system, there exists a real orthogonal matrix U such that U^T A U is in the form (18) with (Ã_11, B̃_1) controllable and (A_11, C_1) observable. □

Note that by using the block structure in Lemma 4 we have that (A_11, B_1) is controllable as well. The next lemma considers the Riccati inequality in the condensed form (18).

Lemma 5. Consider the reduced Riccati inequality (17) and suppose that both A_1 and A_1 − B_1 S^{-1} C_1 are asymptotically stable, with S > 0. Then there exists a transformation to the condensed form (18) of A_1, B_1, C_1 such that

    U^T (A_1 − B_1 S^{-1} C_1) U = [ Ã_11 − B̃_1 S^{-1} C̃_1   Ã_12 − B̃_1 S^{-1} C̃_2 ;  0   Ã_22 ]
    = [ A_11 − B_1 S^{-1} C_1    0      A_13 − B_1 S^{-1} C_3 ;
        A_21 − B_2 S^{-1} C_1    A_22   A_23 − B_2 S^{-1} C_3 ;
        0                        0      A_33 ],      (19)

where A_11 − B_1 S^{-1} C_1, A_22, and A_33 are all asymptotically stable. In addition, A_11 is also asymptotically stable, (Ã_11, B̃_1) and (A_11, B_1) are controllable, and (A_11, C_1) is observable.
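The controllability and observability hypotheses used throughout this section can be tested numerically. A minimal sketch via rank tests of Krylov (controllability/observability) matrices, on illustrative matrices; for robust computations the staircase forms underlying Lemma 4 are preferable:

```python
import numpy as np

def ctrb_rank(A, B):
    """Rank of the controllability matrix [B, AB, ..., A^{n-1} B]."""
    n = A.shape[0]
    K = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    return np.linalg.matrix_rank(K)

def obsv_rank(A, C):
    """By duality, (A, C) is observable iff (A^T, C^T) is controllable."""
    return ctrb_rank(A.T, C.T)

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
assert ctrb_rank(A, B) == 2        # (A, B) controllable
assert obsv_rank(A, C) == 2        # (A, C) observable
assert ctrb_rank(A, np.array([[1.0], [0.0]])) == 1   # an uncontrollable pair
```

The rank condition on the Krylov matrix is equivalent to the Hautus test rank [sI − A, B] = n for all s ∈ C used in the definitions above.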

The proof is straightforward. The following lemma is also well known, but we present a proof in our notation.

Lemma 6. Suppose that (A, B) is controllable, (A, C) is observable, A is asymptotically stable, and S > 0. Then the Riccati equation

    (A − B S^{-1} C)^T Q + Q (A − B S^{-1} C) + Q B S^{-1} B^T Q + C^T S^{-1} C = 0      (20)

has a solution Q > 0 if and only if the Hamiltonian matrix

    H := [ A − B S^{-1} C    B S^{-1} B^T ;  −C^T S^{-1} C    −(A − B S^{-1} C)^T ]      (21)

has a Lagrangian invariant subspace, i.e., there exist square matrices W_1, W_2, E such that W_1^T W_2 = W_2^T W_1, [W_1^T  W_2^T]^T has full column rank, and

    H [ W_1 ;  W_2 ] = [ W_1 ;  W_2 ] E.      (22)

Moreover, if (22) holds, then Q := W_2 W_1^{-1} > 0 solves (20).

Proof. Suppose that Q > 0 solves (20). It is straightforward to prove that [W_1^T  W_2^T]^T = [I  Q]^T satisfies the required conditions and (22) with E = A − B S^{-1} C + B S^{-1} B^T Q.

Conversely, suppose there exist W_1 and W_2 satisfying all the described conditions. Without loss of generality we may assume that [W_1^T  W_2^T]^T has orthonormal columns. Then it is well known [6, 7] that there exists a CS decomposition, i.e., there exist orthogonal matrices U, V and diagonal matrices Σ, Γ such that

    U^T W_1 V = [ Σ  0 ;  0  0 ],   U^T W_2 V = [ Γ  0 ;  0  I ],

where Σ is invertible and Σ² + Γ² = I. (Note that the zero block in U^T W_1 V could be void.) Partition

    U^T (A − B S^{-1} C) U = [ A_11  A_12 ;  A_21  A_22 ],   U^T B S^{-1} B^T U = [ G_11  G_12 ;  G_12^T  G_22 ],
    U^T C^T S^{-1} C U = [ H_11  H_12 ;  H_12^T  H_22 ],   V^T E V = [ E_11  E_12 ;  E_21  E_22 ].

Then (22) takes the form

    [ A_11     A_12     G_11     G_12 ;
      A_21     A_22     G_12^T   G_22 ;
      −H_11    −H_12    −A_11^T  −A_21^T ;
      −H_12^T  −H_22    −A_12^T  −A_22^T ]
    [ Σ  0 ;  0  0 ;  Γ  0 ;  0  I ] = [ Σ  0 ;  0  0 ;  Γ  0 ;  0  I ] [ E_11  E_12 ;  E_21  E_22 ].

By comparing the (2,2) blocks on both sides one has G_22 = 0. Due to the positive semidefiniteness of B S^{-1} B^T, then also G_12 = 0. By comparing the (1,2) blocks on both sides of the above

equation, and using the nonsingularity of Σ, it follows that E_12 = 0. Then, by comparing the (3,2) blocks on both sides, one has A_21 = 0. Hence

    U^T (A − B S^{-1} C) U = [ A_11  A_12 ;  0  A_22 ],   U^T B S^{-1} B^T U = [ G_11  0 ;  0  0 ],

implying that (A, B) is not controllable if W_1 is not invertible. Therefore W_1 must be invertible. Similarly, the observability of (A, C) implies that W_2 is invertible. Define Q = W_2 W_1^{-1}, which is then invertible and also symmetric because of the relation W_1^T W_2 = W_2^T W_1. From (22) we then obtain

    H [ I ;  Q ] = [ I ;  Q ] (W_1 E W_1^{-1}),

which implies that Q solves (20). The positive definiteness of Q follows from the fact that Q is invertible, A is asymptotically stable, and Q B S^{-1} B^T Q + C^T S^{-1} C ≥ 0. □

Remark 2. The previous lemmas still hold if A is replaced by A − B S^{-1} C, provided that this matrix is also asymptotically stable. Note that (A, B) is controllable if and only if (A − B S^{-1} C, B) is controllable, and (A, C) is observable if and only if (A − B S^{-1} C, C) is observable.

Lemma 6 shows that the Riccati equation (20) has a solution Q > 0 whenever the Hamiltonian matrix H has a Lagrangian invariant subspace. The existence of such an invariant subspace depends only on the purely imaginary eigenvalues of H, see e.g. [14]. When such an invariant subspace exists, then there are many such invariant subspaces. Note that the eigenvalues of H are symmetric with respect to the imaginary axis in the complex plane. If (22) holds, then the union of the eigenvalues of E and −E^T forms the spectrum of H. One particular choice is that the spectrum of E is in the closed left half complex plane; another choice is that it is in the closed right half complex plane. The two corresponding solutions of the Riccati equation (20) are the minimal solution Q_- and the maximal solution Q_+, and all other solutions of the Riccati equation lie (in the Loewner ordering of symmetric matrices) between these extremal solutions.

Example 1. Consider the example B = S = 1, C = 1, A = 1 − α, with α > 1, so that A and A − B S^{-1} C = −α are both asymptotically stable. The Riccati equation (20) is q² − 2αq + 1 = 0. For α ∈ (0, 1) this equation has no real positive semidefinite solution; if α = 1 it has the unique solution q = 1, associated with E = 0; and if α > 1 it has the two solutions α ± √(α² − 1) > 0, with E = ±√(α² − 1), respectively.

Suppose that A_1, B_1, C_1 from (17) have the condensed form (18) with an orthogonal matrix U. Partition

    U^T Q_1 U = [ Q̃_11  Q̃_12 ;  Q̃_12^T  Q̃_22 ].

The reduced Riccati inequality (17) then is equivalent to

    U^T Ψ(Q_1) U = [ Ψ_11(Q_1)  Ψ_12(Q_1) ;  Ψ_12(Q_1)^T  Ψ_22(Q_1) ] ≤ 0,      (23)

where (leaving off arguments)

    Ψ_11 = (Ã_11 − B̃_1 S^{-1} C̃_1)^T Q̃_11 + Q̃_11 (Ã_11 − B̃_1 S^{-1} C̃_1) + Q̃_11 B̃_1 S^{-1} B̃_1^T Q̃_11 + C̃_1^T S^{-1} C̃_1,
    Ψ_12 = (Ã_11 − B̃_1 S^{-1} (C̃_1 − B̃_1^T Q̃_11))^T Q̃_12 + Q̃_12 Ã_22 + Q̃_11 (Ã_12 − B̃_1 S^{-1} C̃_2) + C̃_1^T S^{-1} C̃_2,
    Ψ_22 = Ã_22^T Q̃_22 + Q̃_22 Ã_22 + (Ã_12 − B̃_1 S^{-1} C̃_2)^T Q̃_12 + Q̃_12^T (Ã_12 − B̃_1 S^{-1} C̃_2) + Q̃_12^T B̃_1 S^{-1} B̃_1^T Q̃_12 + C̃_2^T S^{-1} C̃_2.

For (17) to have a solution Q_1 > 0 it is necessary that Ψ_11(Q_1) = Ψ_11(Q̃_11) ≤ 0 has a positive definite solution Q̃_11, or equivalently, that the dual Riccati inequality

    (Ã_11 − B̃_1 S^{-1} C̃_1) Ỹ + Ỹ (Ã_11 − B̃_1 S^{-1} C̃_1)^T + Ỹ C̃_1^T S^{-1} C̃_1 Ỹ + B̃_1 S^{-1} B̃_1^T ≤ 0

has a solution Ỹ = Q̃_11^{-1} > 0. Using the partitioning in (18) for Ỹ = [ Y_11  Y_12 ;  Y_12^T  Y_22 ], one can then write this as

    [ Φ_11(Ỹ)  Φ_12(Ỹ) ;  Φ_12(Ỹ)^T  Φ_22(Ỹ) ] ≤ 0,

where

    Φ_11(Ỹ) = (A_11 − B_1 S^{-1} C_1) Y_11 + Y_11 (A_11 − B_1 S^{-1} C_1)^T + Y_11 C_1^T S^{-1} C_1 Y_11 + B_1 S^{-1} B_1^T.

Again it is necessary that Φ_11(Ỹ) = Φ_11(Y_11) ≤ 0 has a solution Y_11 > 0, or equivalently, that the dual inequality

    (A_11 − B_1 S^{-1} C_1)^T Q_11 + Q_11 (A_11 − B_1 S^{-1} C_1) + Q_11 B_1 S^{-1} B_1^T Q_11 + C_1^T S^{-1} C_1 ≤ 0      (24)

has a positive definite solution. Since (A_11, B_1) is controllable and (A_11, C_1) is observable, the inequality (24) is solvable if and only if the corresponding Riccati equation is solvable, see [4, 25]. From Lemma 6 we have as necessary conditions for the solvability of (13) or (14) that A_11 − B_1 S^{-1} C_1 is asymptotically stable and that the Hamiltonian matrix

    H = [ A_11 − B_1 S^{-1} C_1    B_1 S^{-1} B_1^T ;  −C_1^T S^{-1} C_1    −(A_11 − B_1 S^{-1} C_1)^T ]      (25)

has a Lagrangian invariant subspace. We now show that these two conditions, as well as (15), are also sufficient for the existence of a positive definite solution, by explicit constructions. Clearly, we only need to consider (16) or (17).

We first show that Ψ_11 ≤ 0 has a solution Q̃_11 > 0, where Ψ_11 is defined in (23). Suppose the Hamiltonian matrix H has a Lagrangian invariant subspace. By Lemma 6 we can determine a matrix Q_11 > 0 that solves the equation in (24) with the eigenvalues of A_11 − B_1 S^{-1} (C_1 − B_1^T Q_11) in the closed left half complex plane. Then Q̃'_11 := [ Q_11  0 ;  0  0 ] is a solution of Ψ_11 = 0, but Q̃'_11 is only positive semidefinite if A_22 is present. If A_11 − B_1 S^{-1} (C_1 − B_1^T Q_11)

has no purely imaginary eigenvalues, or equivalently, H does not have purely imaginary eigenvalues, then the Hamiltonian matrix

    H̃ = [ Ã_11 − B̃_1 S^{-1} C̃_1    B̃_1 S^{-1} B̃_1^T ;  −C̃_1^T S^{-1} C̃_1    −(Ã_11 − B̃_1 S^{-1} C̃_1)^T ]

does not have purely imaginary eigenvalues either, since the spectrum of H̃ is the union of the spectra of H, A_22, and −A_22, and A_22 is asymptotically stable. Consider the Riccati equation

    Ψ_11 + Ξ = 0      (26)

with Ξ ≥ 0 being chosen such that (Ã_11 − B̃_1 S^{-1} C̃_1, C̃_1^T S^{-1} C̃_1 + Ξ) is observable; such a Ξ always exists. Recall that (Ã_11 − B̃_1 S^{-1} C̃_1, B̃_1) is controllable. The Riccati equation (26) corresponds to the Hamiltonian matrix H̃ with C̃_1^T S^{-1} C̃_1 replaced by C̃_1^T S^{-1} C̃_1 + Ξ. For a sufficiently small (in norm) Ξ, by continuity this new Hamiltonian matrix has a Lagrangian invariant subspace (when its eigenvalues stay away from the imaginary axis), and by Lemma 6, Ψ_11 = −Ξ has a positive definite solution Q̃_11 with all the eigenvalues of Ã_11 − B̃_1 S^{-1} (C̃_1 − B̃_1^T Q̃_11) in the closed left half complex plane.

If A_11 − B_1 S^{-1} (C_1 − B_1^T Q_11) has purely imaginary eigenvalues (including 0), then there exists an invertible matrix L such that

    L (A_11 − B_1 S^{-1} (C_1 − B_1^T Q_11)) L^{-1} = [ Σ_1  0 ;  0  Σ_2 ],

where Σ_1 has only purely imaginary eigenvalues and Σ_2 is asymptotically stable. So there exists an invertible matrix L̃ such that

    L̃ Ã'_11 L̃^{-1} = [ Σ_1  0  0 ;  0  Σ_2  0 ;  0  Σ_32  A_22 ] =: [ Σ_1  0 ;  0  Σ'_2 ],

where Ã'_11 := Ã_11 − B̃_1 S^{-1} (C̃_1 − B̃_1^T Q̃'_11) with Q̃'_11 = diag(Q_11, 0), and Σ'_2 is asymptotically stable. Since similarity transformations will not affect our analysis, without loss of generality we may assume that both L and L̃ are identity matrices. Subtracting

    (Ã_11 − B̃_1 S^{-1} C̃_1)^T Q̃'_11 + Q̃'_11 (Ã_11 − B̃_1 S^{-1} C̃_1) + Q̃'_11 B̃_1 S^{-1} B̃_1^T Q̃'_11 + C̃_1^T S^{-1} C̃_1 = 0

from (26) yields the Riccati equation

    (Ã'_11)^T Ỹ + Ỹ Ã'_11 + Ỹ B̃_1 S^{-1} B̃_1^T Ỹ + Ξ = 0,      (27)

where Ỹ = Q̃_11 − Q̃'_11. Repartition

    B̃_1 = [ B'_1 ;  B'_2 ]

13 according to à 11 and set Ξ = Ξ 22 Ỹ = Y 2 Then the Riccati equation (27) is reduced to (Σ 2) T Y 2 + Y 2 Σ 2 + Y 2 B 2S 1 (B 2) T Y 2 + Ξ 22 = Since (à 11 B 1 ) is controllable so is (Σ 2 B 2 ) Recall also that Σ 2 is asymptotically stable Analogous to the previous case one can choose (sufficiently small) Ξ 22 with (Σ 2 Ξ 22) observable and then the reduced Riccati equation has a positive definite solution Y 2 with the eigenvalues of Σ 2 + B 2 S 1 (B 2 )T Y 2 in the closed left half complex plane Then Q 11 = Q Q 11 +Ỹ = Q Q 12 + = (Q Y 12 )T Q 22 + Y 11 Y 12 > 2 Y12 T Y 22 where Q 11 has the same size of Σ 1 and Y 22 has the same size of A 22 and is a solution to Ψ 11 = Ξ 22 since à 11 B 1 S 1 ( C 1 B T Q 1 11 ) = à 11 + B 1 S 1 Σ1 B BT 1 Ỹ = 1 S 1 (B 2 )T Y 2 Σ 2 + B 2 S 1 (B 2 )T Y 2 has its eigenvalues is in the closed left half complex plane Once we have such a solution Q 11 we can solve Ψ 12 = for Q 12 This Sylvester equation has a unique solution because Ã22 = A 33 is asymptotically stable and the eigenvalues of à 11 B 1 S 1 ( C 1 B T Q 1 11 ) are in the closed left half complex plane Having (leaving off arguments) solved the equality Ψ 12 = we finally need to solve the inequality Ψ 22 Let Θ = I Q 1 11 Q 12 I From the equations Ψ 11 = Ξ and Ψ 12 = we obtain Ξ Θ T (U T Ψ(Q 1 )U)Θ 1 = Θ T Θ 1 = Ψ22 On the other hand Θ T (U T Q 1 U)Θ 1 = Q11 Π Q T 12 Ξ Ξ 1 Q 11 Ξ Ψ 22 Q T 12 Π = Q 22 Q T 1 12 Q Q Q Q Q 11 Ξ Q 1 11 Q 12 and Θ(U T A 1 U)Θ 1 = Θ(U T B 1 ) = U T B 1 = 1 Ã11 à 12 Ã11 Q Q à 22 B1 Q 1 11 Q 12 à 22 (C 1 U)Θ 1 = C1 C2 C 1 Q 1 11 Q 12 13

The (2,2) block of Θ^{-T} (U^T Ψ(Q_1) U) Θ^{-1} can be rewritten as

    Ã_22^T Π + Π Ã_22 + (C̃_2 − C̃_1 Q̃_11^{-1} Q̃_12)^T S^{-1} (C̃_2 − C̃_1 Q̃_11^{-1} Q̃_12),

so that

    Ψ_22 = Ã_22^T Π + Π Ã_22 + (C̃_2 − C̃_1 Q̃_11^{-1} Q̃_12)^T S^{-1} (C̃_2 − C̃_1 Q̃_11^{-1} Q̃_12) + Q̃_12^T Q̃_11^{-1} Ξ Q̃_11^{-1} Q̃_12.

Since Ã_22 is asymptotically stable and

    (C̃_2 − C̃_1 Q̃_11^{-1} Q̃_12)^T S^{-1} (C̃_2 − C̃_1 Q̃_11^{-1} Q̃_12) + Q̃_12^T Q̃_11^{-1} Ξ Q̃_11^{-1} Q̃_12 ≥ 0,

one can always select Ξ̃ ≥ 0 such that the Lyapunov equation Ψ_22 = −Ξ̃ has a unique solution Π > 0. Clearly, for Q̃_22 = Π + Q̃_12^T Q̃_11^{-1} Q̃_12, the matrix

    Q_1 = U [ Q̃_11  Q̃_12 ;  Q̃_12^T  Q̃_22 ] U^T > 0

satisfies

    Ψ(Q_1) = U Θ^T [ −Ξ   Ξ Q̃_11^{-1} Q̃_12 ;  Q̃_12^T Q̃_11^{-1} Ξ   −Ξ̃ ] Θ U^T ≤ 0,

i.e., Q_1 > 0 solves (16) and (17). We summarize the conditions for the existence of a positive definite solution of the matrix inequality (13) in the following theorem.

Theorem 7. Consider a general LTI system of the form (4) with A stable and S = D + D^T > 0. Let M be invertible such that

    M A M^{-1} = [ A_1  0 ;  0  A_2 ],   M B = [ B_1 ;  B_2 ],   C M^{-1} = [ C_1  C_2 ],

where A_2 is diagonalizable and contains all the purely imaginary eigenvalues of A. Let A_1, B_1, C_1 have the condensed form (19) with an orthogonal matrix U. Then the LMI (13) has a positive definite solution Q if and only if the following conditions hold:

(a) There exists a positive definite matrix Q_2 satisfying B_2^T Q_2 = C_2 and A_2^T Q_2 + Q_2 A_2 = 0.

(b) The block A_11 − B_1 S^{-1} C_1 is asymptotically stable.

(c) The Hamiltonian matrix H defined in (25) has a Lagrangian invariant subspace.

If these conditions hold, then the LMI (13) has a positive definite solution of the form

    Q = M^T [ Q_1  0 ;  0  Q_2 ] M > 0,

where Q_1 > 0 solves (17), Q_2 is determined from condition (a), and

    (A − B S^{-1} C)^T Q + Q (A − B S^{-1} C) + Q B S^{-1} B^T Q + C^T S^{-1} C
    = M^T [ U Θ^T [ −Ξ   Ξ Q̃_11^{-1} Q̃_12 ;  Q̃_12^T Q̃_11^{-1} Ξ   −Ξ̃ ] Θ U^T   0 ;  0   0 ] M ≤ 0,

with Ξ and Ξ̃ as in the above construction.
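Condition (c) and the solution formula Q = W_2 W_1^{-1} of Lemma 6 can be realized numerically with an ordered Schur decomposition of the Hamiltonian matrix (21). A sketch, verified on the scalar data of Example 1 with α = 2 (i.e. A = −1, B = C = S = 1), whose minimal solution is q = α − √(α² − 1):

```python
import numpy as np
from scipy.linalg import schur

def riccati_solution(A, B, C, S):
    """Solve (A - B S^{-1} C)^T Q + Q (A - B S^{-1} C) + Q B S^{-1} B^T Q
    + C^T S^{-1} C = 0 via the stable Lagrangian invariant subspace of (21)."""
    n = A.shape[0]
    Si = np.linalg.inv(S)
    F = A - B @ Si @ C
    H = np.block([[F, B @ Si @ B.T],
                  [-C.T @ Si @ C, -F.T]])
    # Ordered real Schur form: eigenvalues in the open left half plane first.
    T, Z, sdim = schur(H, output='real', sort='lhp')
    W1, W2 = Z[:n, :sdim], Z[n:, :sdim]   # basis of the stable invariant subspace
    Q = W2 @ np.linalg.inv(W1)            # Q = W2 W1^{-1}
    return 0.5 * (Q + Q.T)                # symmetrize against roundoff

# Example 1 with alpha = 2: minimal solution q = 2 - sqrt(3).
A = np.array([[-1.0]]); B = np.array([[1.0]])
C = np.array([[1.0]]);  S = np.array([[1.0]])
Q = riccati_solution(A, B, C, S)
assert abs(Q[0, 0] - (2.0 - np.sqrt(3.0))) < 1e-10
```

Selecting the spectrum of E in the left half plane yields the minimal solution Q_-; sorting for the right half plane instead would give the maximal solution Q_+ = α + √(α² − 1).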

Proof. The proof follows from the given construction.

Remark 3. Relation (22) is equivalent to the property that the even matrix pencil

      [  0  I  0 ]   [  0    A   B   ]
    λ [ -I  0  0 ] − [ A^T   0   C^T ]     (28)
      [  0  0  0 ]   [ B^T   C   S   ]

is regular and has index at most one, i.e., the eigenvalues at infinity are all semi-simple (this follows because S > 0), and that it has a deflating subspace spanned by the columns of [ Q^T  −I  Y^T ]^T with Y = S^{-1}(C − B^T Q), i.e.,

    [  0  I  0 ] [  Q ]             [  0    A   B   ] [  Q ]
    [ -I  0  0 ] [ -I ] (A − BY) =  [ A^T   0   C^T ] [ -I ]     (29)
    [  0  0  0 ] [  Y ]             [ B^T   C   S   ] [  Y ]

A pencil λN − M is called even if N = −N^T and M = M^T. Even pencils have the Hamiltonian spectral symmetry in their finite eigenvalues; together with the regularity this means that there are equally many eigenvalues in the open left and in the open right half plane. Numerically, to compute Q it is preferable to work with the even pencil (29) rather than with the Hamiltonian matrix (21), since the explicit inversion of S can be avoided. Numerically stable, structure-preserving methods for this task are available, see [2, 3].

Note that this procedure can be applied directly to the original data in (4) if S = D + D^T is positive definite and we attempt to solve the Riccati equation of (14). The situation is more complicated if S is singular, because then it is well known that the pencil (28) has Jordan blocks of size larger than one associated with the eigenvalue infinity (the index of (28) is larger than one), which means that the number of finite eigenvalues is less than 2n. In this case one first has to deflate the high-index and possibly singular part of the system. This can be done with the help of the staircase form of [8], which has been implemented in a production code [5]. The situation is also more difficult if S is indefinite, since in this case BS^{-1}B^T and C^T S^{-1} C cannot both be guaranteed to be positive semidefinite, and Lemma 6 may no longer be valid.

Remark 4. To show that the system (4) is passive, it is sufficient that the LMI (13) has a positive semidefinite solution Q. In this case the conditions can be relaxed. First of all, the condition B_2^T Q_2 = C_2 can be relaxed to Ker B_2 ⊆ Ker C_2^T, rank C_2 B_2 = rank C_2, and C_2 B_2 ≥ 0. Also, A_2 may have purely imaginary eigenvalues with Jordan blocks; in the extreme case C_2 = 0, for example, the choice Q_2 = 0 satisfies the conditions for any A_2. Secondly, for (16) or (17) we still require that A_11 − B_1 Ŝ^{-1} C_1 is asymptotically stable and that the Hamiltonian matrix Ĥ has a Lagrangian invariant subspace. Since this only requires Q_1 ≥ 0, a solution can be determined in a simpler way: we may simply set Q̂_11 = Q_11, so that Ψ_11 = 0; then, with this block structure, the solution of Ψ_12 = 0 has the form Q_12 = [ Q̃_12^T  0 ]^T, and solving Ψ_22 = 0 for Q_22, one can show that Q_1 = U [ Q_11  Q_12 ; Q_12^T  Q_22 ] U^T solves the Riccati equation Ψ(Q_1) = 0. Also, under certain conditions Ã_22 = A_33 does not have to be asymptotically stable. This shows that for non-minimal systems, passivity is more general than the PH property.
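The computation sketched in Remark 3 can be illustrated numerically. The following sketch uses hypothetical data and our sign conventions derived from (28)–(29); it is not code from the text. It forms the Hamiltonian matrix obtained from the even pencil by eliminating the y-block (which requires inverting S, precisely what the pencil formulation avoids), checks the eigenvalue pairing, and extracts a Riccati solution Q from the stable invariant subspace.

```python
import numpy as np

# A small PH model (hypothetical data): A = J - R with J skew-symmetric,
# R positive definite, C = B^T and S = D + D^T > 0, so the system is
# passive by construction and a stabilizing Riccati solution exists.
J = np.array([[0., 1., 0.], [-1., 0., 2.], [0., -2., 0.]])
R = np.diag([1.0, 2.0, 1.5])
A = J - R
B = np.array([[1.], [0.5], [-1.]])
C = B.T
S = np.array([[2.0]])                     # S = D + D^T, positive definite
Sinv = np.linalg.inv(S)
n = A.shape[0]

# Hamiltonian matrix obtained from the even pencil by eliminating the
# y-block; acting on vectors [q; x] as derived from (28).
As = A - B @ Sinv @ C
H = np.block([[As, -B @ Sinv @ B.T],
              [C.T @ Sinv @ C, -As.T]])

# Finite eigenvalues show the Hamiltonian symmetry: lambda and -conj(lambda).
ev = np.linalg.eigvals(H)
sym = max(min(abs(l + np.conj(mu)) for mu in ev) for l in ev)

# Stable invariant subspace -> Q (the sign Q = -X2 X1^{-1} matches the
# convention used in (29) and in the Riccati equation below).
w, W = np.linalg.eig(H)
X = W[:, w.real < 0]
assert X.shape[1] == n                    # no purely imaginary eigenvalues
X1, X2 = X[:n, :], X[n:, :]
Q = -np.real(X2 @ np.linalg.inv(X1))
Q = 0.5 * (Q + Q.T)                       # symmetrize against round-off

# Residual of A^T Q + Q A + (C^T - Q B) S^{-1} (C - B^T Q) = 0.
res = A.T @ Q + Q @ A + (C.T - Q @ B) @ Sinv @ (C - B.T @ Q)
```

For larger or ill-conditioned problems one would instead use a structured solver on the even pencil itself, as the text recommends.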

3 Construction of port-Hamiltonian realizations

In the last section we have seen that the existence of a port-Hamiltonian realization for (4) reduces to the existence of a nonsingular matrix T ∈ R^{n×n}, or of a positive definite matrix Q = T^T T, such that the matrix inequality (9) or (7) holds. In this section we develop a constructive procedure to check these conditions.

In our previous considerations it sufficed for the matrix V used for the basis change in the input space to be invertible, but in order to implement the transformations in a numerically stable algorithm we require V to be real orthogonal. Consider V = [V_1, V_2], where V_1 is chosen so that its columns form an orthonormal basis of the kernel of D + D^T. To construct such a V we can use a singular value decomposition or a rank-revealing QR decomposition, see [16]. Then for the symmetric part S of the feedthrough matrix we have

    S = ½ (V^T D V + V^T D^T V) = ½ V^T (D + D^T) V = [ 0  0 ; 0  S_2 ],     (30)

where S_2 is symmetric and invertible. Set

    [B_1, B_2] := BV,   [C_1^T, C_2^T] := C^T V,     (31)

each partitioned compatibly with V as in (30). Scaling the second block row and column of the matrix inequality (7) with V^T and V, respectively, the matrix inequality

    [ A^T Q + QA       QB_1 − C_1^T   QB_2 − C_2^T ]
    [ B_1^T Q − C_1    0              0            ]  ≤ 0     (32)
    [ B_2^T Q − C_2    0              −2S_2        ]

has a solution Q = Q^T > 0 if and only if the matrix inequality

    [ A^T Q + QA       QB_2 − C_2^T ]
    [ B_2^T Q − C_2    −2S_2        ]  ≤ 0     (33)

has a positive definite solution Q satisfying the constraint B_1^T Q = C_1. We will characterize conditions under which this constraint is satisfied in the following subsections. For the solution of the matrix inequality we can then restrict the input and output spaces to the invertible part of D + D^T, and once the transformation matrices have been constructed, the transformation can be extended to the full system, making sure that the constraint is satisfied.

3.1 Construction of the transformation in the case D = −D^T

To explicitly construct the transformation to port-Hamiltonian form, let us first discuss the special case that D = −D^T, i.e., S = 0. For the matrix K in (20) to satisfy K ≥ 0 we must then have P = 0; the block V_2 in the transformation of the feedthrough term is void, and V = V_1 is an arbitrary orthogonal matrix. We can express the conditions (32) and (33) in terms of the factor T in a factorization Q = T^T T.

Corollary 8. For a state-space system of the form (4) with D + D^T = 0, the following two statements are equivalent:

1. There exists a change of basis x = Tz with an invertible matrix T ∈ R^{n×n} such that the resulting realization in the new basis has PH structure as in (5).

2. There exists an invertible matrix T such that

    a) (TB)^T = CT^{-1},   and   b) (TAT^{-1})^T + TAT^{-1} ≤ 0.     (34)

Note that without the constraint (34a), if A is stable, then by Lemma 3 the second condition (34b) can always be satisfied. Adding the constraint (34a), however, makes the question nontrivial. We have the following characterization of the transformation matrices T that satisfy (34a).

Lemma 9. Consider B, C^T ∈ R^{n×m} and assume that rank B = r.

a) There exists an invertible transformation T satisfying condition (34a) if and only if Ker C^T = Ker B, rank CB = r, and CB ≥ 0; equivalently, if and only if there exists an invertible (orthogonal) matrix V such that

    BV = [B_1, 0],   C^T V = [C_1^T, 0],   C_1 B_1 = YY^T > 0,

where B_1, C_1^T ∈ R^{n×r} have full column rank and Y ∈ R^{r×r} is invertible.

b) Let N_B ∈ R^{n×(n−r)} have columns that form a basis of Ker B^T. If condition a) is satisfied, then any T satisfying condition (34a) has the form

    T = U T_Z T̃   with   T̃ = [ N_B^T ; Y^{-1} C_1 ],   T_Z = [ Z  0 ; 0  I ],     (35)

where U ∈ R^{n×n} is an arbitrary orthogonal matrix and Z ∈ R^{(n−r)×(n−r)} is an arbitrary nonsingular matrix.

Proof. Condition (34a) is equivalent to C = B^T T^T T. Necessary for the existence of an invertible T with this property are the conditions

    Ker C^T = Ker T^T T B = Ker B,   CB = B^T T^T T B ≥ 0,
    rank CB = rank B^T T^T T B = rank B = r.     (36)

Conversely, Ker C^T = Ker B is equivalent to the existence of an orthogonal matrix V ∈ R^{m×m} such that

    BV = [B_1, 0],   C^T V = [C_1^T, 0],     (37)

with B_1, C_1^T ∈ R^{n×r} of full column rank. The conditions rank CB = r and CB ≥ 0 together are equivalent to C_1 B_1 > 0. So there exists an invertible matrix Y (e.g., a Cholesky factor or a positive square root) such that C_1 B_1 = YY^T. The matrix T̃ in (35) is then well defined. Furthermore, T̃ is invertible: T̃q = 0 implies C_1 q = 0 and N_B^T q = 0; the latter implies that q ∈ Ran(B), so q = B_1 z for some z, and then C_1 B_1 z = 0. This in turn implies that z = 0 and q = 0; so T̃ is injective and hence invertible.

The invertibility of T̃ implies that

    B^T T̃^T T̃ = V [ B_1^T ; 0 ] [ N_B   C_1^T Y^{-T} ] T̃
              = V [ 0   (Y^{-1} C_1 B_1)^T ; 0   0 ] [ N_B^T ; Y^{-1} C_1 ]
              = V [ (C_1 B_1)(C_1 B_1)^{-1} C_1 ; 0 ] = V [ C_1 ; 0 ] = C.

Hence (34a) holds with T = T̃. Now suppose that T is any invertible transformation satisfying (34a). Then B^T T^T T = C = B^T T̃^T T̃, which is equivalent to

    B^T T̃^T ( (T T̃^{-1})^T (T T̃^{-1}) − I ) T̃ = 0,   i.e.,   [ 0  Y ] ( (T T̃^{-1})^T (T T̃^{-1}) − I ) T̃ = 0.

Since Y and T̃ are invertible, it follows that

    (T T̃^{-1})^T (T T̃^{-1}) = [ Z^T Z  0 ; 0  I ]

for some invertible matrix Z, and that U := T T̃^{-1} [ Z^{-1}  0 ; 0  I ] must be real orthogonal.

In order to explicitly construct a transformation matrix T as in (35), it is useful to construct bi-orthogonal bases of the two subspaces Ker B^T and Ker C. To this end, let N_C contain columns that form a basis of Ker C, so that Ran N_C = Ker C. Such a matrix is easily constructed in a numerically stable way via a singular value decomposition or a rank-revealing QR decomposition of C, see [16]. Since B and C^T are assumed to satisfy Ker B = Ker C^T, we have singular value decompositions

    B = [ U_B1  U_B2 ] [ Σ_B  0 ; 0  0 ] V_B^T,   C = U_C [ Σ_C  0 ; 0  0 ] [ V_C1^T ; V_C2^T ],     (38)

with Σ_B, Σ_C ∈ R^{r×r} both invertible. We then obtain

    N_C = V_C2,   N_B = U_B2.     (39)

Observe that N_B^T N_C is nonsingular: if N_B^T N_C q = 0 and z = N_C q, then N_B^T z = 0 implies that z ∈ Ran B. But then z = B_1 w = N_C q implies C_1 B_1 w = 0, where B_1 and C_1 are defined in (37), so w = 0 and hence z = 0. Since N_C has full column rank, it follows that q = 0. Thus N_B^T N_C is injective, hence invertible. Performing another singular value decomposition N_B^T N_C = Ũ Σ̃ Ṽ^T with positive diagonal Σ̃ and Ũ, Ṽ real orthogonal, we can perform the changes of basis Ñ_B = N_B Ũ Σ̃^{-1/2} and Ñ_C = N_C Ṽ Σ̃^{-1/2}, and obtain that the columns of Ñ_B form a basis of Ker B^T, the columns of Ñ_C form a basis of Ker C, and these two bases are bi-orthogonal, i.e., Ñ_B^T Ñ_C = I. With these bases we have

    T̃ = [ Ñ_B^T ; Y^{-1} C_1 ],   T̃^{-1} = [ Ñ_C,  B_1 Y^{-T} ].     (40)

Using the formula (35) we can express the conditions for a transformation to PH form that we have obtained so far in a more concrete way.
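The constructions (37)–(40) can be prototyped directly with the SVD. In the sketch below the data are hypothetical: we start from a pair (B, C) = (MF, F^T M^{-1}) obtained from a PH pair (F, F^T) by a change of basis, so that the conditions of Lemma 9a hold by construction; all variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 5, 2

# Hypothetical data satisfying Lemma 9a by construction.
F = rng.standard_normal((n, m))
M = np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = M @ F
C = F.T @ np.linalg.inv(M)
r = np.linalg.matrix_rank(B)

# Kernel bases N_B of Ker B^T and N_C of Ker C via full SVDs, cf. (38)-(39).
U_B, _, _ = np.linalg.svd(B)
N_B = U_B[:, r:]
_, _, Vt_C = np.linalg.svd(C)
N_C = Vt_C[r:, :].T

# Bi-orthogonalize: N_B^T N_C = U~ S~ V~^T, then scale both bases.
Ut, st, Vtt = np.linalg.svd(N_B.T @ N_C)
NB_t = N_B @ Ut / np.sqrt(st)        # N~_B = N_B U~ S~^(-1/2)
NC_t = N_C @ Vtt.T / np.sqrt(st)     # N~_C = N_C V~ S~^(-1/2)

# Compressed blocks B_1, C_1 from (37) via an orthogonal V (SVD of B),
# and Y with C_1 B_1 = Y Y^T via a Cholesky factorization.
_, _, Vt = np.linalg.svd(B)
V = Vt.T
B1 = (B @ V)[:, :r]
C1 = ((C.T @ V)[:, :r]).T
Y = np.linalg.cholesky(C1 @ B1)

# The factor T~ of (40) and its explicit inverse.
T = np.vstack([NB_t.T, np.linalg.solve(Y, C1)])
Tinv = np.hstack([NC_t, B1 @ np.linalg.inv(Y).T])
```

The assertions below confirm bi-orthogonality, the explicit inverse, and the constraint (34a), (T̃B)^T = C T̃^{-1}.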

Corollary 10. Consider system (4) with D = −D^T and rank B = r. Let the columns of Ñ_B and Ñ_C span the kernels of B^T and C, respectively, and satisfy Ñ_B^T Ñ_C = I. Then system (4) is equivalent to a PH system if and only if

1. Ker C^T = Ker B, rank CB = r, CB ≥ 0, and

2. there exists an invertible matrix Z such that

    [ Z  0 ; 0  I ] T̃ A T̃^{-1} [ Z^{-1}  0 ; 0  I ] + ( [ Z  0 ; 0  I ] T̃ A T̃^{-1} [ Z^{-1}  0 ; 0  I ] )^T ≤ 0,     (41)

where T̃ and T̃^{-1} are defined in (40).

Proof. The condition follows from Corollary 8 and the representation (35) by setting U = I and T̃ as in (40).

3.2 Construction of the transformation in the case of general D

For general D we present a recursive procedure. The first step is to perform the transformations (30), (31), which leads to the following characterization of when a transformation to PH form exists.

Lemma 11. Consider system (4), transformed as in (30) and (31). Let the columns of Ñ_B1 and Ñ_C1 form bases of the kernels of B_1^T and C_1, respectively, and satisfy Ñ_B1^T Ñ_C1 = I. Then the system is equivalent to a PH system if and only if

1. Ker C_1^T = Ker B_1, rank C_1 B_1 = rank B_1, C_1 B_1 ≥ 0, and

2. there exists an invertible matrix Z such that

    Ỹ + Ỹ^T ≤ 0,     (42)

with

    Ỹ := [ Z  0 ; 0  I ] [ T̃ A T̃^{-1}   T̃ B_2 ; C_2 T̃^{-1}   V_2^T D V_2 ] [ Z^{-1}  0 ; 0  I ],     (43)

and

    T̃ = [ Ñ_B1^T ; Y^{-1} Ĉ_1 ],   T̃^{-1} = [ Ñ_C1,  B̂_1 Y^{-T} ],   Ĉ_1 B̂_1 = YY^T,     (44)

where B̂_1, Ĉ_1 are defined as in (37), with B, C replaced by B_1, C_1.

Proof. Condition (34a) in this case takes the form (TB_1)^T = C_1 T^{-1}. If this condition holds, then applying the result of Lemma 9 to B_1 and C_1 gives the form (35). The result is proved by applying this formula in (34b). Note that Ỹ is obtained from the expression in (34b) via a congruence (and similarity) transformation with diag(U, I), where U is orthogonal.
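The testable conditions in Part 1 of Lemma 11 (and of Lemma 9) and the splitting (30) are easy to prototype numerically. The following sketch is ours (function names and tolerances are not from the text): it checks Ker C^T = Ker B, rank CB = rank B and CB ≥ 0, and computes V = [V_1, V_2] splitting off the kernel of D + D^T.

```python
import numpy as np

def check_kernel_conditions(B, C, tol=1e-10):
    """Part 1 of Lemma 9/11: Ker C^T = Ker B, rank CB = rank B, CB >= 0."""
    r = np.linalg.matrix_rank(B, tol)
    if np.linalg.matrix_rank(C, tol) != r:
        return False
    # Null-space vectors of B (right singular vectors) must annihilate C^T.
    _, _, Vt = np.linalg.svd(B)
    if r < B.shape[1] and np.linalg.norm(C.T @ Vt[r:, :].T) > tol:
        return False
    W = C @ B
    if np.linalg.matrix_rank(W, tol) != r:
        return False
    if np.linalg.norm(W - W.T) > tol:        # CB >= 0 requires symmetry
        return False
    return bool(np.all(np.linalg.eigvalsh(0.5 * (W + W.T)) > -tol))

def split_feedthrough(D, tol=1e-10):
    """Orthogonal V = [V_1, V_2] with V_1 spanning Ker(D + D^T), cf. (30)."""
    S = 0.5 * (D + D.T)
    w, V = np.linalg.eigh(S)
    ker = np.abs(w) <= tol
    V = np.hstack([V[:, ker], V[:, ~ker]])   # kernel columns first
    k = int(ker.sum())
    S2 = V[:, k:].T @ S @ V[:, k:]
    return V, S2

# Quick check with a pair that is PH up to a change of basis (hypothetical).
rng = np.random.default_rng(1)
F = rng.standard_normal((5, 2))
M = np.eye(5) + 0.1 * rng.standard_normal((5, 5))
ok = check_kernel_conditions(M @ F, F.T @ np.linalg.inv(M))
bad = check_kernel_conditions(M @ F, rng.standard_normal((2, 5)))
V, S2 = split_feedthrough(np.array([[1., 2.], [0., 1.]]))
```

For the transformed PH pair the check succeeds; for a generic C it fails, typically because CB is not symmetric.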

We can repartition the middle factor of Ỹ in (43) as

    [ T̃ A T̃^{-1}   T̃ B_2       ]   [ Ñ_B1^T A Ñ_C1      Ñ_B1^T A B̂_1 Y^{-T}      Ñ_B1^T B_2    ]
    [ C_2 T̃^{-1}   V_2^T D V_2 ] = [ Y^{-1} Ĉ_1 A Ñ_C1   Y^{-1} Ĉ_1 A B̂_1 Y^{-T}   Y^{-1} Ĉ_1 B_2 ]  =: [ Ã_1  B̃_1 ; C̃_1  D̃_1 ],     (45)
                                    [ C_2 Ñ_C1            C_2 B̂_1 Y^{-T}            V_2^T D V_2   ]

where B̂_1, Ĉ_1 are the full-column-rank parts of B_1, C_1 obtained as in (37). Thus we have

    Ỹ = [ Z  0 ; 0  I ] [ Ã_1  B̃_1 ; C̃_1  D̃_1 ] [ Z^{-1}  0 ; 0  I ],

and condition (42) is analogous to condition (9), with T, A, B, C, D replaced by Z, Ã_1, B̃_1, C̃_1, D̃_1. Hence the existence of Z can be checked again by using Lemma 9. This implies that the procedure of checking the existence of a transformation from (4) to a PH system can be performed recursively. One first performs the transformation (30) and checks whether one of the conditions in Part 1 of Lemma 11 fails, in which case a transformation does not exist, or whether D̃_1 + D̃_1^T in (45) is invertible. In the latter case, if the associated matrix inequality does not have a positive definite solution, then a transformation does not exist. Otherwise a transformation exists, and a corresponding transformation matrix T can be constructed by computing Z satisfying Ỹ(Z) + Ỹ^T(Z) ≤ 0 and forming T accordingly. After this, the process is repeated recursively.

To formalize the recursive procedure, let

    G = [ A  B ; C  D ],

and suppose that (34a) is satisfied. Then form

    G_1 := Ṽ^T T̂ G T̂^{-1} Ṽ,   where   T̂ = [ T̃  0 ; 0  I ],

and Ṽ = diag(I, [V_2, V_1]) is the matrix V = [V_1, V_2] from the decomposition (30) times a permutation that interchanges the last block columns, with T̃ obtained from (44). In this way we obtain

    G_1 = [ Ñ_B1^T A Ñ_C1      Ñ_B1^T A B̂_1 Y^{-T}      Ñ_B1^T B_2     0           ]
          [ Y^{-1} Ĉ_1 A Ñ_C1   Y^{-1} Ĉ_1 A B̂_1 Y^{-T}   Y^{-1} Ĉ_1 B_2  Y^T         ]
          [ C_2 Ñ_C1            C_2 B̂_1 Y^{-T}            V_2^T D V_2    V_2^T D V_1 ]
          [ 0                   −Y                         V_1^T D V_2    V_1^T D V_1 ]

by using the fact that C_1 T̃^{-1} = (T̃ B_1)^T, where V̂ = [V̂_1, V̂_2] is defined as in (37) for B_1, C_1. By (30) we have

    V_2^T D V_1 = −(V_1^T D V_2)^T,   V_1^T D V_1 = −V_1^T D^T V_1,

so that we can express G_1 as

    G_1 = [ [ Ã_1  B̃_1 ; C̃_1  D̃_1 ]   Γ_1 ; −Γ_1^T   Φ_1 ],

with Γ_1 = [ 0 ; Y^T ; V_2^T D V_1 ] and Φ_1 = V_1^T D V_1 skew-symmetric; in particular, the trailing blocks contribute nothing to the symmetric part of G_1. If [ Ã_1  B̃_1 ; C̃_1  D̃_1 ] satisfies condition (34a), then in an analogous way we construct T̂_1 = diag(T̃_1, I, I) and Ṽ_1 = diag(I, V̂_1, I) such that

    G_2 = Ṽ_1^T T̂_1 G_1 T̂_1^{-1} Ṽ_1 = [ [ Ã_2  B̃_2 ; C̃_2  D̃_2 ]   Γ_2   Γ_11 ; −Γ_2^T   Φ_2   Γ_12 ; −Γ_11^T   −Γ_12^T   Φ_1 ],

and we continue. Since the size of the matrices T̃_j is monotonically decreasing, this procedure terminates after a finite number k of steps with

    G_k = Ṽ_{k−1}^T T̂_{k−1} ··· Ṽ_1^T T̂_1 Ṽ^T T̂ G T̂^{-1} Ṽ T̂_1^{-1} Ṽ_1 ··· T̂_{k−1}^{-1} Ṽ_{k−1} = [ [ Ã_k  B̃_k ; C̃_k  D̃_k ]   Γ_k ; −Γ_k^T   Φ_k ],

where D̃_k + D̃_k^T is positive definite, Φ_k = −Φ_k^T, T̂_j = diag(T̃_j, I, I), Ṽ_j = diag(I_{l_j}, V̂_j, I), and l_j is the size of T̃_j, for j = 1, ..., k−1. If there exists an invertible matrix T_k such that

    [ T_k  0 ; 0  I ] [ Ã_k  B̃_k ; C̃_k  D̃_k ] [ T_k^{-1}  0 ; 0  I ] + ( [ T_k  0 ; 0  I ] [ Ã_k  B̃_k ; C̃_k  D̃_k ] [ T_k^{-1}  0 ; 0  I ] )^T ≤ 0,

see the solvability conditions in the previous section, then with T̂_k = diag(T_k, I, I) we have

    T̂_k G_k T̂_k^{-1} + ( T̂_k G_k T̂_k^{-1} )^T ≤ 0.     (46)

Observe that for each i, j with j > i, the matrices Ṽ_i and Ṽ_i^T commute with T̂_j and T̂_j^{-1}. Thus, setting T̄ := T̂_k ··· T̂_1 T̂ and V̄ := Ṽ Ṽ_1 ··· Ṽ_{k−1}, we have T̂_k G_k T̂_k^{-1} = V̄^T T̄ G T̄^{-1} V̄, and inequality (46) implies

    T̄ G T̄^{-1} + ( T̄ G T̄^{-1} )^T ≤ 0.

The desired transformation matrix T is then positioned in the top diagonal block of T̄, and the matrix V is positioned in the bottom diagonal block of V̄.
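The two identities used to expose the structure of G_1 follow directly from the fact that the columns of V_1 span the kernel of D + D^T. A quick numerical sanity check (with hypothetical data):

```python
import numpy as np

rng = np.random.default_rng(2)
m, k = 4, 2

# D with singular symmetric part: D = (skew) + L L^T gives D + D^T = 2 L L^T.
K = rng.standard_normal((m, m)); K = K - K.T
L = rng.standard_normal((m, k))
D = K + L @ L.T

# V_1 spans Ker(D + D^T), V_2 its orthogonal complement.
w, V = np.linalg.eigh(D + D.T)
V1 = V[:, np.abs(w) <= 1e-10]
V2 = V[:, np.abs(w) > 1e-10]

# (D + D^T) V_1 = 0 gives exactly the identities used for G_1:
assert np.allclose(V1.T @ D @ V1, -(V1.T @ D @ V1).T)   # V_1^T D V_1 is skew
assert np.allclose(V2.T @ D @ V1, -(V1.T @ D @ V2).T)   # cross blocks
```

This is why the already-processed border blocks drop out of the symmetric part of G_1 at every step of the recursion.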

Remark 5. The recursive procedure described above requires at each step the computation of three singular value decompositions to check the ranks of the matrices B_j, C_j and to construct the bi-orthogonal bases so that (44) holds. While each step of this procedure can be implemented in a numerically stable way, the consecutive rank decisions and their consequences are hard to analyze, as for staircase algorithms in general. The strategy should therefore be adapted to the goal of the computation, which in our case is to obtain a representation in PH form that is robust to small perturbations.

3.3 Explicit solution of linear matrix inequalities via even pencils

We have seen that to check the existence of the transformation to PH form, and to explicitly construct the transformation matrices T, V, we can consider the solution of the linear matrix inequality (7). As discussed before, the best way to do this is via the transformation of the even pencil (28). In this subsection we combine the recursive procedure with the construction of a staircase-like form for this even pencil.

For a given real symmetric matrix Q, denote by

    Ψ(Q) := [ −A^T Q − QA   C^T − QB ; C − B^T Q   D + D^T ]

the corresponding block matrix, which is supposed to be positive semidefinite. Let V_1, B_1, C_1 be defined as in (30), (31). If B_1, C_1 satisfy Part 1 of Lemma 11, then, since Ker C_1^T = Ker B_1, there exist real orthogonal matrices Ũ_1, Ṽ_1 (which can be obtained by performing a permuted singular value decomposition of B_1) such that

    Ũ_1^T B_1 Ṽ_1 = [ 0 ; Σ_B ],   Ṽ_1^T C_1 Ũ_1 = [ C_11  C_12 ],

where Σ_B is invertible and C_12 Σ_B is real symmetric and positive definite. Transforming the desired Q correspondingly as

    Q̃ := Ũ_1^T Q Ũ_1 = [ Q_11  Q_12 ; Q_12^T  Q_22 ],

then, since the linear matrix inequality (7) implies QB_1 = C_1^T, it follows in the transformed variables that

    Q_22 := C_12^T Σ_B^{-1} > 0,   Q_12 := C_11^T Σ_B^{-1},     (48)

and it remains to determine Q_11 so that Q̃ > 0. To achieve this, we set

    Q̂ := [ Q_12 Q_22^{-1} Q_12^T   Q_12 ; Q_12^T   Q_22 ] = [ Q_22^{-1} Q_12^T ; I ]^T Q_22 [ Q_22^{-1} Q_12^T ; I ],     (47)

and we clearly have Q̂ ≥ 0. With the Schur complement Q_1 := Q_11 − Q_12 Q_22^{-1} Q_12^T we can then rewrite Q̃ as

    Q̃ = [ Q_1  0 ; 0  0 ] + Q̂ = T̂_1^T [ Q_1  0 ; 0  Q_22 ] T̂_1,   T̂_1 := [ I  0 ; Q_22^{-1} Q_12^T  I ],

partitioned analogously, and we obtain that the transformed matrix Q̃ = Ũ_1^T Q Ũ_1 is positive definite if and only if Q_1 > 0. We now perform a congruence transformation on Ψ(Q) with

    Z := [ T  0 ; 0  V̄ ],   T := Ũ_1 T̂_1^{-1},   V̄ := V [ Ṽ_1  0 ; 0  I ].

Using the fact that Q̃ (Ũ_1^T B_1 Ṽ_1) = (Ṽ_1^T C_1 Ũ_1)^T holds for any choice of the remaining block Q_1 (so that the block row and column belonging to V_1, where C_1^T − QB_1 = 0 and the feedthrough block vanishes, are zero and can be dropped), and that T^T Q T = diag(Q_1, Q_22), it follows with the partitioning

    T^{-1} A T = [ A_11  A_12 ; A_21  A_22 ],   T^{-1} B_2 = [ B_13 ; B_23 ],   C_2 T = [ C_31  C_32 ]

that

    Z^T Ψ(Q) Z = [ −A_11^T Q_1 − Q_1 A_11    −Q_1 A_12 − A_21^T Q_22    C_31^T − Q_1 B_13  ]
                 [ −A_12^T Q_1 − Q_22 A_21   −A_22^T Q_22 − Q_22 A_22   C_32^T − Q_22 B_23 ]
                 [ C_31 − B_13^T Q_1         C_32 − B_23^T Q_22         2S_2               ].

In this way, (7) holds for some Q > 0 if and only if

    Ψ_1(Q_1) := [ −A_1^T Q_1 − Q_1 A_1   C_1^T − Q_1 B_1 ; C_1 − B_1^T Q_1   D_1 + D_1^T ] ≥ 0

holds for some real symmetric positive definite Q_1, where (reusing the names A_1, B_1, C_1, D_1 for the reduced data)

    A_1 = A_11,   B_1 = [ A_12  B_13 ],   C_1 = [ −Q_22 A_21 ; C_31 ],
    D_1 + D_1^T = [ −A_22^T Q_22 − Q_22 A_22   C_32^T − Q_22 B_23 ; C_32 − B_23^T Q_22   2S_2 ].

This construction has reduced the solution of the linear matrix inequality (7) to the solution of a smaller linear matrix inequality of the same form. Thus we can again proceed recursively with the same reduction process, until either the condition in Part 1 of Lemma 9 no longer holds (in which case no solution exists) or D_k + D_k^T is positive definite for some k.
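The start of the reduction, equation (48), can be checked numerically: if Q satisfies the constraint QB_1 = C_1^T, then two blocks of Ũ_1^T Q Ũ_1 are determined by the data alone. In the sketch below the data are hypothetical and we assume the convention Ũ_1^T B_1 Ṽ_1 = [0; Σ_B] (kernel block on top); we fabricate a consistent pair by choosing Q first and setting C_1 := (Q B_1)^T, then recover the blocks.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m1 = 4, 2

# Fabricated consistent data: Q symmetric positive definite, C_1 = (Q B_1)^T.
G = rng.standard_normal((n, n))
Q = G @ G.T + n * np.eye(n)
B1 = rng.standard_normal((n, m1))
C1 = (Q @ B1).T

# Permuted SVD of B_1: U1^T B_1 V1 = [0; Sigma_B], kernel block on top.
U, s, Vt = np.linalg.svd(B1, full_matrices=True)
U1 = np.hstack([U[:, m1:], U[:, :m1]])
V1 = Vt.T
SigB = np.diag(s)

# [C_11, C_12] = V1^T C_1 U1 in the transformed bases.
C_blocks = Vt @ C1 @ U1
C11, C12 = C_blocks[:, :n - m1], C_blocks[:, n - m1:]

# (48): the constraint Q B_1 = C_1^T determines Q_12 and Q_22.
Q12 = C11.T @ np.linalg.inv(SigB)
Q22 = C12.T @ np.linalg.inv(SigB)
Qt = U1.T @ Q @ U1                 # reference: the full transformed Q
```

The recovered blocks agree with the corresponding blocks of Ũ_1^T Q Ũ_1, and Q_22 is positive definite, as (48) asserts.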

This reduction process can be considered as the construction of a structured staircase form for the even pencil (28). Applying a congruence transformation to the pencil (28) with a block diagonal matrix built from T, T^{-T} and V̄ as above, it follows that the pencil takes the form

      [  0  0  I  0  0  0 ]   [  0       0       A_11   A_12   0       B_13   ]
      [  0  0  0  I  0  0 ]   [  0       0       A_21   A_22   Σ_B     B_23   ]
    λ [ -I  0  0  0  0  0 ] − [ A_11^T   A_21^T  0      0      0       C_31^T ]
      [  0 -I  0  0  0  0 ]   [ A_12^T   A_22^T  0      0      C_12^T  C_32^T ]
      [  0  0  0  0  0  0 ]   [  0       Σ_B^T   0      C_12   0       0      ]
      [  0  0  0  0  0  0 ]   [ B_13^T   B_23^T  C_31   C_32   0       S_1    ]

where S_1 := 2S_2. Performing another congruence transformation with a block unit triangular matrix Ỹ, which differs from the identity only by a single off-diagonal block Q_22, inserts Q_22 into the pencil, so that the blocks A_21^T Q_22, A_22^T Q_22 + Q_22 A_22, C_32^T − Q_22 B_23 and their transposes appear; the leading part then carries exactly the data of Ψ_1. By further moving the last block row and column to the fifth position, and then the second block row and column to the fifth position, i.e., by performing a congruence permutation, one obtains an even pencil whose leading three block rows and columns form the pencil associated with Ψ_1, i.e., they contain the blocks A_1, B_1, C_1 and D_1 + D_1^T as defined before, bordered by blocks Γ_1 and Σ_1 := Σ_B that collect the data already processed.

In this way we may repeat the reduction process on the (1,1) block, which corresponds to Ψ_1. In order to exploit the block structure of the pencil, we use a slightly different compression technique for D_1 + D_1^T. Note that we may write

    D_1 + D_1^T = [ D_11  D_12 ; D_12^T  S_1 ]

with S_1 symmetric positive definite. Then we have

    D_1 + D_1^T = [ I  0 ; S_1^{-1} D_12^T  I ]^T [ D_11 − D_12 S_1^{-1} D_12^T   0 ; 0   S_1 ] [ I  0 ; S_1^{-1} D_12^T  I ].

Let D_11 − D_12 S_1^{-1} D_12^T = Z_1 S̃_2 Z_1^T, where S̃_2 is invertible and Z_1 is orthogonal. Then

    [ Z_1  0 ; −S_1^{-1} D_12^T Z_1  I ]^T (D_1 + D_1^T) [ Z_1  0 ; −S_1^{-1} D_12^T Z_1  I ] = [ S̃_2  0 ; 0  S_1 ] =: Ŝ_2.

A necessary condition for the existence of a transformation to PH form is that S̃_2 > 0, or equivalently Ŝ_2 > 0. If this holds, then, using the fact that this congruence matrix acts on the bordering block Γ_1 only through Z_1^T,

by performing a congruence transformation on the third block row and column with [ Z_1  0 ; −S_1^{-1} D_12^T Z_1  I ], and another congruence transformation on the fourth block row and column with Z_1, we obtain an even pencil of the same staircase structure in which the feedthrough block D_1 + D_1^T has been replaced by Ŝ_2 = diag(S̃_2, S_1), with the bordering blocks repartitioned accordingly into Γ_11, Γ_12 and the blocks B_11, B_12 of B_1 and C_11, C_21 of C_1. In order to proceed, B_11 and C_11^T must satisfy the same conditions as B_1 and C_1^T. If these conditions hold, then we can perform a second set of congruence transformations and transform the pencil to an analogous form one level deeper, with new leading blocks Ã_2, B̃_2, C̃_2, D̃_2 + D̃_2^T and Σ_2, bordered by blocks Γ_2, Γ_11, Γ_12, Γ_13, and we continue in this way.
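The staircase reduction above works entirely with congruence transformations, which preserve the even structure of the pencil and hence its spectral symmetry. A minimal sanity check (hypothetical data, regular pencil so that the eigenvalues can be computed directly):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4

# An even pencil lambda*N - M: N skew-symmetric, M symmetric.
N = rng.standard_normal((n, n)); N = N - N.T
M = rng.standard_normal((n, n)); M = M + M.T

# A congruence transformation with any invertible P keeps the pencil even.
P = np.eye(n) + 0.2 * rng.standard_normal((n, n))
N2, M2 = P.T @ N @ P, P.T @ M @ P
assert np.allclose(N2, -N2.T) and np.allclose(M2, M2.T)

# Finite eigenvalues of an even pencil pair up as (lambda, -conj(lambda)).
ev = np.linalg.eigvals(np.linalg.solve(N2, M2))   # N2 invertible generically
pairing = max(min(abs(l + np.conj(mu)) for mu in ev) for l in ev)
```

This is the structural invariant that the staircase form exploits: every step is a congruence, so no symmetry is lost along the way.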


More information

Matrix Mathematics. Theory, Facts, and Formulas with Application to Linear Systems Theory. Dennis S. Bernstein

Matrix Mathematics. Theory, Facts, and Formulas with Application to Linear Systems Theory. Dennis S. Bernstein Matrix Mathematics Theory, Facts, and Formulas with Application to Linear Systems Theory Dennis S. Bernstein PRINCETON UNIVERSITY PRESS PRINCETON AND OXFORD Contents Special Symbols xv Conventions, Notation,

More information

Index. for generalized eigenvalue problem, butterfly form, 211

Index. for generalized eigenvalue problem, butterfly form, 211 Index ad hoc shifts, 165 aggressive early deflation, 205 207 algebraic multiplicity, 35 algebraic Riccati equation, 100 Arnoldi process, 372 block, 418 Hamiltonian skew symmetric, 420 implicitly restarted,

More information

MATHEMATICS 217 NOTES

MATHEMATICS 217 NOTES MATHEMATICS 27 NOTES PART I THE JORDAN CANONICAL FORM The characteristic polynomial of an n n matrix A is the polynomial χ A (λ) = det(λi A), a monic polynomial of degree n; a monic polynomial in the variable

More information

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88 Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant

More information

MATH 583A REVIEW SESSION #1

MATH 583A REVIEW SESSION #1 MATH 583A REVIEW SESSION #1 BOJAN DURICKOVIC 1. Vector Spaces Very quick review of the basic linear algebra concepts (see any linear algebra textbook): (finite dimensional) vector space (or linear space),

More information

2.2. Show that U 0 is a vector space. For each α 0 in F, show by example that U α does not satisfy closure.

2.2. Show that U 0 is a vector space. For each α 0 in F, show by example that U α does not satisfy closure. Hints for Exercises 1.3. This diagram says that f α = β g. I will prove f injective g injective. You should show g injective f injective. Assume f is injective. Now suppose g(x) = g(y) for some x, y A.

More information

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems Robert M. Freund February 2016 c 2016 Massachusetts Institute of Technology. All rights reserved. 1 1 Introduction

More information

Nonlinear palindromic eigenvalue problems and their numerical solution

Nonlinear palindromic eigenvalue problems and their numerical solution Nonlinear palindromic eigenvalue problems and their numerical solution TU Berlin DFG Research Center Institut für Mathematik MATHEON IN MEMORIAM RALPH BYERS Polynomial eigenvalue problems k P(λ) x = (

More information

8. Diagonalization.

8. Diagonalization. 8. Diagonalization 8.1. Matrix Representations of Linear Transformations Matrix of A Linear Operator with Respect to A Basis We know that every linear transformation T: R n R m has an associated standard

More information

We simply compute: for v = x i e i, bilinearity of B implies that Q B (v) = B(v, v) is given by xi x j B(e i, e j ) =

We simply compute: for v = x i e i, bilinearity of B implies that Q B (v) = B(v, v) is given by xi x j B(e i, e j ) = Math 395. Quadratic spaces over R 1. Algebraic preliminaries Let V be a vector space over a field F. Recall that a quadratic form on V is a map Q : V F such that Q(cv) = c 2 Q(v) for all v V and c F, and

More information

Lecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min.

Lecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min. MA 796S: Convex Optimization and Interior Point Methods October 8, 2007 Lecture 1 Lecturer: Kartik Sivaramakrishnan Scribe: Kartik Sivaramakrishnan 1 Conic programming Consider the conic program min s.t.

More information

Robust port-hamiltonian representations of passive systems

Robust port-hamiltonian representations of passive systems Robust port-hamiltonian representations of passive systems Christopher Beattie and Volker Mehrmann and Paul Van Dooren January 19, 2018 Abstract We discuss the problem of robust representations of stable

More information

Chap. 3. Controlled Systems, Controllability

Chap. 3. Controlled Systems, Controllability Chap. 3. Controlled Systems, Controllability 1. Controllability of Linear Systems 1.1. Kalman s Criterion Consider the linear system ẋ = Ax + Bu where x R n : state vector and u R m : input vector. A :

More information

On Eigenvalues of Laplacian Matrix for a Class of Directed Signed Graphs

On Eigenvalues of Laplacian Matrix for a Class of Directed Signed Graphs On Eigenvalues of Laplacian Matrix for a Class of Directed Signed Graphs Saeed Ahmadizadeh a, Iman Shames a, Samuel Martin b, Dragan Nešić a a Department of Electrical and Electronic Engineering, Melbourne

More information

Symmetric Matrices and Eigendecomposition

Symmetric Matrices and Eigendecomposition Symmetric Matrices and Eigendecomposition Robert M. Freund January, 2014 c 2014 Massachusetts Institute of Technology. All rights reserved. 1 2 1 Symmetric Matrices and Convexity of Quadratic Functions

More information

ME 234, Lyapunov and Riccati Problems. 1. This problem is to recall some facts and formulae you already know. e Aτ BB e A τ dτ

ME 234, Lyapunov and Riccati Problems. 1. This problem is to recall some facts and formulae you already know. e Aτ BB e A τ dτ ME 234, Lyapunov and Riccati Problems. This problem is to recall some facts and formulae you already know. (a) Let A and B be matrices of appropriate dimension. Show that (A, B) is controllable if and

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors /88 Chia-Ping Chen Department of Computer Science and Engineering National Sun Yat-sen University Linear Algebra Eigenvalue Problem /88 Eigenvalue Equation By definition, the eigenvalue equation for matrix

More information

On some interpolation problems

On some interpolation problems On some interpolation problems A. Gombani Gy. Michaletzky LADSEB-CNR Eötvös Loránd University Corso Stati Uniti 4 H-1111 Pázmány Péter sétány 1/C, 35127 Padova, Italy Computer and Automation Institute

More information

Mathematical Methods wk 2: Linear Operators

Mathematical Methods wk 2: Linear Operators John Magorrian, magog@thphysoxacuk These are work-in-progress notes for the second-year course on mathematical methods The most up-to-date version is available from http://www-thphysphysicsoxacuk/people/johnmagorrian/mm

More information

Lecture 4 and 5 Controllability and Observability: Kalman decompositions

Lecture 4 and 5 Controllability and Observability: Kalman decompositions 1 Lecture 4 and 5 Controllability and Observability: Kalman decompositions Spring 2013 - EE 194, Advanced Control (Prof. Khan) January 30 (Wed.) and Feb. 04 (Mon.), 2013 I. OBSERVABILITY OF DT LTI SYSTEMS

More information

The Eigenvalue Problem: Perturbation Theory

The Eigenvalue Problem: Perturbation Theory Jim Lambers MAT 610 Summer Session 2009-10 Lecture 13 Notes These notes correspond to Sections 7.2 and 8.1 in the text. The Eigenvalue Problem: Perturbation Theory The Unsymmetric Eigenvalue Problem Just

More information

Chapter 6 Balanced Realization 6. Introduction One popular approach for obtaining a minimal realization is known as Balanced Realization. In this appr

Chapter 6 Balanced Realization 6. Introduction One popular approach for obtaining a minimal realization is known as Balanced Realization. In this appr Lectures on ynamic Systems and Control Mohammed ahleh Munther A. ahleh George Verghese epartment of Electrical Engineering and Computer Science Massachuasetts Institute of Technology c Chapter 6 Balanced

More information

1. Introduction. Consider the following quadratically constrained quadratic optimization problem:

1. Introduction. Consider the following quadratically constrained quadratic optimization problem: ON LOCAL NON-GLOBAL MINIMIZERS OF QUADRATIC OPTIMIZATION PROBLEM WITH A SINGLE QUADRATIC CONSTRAINT A. TAATI AND M. SALAHI Abstract. In this paper, we consider the nonconvex quadratic optimization problem

More information

MAT2342 : Introduction to Applied Linear Algebra Mike Newman, fall Projections. introduction

MAT2342 : Introduction to Applied Linear Algebra Mike Newman, fall Projections. introduction MAT4 : Introduction to Applied Linear Algebra Mike Newman fall 7 9. Projections introduction One reason to consider projections is to understand approximate solutions to linear systems. A common example

More information

ISOLATED SEMIDEFINITE SOLUTIONS OF THE CONTINUOUS-TIME ALGEBRAIC RICCATI EQUATION

ISOLATED SEMIDEFINITE SOLUTIONS OF THE CONTINUOUS-TIME ALGEBRAIC RICCATI EQUATION ISOLATED SEMIDEFINITE SOLUTIONS OF THE CONTINUOUS-TIME ALGEBRAIC RICCATI EQUATION Harald K. Wimmer 1 The set of all negative-semidefinite solutions of the CARE A X + XA + XBB X C C = 0 is homeomorphic

More information

Topics in Applied Linear Algebra - Part II

Topics in Applied Linear Algebra - Part II Topics in Applied Linear Algebra - Part II April 23, 2013 Some Preliminary Remarks The purpose of these notes is to provide a guide through the material for the second part of the graduate module HM802

More information

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v )

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v ) Section 3.2 Theorem 3.6. Let A be an m n matrix of rank r. Then r m, r n, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix ( ) Ir O D = 1 O

More information

Subdiagonal pivot structures and associated canonical forms under state isometries

Subdiagonal pivot structures and associated canonical forms under state isometries Preprints of the 15th IFAC Symposium on System Identification Saint-Malo, France, July 6-8, 29 Subdiagonal pivot structures and associated canonical forms under state isometries Bernard Hanzon Martine

More information

Outline. Linear Matrix Inequalities in Control. Outline. System Interconnection. j _jst. ]Bt Bjj. Generalized plant framework

Outline. Linear Matrix Inequalities in Control. Outline. System Interconnection. j _jst. ]Bt Bjj. Generalized plant framework Outline Linear Matrix Inequalities in Control Carsten Scherer and Siep Weiland 7th Elgersburg School on Mathematical Systems heory Class 3 1 Single-Objective Synthesis Setup State-Feedback Output-Feedback

More information

Eigenvalue perturbation theory of structured real matrices and their sign characteristics under generic structured rank-one perturbations

Eigenvalue perturbation theory of structured real matrices and their sign characteristics under generic structured rank-one perturbations Eigenvalue perturbation theory of structured real matrices and their sign characteristics under generic structured rank-one perturbations Christian Mehl Volker Mehrmann André C. M. Ran Leiba Rodman April

More information

Stationary trajectories, singular Hamiltonian systems and ill-posed Interconnection

Stationary trajectories, singular Hamiltonian systems and ill-posed Interconnection Stationary trajectories, singular Hamiltonian systems and ill-posed Interconnection S.C. Jugade, Debasattam Pal, Rachel K. Kalaimani and Madhu N. Belur Department of Electrical Engineering Indian Institute

More information

Trimmed linearizations for structured matrix polynomials

Trimmed linearizations for structured matrix polynomials Trimmed linearizations for structured matrix polynomials Ralph Byers Volker Mehrmann Hongguo Xu January 5 28 Dedicated to Richard S Varga on the occasion of his 8th birthday Abstract We discuss the eigenvalue

More information

E2 212: Matrix Theory (Fall 2010) Solutions to Test - 1

E2 212: Matrix Theory (Fall 2010) Solutions to Test - 1 E2 212: Matrix Theory (Fall 2010) s to Test - 1 1. Let X = [x 1, x 2,..., x n ] R m n be a tall matrix. Let S R(X), and let P be an orthogonal projector onto S. (a) If X is full rank, show that P can be

More information

ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA

ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA Kent State University Department of Mathematical Sciences Compiled and Maintained by Donald L. White Version: August 29, 2017 CONTENTS LINEAR ALGEBRA AND

More information

Math Linear Algebra II. 1. Inner Products and Norms

Math Linear Algebra II. 1. Inner Products and Norms Math 342 - Linear Algebra II Notes 1. Inner Products and Norms One knows from a basic introduction to vectors in R n Math 254 at OSU) that the length of a vector x = x 1 x 2... x n ) T R n, denoted x,

More information

MAT Linear Algebra Collection of sample exams

MAT Linear Algebra Collection of sample exams MAT 342 - Linear Algebra Collection of sample exams A-x. (0 pts Give the precise definition of the row echelon form. 2. ( 0 pts After performing row reductions on the augmented matrix for a certain system

More information

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH V. FABER, J. LIESEN, AND P. TICHÝ Abstract. Numerous algorithms in numerical linear algebra are based on the reduction of a given matrix

More information

On Spectral Factorization and Riccati Equations for Time-Varying Systems in Discrete Time

On Spectral Factorization and Riccati Equations for Time-Varying Systems in Discrete Time On Spectral Factorization and Riccati Equations for Time-Varying Systems in Discrete Time Alle-Jan van der Veen and Michel Verhaegen Delft University of Technology Department of Electrical Engineering

More information

In these chapter 2A notes write vectors in boldface to reduce the ambiguity of the notation.

In these chapter 2A notes write vectors in boldface to reduce the ambiguity of the notation. 1 2 Linear Systems In these chapter 2A notes write vectors in boldface to reduce the ambiguity of the notation 21 Matrix ODEs Let and is a scalar A linear function satisfies Linear superposition ) Linear

More information

1 Linear Algebra Problems

1 Linear Algebra Problems Linear Algebra Problems. Let A be the conjugate transpose of the complex matrix A; i.e., A = A t : A is said to be Hermitian if A = A; real symmetric if A is real and A t = A; skew-hermitian if A = A and

More information

1. Find the solution of the following uncontrolled linear system. 2 α 1 1

1. Find the solution of the following uncontrolled linear system. 2 α 1 1 Appendix B Revision Problems 1. Find the solution of the following uncontrolled linear system 0 1 1 ẋ = x, x(0) =. 2 3 1 Class test, August 1998 2. Given the linear system described by 2 α 1 1 ẋ = x +

More information

RECENTLY, there has been renewed research interest

RECENTLY, there has been renewed research interest IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL 49, NO 12, DECEMBER 2004 2113 Distributed Control of Heterogeneous Systems Geir E Dullerud Raffaello D Andrea Abstract This paper considers control design for

More information

Putzer s Algorithm. Norman Lebovitz. September 8, 2016

Putzer s Algorithm. Norman Lebovitz. September 8, 2016 Putzer s Algorithm Norman Lebovitz September 8, 2016 1 Putzer s algorithm The differential equation dx = Ax, (1) dt where A is an n n matrix of constants, possesses the fundamental matrix solution exp(at),

More information

SPECTRAL THEORY EVAN JENKINS

SPECTRAL THEORY EVAN JENKINS SPECTRAL THEORY EVAN JENKINS Abstract. These are notes from two lectures given in MATH 27200, Basic Functional Analysis, at the University of Chicago in March 2010. The proof of the spectral theorem for

More information

1 Lyapunov theory of stability

1 Lyapunov theory of stability M.Kawski, APM 581 Diff Equns Intro to Lyapunov theory. November 15, 29 1 1 Lyapunov theory of stability Introduction. Lyapunov s second (or direct) method provides tools for studying (asymptotic) stability

More information

3118 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 57, NO. 12, DECEMBER 2012

3118 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 57, NO. 12, DECEMBER 2012 3118 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL 57, NO 12, DECEMBER 2012 ASimultaneousBalancedTruncationApproachto Model Reduction of Switched Linear Systems Nima Monshizadeh, Harry L Trentelman, SeniorMember,IEEE,

More information

14 Singular Value Decomposition

14 Singular Value Decomposition 14 Singular Value Decomposition For any high-dimensional data analysis, one s first thought should often be: can I use an SVD? The singular value decomposition is an invaluable analysis tool for dealing

More information

j=1 u 1jv 1j. 1/ 2 Lemma 1. An orthogonal set of vectors must be linearly independent.

j=1 u 1jv 1j. 1/ 2 Lemma 1. An orthogonal set of vectors must be linearly independent. Lecture Notes: Orthogonal and Symmetric Matrices Yufei Tao Department of Computer Science and Engineering Chinese University of Hong Kong taoyf@cse.cuhk.edu.hk Orthogonal Matrix Definition. Let u = [u

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K R MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND First Printing, 99 Chapter LINEAR EQUATIONS Introduction to linear equations A linear equation in n unknowns x,

More information

16. Local theory of regular singular points and applications

16. Local theory of regular singular points and applications 16. Local theory of regular singular points and applications 265 16. Local theory of regular singular points and applications In this section we consider linear systems defined by the germs of meromorphic

More information

On doubly structured matrices and pencils that arise in linear response theory

On doubly structured matrices and pencils that arise in linear response theory On doubly structured matrices and pencils that arise in linear response theory Christian Mehl Volker Mehrmann Hongguo Xu Abstract We discuss matrix pencils with a double symmetry structure that arise in

More information

A DECOMPOSITION THEOREM FOR FRAMES AND THE FEICHTINGER CONJECTURE

A DECOMPOSITION THEOREM FOR FRAMES AND THE FEICHTINGER CONJECTURE PROCEEDINGS OF THE AMERICAN MATHEMATICAL SOCIETY Volume 00, Number 0, Pages 000 000 S 0002-9939(XX)0000-0 A DECOMPOSITION THEOREM FOR FRAMES AND THE FEICHTINGER CONJECTURE PETER G. CASAZZA, GITTA KUTYNIOK,

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A =

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = 30 MATHEMATICS REVIEW G A.1.1 Matrices and Vectors Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = a 11 a 12... a 1N a 21 a 22... a 2N...... a M1 a M2... a MN A matrix can

More information

MIT Algebraic techniques and semidefinite optimization May 9, Lecture 21. Lecturer: Pablo A. Parrilo Scribe:???

MIT Algebraic techniques and semidefinite optimization May 9, Lecture 21. Lecturer: Pablo A. Parrilo Scribe:??? MIT 6.972 Algebraic techniques and semidefinite optimization May 9, 2006 Lecture 2 Lecturer: Pablo A. Parrilo Scribe:??? In this lecture we study techniques to exploit the symmetry that can be present

More information