Model reduction of interconnected systems

A. Vandendorpe and P. Van Dooren

1 Introduction

Large scale linear systems are often composed of subsystems that interconnect with each other. Instead of reducing the entire system without taking its structure into account, it makes sense to reduce each subsystem (or a few of them) while taking into account its interconnection with the other subsystems, in order to approximate the entire system. This is the purpose of this paper. Interconnected (or aggregated) systems were studied in the eighties [1] in the model reduction framework, but have recently received only little attention [2]. It turns out that many model reduction techniques, such as weighted balanced truncation, controller reduction and second-order balanced truncation, can be seen as particular interconnected model reduction techniques.

Let us assume that we are given a large scale linear system $G(s)$, composed of an interconnection of $k$ subsystems $T_i(s)$. Each subsystem is assumed to be a linear MIMO transfer function. Subsystem $T_i(s)$ has $\alpha_i$ inputs, denoted by the vector $a_i$, and $\beta_i$ outputs, denoted by the vector $b_i$:
$$b_i(s) = T_i(s)\,a_i(s). \qquad (1)$$
Define $\alpha = \sum_{i=1}^{k}\alpha_i$ and $\beta = \sum_{j=1}^{k}\beta_j$. The inputs of each subsystem are either outputs of other subsystems or external inputs that do not depend on the other subsystems. First, one can rewrite the transfer function in terms of its subsystems via the use of an interconnection matrix:
$$a_i(s) = u_i(s) + \sum_{j=1}^{k} K_{ij}\,b_j(s). \qquad (2)$$
Sometimes it is preferable to define the external input $u_i(s)$ as a linear combination of a global external input $u(s)$. This is written $u_i(s) = H_i u(s)$, where $H_i \in \mathbb{R}^{\alpha_i \times m}$. Define
$$a(s) = \begin{bmatrix} a_1(s)^T & \dots & a_k(s)^T \end{bmatrix}^T, \qquad b(s) = \begin{bmatrix} b_1(s)^T & \dots & b_k(s)^T \end{bmatrix}^T,$$
$$T(s) = \mathrm{diag}\bigl(T_1(s), \dots, T_k(s)\bigr), \qquad H = \begin{bmatrix} H_1^T & \dots & H_k^T \end{bmatrix}^T,$$
and finally the connectivity matrix $K$ as follows:
$$K = \begin{bmatrix} K_{11} & \dots & K_{1k} \\ \vdots & \ddots & \vdots \\ K_{k1} & \dots & K_{kk} \end{bmatrix}. \qquad (3)$$

A research fellowship from the Belgian National Fund for Scientific Research is gratefully acknowledged by AV. This paper presents research supported by the Belgian
Programme on Inter-university Poles of Attraction, initiated by the Belgian State, Prime Minister's Office for Science, Technology and Culture.
The McMillan degree of $T_i(s)$ is $n_i$, and $(A_i, B_i, C_i, D_i)$ is a minimal state space realization of $T_i(s)$. From these definitions, $T(s) = C(sI - A)^{-1}B + D$ with
$$A = \mathrm{diag}(A_1, \dots, A_k), \quad B = \mathrm{diag}(B_1, \dots, B_k), \quad C = \mathrm{diag}(C_1, \dots, C_k), \quad D = \mathrm{diag}(D_1, \dots, D_k).$$
The preceding equations can be rewritten as follows:
$$a(s) = H u(s) + K b(s), \qquad b(s) = T(s)\,a(s). \qquad (4)$$
The output of $G(s)$, denoted $y(s)$, is a linear function of the outputs of the subsystems: $y(s) = F b(s)$ with $F \in \mathbb{R}^{p \times \beta}$. The input of $G(s)$ is the vector $u(s)$. From (4),
$$y(s) = F\bigl(I - T(s)K\bigr)^{-1} T(s) H\, u(s). \qquad (5)$$
In other words, $G(s) = F(I - T(s)K)^{-1}T(s)H$. Hence a state space realization of $G(s)$ is given by $(A_G, B_G, C_G, D_G)$, defined by (see for instance [3, p. 66])
$$A_G = A + BK(I - DK)^{-1}C, \qquad B_G = B(I - KD)^{-1}H,$$
$$C_G = F(I - DK)^{-1}C, \qquad D_G = FD(I - KD)^{-1}H. \qquad (6)$$

2 Examples of Interconnected Systems

Many structured systems can be seen as interconnected. As a first example, let us consider the weighted transfer function
$$y(s) = W_{out}(s)\,T(s)\,W_{in}(s)\,u(s) = G(s)\,u(s).$$
Let $(A_o, B_o, C_o, D_o)$, $(A, B, C, D)$ and $(A_i, B_i, C_i, D_i)$ be state space realizations of $W_{out}(s)$, $T(s)$ and $W_{in}(s)$, respectively. A state space realization $(A_G, B_G, C_G, D_G)$ of $G(s)$ is given by
$$\begin{bmatrix} A_G & B_G \\ C_G & D_G \end{bmatrix} = \begin{bmatrix} A_o & B_o C & B_o D C_i & B_o D D_i \\ 0 & A & B C_i & B D_i \\ 0 & 0 & A_i & B_i \\ C_o & D_o C & D_o D C_i & D_o D D_i \end{bmatrix}.$$
The transfer function $G(s)$ corresponds to the interconnected system $S$ with
$$S : \begin{cases} b_1(s) = W_{out}(s)\,a_1(s) \\ b_2(s) = T(s)\,a_2(s) \\ b_3(s) = W_{in}(s)\,a_3(s) \end{cases} \qquad y(s) = b_1(s), \quad a_1(s) = b_2(s), \quad a_2(s) = b_3(s), \quad a_3(s) = u(s),$$
and
$$H = \begin{bmatrix} 0 \\ 0 \\ I \end{bmatrix}, \qquad K = \begin{bmatrix} 0 & I & 0 \\ 0 & 0 & I \\ 0 & 0 & 0 \end{bmatrix}, \qquad F = \begin{bmatrix} I & 0 & 0 \end{bmatrix}.$$
Controlled systems and second-order systems [4] can also be seen as special cases of interconnected systems.
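Formula (6) is easy to check numerically. The sketch below (Python with NumPy; the two random stable subsystems and the particular matrices $K$, $H$, $F$ are illustrative choices, not taken from the paper) builds the interconnected realization and compares it against $G(s) = F(I - T(s)K)^{-1}T(s)H$ at one test frequency:

```python
import numpy as np

rng = np.random.default_rng(0)

def blkdiag(*blocks):
    """Assemble a block-diagonal matrix (avoids a SciPy dependency)."""
    rows = sum(b.shape[0] for b in blocks)
    cols = sum(b.shape[1] for b in blocks)
    out = np.zeros((rows, cols))
    r = c = 0
    for b in blocks:
        out[r:r + b.shape[0], c:c + b.shape[1]] = b
        r, c = r + b.shape[0], c + b.shape[1]
    return out

# Two illustrative random stable SISO subsystems (k = 2, alpha_i = beta_i = 1).
n1, n2 = 3, 2
A1 = rng.standard_normal((n1, n1))
A1 -= (np.max(np.real(np.linalg.eigvals(A1))) + 1.0) * np.eye(n1)
A2 = rng.standard_normal((n2, n2))
A2 -= (np.max(np.real(np.linalg.eigvals(A2))) + 1.0) * np.eye(n2)
B1, B2 = rng.standard_normal((n1, 1)), rng.standard_normal((n2, 1))
C1, C2 = rng.standard_normal((1, n1)), rng.standard_normal((1, n2))
D1, D2 = 0.1 * rng.standard_normal((1, 1)), 0.1 * rng.standard_normal((1, 1))

A, B = blkdiag(A1, A2), blkdiag(B1, B2)
C, D = blkdiag(C1, C2), blkdiag(D1, D2)

# Illustrative interconnection: a_1 = u + 0.2 b_2, a_2 = 0.3 b_1, output y = b_2.
K = np.array([[0.0, 0.2], [0.3, 0.0]])
H = np.array([[1.0], [0.0]])
F = np.array([[0.0, 1.0]])

# Realization (6) of the interconnected system G(s).
I2 = np.eye(2)
A_G = A + B @ K @ np.linalg.solve(I2 - D @ K, C)
B_G = B @ np.linalg.solve(I2 - K @ D, H)
C_G = F @ np.linalg.solve(I2 - D @ K, C)
D_G = F @ D @ np.linalg.solve(I2 - K @ D, H)

def tf(A, B, C, D, s):
    return C @ np.linalg.solve(s * np.eye(A.shape[0]) - A, B) + D

# Realization (6) agrees with G(s) = F (I - T(s) K)^{-1} T(s) H at a test point.
s = 1.0j
T = tf(A, B, C, D, s)
G_direct = F @ np.linalg.solve(np.eye(2) - T @ K, T @ H)
G_state = tf(A_G, B_G, C_G, D_G, s)
assert np.allclose(G_direct, G_state)
```

The check presupposes well-posedness of the interconnection, i.e. invertibility of $I - DK$.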
3 Structured Optimization Problems

This section is inspired by [5] and [6] and presents some basic lemmas, which are given without proof.

Lemma 3.1 Let $x_i \in \mathbb{R}^{n_i}$ and $M_{ij} \in \mathbb{R}^{n_i \times n_j}$ for $1 \le i, j \le k$. Define
$$x = \begin{bmatrix} x_1 \\ \vdots \\ x_k \end{bmatrix}, \qquad M = \begin{bmatrix} M_{11} & \dots & M_{1k} \\ \vdots & \ddots & \vdots \\ M_{k1} & \dots & M_{kk} \end{bmatrix}.$$
Assume that the matrix $M$ is positive definite, and consider the product
$$J(x, M) = x^T M^{-1} x.$$
Then, for any fixed $x_i \in \mathbb{R}^{n_i}$,
$$\min_{x_j,\; j \neq i} J(x, M) = x_i^T M_{ii}^{-1} x_i.$$

Another optimization result is often used in the context of weighted model reduction. Instead of finding the minimum of $J$ with respect to the other variables, one might be interested in the value of $J$ obtained by setting the other states equal to zero. This gives rise to the following result.

Lemma 3.2 Let $x_i \in \mathbb{R}^{n_i}$ and $M_{ij} \in \mathbb{R}^{n_i \times n_j}$ for $1 \le i, j \le 2$. Define
$$x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}, \qquad M = \begin{bmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{bmatrix}.$$
Assume that the matrix $M$ is positive definite, and consider the product $J(x, M) = x^T M^{-1} x$. Then, for any fixed $x_i \in \mathbb{R}^{n_i}$, $i \neq j$,
$$J(x, M)\big|_{x_j = 0} = x_i^T \bigl(M_{ii} - M_{ij} M_{jj}^{-1} M_{ji}\bigr)^{-1} x_i.$$
The generalization to $k$ different states is straightforward.

4 Balanced Truncation

Let us consider a transfer function $T(s) = C(sI_n - A)^{-1}B$, which corresponds to the linear system
$$S : \begin{cases} \dot{x}(t) = Ax(t) + Bu(t) \\ y(t) = Cx(t) + Du(t) \end{cases} \qquad u(t) \in \mathbb{R}^m, \; x(t) \in \mathbb{R}^n, \; y(t) \in \mathbb{R}^p. \qquad (7)$$
If the matrix $A$ is Hurwitz, the controllability and observability gramians, denoted respectively by $P$ and $Q$, are the unique solutions of the equations
$$AP + PA^T + BB^T = 0, \qquad A^T Q + QA + C^T C = 0.$$
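Both lemmas are easy to verify numerically. The following sketch (Python with NumPy; the block sizes and the random positive definite $M$ are arbitrary choices) checks Lemma 3.1 and Lemma 3.2 for $k = 2$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Arbitrary symmetric positive definite M, partitioned into blocks of sizes n1, n2.
n1, n2 = 3, 4
n = n1 + n2
R = rng.standard_normal((n, n))
M = R @ R.T + n * np.eye(n)
M11, M12 = M[:n1, :n1], M[:n1, n1:]
M21, M22 = M[n1:, :n1], M[n1:, n1:]
N = np.linalg.inv(M)                     # N = M^{-1}, partitioned like M

x1 = rng.standard_normal(n1)

# Lemma 3.1: minimize J(x, M) = x^T M^{-1} x over x2 with x1 fixed.
# Setting the gradient to zero gives x2 = -N22^{-1} N21 x1.
x2_opt = -np.linalg.solve(N[n1:, n1:], N[n1:, :n1] @ x1)
x_opt = np.concatenate([x1, x2_opt])
J_min = x_opt @ N @ x_opt
assert np.isclose(J_min, x1 @ np.linalg.solve(M11, x1))

# Lemma 3.2: setting x2 = 0 instead gives x1^T (M11 - M12 M22^{-1} M21)^{-1} x1.
x_zero = np.concatenate([x1, np.zeros(n2)])
J_zero = x_zero @ N @ x_zero
Schur = M11 - M12 @ np.linalg.solve(M22, M21)
assert np.isclose(J_zero, x1 @ np.linalg.solve(Schur, x1))
assert J_min <= J_zero + 1e-12           # the minimum is never larger
```

The last assertion illustrates that the constrained value of Lemma 3.2 always upper-bounds the minimum of Lemma 3.1.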
These have the following energetic interpretation. If we apply an input $u(\cdot) \in L_2[-\infty, 0]$ to the system (7) for $t < 0$, the state at time $t = 0$ (assuming the zero initial condition $x(-\infty) = 0$) is equal to
$$x(0) = \int_{-\infty}^{0} e^{-At} B u(t)\,dt = \mathcal{C}_o\, u(t).$$
If a zero input is applied to the system for $t > 0$, then for all $t \ge 0$ the output $y(\cdot) \in L_2[0, +\infty]$ of the system (7) is equal to
$$y(t) = C e^{At} x(0) = \mathcal{O}_b\, x(0).$$
Let us briefly recall the concept of the dual of an operator.

Definition 4.1 Let $L$ be a linear operator acting from a Hilbert space $U$ to a Hilbert space $Y$, equipped respectively with the inner products $\langle \cdot, \cdot \rangle_U$ and $\langle \cdot, \cdot \rangle_Y$. The dual of $L$, denoted $L^*$, is defined as the linear operator acting from $Y$ to $U$ such that
$$\langle Lu, y \rangle_Y = \langle u, L^* y \rangle_U \quad \text{for all } y \in Y \text{ and all } u \in U.$$

The so-called controllability operator $\mathcal{C}_o : L_2[-\infty, 0] \to \mathbb{R}^n$ (mapping past inputs $u(\cdot)$ to the present state) and observability operator $\mathcal{O}_b : \mathbb{R}^n \to L_2[0, +\infty]$ (mapping the present state to future outputs $y(\cdot)$) also have dual operators, respectively $\mathcal{C}_o^*$ and $\mathcal{O}_b^*$. It can be shown that the controllability and observability gramians are related to these operators via the identities $P = \mathcal{C}_o \mathcal{C}_o^*$ and $Q = \mathcal{O}_b^* \mathcal{O}_b$ ([3]).

Another physical interpretation of the gramians is the following. The controllability gramian arises from the following optimization problem. Let
$$J(v(t), a, b) = \int_a^b v(t)^T v(t)\,dt$$
be the energy of the vector function $v(t)$ on the interval $[a, b]$. Then (see [7])
$$\min_{\mathcal{C}_o u(t) = x_0} J(u(t), -\infty, 0) = x_0^T P^{-1} x_0, \qquad (8)$$
and, symmetrically, we have the dual property
$$\min_{\mathcal{O}_b^* y(t) = x_0} J(y(t), 0, +\infty) = x_0^T Q^{-1} x_0. \qquad (9)$$

Two essential algebraic properties of the gramians $P$ and $Q$ are as follows. First, under a coordinate transformation $x(t) = S\bar{x}(t)$, the new gramians $\bar{P}$ and $\bar{Q}$ corresponding to the state-space realization $(\bar{A}, \bar{B}, \bar{C}) = (S^{-1}AS, S^{-1}B, CS)$ undergo the following (so-called contragradient) transformation:
$$\bar{P} = S^{-1} P S^{-T}, \qquad \bar{Q} = S^T Q S. \qquad (10)$$
This implies that there exists a state-space realization $(A_{bal}, B_{bal}, C_{bal})$ of $T(s)$ such that the corresponding gramians are equal and diagonal: $P = Q = \Sigma$ [3]. Secondly,
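As a numerical illustration (Python with NumPy/SciPy; the random stable realization, the integration horizon and the transformation $S$ are arbitrary choices), one can check the gramian equations against the integral definition of $P$ and verify the contragradient property (10):

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

rng = np.random.default_rng(2)
n, m, p = 4, 2, 2
A = rng.standard_normal((n, n))
A -= (np.max(np.real(np.linalg.eigvals(A))) + 0.5) * np.eye(n)   # make A Hurwitz
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))

# Gramians: solutions of A P + P A^T + B B^T = 0 and A^T Q + Q A + C^T C = 0.
P = solve_continuous_lyapunov(A, -B @ B.T)
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

# Cross-check P against its integral form  P = int_0^inf e^{At} B B^T e^{A^T t} dt
# via a composite trapezoidal rule on [0, 40] (the integrand has decayed by then).
h, N = 0.001, 40000
E, Eh = B.copy(), expm(A * h)            # E = e^{A t} B, stepped by e^{A h}
P_int = np.zeros((n, n))
for j in range(N + 1):
    w = h if 0 < j < N else h / 2
    P_int += w * (E @ E.T)
    E = Eh @ E
assert np.allclose(P, P_int, rtol=1e-3, atol=1e-4)

# Contragradient property (10): for x(t) = S xbar(t), the transformed realization
# (S^{-1} A S, S^{-1} B, C S) has gramians S^{-1} P S^{-T} and S^T Q S.
S = rng.standard_normal((n, n)) + 3 * np.eye(n)
Ab, Bb, Cb = np.linalg.solve(S, A @ S), np.linalg.solve(S, B), C @ S
Pb = solve_continuous_lyapunov(Ab, -Bb @ Bb.T)
Qb = solve_continuous_lyapunov(Ab.T, -Cb.T @ Cb)
Sinv = np.linalg.inv(S)
assert np.allclose(Pb, Sinv @ P @ Sinv.T)
assert np.allclose(Qb, S.T @ Q @ S)
```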
because these gramians appear in the solutions of the optimization problems (8) and (9), they tell us something about the energy that flows through the system and, more specifically, about the distribution of this energy among the state variables. The idea is to perform a state space transformation that gives equal and diagonal gramians and to keep only the most controllable and observable states. We now show how to apply this idea to a set of interconnected systems.
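A minimal sketch of this balance-and-truncate procedure, in so-called square-root form (Cholesky factors of the gramians plus an SVD; the random system and the reduced order are illustrative), is:

```python
import numpy as np
from scipy.linalg import cholesky, solve_continuous_lyapunov, svd

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation of a stable realization (A, B, C)."""
    P = solve_continuous_lyapunov(A, -B @ B.T)      # controllability gramian
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)    # observability gramian
    Lp = cholesky(P, lower=True)
    Lq = cholesky(Q, lower=True)
    U, s, Vt = svd(Lq.T @ Lp)                       # s = Hankel singular values
    S_half = np.diag(s[:r] ** -0.5)
    V = Lp @ Vt[:r].T @ S_half                      # right projector
    Z = Lq @ U[:, :r] @ S_half                      # left projector, Z^T V = I_r
    return Z.T @ A @ V, Z.T @ B, C @ V, s

rng = np.random.default_rng(3)
n, r = 6, 3
A = rng.standard_normal((n, n))
A -= (np.max(np.real(np.linalg.eigvals(A))) + 0.5) * np.eye(n)
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r)

# The reduced realization is itself balanced: both of its gramians equal
# diag(hsv[:r]), the leading Hankel singular values.
Pr = solve_continuous_lyapunov(Ar, -Br @ Br.T)
Qr = solve_continuous_lyapunov(Ar.T, -Cr.T @ Cr)
assert np.allclose(Pr, np.diag(hsv[:r]), atol=1e-7)
assert np.allclose(Qr, np.diag(hsv[:r]), atol=1e-7)
```

Because the truncated realization is the leading block of the balanced one, its gramians are exactly the leading Hankel singular values, which the assertions check.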
5 Interconnected Systems Balanced Truncation

Let us consider the controllability and observability gramians of $G(s)$:
$$A_G P_G + P_G A_G^T + B_G B_G^T = 0, \qquad A_G^T Q_G + Q_G A_G + C_G^T C_G = 0.$$
Let us decompose
$$P_G = \begin{bmatrix} P_{11} & \dots & P_{1k} \\ \vdots & \ddots & \vdots \\ P_{k1} & \dots & P_{kk} \end{bmatrix}, \qquad Q_G = \begin{bmatrix} Q_{11} & \dots & Q_{1k} \\ \vdots & \ddots & \vdots \\ Q_{k1} & \dots & Q_{kk} \end{bmatrix},$$
where $P_{ij} \in \mathbb{R}^{n_i \times n_j}$. If we perform a state space transformation $\bar{x}_i(t) = \Phi_i x_i(t)$ on the state of each interconnected transfer function $T_i(s)$, i.e. on the linear system
$$\begin{cases} \dot{x}_i(t) = A_i x_i(t) + B_i u_i(t) \\ y_i(t) = C_i x_i(t) + D_i u_i(t) \end{cases} \qquad u_i(t) \in \mathbb{R}^{\alpha_i}, \; x_i(t) \in \mathbb{R}^{n_i}, \; y_i(t) \in \mathbb{R}^{\beta_i}, \qquad (11)$$
we actually perform a state space transformation $\Phi = \mathrm{diag}(\Phi_1, \dots, \Phi_k)$ on the realization $(\bar{A}, \bar{B}, \bar{C}, \bar{D}) = (\Phi A \Phi^{-1}, \Phi B, C \Phi^{-1}, D)$ of $T(s)$. This in turn implies that $(\bar{A}_G, \bar{B}_G, \bar{C}_G, \bar{D}_G) = (\Phi A_G \Phi^{-1}, \Phi B_G, C_G \Phi^{-1}, D_G)$. From this,
$$(\bar{P}_G, \bar{Q}_G) = (\Phi P_G \Phi^T, \; \Phi^{-T} Q_G \Phi^{-1}),$$
i.e. the gramians also undergo a contragradient transformation. This implies that $(\bar{P}_{ii}, \bar{Q}_{ii}) = (\Phi_i P_{ii} \Phi_i^T, \Phi_i^{-T} Q_{ii} \Phi_i^{-1})$, which is a contragradient transformation that depends only on the state space transformation of $x_i$, i.e. on the state space associated with $T_i(s)$.

From Lemma 3.1 and the preceding section, the minimal past input energy necessary to reach $x_i(0) = x_i$, over all initial conditions $x_j(0)$, $j \neq i$, is $x_i^T P_{ii}^{-1} x_i$. We can thus perform a block diagonal transformation in order to make the gramians $P_{ii}$ and $Q_{ii}$ both equal and diagonal: $\bar{P}_{ii} = \bar{Q}_{ii} = \Sigma_i$. This suggests that we can truncate each subsystem $T_i(s)$ by deleting the states corresponding to the smallest eigenvalues of $\Sigma_i$. If one balances and then projects via the Schur complements of $P_{ii}$ and $Q_{ii}$, the state space of each system $(A_i, B_i, C_i)$ is sorted with respect to the optimization problem
$$\min_u \|u(t)\|_2 \quad \text{such that } x_i(0) = x_0 \text{ and } x_j(0) = 0 \text{ for } j \neq i.$$

6 Krylov techniques for interconnected systems

Krylov techniques for structured systems have already been considered in the literature; see for instance [8] in the controller reduction framework or [9] in the second-order model reduction framework. The latter case has been revisited recently in [10] and [11]. The
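The block-diagonal balancing step above can be sketched as follows (Python with NumPy/SciPy; the gramians of an arbitrary random stable realization stand in for $P_G$, $Q_G$, and the partition sizes are illustrative). Each diagonal block pair $(P_{ii}, Q_{ii})$ is balanced independently by its own transformation $\Phi_i$:

```python
import numpy as np
from scipy.linalg import cholesky, solve_continuous_lyapunov, svd

def balancing_transform(P, Q):
    """Return T with T^{-1} P T^{-T} = T^T Q T = diag(s) (square-root method)."""
    Lp = cholesky(P, lower=True)
    Lq = cholesky(Q, lower=True)
    U, s, Vt = svd(Lq.T @ Lp)
    T = Lp @ Vt.T @ np.diag(s ** -0.5)
    return T, s

rng = np.random.default_rng(4)
sizes = [3, 2]                           # n_1, n_2 of two illustrative subsystems
n = sum(sizes)
A_G = rng.standard_normal((n, n))
A_G -= (np.max(np.real(np.linalg.eigvals(A_G))) + 0.5) * np.eye(n)
B_G = rng.standard_normal((n, 1))
C_G = rng.standard_normal((1, n))

P_G = solve_continuous_lyapunov(A_G, -B_G @ B_G.T)
Q_G = solve_continuous_lyapunov(A_G.T, -C_G.T @ C_G)

# Balance each diagonal block pair (P_ii, Q_ii); with Phi_i = T^{-1} both
# transformed blocks become equal and diagonal: Sigma_i = diag(s).
offsets = np.cumsum([0] + sizes)
for i in range(len(sizes)):
    sl = slice(offsets[i], offsets[i + 1])
    T, s = balancing_transform(P_G[sl, sl], Q_G[sl, sl])
    Tinv = np.linalg.inv(T)
    P_bal = Tinv @ P_G[sl, sl] @ Tinv.T
    Q_bal = T.T @ Q_G[sl, sl] @ T
    assert np.allclose(P_bal, np.diag(s))
    assert np.allclose(Q_bal, np.diag(s))
```

Truncation would then discard, in each subsystem, the states associated with the smallest entries of $\Sigma_i$.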
problem is the following. If one projects the state-space realizations $(A_i, B_i, C_i)$ of the interconnected transfer functions $T_i(s)$ with projecting matrices $(Z_i, V_i)$ containing Krylov subspaces, giving rise to reduced-order transfer functions $\hat{T}_i(s)$ that satisfy interpolation conditions with respect to $T_i(s)$, what are the resulting relations between $\hat{G}(s)$ and $G(s)$?
If one chooses the same interpolation conditions for each subsystem, then $\hat{T}(s)$ automatically satisfies the same interpolation conditions with respect to $T(s)$. Let us investigate what this implies for $G(s)$ and $\hat{G}(s)$. Let us assume that
$$\mathcal{K}_k\bigl((\lambda I - A)^{-1}, (\lambda I - A)^{-1}B\bigr) \subseteq \mathrm{Im}(V),$$
with $Z^T V = I$ and $(\hat{A}, \hat{B}, \hat{C}) = (Z^T A V, Z^T B, C V)$. It is well known that $\hat{T}(s) = \hat{C}(sI - \hat{A})^{-1}\hat{B}$ interpolates $T(s)$ at $s = \lambda$ up to the first $k$ derivatives. Concerning $G(s)$, the matrices $F$, $K$, $D$ and $H$ are unchanged. One obtains
$$\hat{G}(s) = C_G V\bigl(sI - Z^T A_G V\bigr)^{-1} Z^T B_G + D_G.$$
It turns out that $V$ automatically contains the corresponding Krylov subspace for $G(s)$:
$$\mathcal{K}_k\bigl((\lambda I - A_G)^{-1}, (\lambda I - A_G)^{-1}B_G\bigr) \subseteq \mathcal{K}_k\bigl((\lambda I - A)^{-1}, (\lambda I - A)^{-1}B\bigr).$$
For ease of notation, rewrite $A_G = A + BM$ and $B_G = BN$. If $v \in \mathcal{K}_1\bigl((\lambda I - A_G)^{-1}, (\lambda I - A_G)^{-1}B_G\bigr)$, there exists a vector $w$ such that
$$v = (\lambda I - A - BM)^{-1} B N w.$$
This implies that
$$(\lambda I - A)v = B(Nw + Mv) \;\Longrightarrow\; v \in \mathcal{K}_1\bigl((\lambda I - A)^{-1}, (\lambda I - A)^{-1}B\bigr). \qquad (12)$$
The case $k > 1$ follows by induction. Concerning the generalized Krylov subspaces associated with the more general tangential interpolation problem [12], it can be deduced from (12) that the interpolation directions are modified, but not the interpolation points (see also [6]). It is possible to perform an interconnected-structure-preserving Krylov technique for an arbitrary point $\lambda$ in the complex plane as follows.

Lemma 6.1 Define $V = \begin{bmatrix} V_1^T & \dots & V_k^T \end{bmatrix}^T \in \mathbb{R}^{n \times r}$ with $V_i \in \mathbb{R}^{n_i \times r}$. Assume that
$$\mathcal{K}_k\bigl((\lambda I - A)^{-1}, (\lambda I - A)^{-1}B\bigr) \subseteq \mathrm{Im}(V).$$
Construct left projecting matrices $Z_i \in \mathbb{R}^{n_i \times r}$ such that $Z_i^T V_i = I_r$. Project each subsystem as follows:
$$(\hat{A}_i, \hat{B}_i, \hat{C}_i) = (Z_i^T A_i V_i, Z_i^T B_i, C_i V_i).$$
Then $\hat{G}(s)$ interpolates $G(s)$ at $\lambda$ up to the first $k$ derivatives.

It is also possible to obtain moment matching results by reducing only one (or some) subsystem(s) without changing the others. More detailed comments and full proofs will be given in a subsequent paper.

Remark 6.1 Since the Krylov subspaces are not affected by $H$, $F$ and $K$, it is crucial to choose different interpolation points in order to capture well the new dynamics of the interconnected system.
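The structure-preserving projection idea can be illustrated numerically. The sketch below (Python with NumPy; all sizes, matrices and the interpolation point are illustrative) uses two random subsystems with zero $D$ terms and takes as $V$ the single Krylov vector of the global pair $(A_G, B_G)$, which by (12) lies in the Krylov subspace of $(A, B)$; its row blocks $V_i$ are then used to project each subsystem separately, and the re-interconnected reduced system matches $G(s)$ at $\lambda$:

```python
import numpy as np

rng = np.random.default_rng(5)

def blkdiag(X, Y):
    """Two-block block-diagonal helper."""
    out = np.zeros((X.shape[0] + Y.shape[0], X.shape[1] + Y.shape[1]))
    out[:X.shape[0], :X.shape[1]] = X
    out[X.shape[0]:, X.shape[1]:] = Y
    return out

# Two illustrative subsystems of order 3, D = 0, so that A_G = A + B K C.
n1 = n2 = 3
A1, A2 = rng.standard_normal((n1, n1)), rng.standard_normal((n2, n2))
B1, B2 = rng.standard_normal((n1, 1)), rng.standard_normal((n2, 1))
C1, C2 = rng.standard_normal((1, n1)), rng.standard_normal((1, n2))
A, B, C = blkdiag(A1, A2), blkdiag(B1, B2), blkdiag(C1, C2)
K = np.array([[0.0, 0.5], [0.5, 0.0]])
H = np.array([[1.0], [0.0]])
F = np.array([[0.0, 1.0]])
A_G, B_G, C_G = A + B @ K @ C, B @ H, F @ C

# One global Krylov vector at lam, partitioned conformally with the subsystems.
lam = 0.7
v = np.linalg.solve(lam * np.eye(n1 + n2) - A_G, B_G)
V1, V2 = v[:n1], v[n1:]
Z1 = V1 / (V1.T @ V1)                    # Z_i^T V_i = 1
Z2 = V2 / (V2.T @ V2)

# Project each subsystem separately, then re-interconnect with the same F, K, H.
Ah = blkdiag(Z1.T @ A1 @ V1, Z2.T @ A2 @ V2)
Bh = blkdiag(Z1.T @ B1, Z2.T @ B2)
Ch = blkdiag(C1 @ V1, C2 @ V2)
AhG, BhG, ChG = Ah + Bh @ K @ Ch, Bh @ H, F @ Ch

def tf(A, B, C, s):
    return C @ np.linalg.solve(s * np.eye(A.shape[0]) - A, B)

# The reduced interconnected system matches G at the interpolation point.
assert np.allclose(tf(A_G, B_G, C_G, lam), tf(AhG, BhG, ChG, lam))
```

Away from $\lambda$ the two transfer functions generally differ; matching higher derivatives would require more Krylov vectors per subsystem.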
References

[1] A. Feliachi, C. Bhurtun, Model reduction of large-scale interconnected systems, Int. J. Syst. Sci. 18 (1987) 2249-2259.

[2] B. Shapiro, Frequency Weighted Sub-System Model Truncation to Minimize Systems Level Model Reduction Errors, submitted to IEEE, 2001.

[3] K. Zhou, J. C. Doyle, K. Glover, Robust and Optimal Control, Prentice Hall Inc., Upper Saddle River, NJ, 1996.

[4] Y. Chahlaoui, D. Lemonnier, A. Vandendorpe, P. Van Dooren, Second Order Structure Preserving Balanced Truncation, in: MTNS 2004 (16th Symp. on the Mathematical Theory of Networks and Systems), 2004.

[5] D. G. Meyer, S. Srinivasan, Balancing and model reduction for second-order form linear systems, IEEE Trans. Automat. Control 41 (11) (1996) 1632-1644.

[6] Y. Chahlaoui, D. Lemonnier, A. Vandendorpe, P. Van Dooren, Second-Order Balanced Truncation, Linear Algebra Appl., Special Issue on Model Reduction, accepted.

[7] K. Glover, All optimal Hankel-norm approximations of linear multivariable systems and their $L_\infty$-error bounds, Int. J. Control 39 (6) (1984) 1115-1193.

[8] R. Skelton, B. Anderson, q-Markov covariance equivalent realizations, Int. J. Control 44 (1986) 1477-1490.

[9] T.-J. Su, R. Craig Jr., Model Reduction and Control of Flexible Structures Using Krylov Vectors, J. Guidance 14 (2) (1991) 260-267.

[10] R. Freund, Padé-Type Reduced Order Modeling of Higher Order Systems, seminar given at Stanford University, USA, 2003.

[11] A. Vandendorpe, P. Van Dooren, Krylov techniques for model reduction of second-order systems, Tech. Rep. 07, CESAME, Université catholique de Louvain, 2004.

[12] K. Gallivan, A. Vandendorpe, P. Van Dooren, Model reduction of MIMO systems via tangential interpolation, SIAM Journal on Matrix Analysis and Applications, accepted. http://www.auto.ucl.ac.be/vandendorpe/