Adaptive rational Krylov algorithms for model reduction


Proceedings of the European Control Conference 2007, Kos, Greece, July 2-5, 2007, WeD12.3

Adaptive rational Krylov algorithms for model reduction

Michalis Frangos and Imad Jaimoukha

Abstract— The Arnoldi and Lanczos algorithms, which belong to the class of Krylov subspace methods, are increasingly used for model reduction of large scale systems. The standard versions of the algorithms tend to create reduced order models that poorly approximate low frequency dynamics. Rational Arnoldi and Lanczos algorithms produce reduced models that approximate the dynamics at various frequencies. This paper tackles the issue of developing simple Arnoldi and Lanczos equations for the rational case that allow simple residual error expressions to be derived. This in turn permits the development of computationally efficient model reduction algorithms, where the frequencies at which the dynamics are to be matched can be updated adaptively, resulting in an optimized reduced model.

I. INTRODUCTION

Consider a linear time-invariant single-input single-output system described by the state-space equations

    E ẋ(t) = A x(t) + B u(t),    y(t) = C x(t),    (1)

where x(t) ∈ C^n denotes the state vector and u(t) and y(t) the scalar input and output signals, respectively. The system matrices A, E ∈ C^{n×n} are assumed to be large and sparse, and B, C^* ∈ C^n. These assumptions are met by large scale models in many applications.
The transfer function of the system in (1) is denoted as

    G(s) = C(sE − A)^{−1}B,

with realization (E, A, B, C). To simplify subsequent analysis and design based on the large nth order model in (1), the model reduction problem seeks an approximate mth order model of the form

    E_m ẋ_m(t) = A_m x_m(t) + B_m u(t),    y_m(t) = C_m x_m(t),

where x_m(t) ∈ C^m, E_m, A_m ∈ C^{m×m}, B_m, C_m^* ∈ C^m and m < n. The associated lower order transfer function is denoted by

    G_m(s) = C_m(sE_m − A_m)^{−1}B_m,

with realization (E_m, A_m, B_m, C_m).

Krylov projection methods, and in particular Arnoldi and Lanczos algorithms [1]–[5], have been extensively used for model reduction of large scale systems; see [5]–[8] and the references therein. Methods related to the work of [9], [10] use Krylov subspace methods to obtain bases for the controllability and observability subspaces; projecting the original system using these bases results in a reduced system that matches the transfer function, and the derivatives of the transfer function, of the system evaluated around infinity. The advantage associated with the projection methods using the bases of the controllability and observability subspaces is the existence of a set of equations known as the standard Arnoldi and Lanczos equations. The Arnoldi and Lanczos equations can be used to derive algorithms, based on the standard Arnoldi and Lanczos algorithms, which improve the approximations further; implicit restarts are an example [11]. Moreover, the specific form of the standard equations allows the use of Arnoldi and Lanczos methods directly for solving large Lyapunov equations; see [10], [12].

(Author footnote: M. Frangos (michalis.frangos@imperial.ac.uk) and I. Jaimoukha (i.jaimouka@imperial.ac.uk) are with the Electrical & Electronic Engineering Department, Imperial College London, Exhibition Road, London SW7 2AZ. This work was not supported by any organization.)
Another important benefit associated with the standard Arnoldi and Lanczos equations is that simple residual error expressions can be derived through them, which provide useful stopping criteria for the algorithms. Unfortunately, the standard versions of the Arnoldi and Lanczos algorithms tend to create reduced order models that poorly approximate low frequency dynamics. Rational Arnoldi and Lanczos algorithms [13]–[16] are Krylov projection methods which produce reduced models that approximate the dynamics of the system at given frequencies. An open research problem for the rational case is the efficient and fast selection of the interpolation frequencies to be matched. Such a task requires simple residual expressions and simple equations describing these algorithms. A drawback of rational methods is that no simple Arnoldi and Lanczos-like equations exist. References [14], [15], [17], [18] deal with these challenging problems and give equations that describe the rational algorithms; however, these equations are not in the standard Arnoldi and Lanczos form. This paper studies the same problems, but is based on the development of Arnoldi and Lanczos-like equations for the rational case in the standard form. Residual error expressions can then be derived as in the case of standard Arnoldi and Lanczos methods, and adaptive algorithms are developed.

The paper is organized as follows. Section II briefly describes the approximation problem by moment matching; the standard Lanczos and the rational Lanczos algorithms for moment matching are also described in this section. Section III gives the derivation of the Lanczos and Arnoldi-like equations and of the residual errors for the rational case.
Section IV suggests new procedures for the efficient computation of the interpolation frequencies to improve the approximation, and gives the pseudo-code for the adaptive algorithm developed in this paper. Numerical results illustrating the adaptive method are also presented and compared with the results of the standard rational method for model reduction and with balanced truncation. Finally,

Section V gives our conclusions.

II. MOMENT MATCHING PROBLEM

To simplify the presentation of our results, we only consider the case E = I_n, where I_n is the identity matrix, and we write

    G(s) = C(sI_n − A)^{−1}B,

with realization (A, B, C). The system in (1) can be expanded in a Taylor series around an interpolation point s_0 ∈ C as

    G(s) = µ_0 + µ_1(s − s_0) + µ_2(s − s_0)^2 + ⋯,

where the Taylor coefficients µ_i are known as the moments of the system around s_0 and are related to the transfer function of the system and its derivatives evaluated at s_0. The approximation problem by moment matching is to find a lower order system G_m with transfer function expanded as

    G_m(s) = µ̂_0 + µ̂_1(s − s_0) + µ̂_2(s − s_0)^2 + ⋯

such that µ_i = µ̂_i for i = 0, 1, ..., m. In the case where s_0 = ∞ the moments are called Markov parameters and are given by µ_i = CA^iB. The moments around an arbitrary interpolation point s_0 ∈ C are known as shifted moments and are defined as

    µ_i = C(s_0 I_n − A)^{−(i+1)}B,    i = 0, 1, ...

A more general definition of approximation by moment matching is related to rational interpolation, by which we mean that the reduced order system matches the moments of the original system at multiple interpolation points.

Let V_m, W_m ∈ C^{n×m}. By projecting the states of the high order system with the projector

    P_m = V_m (W_m^* V_m)^{−1} W_m^*,    (2)

assuming that W_m^* V_m is nonsingular, a reduced order model is obtained as

    G_m(s) = C V_m (s W_m^* V_m − W_m^* A V_m)^{−1} W_m^* B,    (3)

that is, E_m = W_m^* V_m, A_m = W_m^* A V_m, B_m = W_m^* B and C_m = C V_m. A careful selection of V_m and W_m as the bases of certain Krylov subspaces results in moment matching. For R ∈ C^{n×n} and Q ∈ C^{n×p}, a Krylov subspace K_m(R, Q) is defined as

    K_m(R, Q) = span([Q  RQ  R^2Q  ⋯  R^{m−1}Q]).

If m = 0, K_m(R, Q) is defined to be the empty set.

A. Standard Lanczos algorithm

An iterative method for model reduction based on Krylov projections is the Lanczos process.
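Before turning to the Lanczos process, the moment-matching mechanism of the projection (3) can be illustrated numerically. The sketch below is a minimal one-sided (Galerkin) variant with W_m = V_m, which matches the first m Markov parameters rather than the 2m matched by the two-sided schemes of this paper; the helper `krylov_basis` and the random test system are our own illustrative choices.

```python
import numpy as np

def krylov_basis(R, Q, m):
    """Orthonormal basis of K_m(R, Q) = span([Q, RQ, ..., R^{m-1}Q])."""
    cols = [Q]
    for _ in range(m - 1):
        cols.append(R @ cols[-1])
    # QR orthonormalization for numerical stability; the span is unchanged.
    V, _ = np.linalg.qr(np.column_stack(cols))
    return V

# Small random SISO test system (E = I_n).
rng = np.random.default_rng(0)
n, m = 8, 3
A = rng.standard_normal((n, n))
B = rng.standard_normal(n)
C = rng.standard_normal(n)

V = krylov_basis(A, B, m)
# One-sided projection (3) with W_m = V_m, so E_m = I_m.
Am, Bm, Cm = V.T @ A @ V, V.T @ B, C @ V
```

In exact arithmetic the reduced triple (A_m, B_m, C_m) reproduces CA^iB for i = 0, ..., m−1, which the assertions below check.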
The Lanczos process, given in Algorithm 1, constructs the bases V_m = [v_1, ..., v_m] ∈ C^{n×m} and W_m = [w_1, ..., w_m] ∈ C^{n×m} for the Krylov subspaces K_m(A, B) and K_m(A^*, C^*), respectively, such that they are biorthogonal, i.e., W_m^* V_m = I_m. The following equations, referred to as the Lanczos equations [19], hold:

    A V_m = V_m A_m + v_{m+1} C_{V_m},        B = V_m B_m,
    A^* W_m = W_m A_m^* + w_{m+1} B_{W_m},    C = C_m W_m^*.

Due to biorthogonality, W_{m+1}^* V_{m+1} = I_{m+1} and W_m^* v_{m+1} = V_m^* w_{m+1} = 0; the matrix A_m = (α_{ij}) is tridiagonal, i.e., α_{ij} = 0 for |i − j| > 1, and

    C_{V_m} = w_{m+1}^* A V_m = α_{m+1,m} e_m^T,
    B_{W_m} = v_{m+1}^* A^* W_m = α_{m,m+1}^* e_m^T.

The bases V_m and W_m are constructed such that K_m(A, B) ⊆ colsp(V_m) and K_m(A^*, C^*) ⊆ colsp(W_m), and so the reduced order system G_m(s) defined in (3) matches the first 2m Markov parameters of G(s) [17].

B. Approximation and residual error expressions

The approximation error ǫ(s) = G(s) − G_m(s) in Krylov projection methods can be shown to be

    ǫ(s) = R_C^*(s) (sI_n − A)^{−1} R_B(s),    (4)

where R_B(s) and R_C(s) are known as the residual errors. These are given by

    R_B(s) = B − (sI_n − A) V_m X_m(s),
    R_C(s) = C^* − (sI_n − A)^* W_m Y_m(s),

where X_m(s) and Y_m(s) are the solutions of the systems of equations

    (sI_m − A_m) X_m(s) = B_m,    (sI_m − A_m)^* Y_m(s) = C_m^*,    (5)

and satisfy the Petrov–Galerkin conditions R_B(s) ⊥ span{W_m} and R_C(s) ⊥ span{V_m}, i.e., W_m^* R_B(s) = V_m^* R_C(s) = 0. Using the Lanczos equations, the residual errors can be simplified as

    R_B(s) = v_{m+1} C_{V_m} (sI_m − A_m)^{−1} B_m,    (6)
    R_C(s) = w_{m+1} B_{W_m} ((sI_m − A_m)^{−1})^* C_m^*,    (7)

which involve terms related to the reduced order system only.
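Identity (4) in fact holds for any biorthogonal pair W_m^* V_m = I_m satisfying (5), not only for Lanczos bases. A small numerical check of (4) with random data (the helper names and the way W is constructed from a random W0 are our own):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 9, 3
A = rng.standard_normal((n, n))
B = rng.standard_normal(n)
C = rng.standard_normal(n)

# Any biorthogonal pair with W^T V = I_m (real arithmetic here).
V = rng.standard_normal((n, m))
W0 = rng.standard_normal((n, m))
W = W0 @ np.linalg.inv(W0.T @ V).T      # enforces W.T @ V = I_m
Am, Bm, Cm = W.T @ A @ V, W.T @ B, C @ V

def err_direct(s):
    """epsilon(s) = G(s) - G_m(s) computed from the definitions."""
    G = C @ np.linalg.solve(s * np.eye(n) - A, B)
    Gm = Cm @ np.linalg.solve(s * np.eye(m) - Am, Bm)
    return G - Gm

def err_residual(s):
    """epsilon(s) = R_C(s)^T (sI - A)^{-1} R_B(s), equation (4)."""
    X = np.linalg.solve(s * np.eye(m) - Am, Bm)        # X_m(s) from (5)
    Y = np.linalg.solve((s * np.eye(m) - Am).T, Cm)    # Y_m(s) from (5)
    RB = B - (s * np.eye(n) - A) @ (V @ X)             # residual R_B(s)
    RC = C - (W @ Y) @ (s * np.eye(n) - A)             # R_C(s)^T as a row
    return RC @ np.linalg.solve(s * np.eye(n) - A, RB)
```

The two evaluations agree at any s away from the spectra of A and A_m, as the assertions below confirm.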

Algorithm 1 Basic Lanczos algorithm
 1: Inputs: A, B, C, m, ǫ
 2: Initialize: v_0 = 0, w_0 = 0, v̂_1 = B, ŵ_1 = C^*
 3: if √|ŵ_1^* v̂_1| < ǫ, stop
 4: v_1 = v̂_1 / √(ŵ_1^* v̂_1),  w_1 = ŵ_1 / (ŵ_1^* v̂_1 / √(ŵ_1^* v̂_1))^*
 5: for j = 1 to m
 6:   α_{j,j} = w_j^* A v_j,  α_{j−1,j} = w_{j−1}^* A v_j,
 7:   α_{j,j−1} = w_j^* A v_{j−1}
 8:   v̂_{j+1} = A v_j − v_{j−1} α_{j−1,j} − v_j α_{j,j}
 9:   ŵ_{j+1} = A^* w_j − w_{j−1} α_{j,j−1}^* − w_j α_{j,j}^*
10:   α_{j+1,j} = √|ŵ_{j+1}^* v̂_{j+1}|
11:   if α_{j+1,j} < ǫ, stop
12:   α_{j,j+1} = ŵ_{j+1}^* v̂_{j+1} / α_{j+1,j}
13:   v_{j+1} = v̂_{j+1} / α_{j+1,j},  w_{j+1} = ŵ_{j+1} / α_{j,j+1}^*
14: end
15: Outputs: V_m = [v_1, ..., v_m], W_m = [w_1, ..., w_m]

C. The rational Lanczos method

The rational Lanczos procedure [14]–[16] is an algorithm for constructing biorthogonal bases of a union of Krylov subspaces. Let V_m, W_m ∈ C^{n×m} be the bases of such subspaces and let P be a projector defined as P = V_m W_m^*. Applying this projector to the system in (1), a reduced order system is obtained with a transfer function as in (3). The next result shows that a proper selection of Krylov subspaces will result in reduced order systems that match the moments of the system at given interpolation frequencies.

Theorem 1: Let S = {s_1, s_2, ..., s_K} and S̃ = {s̃_1, s̃_2, ..., s̃_K} be two sets of interpolation points. If

    ∪_{k=1}^K K_{m_{s_k}}((s_k I_n − A)^{−1}, (s_k I_n − A)^{−1} B) ⊆ span{V_m}

and

    ∪_{k=1}^K K_{m̃_{s̃_k}}((s̃_k I_n − A)^{−*}, (s̃_k I_n − A)^{−*} C^*) ⊆ span{W_m},

where m_{s_k} and m̃_{s̃_k} are the multiplicities of the interpolation points s_k and s̃_k respectively, and Σ_{k=1}^K m_{s_k} = Σ_{k=1}^K m̃_{s̃_k} = m, then:
- if s_k = s̃_k, the reduced order system matches the first m_{s_k} + m̃_{s̃_k} shifted moments around the interpolation point;
- if s_k ≠ s̃_k, the reduced order system matches the first m_{s_k} shifted moments around s_k and the first m̃_{s̃_k} shifted moments around s̃_k, respectively;

assuming the matrices (s_k I_n − A) and (s̃_k I_n − A) are invertible.

The proof of Theorem 1 can be found in [17]. A rational Lanczos process is given in Algorithm 2. For simplicity we assume that m_{s_k} = m̃_{s̃_k}, and also that s_i ≠ s_j and s̃_i ≠ s̃_j for i ≠ j. As mentioned in Section I, a drawback of rational methods is that no Lanczos-like equations exist.
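Theorem 1 can be illustrated numerically with a minimal one-sided sketch: a single finite interpolation point s_0 of multiplicity m, with W_m = V_m orthonormal, so that the reduced model matches the first m shifted moments at s_0. The helper `shifted_moments` and the random test data are our own; the two-sided case would match 2m moments.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, s0 = 12, 4, 5.0          # s0: single finite interpolation point
A = rng.standard_normal((n, n))
B = rng.standard_normal(n)
C = rng.standard_normal(n)

# Basis of K_m((s0 I - A)^{-1}, (s0 I - A)^{-1} B) via repeated shifted solves.
M = s0 * np.eye(n) - A
cols = [np.linalg.solve(M, B)]
for _ in range(m - 1):
    cols.append(np.linalg.solve(M, cols[-1]))
V, _ = np.linalg.qr(np.column_stack(cols))

# One-sided projection (W_m = V_m, E_m = I_m).
Am, Bm, Cm = V.T @ A @ V, V.T @ B, C @ V

def shifted_moments(A, B, C, s0, k):
    """mu_i = C (s0 I - A)^{-(i+1)} B for i = 0, ..., k-1."""
    M = s0 * np.eye(A.shape[0]) - A
    x, out = B, []
    for _ in range(k):
        x = np.linalg.solve(M, x)
        out.append(C @ x)
    return np.array(out)
```

The assertions below verify that the full and reduced models produce identical first m shifted moments at s_0, as Theorem 1 predicts for this choice of subspace.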
This issue has been investigated in [14], [15], [17], [18], which give equations describing the rational algorithms; however, these equations are not in the standard Lanczos form. The present work studies the same problems, but is based on the development of Lanczos-like equations for the rational case in the standard form. Residual error expressions can then be readily derived, as in the case of standard Lanczos methods, and adaptive algorithms are developed.

Algorithm 2 Rational Lanczos algorithm
 1: Inputs: S = {s_1, ..., s_K}, S̃ = {s̃_1, ..., s̃_K},
 2:   A, B, C, ǫ, m_{s_k} = m̃_{s̃_k}
 3: Initialize: V_m = [ ], W_m = [ ], i = 0
 4: for k = 1 to K
 5:   if s_k = ∞, v̂_{i+1} = B
 6:   else
 7:     v̂_{i+1} = (s_k I_n − A)^{−1} B
 8:   end
 9:   if s̃_k = ∞, ŵ_{i+1} = C^*
10:   else
11:     ŵ_{i+1} = ((s̃_k I_n − A)^{−1})^* C^*
12:   end
13:   if √|ŵ_{i+1}^* v̂_{i+1}| < ǫ, stop
14:   v_{i+1} = v̂_{i+1} / √(ŵ_{i+1}^* v̂_{i+1}),  w_{i+1} = ŵ_{i+1} / (ŵ_{i+1}^* v̂_{i+1} / √(ŵ_{i+1}^* v̂_{i+1}))^*
15:   V_m = [V_m  v_{i+1}],  W_m = [W_m  w_{i+1}]
16:   i = i + 1
17:   for j = 1 to m_{s_k} − 1
18:     if s_k = ∞, v̂_{i+1} = A v_i
19:     else
20:       v̂_{i+1} = (s_k I_n − A)^{−1} v_i
21:     end
22:     if s̃_k = ∞, ŵ_{i+1} = A^* w_i
23:     else
24:       ŵ_{i+1} = ((s̃_k I_n − A)^{−1})^* w_i
25:     end
26:     v̂_{i+1} = v̂_{i+1} − V_m W_m^* v̂_{i+1}
27:     ŵ_{i+1} = ŵ_{i+1} − W_m V_m^* ŵ_{i+1}
28:     ρ = √|ŵ_{i+1}^* v̂_{i+1}|
29:     if ρ < ǫ, stop
30:     λ = ŵ_{i+1}^* v̂_{i+1} / ρ
31:     v_{i+1} = v̂_{i+1} / ρ,  w_{i+1} = ŵ_{i+1} / λ^*
32:     V_m = [V_m  v_{i+1}],  W_m = [W_m  w_{i+1}]
33:     i = i + 1
34:   end
35: end
36: Outputs: V_m = [v_1, ..., v_m], W_m = [w_1, ..., w_m]

III. LANCZOS AND ARNOLDI-LIKE EQUATIONS FOR RATIONAL INTERPOLATION

Rational interpolation generally provides a more accurate reduced-order model than interpolation around a single point, since it combines approximations around many interpolation points. The reduced-order model

created by rational interpolation can be improved by a proper selection of these points. For this it is necessary to obtain frequency dependent error expressions; the interpolation points can then be chosen as the frequencies at which the error expressions are maximum, using the H_∞ norm as a measure. The aim is to create accurate reduced order models rapidly and efficiently. In [20], for example, an exact expression for the H_2 norm of the error term is developed; it is suggested that interpolating at the negatives of the eigenvalues of A with the highest residues of the transfer function G(s) minimizes the H_2 error. This method, however, requires the computation of the eigenvalues of the A matrix of the system, which can be very slow when A is very large.

Many of the results based on Krylov subspaces follow from the Arnoldi and Lanczos equations of the standard Arnoldi and Lanczos algorithms. No results extending these equations to the rational Arnoldi or Lanczos case exist in the literature. A main contribution of this work is to derive corresponding equations for the rational versions of these algorithms. In this section we derive, with minimal additional effort, Arnoldi and Lanczos-like equations in the familiar form of the standard Arnoldi and Lanczos equations. The standard form of these equations allows us to obtain simple residual error expressions for the rational case.

A. Derivation of Lanczos equations in the rational case

The following Lemma derives a set of equations in the standard form for the Lanczos process in the rational case. Using these equations, the Lemma also provides expressions for the residual errors. These expressions will prove crucial in deriving an adaptive rational Lanczos algorithm for choosing the frequencies at which the reduced order system matches the dynamics of the original system, such that the approximation error tends to be reduced.
Lemma 1: Let V_m and W_m be as defined in Theorem 1 and let m_∞ and m̃_∞ be the multiplicities of ∞ in S and S̃, respectively. If we compute V_{m+1} = [V_m  v_{m+1}] and W_{m+1} = [W_m  w_{m+1}] such that

    span{[V_m  A^{m_∞}B]} ⊆ span{V_{m+1}},
    span{[W_m  (A^{m̃_∞})^* C^*]} ⊆ span{W_{m+1}},
    W_{m+1}^* V_{m+1} = I_{m+1},

then

    A V_m = V_m A_m + v_{m+1} C_{V_m},    (8)
    B = V_m B_m + v_{m+1} b_m,    (9)
    A^* W_m = W_m A_m^* + w_{m+1} B_{W_m},    (10)
    C = C_m W_m^* + c_m w_{m+1}^*,    (11)

where b_m = w_{m+1}^* B, c_m = C v_{m+1}, C_{V_m} = w_{m+1}^* A V_m and B_{W_m} = v_{m+1}^* A^* W_m. Furthermore, b_m = 0 if m_∞ > 0 and c_m = 0 if m̃_∞ > 0.

Proof: We first prove the result for m_∞ = 0. Let V_m be defined as in Theorem 1. We extend V_m to V_{m+1} = [V_m  v_{m+1}] such that span{[V_m  B]} ⊆ span{V_{m+1}}, by biorthogonalizing B against all previous columns of W_m with the Lanczos algorithm. Then the following hold:

    (s_1 I_n − A)^{−1} B ∈ span{V_1},    (12)
    ⋮
    (s_k I_n − A)^{−(i−1)} B ∈ span{V_{j−1}},
    (s_k I_n − A)^{−i} B ∈ span{V_j},    (13)
    ⋮
    B ∈ span{V_{m+1}}.    (14)

We start by proving the Lemma for the first column of V_m. Multiplying (12) by (s_1 I_n − A) from the left and rearranging gives A v_1 ∈ span{v_1, B}, and hence, using (14), A V_1 ⊆ span{V_{m+1}}.

We proceed by induction. Assume that the result holds for an arbitrary interpolation point s_k of the Krylov subspace up to the (i−1)th multiplicity; we prove it for the next multiplicity, which completes the induction. Therefore we assume

    A V_{j−1} = A [v_1 ⋯ v_{j−1}] ⊆ span{V_{m+1}}    (15)

and prove that it holds for the next index j. Multiplying (13) from the left by (s_k I_n − A) and rearranging gives A v_j ∈ span{v_j, A v_{j−1}}, so by (15), A v_j ∈ span{V_{m+1}}. Combining this with the assumption in (15),

    A V_j ⊆ span{V_{m+1}}.    (16)

It follows that the result in (16) holds for all columns of V_m, i.e.,

    A V_m ⊆ span{V_{m+1}}.    (17)

To prove (8) we proceed as follows. Since A V_m ⊆ span{V_{m+1}}, there exists a matrix Y ∈ C^{(m+1)×m} such that

    A V_m = V_{m+1} Y.    (18)

Partitioning Y as [A_m; C_{V_m}] and multiplying (18) by W_{m+1}^* from the left, by the biorthogonality of W_{m+1} and V_{m+1} we get A_m = W_m^* A V_m and C_{V_m} = w_{m+1}^* A V_m, so that

    A V_m = V_m A_m + v_{m+1} C_{V_m}.
Similarly, to prove (9): since B ∈ span{V_{m+1}}, there exists a matrix Z ∈ C^{(m+1)×1} such that

    B = V_{m+1} Z.    (19)

Partitioning Z as [B_m; b_m] and multiplying (19) by W_{m+1}^* from the left, we have B_m = W_m^* B and b_m = w_{m+1}^* B, so that

    B = V_m B_m + v_{m+1} b_m.

This establishes the first step of the proof of (8) and (9). Next we prove the result for m_∞ > 0. Let the basis V_m be such that

    span{[B  AB  ⋯  A^{p−1}B  V_{m−p}]} ⊆ span{V_m},

where V_{m−p} is as defined in Theorem 1. Since B is already in the span of V_m, it is easy to see that (8) would hold if

    {A v_1, ..., A v_p} ⊆ span{V_{m+1}}.    (20)

This can be shown by extending V_m to V_{m+1} = [V_m  v_{m+1}] such that span{[V_m  A^pB]} ⊆ span{V_{m+1}} and W_{m+1}^* V_{m+1} = I_{m+1}, by biorthogonalizing A^pB against all the previous columns of W_m with the Lanczos algorithm. It follows that (20) holds, since by construction we have

    A^k B ∈ span{v_1, ..., v_{k+1}},  for 0 ≤ k ≤ p−1,
    A^p B ∈ span{v_1, ..., v_{m+1}},

which completes the proof of (8) and (9). To prove the last part, note that if m_∞ > 0 then B ∈ span{V_m}, from which it follows that b_m = 0. The proof of equations (10) and (11) is similar and is therefore omitted.

B. Simplified Lanczos residual errors

With the use of Lemma 1, the residual errors R_B(s) and R_C(s) in (6) and (7) can now be simplified for the rational Lanczos procedure as

    R_B(s) = v_{m+1} {C_{V_m}(sI_m − A_m)^{−1}B_m + b_m},    (21)
    R_C(s) = w_{m+1} {B_{W_m}((sI_m − A_m)^{−1})^* C_m^* + c_m^*}.    (22)

The residual errors in this case can be computed less expensively than the approximation errors, since only quantities related to the mth order model are involved, whereas the approximation errors in their initial form involve quantities related to an nth order system. It is also important to mention that Lemma 1 does not alter V_m or W_m, and the only additional cost is an extra iteration of the Lanczos algorithm. Using (8), (9), (10) and (11) we can derive the residual errors in (21) and (22). For R_B(s),

    R_B(s) = B − (sI_n − A)V_m X_m(s)
           = B − V_m(sI_m − A_m)X_m(s) + v_{m+1}C_{V_m}X_m(s)    [by (8)]
           = v_{m+1}{C_{V_m}(sI_m − A_m)^{−1}B_m + b_m}.    [by (5), (9)]

The derivation for R_C(s) is similar and is omitted.

C. The Arnoldi-like equations

Following similar steps as in the derivation for the rational Lanczos case, we have developed Arnoldi-like equations for the rational Arnoldi case, which are stated without proof in the following Lemma.
Lemma 2: Let V_m and W_m be as defined in Theorem 1 and let m_∞ and m̃_∞ be the multiplicities of ∞ in S and S̃, respectively. If we compute the orthogonal bases V_{m+1} = [V_m  v_{m+1}] and W_{m+1} = [W_m  w_{m+1}] such that

    span{[V_m  A^{m_∞}B]} ⊆ span{V_{m+1}},    V_{m+1}^* V_{m+1} = I_{m+1},
    span{[W_m  (A^{m̃_∞})^* C^*]} ⊆ span{W_{m+1}},    W_{m+1}^* W_{m+1} = I_{m+1},

then

    A V_m = V_m E_m^{−1} A_m + (v_{m+1} − V_m E_m^{−1} W_m^* v_{m+1}) C_{V_m},
    B = V_m E_m^{−1} B_m + (v_{m+1} − V_m E_m^{−1} W_m^* v_{m+1}) b_m,
    W_m^* A = A_m E_m^{−1} W_m^* + B_{W_m} (w_{m+1}^* − w_{m+1}^* V_m E_m^{−1} W_m^*),
    C = C_m E_m^{−1} W_m^* + c_m (w_{m+1}^* − w_{m+1}^* V_m E_m^{−1} W_m^*),

where E_m = W_m^* V_m, b_m = v_{m+1}^* B, c_m = C w_{m+1}, C_{V_m} = v_{m+1}^* A V_m, and B_{W_m} = W_m^* A w_{m+1}.

D. Simplified Arnoldi residual errors

The residual errors R_B(s) and R_C(s) in this case can be simplified as

    R_B(s) = (v_{m+1} − V_m E_m^{−1} W_m^* v_{m+1}) {C_{V_m}(sE_m − A_m)^{−1}B_m + b_m},
    R_C(s) = (w_{m+1} − W_m (E_m^{−1})^* V_m^* w_{m+1}) {B_{W_m}^*((sE_m − A_m)^{−1})^* C_m^* + c_m^*}.

IV. ADAPTIVE KRYLOV ALGORITHMS FOR MODEL REDUCTION

The problem in adaptive rational methods is to decide on the next interpolation point at every iteration. In this section a number of options are described for the selection of the interpolation points; the choice may depend on the computational power available or on time restrictions. The proposed methods are based on the idea that the interpolation points should be selected such that the approximation error in equation (4) is minimized at every iteration in terms of a suitable norm. The error expression in (4) is rewritten below for convenience:

    ǫ(s) = R_C^*(s)(sI_n − A)^{−1}R_B(s).

In Section III, simple expressions for the residual errors were derived, involving quantities related to the reduced order model only. Let R̃_C and R̃_B be the frequency dependent terms of the residual errors, and B̃ and C̃ the non-frequency dependent terms. In the Arnoldi case,

    R̃_B(s) = C_{V_m}(sE_m − A_m)^{−1}B_m + b_m,
    R̃_C^*(s) = C_m(sE_m − A_m)^{−1}B_{W_m} + c_m,
    B̃ = v_{m+1} − V_m E_m^{−1} W_m^* v_{m+1},
    C̃^* = w_{m+1}^* − w_{m+1}^* V_m E_m^{−1} W_m^*,

whereas in the Lanczos case

    R̃_B(s) = C_{V_m}(sI_m − A_m)^{−1}B_m + b_m,
    R̃_C^*(s) = C_m(sI_m − A_m)^{−1}B_{W_m} + c_m,
    B̃ = v_{m+1},    C̃ = w_{m+1},

where C_{V_m}, B_{W_m}, b_m and c_m for each of the two methods are as defined in Lemmas 1 and 2. Now the error expression in (4) becomes

    ǫ(s) = R̃_C^*(s) C̃^*(sI_n − A)^{−1}B̃ R̃_B(s) = R̃_C^*(s) H(s) R̃_B(s),

where

    H(s) = C̃^*(sI_n − A)^{−1}B̃,

with realization (A, B̃, C̃^*). As already mentioned, the idea is to select an interpolation point at every iteration such that ǫ(s) is small in some norm. Ideally, we would like at every iteration to find the frequency at which |ǫ(s)| achieves its maximum, and then use this frequency as the next interpolation frequency for the Krylov methods. R̃_C(s) and R̃_B(s) are small terms involving quantities related to the reduced order model, but H(s) contains terms related to the original system. Thus a computation of ‖R̃_C^* H R̃_B‖_∞ at every iteration is computationally very expensive for medium and large scale systems. Therefore, instead of using H(s), we suggest using an approximation Ĥ(s).

The approximation of the error ǫ(s) does not necessarily have to be ǫ̂(s) = R̃_C^*(s)Ĥ(s)R̃_B(s); it can be any one of the expressions listed in Table I.

TABLE I
ERROR EXPRESSIONS
1  ǫ̂(s) = R̃_B(s)
2  ǫ̂(s) = R̃_C(s)
3  ǫ̂(s) = Ĥ(s)R̃_B(s)
4  ǫ̂(s) = Ĥ(s)
5  ǫ̂(s) = R̃_C^*(s)Ĥ(s)
6  ǫ̂(s) = R̃_C^*(s)Ĥ(s)R̃_B(s)

In all cases the interpolation points are selected as the frequencies s ∈ C and s̃ ∈ C at which one of the approximate error expressions achieves its maximum, i.e.,

    S = {s : |ǫ̂(s)| = ‖ǫ̂‖_∞}  and  S̃ = {s̃ : |ǫ̂(s̃)| = ‖ǫ̂‖_∞},

where S and S̃ denote the sets of interpolation points as defined in Theorem 1. In the case where we are interested in a range of frequencies, the interpolation points can be selected as the frequencies at which a weighted approximate error ǫ̂_W(s) = W_o ǫ̂(s) W_i achieves its maximum, where W_o, W_i are stable filters.
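A minimal sketch of this selection step, using error expression 1 of Table I evaluated on an imaginary-axis frequency grid; the helper `next_point` and the toy reduced-order data below (chosen to have a resonance near ω = 1 rad/s) are our own illustrative assumptions, not data from the paper.

```python
import numpy as np

def next_point(Em, Am, Bm, CVm, bm, omegas):
    """Return the grid frequency maximizing |R_B(jw)|, with
    R_B(jw) = C_Vm (jw Em - Am)^{-1} Bm + bm  (expression 1 of Table I)."""
    vals = np.array([abs(CVm @ np.linalg.solve(1j * w * Em - Am, Bm) + bm)
                     for w in omegas])
    k = int(np.argmax(vals))
    return omegas[k], vals[k]

# Toy reduced-order data: lightly damped oscillator, resonance near w = 1.
Em = np.eye(2)
Am = np.array([[0.0, 1.0], [-1.0, -0.1]])
Bm = np.array([0.0, 1.0])
CVm = np.array([1.0, 0.0])
bm = 0.0

omegas = np.logspace(-2, 2, 401)   # candidate frequency grid (rad/s)
w_next, peak = next_point(Em, Am, Bm, CVm, bm, omegas)
```

A coarse grid search like this is cheap because only m-dimensional solves are involved; the frequency returned would become the next interpolation point s_{i+1} = j·w_next.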
We suggest approximating H(s) using the projection P̃ = Ṽ(W̃^*Ṽ)^{−1}W̃^* with the Arnoldi or Lanczos algorithms, so that

    Ĥ(s) = C̃^*Ṽ (sW̃^*Ṽ − W̃^*AṼ)^{−1} W̃^*B̃

includes small terms only. Note that in the Lanczos case W̃^*Ṽ = I. The projection method used to approximate H(s) depends on the computational power and the time constraints. Different suggestions for computing Ĥ(s) are listed below.

- At every iteration, approximate H(s) by the standard rational Arnoldi or Lanczos algorithms using the sets of frequencies S and S̃ already computed so far. As the algorithm progresses, the approximation of H(s) improves.
- Instead of computing Ĥ(s) by applying rational interpolation to H(s) at every iteration, which requires long computational time, we can perform the Arnoldi or Lanczos algorithm to match only at the last frequencies added to S and S̃. In this case Ṽ = ṽ and W̃ = w̃.
- Ĥ(s) can also be projected with Ṽ = [V_m  ṽ] and W̃ = [W_m  w̃], where ṽ and w̃ are obtained as described in the previous method.
- As an alternative, we suggest approximating H(s) using the already computed columns v_{m+1} and w_{m+1}: Ṽ = v_{m+1} and W̃ = w_{m+1}.
- Finally, Ĥ(s) can be projected with the already computed bases: Ṽ = V_{m+1} and W̃ = W_{m+1}.

A pseudocode for the adaptive rational method is given in Algorithm 3.

A. Numerical Examples

Figure 1 shows the magnitude of the Bode plot of a randomly generated system of order n = 150, the magnitude plot of the reduced order system using the standard rational Arnoldi method, and the magnitude of the error. In this example it was assumed that there was no previous knowledge of the characteristics of the transfer function of the original system.
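The projected approximation Ĥ(s) above can be sketched as follows. The helper `H_hat` and the random test data are our own; the test exploits the sanity check that projecting with a full square basis (Ṽ = W̃ orthogonal, n columns) reproduces H(s) exactly, while the rectangular cases of the bullet list above give cheap approximations.

```python
import numpy as np

def H_hat(s, A, Bt, Ct, Vt, Wt):
    """Ĥ(s) = C̃* Ṽ (s W̃*Ṽ - W̃*AṼ)^{-1} W̃* B̃ :
    obliquely projected approximation of H(s) = C̃*(sI - A)^{-1} B̃."""
    return (Ct @ Vt) @ np.linalg.solve(s * (Wt.T @ Vt) - Wt.T @ A @ Vt,
                                       Wt.T @ Bt)

rng = np.random.default_rng(5)
n = 6
A = rng.standard_normal((n, n))
Bt = rng.standard_normal(n)      # plays the role of B-tilde
Ct = rng.standard_normal(n)      # plays the role of C-tilde^*
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # full orthonormal basis
```

Only m-dimensional solves are needed once the small matrices W̃^*Ṽ, W̃^*AṼ, W̃^*B̃ and C̃^*Ṽ are formed, which is the source of the computational savings claimed above.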
Therefore the standard rational Arnoldi algorithm was applied to match 4 moments at ten equally spaced (on a logarithmic scale) interpolation points S = S̃ = {10^{−4}j, 10^{−3}j, ..., 10^5j}. It can be seen that the approximation is not satisfactory. Figure 2 shows the Bode plot of the original system, of the reduced order system using the adaptive algorithm developed in this paper, and of

Fig. 1. Bode diagram of the transfer functions of the actual system (continuous line), of the reduced order system obtained by the standard rational Krylov method (dotted line), and of the error between them (dashed line). The order of the original system is n = 150 and of the reduced order system m = 40.

Fig. 2. Bode diagram of the transfer functions of the actual system (continuous line), of the reduced order system obtained by the adaptive rational Krylov method (dotted line), and of the error between them (dashed line). The order of the original system is n = 150 and of the reduced order system m = 40.

the error. It can be clearly seen that the adaptive algorithm has better performance. It is important to note that in practice there is usually no previous knowledge of the original system frequency response. Applying model approximation using the adaptive algorithm presented in this paper results in reduced models with little redundant information and improves the overall performance of the Krylov reduction methods. It is essential that the extra cost of the adaptive algorithm be small, due to speed and storage limitations. In this paper we have developed an adaptive method for the fast computation of the next interpolation frequency, using the simple error expressions derived above. Table II gives a comparison of the approximation error ‖G(s) − G_m(s)‖ for the methods described in this paper. In all cases the algorithms were performed in real arithmetic [13] and a stable projection was applied to the reduced

Algorithm 3 Adaptive rational method pseudo-code
1: Inputs: A, B, C, m
2: Choose initial frequencies to match, e.g. s_1 = ∞ and s̃_1 = ∞
3: for i = 1 to m
4:   Compute v_m and w_m by the rational Lanczos (or Arnoldi) method to match at s_i and s̃_i.
5:   Generate the Lanczos (or Arnoldi) equations as described by Lemma 1 (or 2).
6:   Use one of the error expressions in Table I to compute the next interpolation frequencies s_{i+1} and s̃_{i+1}.
7:   If the infinity norm of the error expression used is below a threshold, stop.
8: end
9: Outputs: A_m = W_m^* A V_m, E_m = W_m^* V_m, B_m = W_m^* B and C_m = C V_m

system. Note that if Ṽ ∈ C^{n×m} is a basis of a subspace, then V = [real{Ṽ}  imag{Ṽ}] ∈ R^{n×2m} is also a basis of the same subspace. Therefore, in this part a complex basis is converted to a real one before projecting the original system. As can be seen from the results in Table II, in all the cases where the error involves Ĥ(s) and a residual error, the approximation error is comparable to or better than the error obtained by balanced truncation. Our numerical experience indicates that this tends to be the case when the size of V_m and W_m is large enough to give a good approximation; otherwise, for very low m, balanced truncation gives better results.

V. CONCLUSIONS

This paper derives Arnoldi and Lanczos-like equations for the rational versions of these algorithms. Simple residual error expressions have also been developed, which are used for computing the interpolation frequencies at which the moments of the original system are to be matched. The simple error expressions derived in this paper contain only terms of the reduced order system, and this allows fast computations. This selection of the interpolation points reduces the overall approximation error compared to standard rational and conventional methods; the adaptive method developed tends to reduce the error in the range of frequencies of interest without a priori knowledge of the original system frequency response.

REFERENCES

[1] C. Lanczos, "An iteration method for the solution of the eigenvalue problem of linear differential and integral operators," J. Res. Nat. Bur. Stand., vol. 45.
[2] ——, "Solution of systems of linear equations by minimized iterations," J. Res. Nat. Bur. Stand., vol. 49, pp. 33–53, 1952.
[3] W. Arnoldi, "The principle of minimized iterations in the solution of the matrix eigenvalue problem," Quart. Appl. Math., vol.
9.

TABLE II
COMPARISON, n = 250, m = 50 (numerical values of the approximation error ‖G − G_m‖ not recoverable)

Error expressions using residual error information only:
  S = {s : |R̃_B(s)| = ‖R̃_B‖_∞},  S̃ = {s̃ : |R̃_C(s̃)| = ‖R̃_C‖_∞}
  S = S̃ = {s : |R̃_B(s)| = ‖R̃_B‖_∞}
  S = S̃ = {s : |R̃_C(s)| = ‖R̃_C‖_∞}
Error expressions using Ĥ(s) projected with Ṽ = v_{m+1} and W̃ = w_{m+1}:
  S = S̃ = {s : |Ĥ(s)| = ‖Ĥ‖_∞}
  S = S̃ = {s : |R̃_C^*(s)Ĥ(s)| = ‖R̃_C^*Ĥ‖_∞}
  S = S̃ = {s : |ǫ̂(s)| = ‖R̃_C^*ĤR̃_B‖_∞}
Error expressions using Ĥ(s) projected with Ṽ = V_{m+1} and W̃ = W_{m+1}:
  the same three selections as above
Error expressions using Ĥ(s) projected with Ṽ = [ṽ] and W̃ = [w̃]:
  the same three selections as above
Error expressions using Ĥ(s) projected with Ṽ = [V_m  ṽ] and W̃ = [W_m  w̃]:
  the same three selections as above
Error expressions using Ĥ(s) projected with Ṽ and W̃ computed by rational interpolation:
  the same three selections as above
Balanced truncation
Rational Krylov: interpolation points S = S̃ = {10^{−4}j, ..., 10^5j}

[4] D. Boley and G. Golub, "The nonsymmetric Lanczos algorithm and controllability," Syst. Contr. Lett., vol. 16.
[5] Y. Saad and H. A. van der Vorst, "Iterative solution of linear systems in the 20th century," J. Comput. Appl. Math., vol. 66, pp. 1–33.
[6] A. Antoulas, D. Sorensen, and S. Gugercin, "A survey of model reduction methods for large-scale systems," Contemp. Math., vol. 280.
[7] D. Boley, "Krylov space methods on state-space control models," Technical report TR92-18, University of Minnesota.
[8] A. Bultheel and M. V. Barel, "Padé techniques for model reduction in linear system theory: a survey," J. Comput. Appl. Math., vol. 14.
[9] I. Jaimoukha and E. Kasenally, "Oblique projection methods for large-scale model reduction," SIAM J. Matrix Anal. Appl., vol. 16, no. 2.
[10] ——, "Krylov subspace methods for solving large Lyapunov equations," SIAM J. Matrix Anal. Appl., vol. 31, no. 1, 1997.
[11] ——, "Implicitly restarted Krylov subspace methods for stable partial realizations," SIMAX, vol.
18, no. 3.
[12] Y. Saad, "Numerical solution of large Lyapunov equations," in MTNS-89, vol. 3, 1990.
[13] A. Ruhe, "The rational Krylov algorithm for nonsymmetric eigenvalue problems III: complex shifts for real matrices," BIT, vol. 34.
[14] ——, "Rational Krylov: A practical algorithm for large sparse nonsymmetric matrix pencils," SIAM J. Sci. Comput., vol. 19.
[15] K. Gallivan, E. Grimme, and P. Van Dooren, "A rational Lanczos algorithm for model reduction," Numer. Algorithms, vol. 12.
[16] E. Grimme, K. Gallivan, and P. Van Dooren, "A rational Lanczos algorithm for model reduction II: Interpolation point selection," Technical report, University of Illinois at Urbana-Champaign.
[17] K. Gallivan, "Sylvester equations and projection-based model reduction," J. Comput. Appl. Math., vol. 162, 2004.
[18] H. J. Lee, C. C. Chu, and W. S. Feng, "An adaptive-order rational Arnoldi method for model-order reductions of linear time-invariant systems," Linear Algebra and Its Applications, vol. 415.
[19] V. Papakos and I. Jaimoukha, "Implicitly restarted Lanczos algorithm for model reduction," Proc. 40th IEEE Conf. Decision Control, Orlando, FL.
[20] S. Gugercin and A. C. Antoulas, "An H2 error expression for the Lanczos procedure," in Proc. IEEE Conference on Decision and Control.


More information

Order reduction of large scale second-order systems using Krylov subspace methods

Order reduction of large scale second-order systems using Krylov subspace methods Linear Algebra and its Applications 415 (2006) 385 405 www.elsevier.com/locate/laa Order reduction of large scale second-order systems using Krylov subspace methods Behnam Salimbahrami, Boris Lohmann Lehrstuhl

More information

PROJECTED GMRES AND ITS VARIANTS

PROJECTED GMRES AND ITS VARIANTS PROJECTED GMRES AND ITS VARIANTS Reinaldo Astudillo Brígida Molina rastudillo@kuaimare.ciens.ucv.ve bmolina@kuaimare.ciens.ucv.ve Centro de Cálculo Científico y Tecnológico (CCCT), Facultad de Ciencias,

More information

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH V. FABER, J. LIESEN, AND P. TICHÝ Abstract. Numerous algorithms in numerical linear algebra are based on the reduction of a given matrix

More information

Model Reduction for Unstable Systems

Model Reduction for Unstable Systems Model Reduction for Unstable Systems Klajdi Sinani Virginia Tech klajdi@vt.edu Advisor: Serkan Gugercin October 22, 2015 (VT) SIAM October 22, 2015 1 / 26 Overview 1 Introduction 2 Interpolatory Model

More information

Krylov subspace projection methods

Krylov subspace projection methods I.1.(a) Krylov subspace projection methods Orthogonal projection technique : framework Let A be an n n complex matrix and K be an m-dimensional subspace of C n. An orthogonal projection technique seeks

More information

Model reduction via tangential interpolation

Model reduction via tangential interpolation Model reduction via tangential interpolation K. Gallivan, A. Vandendorpe and P. Van Dooren May 14, 2002 1 Introduction Although most of the theory presented in this paper holds for both continuous-time

More information

The rational Krylov subspace for parameter dependent systems. V. Simoncini

The rational Krylov subspace for parameter dependent systems. V. Simoncini The rational Krylov subspace for parameter dependent systems V. Simoncini Dipartimento di Matematica, Università di Bologna valeria.simoncini@unibo.it 1 Given the continuous-time system Motivation. Model

More information

Model reduction of large-scale dynamical systems

Model reduction of large-scale dynamical systems Model reduction of large-scale dynamical systems Lecture III: Krylov approximation and rational interpolation Thanos Antoulas Rice University and Jacobs University email: aca@rice.edu URL: www.ece.rice.edu/

More information

The Lanczos and conjugate gradient algorithms

The Lanczos and conjugate gradient algorithms The Lanczos and conjugate gradient algorithms Gérard MEURANT October, 2008 1 The Lanczos algorithm 2 The Lanczos algorithm in finite precision 3 The nonsymmetric Lanczos algorithm 4 The Golub Kahan bidiagonalization

More information

ECS231 Handout Subspace projection methods for Solving Large-Scale Eigenvalue Problems. Part I: Review of basic theory of eigenvalue problems

ECS231 Handout Subspace projection methods for Solving Large-Scale Eigenvalue Problems. Part I: Review of basic theory of eigenvalue problems ECS231 Handout Subspace projection methods for Solving Large-Scale Eigenvalue Problems Part I: Review of basic theory of eigenvalue problems 1. Let A C n n. (a) A scalar λ is an eigenvalue of an n n A

More information

Krylov-based model reduction of second-order systems with proportional damping

Krylov-based model reduction of second-order systems with proportional damping Krylov-based model reduction of second-order systems with proportional damping Christopher A Beattie and Serkan Gugercin Abstract In this note, we examine Krylov-based model reduction of second order systems

More information

Model reduction of interconnected systems

Model reduction of interconnected systems Model reduction of interconnected systems A Vandendorpe and P Van Dooren 1 Introduction Large scale linear systems are often composed of subsystems that interconnect to each other Instead of reducing the

More information

Chris Beattie and Serkan Gugercin

Chris Beattie and Serkan Gugercin Krylov-based model reduction of second-order systems with proportional damping Chris Beattie and Serkan Gugercin Department of Mathematics, Virginia Tech, USA 44th IEEE Conference on Decision and Control

More information

Solution of eigenvalue problems. Subspace iteration, The symmetric Lanczos algorithm. Harmonic Ritz values, Jacobi-Davidson s method

Solution of eigenvalue problems. Subspace iteration, The symmetric Lanczos algorithm. Harmonic Ritz values, Jacobi-Davidson s method Solution of eigenvalue problems Introduction motivation Projection methods for eigenvalue problems Subspace iteration, The symmetric Lanczos algorithm Nonsymmetric Lanczos procedure; Implicit restarts

More information

Krylov-Subspace Based Model Reduction of Nonlinear Circuit Models Using Bilinear and Quadratic-Linear Approximations

Krylov-Subspace Based Model Reduction of Nonlinear Circuit Models Using Bilinear and Quadratic-Linear Approximations Krylov-Subspace Based Model Reduction of Nonlinear Circuit Models Using Bilinear and Quadratic-Linear Approximations Peter Benner and Tobias Breiten Abstract We discuss Krylov-subspace based model reduction

More information

Krylov-based model reduction of second-order systems with proportional damping

Krylov-based model reduction of second-order systems with proportional damping Proceedings of the 44th IEEE Conference on Decision and Control, and the European Control Conference 005 Seville, Spain, December 1-15, 005 TuA05.6 Krylov-based model reduction of second-order systems

More information

Adaptive rational Krylov subspaces for large-scale dynamical systems. V. Simoncini

Adaptive rational Krylov subspaces for large-scale dynamical systems. V. Simoncini Adaptive rational Krylov subspaces for large-scale dynamical systems V. Simoncini Dipartimento di Matematica, Università di Bologna valeria@dm.unibo.it joint work with Vladimir Druskin, Schlumberger Doll

More information

Projection of state space realizations

Projection of state space realizations Chapter 1 Projection of state space realizations Antoine Vandendorpe and Paul Van Dooren Department of Mathematical Engineering Université catholique de Louvain B-1348 Louvain-la-Neuve Belgium 1.0.1 Description

More information

S. Gugercin and A.C. Antoulas Department of Electrical and Computer Engineering Rice University

S. Gugercin and A.C. Antoulas Department of Electrical and Computer Engineering Rice University Proceedings of the 39" IEEE Conference on Decision and Control Sydney, Australia December, 2000 A Comparative Study of 7 Algorithms for Model Reduct ion' S. Gugercin and A.C. Antoulas Department of Electrical

More information

Fluid flow dynamical model approximation and control

Fluid flow dynamical model approximation and control Fluid flow dynamical model approximation and control... a case-study on an open cavity flow C. Poussot-Vassal & D. Sipp Journée conjointe GT Contrôle de Décollement & GT MOSAR Frequency response of an

More information

Rational Krylov Methods for Model Reduction of Large-scale Dynamical Systems

Rational Krylov Methods for Model Reduction of Large-scale Dynamical Systems Rational Krylov Methods for Model Reduction of Large-scale Dynamical Systems Serkan Güǧercin Department of Mathematics, Virginia Tech, USA Aerospace Computational Design Laboratory, Dept. of Aeronautics

More information

Implicit Volterra Series Interpolation for Model Reduction of Bilinear Systems

Implicit Volterra Series Interpolation for Model Reduction of Bilinear Systems Max Planck Institute Magdeburg Preprints Mian Ilyas Ahmad Ulrike Baur Peter Benner Implicit Volterra Series Interpolation for Model Reduction of Bilinear Systems MAX PLANCK INSTITUT FÜR DYNAMIK KOMPLEXER

More information

Inexact Solves in Interpolatory Model Reduction

Inexact Solves in Interpolatory Model Reduction Inexact Solves in Interpolatory Model Reduction Sarah Wyatt Thesis submitted to the Faculty of the Virginia Polytechnic Institute and State University in partial fulfillment of the requirements for the

More information

Iterative Methods for Linear Systems of Equations

Iterative Methods for Linear Systems of Equations Iterative Methods for Linear Systems of Equations Projection methods (3) ITMAN PhD-course DTU 20-10-08 till 24-10-08 Martin van Gijzen 1 Delft University of Technology Overview day 4 Bi-Lanczos method

More information

LARGE SPARSE EIGENVALUE PROBLEMS

LARGE SPARSE EIGENVALUE PROBLEMS LARGE SPARSE EIGENVALUE PROBLEMS Projection methods The subspace iteration Krylov subspace methods: Arnoldi and Lanczos Golub-Kahan-Lanczos bidiagonalization 14-1 General Tools for Solving Large Eigen-Problems

More information

Model reduction of large-scale systems by least squares

Model reduction of large-scale systems by least squares Model reduction of large-scale systems by least squares Serkan Gugercin Department of Mathematics, Virginia Tech, Blacksburg, VA, USA gugercin@mathvtedu Athanasios C Antoulas Department of Electrical and

More information

On the Modification of an Eigenvalue Problem that Preserves an Eigenspace

On the Modification of an Eigenvalue Problem that Preserves an Eigenspace Purdue University Purdue e-pubs Department of Computer Science Technical Reports Department of Computer Science 2009 On the Modification of an Eigenvalue Problem that Preserves an Eigenspace Maxim Maumov

More information

An iterative SVD-Krylov based method for model reduction of large-scale dynamical systems

An iterative SVD-Krylov based method for model reduction of large-scale dynamical systems Available online at www.sciencedirect.com Linear Algebra and its Applications 428 (2008) 1964 1986 www.elsevier.com/locate/laa An iterative SVD-Krylov based method for model reduction of large-scale dynamical

More information

Model Reduction of Inhomogeneous Initial Conditions

Model Reduction of Inhomogeneous Initial Conditions Model Reduction of Inhomogeneous Initial Conditions Caleb Magruder Abstract Our goal is to develop model reduction processes for linear dynamical systems with non-zero initial conditions. Standard model

More information

Introduction to Iterative Solvers of Linear Systems

Introduction to Iterative Solvers of Linear Systems Introduction to Iterative Solvers of Linear Systems SFB Training Event January 2012 Prof. Dr. Andreas Frommer Typeset by Lukas Krämer, Simon-Wolfgang Mages and Rudolf Rödl 1 Classes of Matrices and their

More information

AN OVERVIEW OF MODEL REDUCTION TECHNIQUES APPLIED TO LARGE-SCALE STRUCTURAL DYNAMICS AND CONTROL MOTIVATING EXAMPLE INVERTED PENDULUM

AN OVERVIEW OF MODEL REDUCTION TECHNIQUES APPLIED TO LARGE-SCALE STRUCTURAL DYNAMICS AND CONTROL MOTIVATING EXAMPLE INVERTED PENDULUM Controls Lab AN OVERVIEW OF MODEL REDUCTION TECHNIQUES APPLIED TO LARGE-SCALE STRUCTURAL DYNAMICS AND CONTROL Eduardo Gildin (UT ICES and Rice Univ.) with Thanos Antoulas (Rice ECE) Danny Sorensen (Rice

More information

Key words. conjugate gradients, normwise backward error, incremental norm estimation.

Key words. conjugate gradients, normwise backward error, incremental norm estimation. Proceedings of ALGORITMY 2016 pp. 323 332 ON ERROR ESTIMATION IN THE CONJUGATE GRADIENT METHOD: NORMWISE BACKWARD ERROR PETR TICHÝ Abstract. Using an idea of Duff and Vömel [BIT, 42 (2002), pp. 300 322

More information

On Solving Large Algebraic. Riccati Matrix Equations

On Solving Large Algebraic. Riccati Matrix Equations International Mathematical Forum, 5, 2010, no. 33, 1637-1644 On Solving Large Algebraic Riccati Matrix Equations Amer Kaabi Department of Basic Science Khoramshahr Marine Science and Technology University

More information

Balanced Truncation 1

Balanced Truncation 1 Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.242, Fall 2004: MODEL REDUCTION Balanced Truncation This lecture introduces balanced truncation for LTI

More information

Solution of eigenvalue problems. Subspace iteration, The symmetric Lanczos algorithm. Harmonic Ritz values, Jacobi-Davidson s method

Solution of eigenvalue problems. Subspace iteration, The symmetric Lanczos algorithm. Harmonic Ritz values, Jacobi-Davidson s method Solution of eigenvalue problems Introduction motivation Projection methods for eigenvalue problems Subspace iteration, The symmetric Lanczos algorithm Nonsymmetric Lanczos procedure; Implicit restarts

More information

LARGE SPARSE EIGENVALUE PROBLEMS. General Tools for Solving Large Eigen-Problems

LARGE SPARSE EIGENVALUE PROBLEMS. General Tools for Solving Large Eigen-Problems LARGE SPARSE EIGENVALUE PROBLEMS Projection methods The subspace iteration Krylov subspace methods: Arnoldi and Lanczos Golub-Kahan-Lanczos bidiagonalization General Tools for Solving Large Eigen-Problems

More information

MODEL-order reduction is emerging as an effective

MODEL-order reduction is emerging as an effective IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I: REGULAR PAPERS, VOL. 52, NO. 5, MAY 2005 975 Model-Order Reduction by Dominant Subspace Projection: Error Bound, Subspace Computation, and Circuit Applications

More information

A short course on: Preconditioned Krylov subspace methods. Yousef Saad University of Minnesota Dept. of Computer Science and Engineering

A short course on: Preconditioned Krylov subspace methods. Yousef Saad University of Minnesota Dept. of Computer Science and Engineering A short course on: Preconditioned Krylov subspace methods Yousef Saad University of Minnesota Dept. of Computer Science and Engineering Universite du Littoral, Jan 19-3, 25 Outline Part 1 Introd., discretization

More information

On the numerical solution of large-scale linear matrix equations. V. Simoncini

On the numerical solution of large-scale linear matrix equations. V. Simoncini On the numerical solution of large-scale linear matrix equations V. Simoncini Dipartimento di Matematica, Università di Bologna (Italy) valeria.simoncini@unibo.it 1 Some matrix equations Sylvester matrix

More information

M.A. Botchev. September 5, 2014

M.A. Botchev. September 5, 2014 Rome-Moscow school of Matrix Methods and Applied Linear Algebra 2014 A short introduction to Krylov subspaces for linear systems, matrix functions and inexact Newton methods. Plan and exercises. M.A. Botchev

More information

Passive Interconnect Macromodeling Via Balanced Truncation of Linear Systems in Descriptor Form

Passive Interconnect Macromodeling Via Balanced Truncation of Linear Systems in Descriptor Form Passive Interconnect Macromodeling Via Balanced Truncation of Linear Systems in Descriptor Form Boyuan Yan, Sheldon X.-D. Tan,PuLiu and Bruce McGaughy Department of Electrical Engineering, University of

More information

MODEL REDUCTION BY A CROSS-GRAMIAN APPROACH FOR DATA-SPARSE SYSTEMS

MODEL REDUCTION BY A CROSS-GRAMIAN APPROACH FOR DATA-SPARSE SYSTEMS MODEL REDUCTION BY A CROSS-GRAMIAN APPROACH FOR DATA-SPARSE SYSTEMS Ulrike Baur joint work with Peter Benner Mathematics in Industry and Technology Faculty of Mathematics Chemnitz University of Technology

More information

On the Ritz values of normal matrices

On the Ritz values of normal matrices On the Ritz values of normal matrices Zvonimir Bujanović Faculty of Science Department of Mathematics University of Zagreb June 13, 2011 ApplMath11 7th Conference on Applied Mathematics and Scientific

More information

On the solution of large Sylvester-observer equations

On the solution of large Sylvester-observer equations NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS Numer. Linear Algebra Appl. 200; 8: 6 [Version: 2000/03/22 v.0] On the solution of large Sylvester-observer equations D. Calvetti, B. Lewis 2, and L. Reichel

More information

A Structure-Preserving Method for Large Scale Eigenproblems. of Skew-Hamiltonian/Hamiltonian (SHH) Pencils

A Structure-Preserving Method for Large Scale Eigenproblems. of Skew-Hamiltonian/Hamiltonian (SHH) Pencils A Structure-Preserving Method for Large Scale Eigenproblems of Skew-Hamiltonian/Hamiltonian (SHH) Pencils Yangfeng Su Department of Mathematics, Fudan University Zhaojun Bai Department of Computer Science,

More information

DELFT UNIVERSITY OF TECHNOLOGY

DELFT UNIVERSITY OF TECHNOLOGY DELFT UNIVERSITY OF TECHNOLOGY REPORT 16-02 The Induced Dimension Reduction method applied to convection-diffusion-reaction problems R. Astudillo and M. B. van Gijzen ISSN 1389-6520 Reports of the Delft

More information

Optimal Approximation of Dynamical Systems with Rational Krylov Methods. Department of Mathematics, Virginia Tech, USA

Optimal Approximation of Dynamical Systems with Rational Krylov Methods. Department of Mathematics, Virginia Tech, USA Optimal Approximation of Dynamical Systems with Rational Krylov Methods Serkan Güǧercin Department of Mathematics, Virginia Tech, USA Chris Beattie and Virginia Tech. jointly with Thanos Antoulas Rice

More information

Model Order Reduction for Parameter-Varying Systems

Model Order Reduction for Parameter-Varying Systems Model Order Reduction for Parameter-Varying Systems Xingang Cao Promotors: Wil Schilders Siep Weiland Supervisor: Joseph Maubach Eindhoven University of Technology CASA Day November 1, 216 Xingang Cao

More information

4.8 Arnoldi Iteration, Krylov Subspaces and GMRES

4.8 Arnoldi Iteration, Krylov Subspaces and GMRES 48 Arnoldi Iteration, Krylov Subspaces and GMRES We start with the problem of using a similarity transformation to convert an n n matrix A to upper Hessenberg form H, ie, A = QHQ, (30) with an appropriate

More information

Institut für Mathematik

Institut für Mathematik U n i v e r s i t ä t A u g s b u r g Institut für Mathematik Serkan Gugercin, Tatjana Stykel, Sarah Wyatt Model Reduction of Descriptor Systems by Interpolatory Projection Methods Preprint Nr. 3/213 29.

More information

IDR(s) as a projection method

IDR(s) as a projection method Delft University of Technology Faculty of Electrical Engineering, Mathematics and Computer Science Delft Institute of Applied Mathematics IDR(s) as a projection method A thesis submitted to the Delft Institute

More information

The quadratic eigenvalue problem (QEP) is to find scalars λ and nonzero vectors u satisfying

The quadratic eigenvalue problem (QEP) is to find scalars λ and nonzero vectors u satisfying I.2 Quadratic Eigenvalue Problems 1 Introduction The quadratic eigenvalue problem QEP is to find scalars λ and nonzero vectors u satisfying where Qλx = 0, 1.1 Qλ = λ 2 M + λd + K, M, D and K are given

More information

Summary of Iterative Methods for Non-symmetric Linear Equations That Are Related to the Conjugate Gradient (CG) Method

Summary of Iterative Methods for Non-symmetric Linear Equations That Are Related to the Conjugate Gradient (CG) Method Summary of Iterative Methods for Non-symmetric Linear Equations That Are Related to the Conjugate Gradient (CG) Method Leslie Foster 11-5-2012 We will discuss the FOM (full orthogonalization method), CG,

More information

Structure-Preserving Model Reduction

Structure-Preserving Model Reduction Structure-Preserving Model Reduction Ren-Cang Li 1 and Zhaojun Bai 2 1 Department of Mathematics, University of Kentucky, Lexington, KY 40506, USA rcli@ms.uky.edu 2 Department of Computer Science and Department

More information

Passivity Preserving Model Reduction for Large-Scale Systems. Peter Benner.

Passivity Preserving Model Reduction for Large-Scale Systems. Peter Benner. Passivity Preserving Model Reduction for Large-Scale Systems Peter Benner Mathematik in Industrie und Technik Fakultät für Mathematik Sonderforschungsbereich 393 S N benner@mathematik.tu-chemnitz.de SIMULATION

More information

Eigenvalue Problems CHAPTER 1 : PRELIMINARIES

Eigenvalue Problems CHAPTER 1 : PRELIMINARIES Eigenvalue Problems CHAPTER 1 : PRELIMINARIES Heinrich Voss voss@tu-harburg.de Hamburg University of Technology Institute of Mathematics TUHH Heinrich Voss Preliminaries Eigenvalue problems 2012 1 / 14

More information

ADAPTIVE RATIONAL KRYLOV SUBSPACES FOR LARGE-SCALE DYNAMICAL SYSTEMS. 1. Introduction. The time-invariant linear system

ADAPTIVE RATIONAL KRYLOV SUBSPACES FOR LARGE-SCALE DYNAMICAL SYSTEMS. 1. Introduction. The time-invariant linear system ADAPTIVE RATIONAL KRYLOV SUBSPACES FOR LARGE-SCALE DYNAMICAL SYSTEMS V. DRUSKIN AND V. SIMONCINI Abstract. The rational Krylov space is recognized as a powerful tool within Model Order Reduction techniques

More information

arxiv: v1 [math.na] 12 Jan 2012

arxiv: v1 [math.na] 12 Jan 2012 Interpolatory Weighted-H 2 Model Reduction Branimir Anić, Christopher A. Beattie, Serkan Gugercin, and Athanasios C. Antoulas Preprint submitted to Automatica, 2 January 22 arxiv:2.2592v [math.na] 2 Jan

More information

Finite-choice algorithm optimization in Conjugate Gradients

Finite-choice algorithm optimization in Conjugate Gradients Finite-choice algorithm optimization in Conjugate Gradients Jack Dongarra and Victor Eijkhout January 2003 Abstract We present computational aspects of mathematically equivalent implementations of the

More information

Krylov Subspace Type Methods for Solving Projected Generalized Continuous-Time Lyapunov Equations

Krylov Subspace Type Methods for Solving Projected Generalized Continuous-Time Lyapunov Equations Krylov Subspace Type Methods for Solving Proected Generalized Continuous-Time Lyapunov Equations YUIAN ZHOU YIQIN LIN Hunan University of Science and Engineering Institute of Computational Mathematics

More information

An iterative SVD-Krylov based method for model reduction of large-scale dynamical systems

An iterative SVD-Krylov based method for model reduction of large-scale dynamical systems An iterative SVD-Krylov based method for model reduction of large-scale dynamical systems Serkan Gugercin Department of Mathematics, Virginia Tech., Blacksburg, VA, USA, 24061-0123 gugercin@math.vt.edu

More information

ON THE GLOBAL KRYLOV SUBSPACE METHODS FOR SOLVING GENERAL COUPLED MATRIX EQUATIONS

ON THE GLOBAL KRYLOV SUBSPACE METHODS FOR SOLVING GENERAL COUPLED MATRIX EQUATIONS ON THE GLOBAL KRYLOV SUBSPACE METHODS FOR SOLVING GENERAL COUPLED MATRIX EQUATIONS Fatemeh Panjeh Ali Beik and Davod Khojasteh Salkuyeh, Department of Mathematics, Vali-e-Asr University of Rafsanjan, Rafsanjan,

More information

Recent advances in approximation using Krylov subspaces. V. Simoncini. Dipartimento di Matematica, Università di Bologna.

Recent advances in approximation using Krylov subspaces. V. Simoncini. Dipartimento di Matematica, Università di Bologna. Recent advances in approximation using Krylov subspaces V. Simoncini Dipartimento di Matematica, Università di Bologna and CIRSA, Ravenna, Italy valeria@dm.unibo.it 1 The framework It is given an operator

More information

Model order reduction of large-scale dynamical systems with Jacobi-Davidson style eigensolvers

Model order reduction of large-scale dynamical systems with Jacobi-Davidson style eigensolvers MAX PLANCK INSTITUTE International Conference on Communications, Computing and Control Applications March 3-5, 2011, Hammamet, Tunisia. Model order reduction of large-scale dynamical systems with Jacobi-Davidson

More information

A HARMONIC RESTARTED ARNOLDI ALGORITHM FOR CALCULATING EIGENVALUES AND DETERMINING MULTIPLICITY

A HARMONIC RESTARTED ARNOLDI ALGORITHM FOR CALCULATING EIGENVALUES AND DETERMINING MULTIPLICITY A HARMONIC RESTARTED ARNOLDI ALGORITHM FOR CALCULATING EIGENVALUES AND DETERMINING MULTIPLICITY RONALD B. MORGAN AND MIN ZENG Abstract. A restarted Arnoldi algorithm is given that computes eigenvalues

More information

1 Conjugate gradients

1 Conjugate gradients Notes for 2016-11-18 1 Conjugate gradients We now turn to the method of conjugate gradients (CG), perhaps the best known of the Krylov subspace solvers. The CG iteration can be characterized as the iteration

More information

Computation of eigenvalues and singular values Recall that your solutions to these questions will not be collected or evaluated.

Computation of eigenvalues and singular values Recall that your solutions to these questions will not be collected or evaluated. Math 504, Homework 5 Computation of eigenvalues and singular values Recall that your solutions to these questions will not be collected or evaluated 1 Find the eigenvalues and the associated eigenspaces

More information

Iterative methods for symmetric eigenvalue problems

Iterative methods for symmetric eigenvalue problems s Iterative s for symmetric eigenvalue problems, PhD McMaster University School of Computational Engineering and Science February 11, 2008 s 1 The power and its variants Inverse power Rayleigh quotient

More information

Research Article Some Generalizations and Modifications of Iterative Methods for Solving Large Sparse Symmetric Indefinite Linear Systems

Research Article Some Generalizations and Modifications of Iterative Methods for Solving Large Sparse Symmetric Indefinite Linear Systems Abstract and Applied Analysis Article ID 237808 pages http://dxdoiorg/055/204/237808 Research Article Some Generalizations and Modifications of Iterative Methods for Solving Large Sparse Symmetric Indefinite

More information

Iterative Rational Krylov Algorithm for Unstable Dynamical Systems and Generalized Coprime Factorizations

Iterative Rational Krylov Algorithm for Unstable Dynamical Systems and Generalized Coprime Factorizations Iterative Rational Krylov Algorithm for Unstable Dynamical Systems and Generalized Coprime Factorizations Klajdi Sinani Thesis submitted to the Faculty of the Virginia Polytechnic Institute and State University

More information

Passive reduced-order modeling via Krylov-subspace methods

Passive reduced-order modeling via Krylov-subspace methods Passive reduced-order modeling via Krylov-subspace methods Roland W. Freund Bell Laboratories Room 2C 525 700 Mountain Avenue Murray Hill, New Jersey 07974 0636, U.S.A. freund@research.bell-labs.com Keywords:

More information

DELFT UNIVERSITY OF TECHNOLOGY

DELFT UNIVERSITY OF TECHNOLOGY DELFT UNIVERSITY OF TECHNOLOGY REPORT -09 Computational and Sensitivity Aspects of Eigenvalue-Based Methods for the Large-Scale Trust-Region Subproblem Marielba Rojas, Bjørn H. Fotland, and Trond Steihaug

More information

Lecture 9: Krylov Subspace Methods. 2 Derivation of the Conjugate Gradient Algorithm

Lecture 9: Krylov Subspace Methods. 2 Derivation of the Conjugate Gradient Algorithm CS 622 Data-Sparse Matrix Computations September 19, 217 Lecture 9: Krylov Subspace Methods Lecturer: Anil Damle Scribes: David Eriksson, Marc Aurele Gilles, Ariah Klages-Mundt, Sophia Novitzky 1 Introduction

More information

The amount of work to construct each new guess from the previous one should be a small multiple of the number of nonzeros in A.

The amount of work to construct each new guess from the previous one should be a small multiple of the number of nonzeros in A. AMSC/CMSC 661 Scientific Computing II Spring 2005 Solution of Sparse Linear Systems Part 2: Iterative methods Dianne P. O Leary c 2005 Solving Sparse Linear Systems: Iterative methods The plan: Iterative

More information

Preconditioned inverse iteration and shift-invert Arnoldi method

Preconditioned inverse iteration and shift-invert Arnoldi method Preconditioned inverse iteration and shift-invert Arnoldi method Melina Freitag Department of Mathematical Sciences University of Bath CSC Seminar Max-Planck-Institute for Dynamics of Complex Technical

More information

Case study: Approximations of the Bessel Function
