Factorized Solution of Sylvester Equations with Applications in Control


Peter Benner
Fakultät für Mathematik, Technische Universität Chemnitz, Chemnitz, Germany; benner@mathematik.tu-chemnitz.de
Supported by the DFG Research Center "Mathematics for key technologies" (FZT 86) in Berlin.

Abstract. Sylvester equations play a central role in many areas of applied mathematics and in particular in systems and control theory. Here we show how low-rank solutions of stable Sylvester equations can be computed based on the matrix sign function method. We discuss applications in model reduction as well as in observer design. Numerical experiments demonstrate the efficiency of the proposed method.

Keywords. Sylvester equation, matrix sign function, model reduction, observer design.

1 Introduction

Consider the Sylvester equation

    AX + XB + C = 0,    (1)

where A ∈ R^{n×n}, B ∈ R^{m×m}, C ∈ R^{n×m}, and X ∈ R^{n×m} is the sought-after solution. Equation (1) has a unique solution if and only if α + β ≠ 0 for all α ∈ σ(A) and β ∈ σ(B) [22], where σ(Z) denotes the spectrum of the matrix Z. In particular, this property holds for stable Sylvester equations, where both σ(A) and σ(B) lie in the open left half plane. The anti-stable case, where σ(A) and σ(B) are contained in the open right half plane, is trivially turned into a stable one by multiplying (1) by -1.

Sylvester equations have numerous applications in control theory, signal processing, filtering, model reduction, image restoration, decoupling techniques for ordinary and partial differential equations, implementation of implicit numerical methods for ordinary differential equations, and block-diagonalization of matrices; see, e.g., [3, 13, 14, 9, 12, 18, 27] to name only a few references. Many of these applications lead to stable Sylvester equations.

Standard solution methods for Sylvester equations of the form (1) are the Bartels-Stewart method [5] and the Hessenberg-Schur method [13, 17]. These methods are based on transforming the coefficient matrices to Schur or Hessenberg form and then solving the resulting linear system of equations directly by a backsubstitution process; they are therefore classified as direct methods.
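For illustration (this sketch is not part of the original paper, and the helper name solve_sylvester_small is a placeholder), equation (1) can be rewritten as the linear system (I_m ⊗ A + B^T ⊗ I_n) vec(X) = -vec(C); the coefficient matrix of this system has eigenvalues α + β for α ∈ σ(A), β ∈ σ(B), which is exactly the solvability condition stated above. The following NumPy sketch solves small instances this way:

```python
import numpy as np

def solve_sylvester_small(A, B, C):
    """Solve AX + XB + C = 0 via the Kronecker/vec formulation (small n, m only)."""
    n, m = A.shape[0], B.shape[0]
    # vec(AX + XB) = (I_m (x) A + B^T (x) I_n) vec(X), with column-wise stacking
    K = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))
    x = np.linalg.solve(K, -C.flatten(order="F"))
    return x.reshape((n, m), order="F")

# Small stable example: shift random matrices so all eigenvalues have negative real part.
rng = np.random.default_rng(0)
n, m = 6, 4
A = rng.standard_normal((n, n)); A -= (abs(np.linalg.eigvals(A).real).max() + 1) * np.eye(n)
B = rng.standard_normal((m, m)); B -= (abs(np.linalg.eigvals(B).real).max() + 1) * np.eye(m)
C = rng.standard_normal((n, m))
X = solve_sylvester_small(A, B, C)
print(np.linalg.norm(A @ X + X @ B + C))  # residual near machine precision
```

The cost of this dense approach grows like (nm)^3, which is precisely what the structured methods discussed in the remainder of the paper avoid.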

Several iterative schemes to solve Sylvester equations have also been proposed; for methods focusing on large sparse systems see, e.g., [9, 20, 28]. The focus in this paper is on a special class of iterative schemes for solving stable Sylvester equations related to computing the matrix sign function. The basic Newton iteration classically used in this context was first proposed for (1) in [25]; see also [6, 19]. It can basically be described by the recursive application of the rational function f(z) = (z + 1/z)/2 to a suitably chosen matrix. Many other polynomial and rational iteration functions can be employed, though. Here, we will focus on a new and special variant of the standard iteration which allows us to compute the solution of the Sylvester equation in factorized form directly, given that the constant term C is factored as C = FG, where F ∈ R^{n×p}, G ∈ R^{p×m}. Equations of this type arise for instance in model reduction and observer design as discussed in Section 3. The new method is described in Section 2. Numerical experiments reporting the numerical accuracy and performance as compared to standard methods are given in Section 4. Some concluding remarks follow in Section 5.

2 Factorized Solution of Stable Sylvester Equations

The following basic property of the solutions of Sylvester equations is important in many of the above-mentioned applications and also lays the foundations for the numerical algorithms considered here. Namely, if X is a solution of (1), the similarity transformation defined by [ I_n  X ; 0  I_m ] can be used to block-diagonalize the block upper triangular matrix

    H = [ A  C ; 0  -B ]    (2)

as follows:

    [ I_n  X ; 0  I_m ]^{-1} [ A  C ; 0  -B ] [ I_n  X ; 0  I_m ] = [ I_n  -X ; 0  I_m ] [ A  C ; 0  -B ] [ I_n  X ; 0  I_m ] = [ A  0 ; 0  -B ].    (3)

Using the matrix sign function of H and the relation given in (3), Roberts derives in [25] the following expression for the solution of the Sylvester equation (1):

    (1/2) (sign(H) + I_{n+m}) = [ 0  X ; 0  I_m ].    (4)

This relation forms the basis of the numerical algorithms considered here since it states that we can solve (1) by computing the matrix sign function of H from (2). Therefore, we may apply the iterative schemes proposed for sign function computations in order to solve (1). In this section we review the most frequently used iterative scheme for computing the sign function and adapt this iteration to the solution of the Sylvester equation.
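To make relation (4) concrete, the following sketch (not from the paper; the function names are placeholders) applies the plain Newton sign iteration to H from (2) and extracts X from the (1,2) block of (sign(H) + I_{n+m})/2. The rest of this section develops the much more economical coupled and factorized variants.

```python
import numpy as np

def matrix_sign(Z, tol=1e-12, maxit=100):
    """Plain Newton iteration Z <- (Z + Z^{-1})/2 for the matrix sign function."""
    for _ in range(maxit):
        Z_new = 0.5 * (Z + np.linalg.inv(Z))
        if np.linalg.norm(Z_new - Z, 1) <= tol * np.linalg.norm(Z_new, 1):
            return Z_new
        Z = Z_new
    return Z

def sylvester_via_sign_of_H(A, B, C):
    """Solve AX + XB + C = 0 (A, B stable) using Roberts' relation (4)."""
    n, m = A.shape[0], B.shape[0]
    H = np.block([[A, C], [np.zeros((m, n)), -B]])
    P = 0.5 * (matrix_sign(H) + np.eye(n + m))   # equals [ 0  X ; 0  I_m ] by (4)
    return P[:n, n:]
```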

2.1 The Newton iteration

The well-known Newton iteration for computing the sign function of a square matrix Z with no eigenvalues on the imaginary axis is given by

    Z_0 := Z,    Z_{k+1} := (1/2) (Z_k + Z_k^{-1}),    k = 0, 1, 2, ...    (5)

Roberts [25] already observed that this iteration, applied to H from (2), can be written in terms of separate iterations for A, B, and C:

    A_0 := A,    A_{k+1} := (1/2) (A_k + A_k^{-1}),
    B_0 := B,    B_{k+1} := (1/2) (B_k + B_k^{-1}),        k = 0, 1, 2, ...,    (6)
    C_0 := C,    C_{k+1} := (1/2) (C_k + A_k^{-1} C_k B_k^{-1}),

thus saving a considerable amount of computational cost and work space. The iterations for A and B compute the matrix sign functions of A and B, respectively. Hence, because of the stability of A and B,

    A_∞ := lim_{k→∞} A_k = sign(A) = -I_n,    (7)
    B_∞ := lim_{k→∞} B_k = sign(B) = -I_m,    (8)

and if we define C_∞ := lim_{k→∞} C_k, then X = C_∞ / 2 is the solution of the Sylvester equation (1).

The iterations for A and B can be implemented in the way suggested by Byers [8], i.e.,

    Z_{k+1} := Z_k - (1/2) (Z_k - Z_k^{-1}),    with Z ∈ {A, B}.

This formulation treats the update as a correction to the approximation Z_k. This can sometimes improve the accuracy of the computed iterates in the final stages of the iteration, when the limiting accuracy is almost reached. In some ill-conditioned problems, the usual iteration may stagnate without satisfying the stopping criterion, while the defect correction iteration will satisfy the criterion. Furthermore, the convergence of both iterations can be improved using an acceleration strategy. Here we use the determinantal scaling suggested in [8], given by

    Z_k := c_k Z_k,    c_k = |det(Z_k)|^{-1/p},

where p is the dimension of the square matrix Z and det(Z_k) denotes the determinant of Z_k. Note that the determinant is obtained as a by-product during the inversion of the matrix at no additional cost. The determinantal scaling is easily adapted to be based on the determinants of A_k and B_k for the corresponding iterations. Hence this scaling differs from the one that would result if the same strategy were applied to the iteration (5) for H from (2).
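A compact sketch of the coupled iteration (6) could look as follows. It is not from the paper: for simplicity it applies the single determinantal scaling of H, expressed through det(A_k) and det(B_k), instead of the separate scalings of the A- and B-iterations described above, and it stops once A_k ≈ -I_n and B_k ≈ -I_m, cf. (7) and (8).

```python
import numpy as np

def coupled_sign_sylvester(A, B, C, tol=1e-10, maxit=60):
    """Coupled Newton iteration (6) for AX + XB + C = 0 with stable A and B (sketch)."""
    n, m = A.shape[0], B.shape[0]
    Ak, Bk, Ck = (np.array(M, dtype=float) for M in (A, B, C))
    for _ in range(maxit):
        # determinantal scaling of H_k: c = |det(A_k) det(B_k)|^{-1/(n+m)},
        # computed via slogdet to avoid overflow of the determinants
        _, lda = np.linalg.slogdet(Ak)
        _, ldb = np.linalg.slogdet(Bk)
        c = np.exp(-(lda + ldb) / (n + m))
        Ainv, Binv = np.linalg.inv(Ak), np.linalg.inv(Bk)
        Ak, Bk, Ck = (0.5 * (c * Ak + Ainv / c),
                      0.5 * (c * Bk + Binv / c),
                      0.5 * (c * Ck + (Ainv @ Ck @ Binv) / c))
        # convergence check: A_k -> -I_n and B_k -> -I_m, cf. (7) and (8)
        if max(np.linalg.norm(Ak + np.eye(n), 1),
               np.linalg.norm(Bk + np.eye(m), 1)) <= tol:
            break
    return 0.5 * Ck   # X = C_inf / 2
```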

A simple stopping criterion for (6) is obtained from (7) and (8). This suggests stopping the iteration once the computed approximations to the sign functions of A and B are both close enough to their limits, i.e., if

    max { ||A_k + I_n||, ||B_k + I_m|| } ≤ tol,    (9)

where tol is the tolerance threshold. One might choose tol = c·ε for the machine precision ε and, for instance, c = n or c = 10n. However, as this terminal accuracy cannot always be reached, in order to avoid stagnation it is better to choose tol = √ε and to perform 1-2 additional iterations once this criterion is satisfied. Due to the quadratic convergence of the Newton iteration (5), this is usually enough to obtain the attainable accuracy.

The computational cost of (6) is dominated by forming the inverses of A_k and B_k, respectively, and by the solution of two linear systems for the sequence C_k. If we consider the factorizations of A_k and B_k to be available as a by-product of the inversion, this adds up to 2(n^3 + nm(n + m) + m^3) flops (floating-point arithmetic operations), as compared to 2(n + m)^3 flops for a general Newton iteration step for an (n + m) × (n + m) matrix. Thus, e.g., if n = m, iteration (6) requires only half of the computational cost of the general iteration (5). We would obtain the same computational cost had we computed the inverse matrices by means of a Gauss-Jordan elimination procedure and the next matrix in the sequence C_k as two matrix products.

2.2 The factorized iteration

Here we only consider the C_k-iterates for the Sylvester equation (1) with factorized right-hand side, i.e., C = FG, where F ∈ R^{n×p}, G ∈ R^{p×m}. The A_k- and B_k-iterations remain unaltered. We can rewrite the C_k-iteration in terms of the factors as follows:

    F_0 := F,    F_{k+1} := (1/√2) [ F_k,  A_k^{-1} F_k ],
    G_0 := G,    G_{k+1} := (1/√2) [ G_k ; G_k B_k^{-1} ].

Even though this iteration is much cheaper during the initial steps if p ≪ n, m, this advantage is lost in later iterations since the number of columns in F_{k+1} and the number of rows in G_{k+1} is doubled in each iteration step. This can be avoided by applying a technique similar to the one used in [7] for the factorized solution of Lyapunov equations. Let F_k ∈ R^{n×p_k} and G_k ∈ R^{p_k×m}. We first compute a rank-revealing QR factorization [11] of G_{k+1} as defined above, i.e.,

    (1/√2) [ G_k ; G_k B_k^{-1} ] = U R P,    R = [ R_1 ; 0 ],

where U is orthogonal, P is a permutation matrix, and R is upper triangular with R_1 ∈ R^{r×m} of full row rank. Then we compute a rank-revealing QR factorization of F_{k+1} U, i.e.,

    (1/√2) [ F_k,  A_k^{-1} F_k ] U = V T Q,    T = [ T_1 ; 0 ],

where V is orthogonal, Q is a permutation matrix, and T is upper triangular with T_1 ∈ R^{t×2p_k} of full row rank. Partitioning V = [ V_1, V_2 ] with V_1 ∈ R^{n×t} and computing

    [ T_{11}, T_{12} ] := T_1 Q,    T_{11} ∈ R^{t×r},

we then obtain as new iterates

    F_{k+1} := V_1 T_{11},    G_{k+1} := R_1 P.

Then with C_{k+1} from (6) we have C_{k+1} = F_{k+1} G_{k+1}. Setting

    Y := (1/√2) lim_{k→∞} F_k,    Z := (1/√2) lim_{k→∞} G_k,

we get the solution X of (1) in factored form, X = Y Z. If X has low numerical rank, the factors Y and Z will have the correspondingly small number of columns and rows, respectively, and the storage and computation time needed for the factorized solution will be much lower than for the original iteration (6).
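A sketch of one factorized step with rank truncation is given below. It is not from the paper: it uses SciPy's column-pivoted QR as the rank-revealing factorization, performs the two compressions in a slightly simplified order, adopts the 1/√2 splitting of the factor 1/2 used in the reconstruction above, and omits the determinantal scaling.

```python
import numpy as np
from scipy.linalg import qr

def compress_factors(F, G, tol=1e-12):
    """Reduce the inner dimension of the product F @ G by two pivoted QR sweeps."""
    # Right factor: G[:, piv] = Q R; keep the numerical row rank r and undo the pivoting.
    Q, R, piv = qr(G, mode="economic", pivoting=True)
    r = max(1, int(np.sum(np.abs(np.diag(R)) > tol * abs(R[0, 0]))))
    R_unpiv = np.zeros_like(R[:r, :]); R_unpiv[:, piv] = R[:r, :]
    F, G = F @ Q[:, :r], R_unpiv
    # Left factor: the same compression applied to the updated F.
    Q, R, piv = qr(F, mode="economic", pivoting=True)
    t = max(1, int(np.sum(np.abs(np.diag(R)) > tol * abs(R[0, 0]))))
    R_unpiv = np.zeros_like(R[:t, :]); R_unpiv[:, piv] = R[:t, :]
    return Q[:, :t], R_unpiv @ G

def factorized_step(Ak, Bk, Fk, Gk, tol=1e-12):
    """One factorized C-step: returns F_{k+1}, G_{k+1} with F_{k+1} @ G_{k+1} = C_{k+1}."""
    s = 1.0 / np.sqrt(2.0)
    F_hat = s * np.hstack([Fk, np.linalg.solve(Ak, Fk)])        # (1/sqrt(2)) [F_k, A_k^{-1} F_k]
    G_hat = s * np.vstack([Gk, np.linalg.solve(Bk.T, Gk.T).T])  # (1/sqrt(2)) [G_k; G_k B_k^{-1}]
    return compress_factors(F_hat, G_hat, tol)
```

Running factorized_step alongside the A_k- and B_k-iterations and multiplying the limits of F_k and G_k by 1/√2 yields the factors Y and Z of X = YZ.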

5 where V is orthogonal, Q is a permutation matrix, and T is upper triangular with T 1 R t 2p j of full row-rank. Partitioning V = [V 1, V 2 with V 1 R n t, and computing we then obtain as new iterates Then with C k+1 from (6) we have Setting [T 11, T 12 := T 1 Q, T 11 R t r, F k+1 := V 1 T 11, G k+1 := R 1 P. Y := 1 2 lim k F k, C k+1 = F k+1 G k+1. Z := 1 2 lim k G k, we get the solution X of (1) in factored from, X = Y Z. If X has low numerical rank, the factors Y and Z will have the corresponding number of columns and rows, respectively, and the storage and computation time needed for the factorized solution will be much lower than for the original iteration (6). 3 Applications In this section we discuss two applications of the new algorithm to solve Sylvester equations with factorized right-hand side. Both applications considered here trdult from systems and control theory; applications of Sylvester equations with factorized right-hand side in image restoration can be found in [ Observer design Given the continuous linear time-invariant (LTI) system ẋ(t) = Ax(t) + Bu(t), t > 0, x(0) = x 0, y(t) = Cx(t), t 0, (10) with A R n n, B R n m, and C R p n, a state observer is a function z : [0, ) R n such that for some nonsingular matrix Z R n n and e(t) := z(t) Zx(t), we have lim e(t) = 0. t The classical idea of Luenberger [23 to construct a state observer is to find another dynamical system ż(t) = Hz(t) + Fy(t) + Gu(t) (11) 5

with solution z(t). If (A, C) is observable and (H, F) is controllable, then a unique full-rank solution of the Sylvester observer equation

    H X - X A + F C = 0

exists, and with G := X B, the solution z(t) of (11) is a state observer for any initial values x_0, z(0), and any input function u(t). If H and F are given, this equation has the low-rank right-hand side structure discussed above since, usually, the number p of outputs of the system is much smaller than the dimension n of the state space.

3.2 Model reduction using the cross-Gramian

If we consider again the LTI system (10), then the task in model reduction is to find another LTI system

    d x̂(t)/dt = Â x̂(t) + B̂ u(t),    t > 0,    x̂(0) = x̂_0,
    ŷ(t) = Ĉ x̂(t),                  t ≥ 0,                    (12)

with Â ∈ R^{r×r}, B̂ ∈ R^{r×m}, Ĉ ∈ R^{p×r}, so that the state-space dimension r of the system (12) is much smaller than n and the output error y(t) - ŷ(t), obtained by applying the same input function u(t) to both systems, is small. One of the classical approaches to model reduction is balanced truncation; see, e.g., [4, 24, 29] and the references therein. This requires the solution of the two Lyapunov equations

    A W_c + W_c A^T + B B^T = 0,    A^T W_o + W_o A + C^T C = 0.    (13)

Balanced truncation model reduction methods are then based on invariant or singular subspaces of the matrix product W_c W_o or its square root. In some situations, the product W_c W_o equals the square of the solution of the Sylvester equation

    A X + X A + B C = 0.    (14)

The solution X of (14) is called the cross-Gramian of the system (10). Of course, for (14) to be well-defined, the system must be square, i.e., p = m. Then we have X^2 = W_c W_o if

- the system is symmetric, which is trivially the case if A = A^T and C = B^T (in that case, both equations in (13) equal (14)) [15];
- the system is a single-input/single-output (SISO) system, i.e., p = m = 1 [16].

In both cases, instead of solving (13) it is more efficient to use (14). The new algorithm presented in the previous section is ideally suited for this purpose, particularly in the SISO case, as p = m = 1 is the case yielding the highest efficiency. Moreover, the B_k-iterates in (6) need not be computed as they equal the A_k's, which further reduces the computational cost of this approach significantly. Also note that the cross-Gramian carries information about the LTI system and its internally balanced realization even if X^2 is not the product of the controllability and observability Gramians, and it can still be used for model reduction; see [16, 2].
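As a small illustration of the SISO case (not from the paper; it uses SciPy's dense Sylvester and Lyapunov solvers instead of the sign function method, and a random stable system rather than one of the benchmark examples of Section 4), the following sketch computes the cross-Gramian from (14) and checks the relation X^2 = W_c W_o:

```python
import numpy as np
from scipy.linalg import solve_sylvester, solve_continuous_lyapunov

rng = np.random.default_rng(1)
n = 8
A = rng.standard_normal((n, n)); A -= (abs(np.linalg.eigvals(A).real).max() + 1) * np.eye(n)
B = rng.standard_normal((n, 1))   # single input
C = rng.standard_normal((1, n))   # single output

# Cross-Gramian: A X + X A + B C = 0, i.e., equation (14) is one Sylvester solve.
X = solve_sylvester(A, A, -B @ C)

# Controllability and observability Gramians from the Lyapunov equations (13).
Wc = solve_continuous_lyapunov(A, -B @ B.T)
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)

# For SISO systems X^2 = Wc Wo [16]; the moduli |eig(X)| are the Hankel singular values,
# which indicate how far the state-space dimension can be reduced.
print(np.linalg.norm(X @ X - Wc @ Wo) / np.linalg.norm(Wc @ Wo))
print(np.sort(np.abs(np.linalg.eigvals(X)))[::-1])
```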

4 Numerical Results

All the experiments presented in this section were performed on an Intel Pentium M processor at 1.4 GHz with 512 MBytes of RAM, using Matlab Version 6.5 (R13) and the Matlab Control Toolbox. Matlab uses IEEE double-precision floating-point arithmetic with machine precision ε ≈ 2.2 × 10^{-16}.

Example 4.1 In this example we evaluate the accuracy and computational performance of the factorized Sylvester solver and the Bartels-Stewart method as implemented in the Matlab Control Toolbox function lyap. We construct nonsymmetric stable matrices A, B ∈ R^{n×n} with real eigenvalues λ_j equally distributed between -1 and -1/n as follows:

    A = U^T ( diag(λ_1, ..., λ_n) + e_1 e_n^T ) U,    B = V^T ( diag(λ_1, ..., λ_n) + e_1 e_n^T ) V,

where U, V ∈ R^{n×n} are the orthogonal factors of the QR decompositions of random n × n matrices. The matrices F ∈ R^{n×1} and G ∈ R^{1×n} have normally distributed random entries. Figure 1 shows the residuals and execution times obtained by the two methods. The accuracy is comparable, while the execution time of the new method is clearly superior.

Figure 1: Example 4.1, residuals (left) and performance (measured by CPU time, right) of the factorized Sylvester solver and the Bartels-Stewart method for solving a Sylvester equation with factorized right-hand side.
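The test matrices of Example 4.1 can be generated along the following lines (a sketch, not the original Matlab code; the eigenvalue interval [-1, -1/n] follows the reconstruction above):

```python
import numpy as np

def example_4_1_matrix(n, rng):
    """Nonsymmetric stable test matrix U^T (diag(lambda) + e_1 e_n^T) U."""
    lam = -np.linspace(1.0 / n, 1.0, n)               # eigenvalues in [-1, -1/n]
    D = np.diag(lam)
    D[0, -1] = 1.0                                     # add e_1 e_n^T (nonsymmetric part)
    U, _ = np.linalg.qr(rng.standard_normal((n, n)))   # orthogonal factor of a random matrix
    return U.T @ D @ U

rng = np.random.default_rng(2)
n = 200
A, B = example_4_1_matrix(n, rng), example_4_1_matrix(n, rng)
F, G = rng.standard_normal((n, 1)), rng.standard_normal((1, n))
# X = ...  (solve A X + X B + F G = 0, e.g. with the coupled or factorized iteration)
# residual = np.linalg.norm(A @ X + X @ B + F @ G, "fro")   # quantity reported in Figure 1
```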

Example 4.2 Here we test the factorized sign function iteration for the cross-Gramian of a SISO system resulting from the semi-discretization of a control problem for a 1D heat equation. This is a test example frequently encountered in the literature [1, 10, 21, 26] and an appropriate test case here, since the numerical rank of the Gramians of the system is low. Execution times and residuals for the new algorithm are compared to the Matlab solver lyap for several system orders n. Again, the obtained results are very promising.

Figure 2: Example 4.2, residuals (left) and performance (measured by CPU time, right) of the factorized Sylvester solver and the Bartels-Stewart method for computing the cross-Gramian.

5 Conclusions

We have proposed a new method for solving stable Sylvester equations with right-hand side given in factored form, as they arise in model reduction problems as well as in observer design. The obtained numerical accuracy as well as the execution time of the new method appear to be favorable when compared to standard solvers. It remains an open problem to give a theoretical foundation justifying the good accuracy, in particular when using the factorized form of the solution.

References

[1] J. Abels and P. Benner. CAREX - a collection of benchmark examples for continuous-time algebraic Riccati equations (version 2.0). SLICOT Working Note, November 1999. Available from the SLICOT web site.

[2] R.W. Aldhaheri. Model order reduction via real Schur-form decomposition. Internat. J. Control, 53(3).

[3] F.A. Aliev and V.B. Larin. Optimization of Linear Control Systems: Analytical Methods and Computational Algorithms, volume 8 of Stability and Control: Theory, Methods and Applications. Gordon and Breach.

[4] A.C. Antoulas. Lectures on the Approximation of Large-Scale Dynamical Systems. SIAM Publications, Philadelphia, PA, to appear.

[5] R.H. Bartels and G.W. Stewart. Solution of the matrix equation AX + XB = C: Algorithm 432. Comm. ACM, 15, 1972.

[6] A.N. Beavers and E.D. Denman. A new solution method for the Lyapunov matrix equations. SIAM J. Appl. Math., 29, 1975.

[7] P. Benner and E.S. Quintana-Ortí. Solving stable generalized Lyapunov equations with the matrix sign function. Numer. Algorithms, 20(1):75-100, 1999.

[8] R. Byers. Solving the algebraic Riccati equation with the matrix sign function. Linear Algebra Appl., 85, 1987.

[9] D. Calvetti and L. Reichel. Application of ADI iterative methods to the restoration of noisy images. SIAM J. Matrix Anal. Appl., 17, 1996.

[10] Y. Chahlaoui and P. Van Dooren. A collection of benchmark examples for model reduction of linear time invariant dynamical systems. SLICOT Working Note, February 2002. Available from the SLICOT web site.

[11] T. Chan. Rank revealing QR factorizations. Linear Algebra Appl., 88/89:67-82, 1987.

[12] L. Dieci, M.R. Osborne, and R.D. Russell. A Riccati transformation method for solving linear BVPs. I: Theoretical aspects. SIAM J. Numer. Anal., 25(5), 1988.

[13] W.H. Enright. Improving the efficiency of matrix operations in the numerical solution of stiff ordinary differential equations. ACM Trans. Math. Softw., 4, 1978.

[14] M.A. Epton. Methods for the solution of AXD - BXC = E and its application in the numerical solution of implicit ordinary differential equations. BIT, 20, 1980.

[15] K.V. Fernando and H. Nicholson. On a fundamental property of the cross-Gramian matrix. IEEE Trans. Circuits Syst., CAS-31(5), 1984.

[16] K.V. Fernando and H. Nicholson. On the structure of balanced and other principal representations of linear systems. IEEE Trans. Automat. Control, AC-28(2), 1983.

[17] G.H. Golub, S. Nash, and C.F. Van Loan. A Hessenberg-Schur method for the problem AX + XB = C. IEEE Trans. Automat. Control, AC-24, 1979.

[18] G.H. Golub and C.F. Van Loan. Matrix Computations. Johns Hopkins University Press, Baltimore, third edition, 1996.

[19] W.D. Hoskins, D.S. Meek, and D.J. Walton. The numerical solution of the matrix equation XA + AY = F. BIT, 17, 1977.

[20] D.Y. Hu and L. Reichel. Krylov-subspace methods for the Sylvester equation. Linear Algebra Appl., 172, 1992.

[21] D. Kressner, V. Mehrmann, and T. Penzl. CTDSX - a collection of benchmark examples for state-space realizations of time-invariant continuous-time systems. SLICOT Working Note, November 1998. Available from the SLICOT web site.

[22] P. Lancaster and M. Tismenetsky. The Theory of Matrices. Academic Press, Orlando, 2nd edition, 1985.

[23] D.G. Luenberger. Observing the state of a linear system. IEEE Trans. Mil. Electron., MIL-8:74-80, 1964.

[24] G. Obinata and B.D.O. Anderson. Model Reduction for Control System Design. Communications and Control Engineering Series. Springer-Verlag, London, UK.

[25] J.D. Roberts. Linear model reduction and solution of the algebraic Riccati equation by use of the sign function. Internat. J. Control, 32, 1980. (Reprint of Technical Report No. TR-13, CUED/B-Control, Cambridge University, Engineering Department, 1971.)

[26] I.G. Rosen and C. Wang. A multilevel technique for the approximate solution of operator Lyapunov and algebraic Riccati equations. SIAM J. Numer. Anal., 32(2), 1995.

[27] V. Sima. Algorithms for Linear-Quadratic Optimization, volume 200 of Pure and Applied Mathematics. Marcel Dekker, Inc., New York, NY, 1996.

[28] E.L. Wachspress. Iterative solution of the Lyapunov matrix equation. Appl. Math. Letters, 1:87-90, 1988.

[29] K. Zhou, J.C. Doyle, and K. Glover. Robust and Optimal Control. Prentice-Hall, Upper Saddle River, NJ, 1996.
