Approaches to the Numerical Solution of High-Dimensional Lyapunov and Sylvester Equations (Ansätze zur numerischen Lösung hochdimensionaler Lyapunov- und Sylvestergleichungen). Peter Benner. Mathematik in Industrie und Technik, Fakultät für Mathematik, TU Chemnitz


1 Approaches to the Numerical Solution of High-Dimensional Lyapunov and Sylvester Equations (Ansätze zur numerischen Lösung hochdimensionaler Lyapunov- und Sylvestergleichungen). Peter Benner, Mathematik in Industrie und Technik, Fakultät für Mathematik, TU Chemnitz. benner@mathematik.tu-chemnitz.de. Hagen, October 28, 2004

2 Outline. Linear matrix equations. Applications. Direct methods: Hessenberg-Schur, Bartels-Stewart, Hammarling. Sign function method. Low-rank approximation. Large, sparse Lyapunov equations: ADI and Smith iterations, H-matrix sign function method. Conclusions.

3 Linear Matrix Equations. Equations without symmetry: Sylvester equation AX + XB = W and discrete Sylvester equation AXB − X = W, with data A ∈ R^{n×n}, B ∈ R^{m×m}, W ∈ R^{n×m} and unknown X ∈ R^{n×m}. Equations with symmetry: Lyapunov equation AX + XA^T = W and Stein equation (discrete Lyapunov equation) AXA^T − X = W, with data A ∈ R^{n×n}, W = W^T ∈ R^{n×n} and unknown X ∈ R^{n×n}. Here: focus on Sylvester and Lyapunov equations; analogous results and methods exist for the discrete versions.

4-7 Linear Matrix Equations: Theoretical Background. Using the Kronecker (tensor) product, AX + XB = W is equivalent to ((I_m ⊗ A) + (B^T ⊗ I_n)) vec(X) = vec(W). Hence, the Sylvester equation has a unique solution ⟺ M := (I_m ⊗ A) + (B^T ⊗ I_n) is invertible ⟺ 0 ∉ σ(M) = σ((I_m ⊗ A) + (B^T ⊗ I_n)) = {λ_j + µ_k : λ_j ∈ σ(A), µ_k ∈ σ(B)} ⟺ σ(A) ∩ σ(−B) = ∅.
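
A minimal Matlab sketch of this vec/Kronecker reformulation (illustration only; the sizes and random data are placeholders, and the next slide explains why this approach is impractical):

    % Solve AX + XB = W via the Kronecker formulation.
    n = 50; m = 40;                          % keep the sizes small!
    A = randn(n); B = randn(m); W = randn(n, m);
    M = kron(eye(m), A) + kron(B.', eye(n)); % (I_m (x) A) + (B^T (x) I_n)
    x = M \ W(:);                            % W(:) is vec(W)
    X = reshape(x, n, m);
    norm(A*X + X*B - W, 'fro')               % residual check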

8 Linear Matrix Equations: Complexity Issues. Solving the Sylvester equation AX + XB = W via the linear system ((I_m ⊗ A) + (B^T ⊗ I_n)) vec(X) = vec(W) requires an LU factorization of an nm × nm matrix, i.e., for n ≈ m a complexity of O(n⁶), and the storage of nm unknowns, i.e., for n ≈ m an n² storage requirement for X. Example: for n = m = 1000, Gaussian elimination on a Pentium IV (2 GHz) would take years.

9 Applications. Stability analysis of linear dynamical systems (Lyapunov's first method): ẋ = Ax is asymptotically (exponentially, Lyapunov) stable ⟺ AP + PA^T + I_n = 0 has a solution P > 0; V(x) := x^T P x is then a Lyapunov function for ẋ = Ax. Feedback stabilization for ẋ = Ax + Bu. Model reduction based on balancing and truncation. Block-diagonalization of matrices, decoupling techniques for ordinary and partial differential equations. Image restoration and registration. Linear vibration analysis and damper optimization. ...

10 Applications: Model Reduction. Given a linear (control) system ẋ(t) = Ax(t) + Bu(t), y(t) = Cx(t), x(t) ∈ R^n, with A stable (σ(A) ⊂ C⁻), find a reduced-order model x̃'(t) = Ãx̃(t) + B̃u(t), ỹ(t) = C̃x̃(t), x̃(t) ∈ R^ℓ of degree ℓ ≪ n such that ‖y − ỹ‖ is small. Balanced truncation: compute (Ã, B̃, C̃) = (ZAT, ZB, CT), where Z := Σ₁^{−1/2} V₁^T R and T := S^T U₁ Σ₁^{−1/2} are defined via the SVD SR^T = [U₁ U₂] [Σ₁ 0; 0 Σ₂] [V₁^T; V₂^T], and S, R satisfy the controllability and observability Lyapunov equations AS^T S + S^T S A^T + BB^T = 0 and A^T R^T R + R^T R A + C^T C = 0.
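
A small Matlab sketch of this square-root balanced truncation step, assuming the Gramian factors S, R from the two Lyapunov equations and a reduced order l are already given:

    % Square-root balanced truncation from Gramian factors (P = S'*S, Q = R'*R).
    [U, Sig, V] = svd(S*R');              % SVD of S*R^T
    U1 = U(:, 1:l); V1 = V(:, 1:l);
    s1 = diag(Sig); s1 = s1(1:l);         % dominant Hankel singular values
    Z = diag(s1.^(-1/2)) * V1' * R;       % Z := Sigma_1^{-1/2} V_1^T R
    T = S' * U1 * diag(s1.^(-1/2));       % T := S^T U_1 Sigma_1^{-1/2}
    Ar = Z*A*T; Br = Z*B; Cr = C*T;       % reduced-order model (A~, B~, C~)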

11 Direct Methods. Bartels-Stewart method for Sylvester and Lyapunov equations; Hessenberg-Schur method for Sylvester equations; Hammarling's method for stable Lyapunov equations AX + XA^T + GG^T = 0. All are based on the fact that if A and B^T are in Schur form, then M = (I_m ⊗ A) + (B^T ⊗ I_n) is block upper triangular, hence Mx = b can be solved by back-substitution. A clever (recursive) implementation of the back-substitution process requires nm(n + m) flops. For Sylvester equations, B in Hessenberg form is enough (Hessenberg-Schur method). Hammarling's method modifies the recursive process to compute the Cholesky factor Y of X directly. All methods require the Schur decomposition of A and the Schur or Hessenberg decomposition of B, i.e., the QR algorithm, which requires 25n³ flops for a Schur decomposition. Not feasible for large-scale problems (n > 1000).
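
In Matlab, the Bartels-Stewart approach is exposed through the Control System Toolbox function lyap (slsylv, used in the comparison below, comes from SLICOT); a usage sketch:

    X = lyap(A, W);      % solves A*X + X*A' + W = 0   (Lyapunov, W symmetric)
    X = lyap(A, B, W);   % solves A*X + X*B  + W = 0   (Sylvester)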

12 Sign Function Method: The Sign Function Method [Roberts 71]. For Z ∈ R^{n×n} with σ(Z) ∩ iR = ∅ and Jordan canonical form Z = S [J⁻ 0; 0 J⁺] S⁻¹, define sign(Z) := S [−I_k 0; 0 I_{n−k}] S⁻¹ (J± = Jordan blocks corresponding to σ(Z) ∩ C±). sign(Z) is a square root of I_n ⇒ use Newton's method to compute it: Z₀ ← Z, Z_{j+1} ← (1/2)(c_j Z_j + (1/c_j) Z_j⁻¹), j = 0, 1, 2, ... ⇒ sign(Z) = lim_{j→∞} Z_j. (c_j > 0 is a scaling parameter for convergence acceleration and rounding error minimization.)
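
A minimal Matlab sketch of the scaled Newton iteration; determinantal scaling is used here as one common choice for c_j (an assumption, the slide leaves the scaling open):

    % Newton iteration for sign(Z) with determinantal scaling.
    Zj = Z;
    for j = 1:100
        cj = abs(det(Zj))^(-1/size(Zj, 1));  % scaling parameter c_j
        Zn = 0.5*(cj*Zj + inv(Zj)/cj);       % Z_{j+1} = (c_j Z_j + Z_j^{-1}/c_j)/2
        if norm(Zn - Zj, 'fro') <= 1e-10*norm(Zn, 'fro'), Zj = Zn; break, end
        Zj = Zn;
    end
    signZ = Zj;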

13 Sign Function Method: Solving Sylvester Equations with the Sign Function Method. Consider the Sylvester equation AX + XB + W = 0, A, B stable. As sign(S⁻¹ZS) = S⁻¹ sign(Z) S and [I_n −X; 0 I_m] [A W; 0 −B] [I_n X; 0 I_m] = [A 0; 0 −B], it follows that sign([A W; 0 −B]) = [−I_n 2X; 0 I_m]. The solution of the Sylvester (Lyapunov) equation can be read off sign([A W; 0 −B]).

14 Sign Function Method: The Sign Function Method for Sylvester Equations. Apply the sign function Newton iteration Z_{j+1} ← (Z_j + Z_j⁻¹)/2 to Z = [A W; 0 −B] ⇒ sign(Z) = lim_{j→∞} Z_j = [−I_n 2X; 0 I_m]. The Newton iteration (with scaling) is equivalent to: A₀ ← A, B₀ ← B, W₀ ← W; for j = 0, 1, 2, ...: A_{j+1} ← (1/(2c_j))(A_j + c_j² A_j⁻¹), B_{j+1} ← (1/(2c_j))(B_j + c_j² B_j⁻¹), W_{j+1} ← (1/(2c_j))(W_j + c_j² A_j⁻¹ W_j B_j⁻¹). ⇒ X = (1/2) lim_{j→∞} W_j. Requires explicit inverses ⇒ not feasible for large-scale problems (n > 1000).
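
The coupled iteration in an unscaled Matlab sketch (c_j = 1; explicit inverses as on the slide, so only for moderate n):

    % Sign-function sketch for A*X + X*B + W = 0, A and B stable.
    Aj = A; Bj = B; Wj = W; n = size(A, 1);
    for j = 1:100
        Ai = inv(Aj); Bi = inv(Bj);
        Wj = 0.5*(Wj + Ai*Wj*Bi);   % must use the old inverses A_j^{-1}, B_j^{-1}
        Aj = 0.5*(Aj + Ai);
        Bj = 0.5*(Bj + Bi);
        if norm(Aj + eye(n), 'fro') <= 1e-10*n, break, end  % A_j -> sign(A) = -I
    end
    X = 0.5*Wj;                     % X = (1/2) lim W_j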

15 Direct Methods vs. Sign Function: Comparison. Compare the Bartels-Stewart method as implemented in the Matlab Control Toolbox function lyap, the Hessenberg-Schur method as implemented in the SLICOT-based Matlab function slsylv, and the sign function method, for random Sylvester equations. [Figures: relative residuals and CPU times (sec) versus problem size n = m for the Bartels-Stewart, Hessenberg-Schur, and sign function methods.]

16 Low-Rank Approximation. Consider the stable Lyapunov equation A^T X + XA + C^T C = 0, A stable, C ∈ R^{p×n}. ⇒ unique solution X ≥ 0 ⇒ X = Y^T Y for Y ∈ R^{n_Y×n}, n_Y ≤ n. In almost all applications, p ≪ n. Often, X has low (numerical) rank. Example: random matrix A, stability margin 0.055, n = 500, m = 10, numerical rank = 31. [Figure: eigenvalues λ_k of X in decreasing order; they fall below machine precision (eps) around k = 31.]

17 Low-Rank Approximation: Computation of Y. Standard approach: Y ∈ R^{n×n}, Cholesky factor of X, n_Y = n; can be computed by Hammarling's method. Approach here: Y ∈ R^{rank(X)×n}, full-rank factor of X, n_Y ≪ n. Advantages: more reliable if the Cholesky factor is numerically singular; more efficient if rank(X) ≪ n (in particular for subsequent computations, as in balanced truncation model reduction).

18 Factorized Sign Function Method: Sign Function Method Again. Want the factor Y of the solution of A^T X + XA + C^T C = 0. For W₀ = C₀^T C₀ := C^T C, C ∈ R^{p×n}, obtain W_{j+1} = (1/(2c_j))(W_j + c_j² A_j^{−T} W_j A_j⁻¹) = ((1/√(2c_j)) [C_j; c_j C_j A_j⁻¹])^T ((1/√(2c_j)) [C_j; c_j C_j A_j⁻¹]). ⇒ re-write the W_j iteration: C₀ := C, C_{j+1} := (1/√(2c_j)) [C_j; c_j C_j A_j⁻¹]. Problem: C_j ∈ R^{p_j×n} ⇒ C_{j+1} ∈ R^{2p_j×n}, i.e., the necessary workspace doubles in each iteration. Two approaches to limit the workspace...

19 Factorized Sign Function Method. (a) Compute the Cholesky factor Y_c of X. Require p_j ≤ n: for j > log₂(n/p), compute the QR factorization (1/√(2c_j)) [C_j; c_j C_j A_j⁻¹] = U_j [Ĉ_j; 0] with Ĉ_j ∈ R^{n×n}. ⇒ W_j = Ĉ_j^T Ĉ_j, Y_c = (1/√2) lim_{j→∞} Ĉ_j. (b) Compute a full-rank factor Y_f of X. In every step, compute a rank-revealing QR factorization (1/√(2c_j)) [C_j; c_j C_j A_j⁻¹] = U_{j+1} [R_{j+1} T_{j+1}; 0 S_{j+1}] Π_{j+1}, where R_{j+1} ∈ R^{p_{j+1}×p_{j+1}} and p_{j+1} = rank([C_j; c_j C_j A_j⁻¹]). Then C_{j+1} := [R_{j+1} T_{j+1}] Π_{j+1}, W_{j+1} = C_{j+1}^T C_{j+1}, Y_f = (1/√2) lim_{j→∞} C_j (see the sketch below).
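
A Matlab sketch of the full-rank-factor variant (unscaled, c_j = 1), with column-pivoted QR as the rank-revealing factorization and a simple diagonal-based rank decision (the threshold is an assumption):

    % Factorized sign iteration for A'*X + X*A + C'*C = 0, X ~ Y'*Y.
    Aj = A; Cj = C; n = size(A, 1);
    for j = 1:100
        Ai = inv(Aj);
        Cj = [Cj; Cj*Ai] / sqrt(2);                % stacked factor C_{j+1}
        [~, R, e] = qr(Cj, 0);                     % column-pivoted economy QR
        r = sum(abs(diag(R)) > eps*abs(R(1, 1)));  % numerical rank decision
        ip = zeros(1, n); ip(e) = 1:n;             % invert the permutation
        Cj = R(1:r, ip);                           % Cj'*Cj preserved up to truncation
        Aj = 0.5*(Aj + Ai);
        if norm(Aj + eye(n), 'fro') <= 1e-10*n, break, end
    end
    Y = Cj / sqrt(2);                              % full-rank factor: X ~ Y'*Y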

20 Factorized Sign Function Method: Parallelization. The Newton iteration for the sign function is easy to parallelize: it needs only basic linear algebra (systems of linear equations, matrix inverse, matrix addition, matrix product). Use MPI and BLACS for communication, PBLAS and ScaLAPACK for numerical linear algebra ⇒ portable code. Development of the software library PLiC. Testing on a PC cluster (Linux) with 32 Intel Xeon 2.4 GHz processors (workspace per processor: 1 GByte; Myrinet switch, bandwidth 1.4 Gbit/sec) shows good efficiency and high scalability.

21 Factorized Solution of Sylvester Equations. Let A ∈ R^{n×n}, B ∈ R^{m×m} be stable, F ∈ R^{n×p}, G ∈ R^{p×m}, and consider AX + XB + FG = 0. Idea: often rank(X) ≪ min{n, m}, or X ≈ X̃ with rank(X̃) ≪ n. rank(X) = r ⇒ there exist Y ∈ R^{n×r}, Z ∈ R^{r×m} with X = YZ (full-rank factorization, FRF). Computing an FRF with standard methods (Bartels-Stewart method, Hessenberg-Schur method) would require computing X ∈ R^{n×m} and then using an SVD or SVD-like factorization. Here: compute the FRF directly from F, G using the sign function iteration.

22 Factorized Solution of Sylvester Equations: Computing Full-Rank Factors of the Solution. Re-write the (unscaled) W_j iteration converging to 2X as F_{j+1} G_{j+1} = W_{j+1} = (1/2)(W_j + A_j⁻¹ W_j B_j⁻¹) = (1/2)(F_j G_j + (A_j⁻¹ F_j)(G_j B_j⁻¹)), yielding F₀ := F, F_{j+1} = (1/√2) [F_j, A_j⁻¹ F_j] and G₀ := G, G_{j+1} = (1/√2) [G_j; G_j B_j⁻¹]. Again: F_j ∈ R^{n×p_j} ⇒ F_{j+1} ∈ R^{n×2p_j}, i.e., the workspace doubles per iteration. In order to limit the workspace during the iteration, compute a full-rank factorization in each iteration step directly from F_{j+1}, G_{j+1} (see the sketch below and the next slide).
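
A Matlab sketch of this factored iteration without the rank truncation of the next slide, so the factors still double per step:

    % Factored sign iteration sketch for A*X + X*B + F*G = 0, X ~ Y*Z.
    Aj = A; Bj = B; Fj = F; Gj = G;
    for j = 1:20
        Ai = inv(Aj); Bi = inv(Bj);
        Fj = [Fj, Ai*Fj] / sqrt(2);   % n x 2p_j
        Gj = [Gj; Gj*Bi] / sqrt(2);   % 2p_j x m
        Aj = 0.5*(Aj + Ai);
        Bj = 0.5*(Bj + Bi);
    end
    Y = Fj / sqrt(2); Z = Gj / sqrt(2);   % X ~ Y*Z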

23 Factorized Solution of Sylvester Equations: Keeping the Rank Small.
ALGORITHM:
1. Compute a rank-revealing QR factorization (RRQR) [G_j; G_j B_j⁻¹] =: U [R₁; 0] Π_G, with U orthogonal, Π_G a permutation matrix, and R₁ ∈ R^{r×m} upper triangular of full row rank.
2. Compute an RRQR [F_j, A_j⁻¹ F_j] U = V [T₁; 0] Π_F, with V orthogonal, Π_F a permutation matrix, and T₁ ∈ R^{t×2p_j} upper triangular of full row rank.
3. Partition V = [V₁, V₂], V₁ ∈ R^{n×t}, and compute [T₁₁, T₁₂] := T₁ Π_F, where T₁₁ ∈ R^{t×r}.
4. Set F_{j+1} := V₁ T₁₁, G_{j+1} := R₁ Π_G.
THEOREM: W_{j+1} = F_{j+1} G_{j+1}; Y := (1/√2) lim_{j→∞} F_j and Z := (1/√2) lim_{j→∞} G_j exist, and X = YZ is an FRF.

24 Factorized Solution of Sylvester Equations: Numerical Examples. n = 50 : 50 : 2000 and n = 500 : 500 : 4000, m = 1, sep(A, B) = 2/n. Compare the sign function based method and the SLICOT-based implementation of the Hessenberg-Schur method. Use Matlab Release 14 on a Xeon 3 GHz workstation with 8 GB RAM. For n = 4000, Y, Z^T ∈ R^{4000×r} with r ≪ n, i.e., 1.4 MB instead of 122 MB of storage required for the solution. [Figures: residuals ‖AX̃ + X̃B + FG‖_F versus n for the sign function method and Matlab/SLICOT.]

25 Factorized Solution of Sylvester Equations: Timings. [Figures: CPU times and execution times (sec) versus n for the sign function method and Matlab/SLICOT.]

26 Methods for Large, Sparse Lyapunov Equations. Apply adapted splitting methods (Gauss-Seidel, SSOR, etc.). Apply adapted iterative solvers (GMRES, TFQMR, etc.). X ≈ V X̃ V^T for the solution X̃ of the projected Lyapunov equation (V^T A V) X̃ + X̃ (V^T A V)^T = V^T W V, V ∈ R^{n×ℓ}; can be computed via Krylov subspace methods. None of these approaches is efficient in general. Idea: try to compute a low-rank approximation to X via (an approximation to) the full-rank factor Y.
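
A Matlab sketch of the projection approach, assuming an orthonormal basis V of a suitable (block Krylov) subspace is given and using lyap for the small projected equation (sign convention as on the slide):

    % Galerkin projection: X ~ V*Xt*V' with Xt from the projected equation.
    Ap = V'*A*V;                 % l x l projected matrix
    Xt = lyap(Ap, -(V'*W*V));    % solves Ap*Xt + Xt*Ap' = V'*W*V
    X  = V*Xt*V';                % low-rank approximation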

27 ADI Method: ADI Method for Lyapunov Equations. For A ∈ R^{n×n} stable and B ∈ R^{n×w} (w ≪ n), consider the Lyapunov equation A^T X + XA = −BB^T. ADI iteration [Wachspress 88]: (A^T + p_k I) X_{(k−1)/2} = −BB^T − X_{k−1}(A − p_k I), (A^T + p̄_k I) X_k^T = −BB^T − X_{(k−1)/2}^T (A − p̄_k I), with parameters p_k ∈ C⁻ and p_{k+1} = p̄_k if p_k ∉ R. For X₀ = 0 and a proper choice of the p_k: lim_{k→∞} X_k = X superlinearly. Re-formulation using X_k = Y_k Y_k^T yields an iteration for Y_k...

28 ADI Method: Factored ADI Iteration [Penzl 97, Li/Wang/White 99, B./Li/Penzl]. Set X_k = Y_k Y_k^T; some algebraic manipulations yield: V₁ ← √(−2 Re(p₁)) (A^T + p₁ I)⁻¹ B, Y₁ ← V₁; FOR k = 2, 3, ...: V_k ← √(Re(p_k)/Re(p_{k−1})) (I − (p_k + p̄_{k−1})(A^T + p_k I)⁻¹) V_{k−1}, Y_k ← [Y_{k−1} V_k]. ⇒ Y_{kmax} = [V₁ ... V_{kmax}], V_k ∈ C^{n×w}, and Y_{kmax} Y_{kmax}^T ≈ X. Note: an implementation in real arithmetic is possible by combining two steps.
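
A Matlab sketch of this iteration for real negative shifts p(1..kmax), which are assumed to be given in advance (shift selection is the hard part, cf. the conclusions):

    % Low-rank ADI sketch for A'*X + X*A = -B*B', real shifts p(k) < 0.
    n = size(A, 1); I = speye(n);
    V = sqrt(-2*p(1)) * ((A' + p(1)*I) \ B);
    Y = V;
    for k = 2:kmax
        V = sqrt(p(k)/p(k-1)) * (V - (p(k) + p(k-1))*((A' + p(k)*I) \ V));
        Y = [Y, V];               % after k steps: X ~ Y*Y'
    end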

29 ADI Method: Low-Rank ADI in Action. Numerical example: solve AX + XA^T + BB^T = 0, where A comes from a finite difference discretization of a 2D heat equation with boundary control; n = 10,000, m = 1. Use the lyapack implementation [Penzl 99]. Need kmax = 57 iterations, i.e., Y_{kmax} ∈ R^{10,000×57}; ca. 25 sec. on a Centrino 1.4 GHz notebook. [Figure: convergence history (normalized residual norm versus iteration steps) of the low-rank ADI iteration.]

30 Smith Iteration. For Stein equations F X F^H − X = G (1) with ρ(F) < 1, the fixed point iteration X_{k+1} = F X_k F^H − G, k = 0, 1, ... (X₀ = 0), converges to the unique solution X. Smith iteration for Lyapunov equations: if X is a solution of AX + XA^T = W, then it is a solution of (1) with F = (I_n + µA)⁻¹(I_n − µ̄A), G = −2 Re(µ) (I_n + µA)⁻¹ W (I_n + µA)^{−H}. Choosing µ such that σ(F) ⊂ {|z| < 1}, we can compute X as the limit of X_{k+1} = (I_n + µA)⁻¹ ((I_n − µ̄A) X_k (I_n − µ̄A)^H + 2 Re(µ) W) (I_n + µA)^{−H}.
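
A Matlab sketch of this single-shift Smith iteration for AX + XA^T = W with a real shift µ < 0 (W = −BB^T in the setting of the ADI example above; the fixed shift and iteration count are placeholder choices):

    % Smith iteration sketch for A*X + X*A' = W, real shift mu < 0.
    n = size(A, 1); I = eye(n);
    F  = (I + mu*A) \ (I - mu*A);                   % Cayley transform, rho(F) < 1
    Gm = 2*mu * ((I + mu*A) \ W) / (I + mu*A)';     % equals -G for real mu
    X  = zeros(n);
    for k = 1:200
        X = F*X*F' + Gm;                            % X_{k+1} = F*X_k*F^H - G
    end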

31 Smith Iteration: Details and Extensions. If A is stable, any µ ∈ C⁻ can be used. Select µ in order to optimize the convergence rate (ρ(F)²). For σ(A) ⊂ R⁻, µ = −1/√(λ_max λ_min) is optimal [Varga 62, Rosen/Wang 95]. Convergence can be accelerated by changing the shift µ in each iteration; in order to limit the work needed for factorizations, use a finite number l of different shifts and apply them cyclically, and store and re-use the l different factorizations if workspace permits ⇒ (cyclic) Smith(l) method [Penzl 00]. A low-rank version is possible if W = BB^T; it is mathematically equivalent to low-rank ADI with the parameters applied cyclically [Penzl 00]. The number of columns in Y_k (X_k = Y_k Y_k^H) can be reduced by a column compression technique [Antoulas/Sorensen/Gugercin 03].

32 H-Matrix Sign Function Method: H-Matrix Implementation of the Sign Function Method. Recall the solution of the Lyapunov equation AX + XA^T + W = 0 with the sign function method: A₀ = A, W₀ = BB^T; A_{j+1} = (1/2)(A_j + A_j⁻¹), W_{j+1} = (1/2)(W_j + A_j^{−T} W_j A_j⁻¹). This involves the inversion, addition and multiplication of n × n matrices ⇒ complexity O(n³). Approximation of A in the H-matrix format and use of the formatted H-matrix arithmetic ⇒ complexity O(n log² n).

33 H-Matrix Sign Function Method: Hierarchical Matrices, a Short Introduction. Hierarchical (H-)matrices are a data-sparse approximation of large, dense matrices arising from the discretization of non-local integral operators occurring in BEM, or as inverses of FEM-discretized elliptic differential operators; they can also be used to represent FEM matrices directly. Properties of H-matrices: only few data are needed for the representation of the matrix; matrix-vector multiplication can be performed in almost linear complexity (O(n log n)); building sums, products, and inverses is of almost linear complexity.

34 H-Matrix Sign Function Method: Hierarchical Matrices, Construction. Consider matrices over a product index set I × I. Partition I × I by the H-tree T_{I×I}, where a problem-dependent admissibility condition is used to decide whether a block t × s ⊂ I × I allows for a low-rank approximation. Definition (hierarchical matrices): H(T_{I×I}, k) := {M ∈ R^{I×I} : rank(M|_{t×s}) ≤ k for all admissible leaves t × s of T_{I×I}}. Submatrices of M ∈ H(T_{I×I}, k) corresponding to inadmissible leaves are stored as dense blocks, whereas those corresponding to admissible leaves are stored in factorized form as rank-k matrices, the so-called Rk-format.

35 H-Matrix Sign Function Method: Example. [Figure: H-matrix block structure of the stiffness matrix of a 2D heat equation with distributed control and isolation boundary conditions; n = 1024, k = 4.]

36 H-Matrix Sign Function Method: Hierarchical Matrices, Formatted Arithmetic. H(T_{I×I}, k) is not a linear subspace of R^{I×I} ⇒ formatted arithmetic: projection of the sum, product and inverse into H(T_{I×I}, k). 1. Formatted addition (⊕) with complexity N_{H⊕H} = O(nk² log n) (for sparse T_{I×I}); corresponds to the best approximation in the Frobenius norm. 2. Formatted multiplication (⊙): N_{H⊙H} = O(nk² log² n) (under some technical assumptions on T_{I×I}). 3. Formatted inversion (Ĩnv): N_{H,Ĩnv} = O(nk² log² n) (under some technical assumptions on T_{I×I}).
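
On a single admissible block, formatted addition amounts to adding two Rk factorizations and re-truncating to rank k; a Matlab sketch of this best Frobenius-norm rank-k approximation (the block being stored as U*V'):

    % Truncated addition of two Rk blocks: U1*V1' + U2*V2' ~ U*V' with rank k.
    [QU, RU] = qr([U1, U2], 0);          % economy QR of the stacked factors
    [QV, RV] = qr([V1, V2], 0);
    [Us, S, Vs] = svd(RU*RV');           % small (2k x 2k) SVD
    k = size(U1, 2);
    U = QU * Us(:, 1:k) * S(1:k, 1:k);   % keep the k dominant singular terms
    V = QV * Vs(:, 1:k);                 % U*V' = best rank-k approximation of the sum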

37 H-Matrix Sign Function Method. Sign function iteration with formatted H-matrix arithmetic: Ã₀ = (A)_H, Ã_{j+1} = (1/2)(Ã_j ⊕ Ĩnv(Ã_j)). Accuracy control for the iterates: choose the blockwise rank k = O(log(1/δ) + log(1/ρ)) such that ‖(Ã_j ⊕ Ĩnv(Ã_j)) − (Ã_j + Ĩnv(Ã_j))‖₂ ≤ δ and ‖Ã_j⁻¹ − Ĩnv(Ã_j)‖₂ ≤ ρ. ⇒ forward error bound (assuming c_j(δ + ρ)‖A_j⁻¹‖₂ < 1 for all j): ‖Ã_j − A_j‖₂ ≤ c_j(δ + ρ), where c₀ = ‖Ã₀ − A‖₂ (δ + ρ)⁻¹ and c_{j+1} = (1/2)(c_j + c_j‖A_j⁻¹‖₂² / (1 − c_j(δ + ρ)‖A_j⁻¹‖₂)).

38 H-Matrix Sign Function Method: Numerical Performance. Solve AX + XA^T + BB^T = 0, A from an FEM discretization of a 2D heat equation with boundary control. [Table: accuracy ‖AX + XA^T + BB^T‖₂ / (2‖A‖₂‖X‖₂ + ‖BB^T‖₂) and rank r of the computed factor for increasing n.] For n = 262,144 (that is, 34 billion unknowns in X) we get r = 21: 5 MB for the solution instead of 64 GB! [Figures: storage requirements (KB) and computing time of the H-matrix based sign function versus n, compared with an n log²(n) reference curve.] (Results by U. Baur.)

39 Conclusions. Large-scale dense problems can be efficiently solved using a parallel implementation of the sign function method. Large-scale dense, but data-sparse, problems can be efficiently solved using an H-matrix implementation of the sign function method. The most promising methods for large-scale sparse problems are the low-rank Smith and ADI methods; the selection of acceleration shifts remains a tricky issue. Krylov subspace methods and splitting methods are not competitive in general. To do: H-matrix sign function for Sylvester equations; low-rank ADI method for Sylvester equations; low-rank Smith method for Sylvester equations.
