Matrix Equations and Bivariate Function Approximation
Matrix Equations and Bivariate Function Approximation
D. Kressner, ETH Zurich, Seminar for Applied Mathematics
Joint work with L. Grasedyck, C. Tobler, N. Truhar.
Manchester,
Sylvester matrix equation
The linear matrix equation
$$AX + XB = D \qquad \text{(SYLV)}$$
with real $n \times n$ coefficient matrices $A$, $B$, $D$. Applications:
- controllability/observability Gramians of linear systems ($B = A^T$, $D$ symmetric positive semidefinite)
- balanced truncation model reduction
- Newton methods for solving the algebraic Riccati equation
- stability analysis
- 2D PDEs with separable coefficients, ...
D. Kressner p. 2
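For moderate $n$, (SYLV) can be solved directly by the Bartels–Stewart algorithm. A minimal sketch (not from the talk) using SciPy's `solve_sylvester`, with hypothetical random stable coefficients:

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
n = 6
# Shifting by -2n pushes the spectra of A and B into the left half plane,
# so Lambda(A) and -Lambda(B) are disjoint and (SYLV) is uniquely solvable.
A = rng.standard_normal((n, n)) - 2 * n * np.eye(n)
B = rng.standard_normal((n, n)) - 2 * n * np.eye(n)
D = rng.standard_normal((n, n))

X = solve_sylvester(A, B, D)   # solves A X + X B = D (Bartels-Stewart)
residual = np.linalg.norm(A @ X + X @ B - D) / np.linalg.norm(D)
```

The residual comes out at machine-precision level for such well-conditioned data.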
Outline
- Connection to bivariate functions: $X \leftrightarrow 1/(\alpha+\beta)$
- Approximation to $X$ $\leftrightarrow$ approximation to $1/(\alpha+\beta)$
- Applications: singular value decay of $X$; convergence of Krylov subspace methods; convergence of extended Krylov subspace methods
- Extension to higher dimensions
Connection to bivariate functions
The diagonalizable case
Assume $A$, $B$ diagonalizable: $P^{-1}AP = \Lambda_A$, $Q^{-1}BQ = \Lambda_B$ with $\Lambda_A = \mathrm{diag}(\alpha_1,\dots,\alpha_m)$, $\Lambda_B = \mathrm{diag}(\beta_1,\dots,\beta_n)$. Then
$$AX + XB = D \iff \Lambda_A \tilde X + \tilde X \Lambda_B = P^{-1} D Q =: \tilde D, \qquad \tilde X = P^{-1} X Q,$$
with explicit solution
$$\tilde x_{ij} = \frac{\tilde d_{ij}}{\alpha_i + \beta_j}, \qquad \text{i.e.,} \quad \tilde X = \tilde D \circ \Big[ \frac{1}{\alpha_i + \beta_j} \Big]_{ij}.$$
$\tilde X$ is the Hadamard product of $\tilde D$ with the bivariate function $1/(\alpha+\beta)$ evaluated at $\Lambda(A) \times \Lambda(B)$.
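The entrywise formula translates directly into a dense solver for diagonalizable coefficients. A small NumPy sketch (illustrative only; the random matrices and sizes are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 5, 4
# Generic (hence diagonalizable) coefficients, shifted so that
# alpha_i + beta_j never vanishes.
A = rng.standard_normal((m, m)) - 10 * np.eye(m)
B = rng.standard_normal((n, n)) - 10 * np.eye(n)
D = rng.standard_normal((m, n))

alpha, P = np.linalg.eig(A)     # A = P diag(alpha) P^{-1}
beta, Q = np.linalg.eig(B)      # B = Q diag(beta) Q^{-1}

# Transformed right-hand side D~ = P^{-1} D Q, then Hadamard division
# by the Cauchy matrix [alpha_i + beta_j].
Dt = np.linalg.solve(P, D @ Q)
Xt = Dt / (alpha[:, None] + beta[None, :])
X = np.real(P @ Xt @ np.linalg.inv(Q))   # back-transform X = P X~ Q^{-1}

res = np.linalg.norm(A @ X + X @ B - D) / np.linalg.norm(D)
```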
Separable approximations to bivariate functions
A bivariate function of the form $f(\alpha)g(\beta)$ is called separable. $1/(\alpha+\beta)$ is not separable, but it might be well approximated by a short sum of separable functions:
$$\frac{1}{\alpha+\beta} \approx s_r(\alpha,\beta) := \sum_{k=1}^{r} f_k(\alpha) g_k(\beta).$$
Set $X_r = \sum_{k=1}^{r} f_k(A) D g_k(B)$; then
$$\tilde X_r := P^{-1} X_r Q = \tilde D \circ \big[ s_r(\alpha_i,\beta_j) \big]_{ij},$$
the Hadamard product of $\tilde D$ with $s_r(\alpha,\beta)$ evaluated at $\Lambda(A) \times \Lambda(B)$.
Approximation to $X$ $\leftrightarrow$ approximation to $1/(\alpha+\beta)$
Consider $AX + XB = D$. Let $\Lambda(A) \subseteq \Omega_A$, $\Lambda(B) \subseteq \Omega_B$ and define
$$X_r = \sum_{k=1}^r f_k(A) D g_k(B), \qquad s_r(\alpha,\beta) = \sum_{k=1}^r f_k(\alpha) g_k(\beta),$$
with functions $f_k: \Omega_A \to \mathbb{C}$, $g_k: \Omega_B \to \mathbb{C}$. Then
$$\|X - X_r\|_F \le \kappa(P)\,\kappa(Q)\, \Big\| \frac{1}{\alpha+\beta} - s_r(\alpha,\beta) \Big\|_{\infty,\,\Omega_A \times \Omega_B} \|D\|_F.$$
Idea of proof
If $A = \Lambda_A$, $B = \Lambda_B$ are diagonal, the solution of $\Lambda_A X + X \Lambda_B = D$ is
$$X = \begin{bmatrix} \frac{d_{11}}{\alpha_1+\beta_1} & \frac{d_{12}}{\alpha_1+\beta_2} & \cdots \\ \frac{d_{21}}{\alpha_2+\beta_1} & \frac{d_{22}}{\alpha_2+\beta_2} & \cdots \\ \vdots & \vdots & \ddots \end{bmatrix},$$
and hence
$$X - X_r = \begin{bmatrix} \frac{1}{\alpha_1+\beta_1} - \sum_k f_k(\alpha_1) g_k(\beta_1) & \frac{1}{\alpha_1+\beta_2} - \sum_k f_k(\alpha_1) g_k(\beta_2) & \cdots \\ \frac{1}{\alpha_2+\beta_1} - \sum_k f_k(\alpha_2) g_k(\beta_1) & \frac{1}{\alpha_2+\beta_2} - \sum_k f_k(\alpha_2) g_k(\beta_2) & \cdots \\ \vdots & \vdots & \ddots \end{bmatrix} \circ D.$$
The non-diagonalizable case
Consider contours $\Gamma_A, \Gamma_B \subset \mathbb{C}$ enclosing $\Lambda(A)$, $\Lambda(B)$ with $\Gamma_A \cap \Gamma_B = \emptyset$. Then
$$X = \frac{1}{4\pi^2} \int_{\Gamma_B} \int_{\Gamma_A} \frac{1}{\alpha+\beta}\, (\alpha I - A)^{-1} D (\beta I - B)^{-1} \, d\alpha \, d\beta$$
solves $AX + XB = D$. Similarly, for
$$X_r = \sum_{k=1}^r f_k(A) D g_k(B), \qquad s_r(\alpha,\beta) = \sum_{k=1}^r f_k(\alpha) g_k(\beta),$$
we have
$$X_r = \frac{1}{4\pi^2} \int_{\Gamma_B} \int_{\Gamma_A} s_r(\alpha,\beta)\, (\alpha I - A)^{-1} D (\beta I - B)^{-1} \, d\alpha \, d\beta.$$
The non-diagonalizable case
$$\|X - X_r\|_F \le \kappa(\Omega_A)\,\kappa(\Omega_B)\, \Big\| \frac{1}{\alpha+\beta} - s_r(\alpha,\beta) \Big\|_{\infty,\,\Omega_A \times \Omega_B} \|D\|_F.$$
Constants:
$$\kappa(\Omega_A) = \frac{1}{2\pi} \int_{\Gamma_A} \|(\alpha I - A)^{-1}\|_2 \, d\alpha, \qquad \kappa(\Omega_B) = \frac{1}{2\pi} \int_{\Gamma_B} \|(\beta I - B)^{-1}\|_2 \, d\beta.$$
See the discussion in [Beattie, Embree, Sorensen, SIAM Review 05] on $\kappa(\Omega_A)$ and $\kappa(\Omega_B)$.
General notion of bivariate matrix functions?
Consider two matrices $A$, $B$ and contours $\Gamma_A, \Gamma_B \subset \mathbb{C}$ enclosing $\Lambda(A)$, $\Lambda(B)$. For any function $f(\alpha,\beta)$ analytic on $\Gamma_A \times \Gamma_B$, one could define
$$f(A,B)[D] = \frac{1}{4\pi^2} \int_{\Gamma_B} \int_{\Gamma_A} f(\alpha,\beta)\, (\alpha I - A)^{-1} D (\beta I - B)^{-1} \, d\alpha \, d\beta$$
as the evaluation of $f$ at $(A,B)$ applied to $D$. Such an abstract framework might be of use in other applications (Stein equations, Fréchet derivatives of matrix functions).
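For diagonalizable $A$ and $B$ the contour integral collapses to a diagonalization formula, $f(A,B)[D] = P\,(F \circ (P^{-1} D Q))\,Q^{-1}$ with $F_{ij} = f(\alpha_i,\beta_j)$. A sketch of this evaluation (`bivariate_apply` is my own illustrative helper, not an API from the talk), checked on $f(\alpha,\beta) = 1/(\alpha+\beta)$:

```python
import numpy as np

def bivariate_apply(f, A, B, D):
    """Evaluate f(A,B)[D] for diagonalizable A, B by diagonalization:
    f(A,B)[D] = P (F o (P^{-1} D Q)) Q^{-1},  F_ij = f(alpha_i, beta_j)."""
    alpha, P = np.linalg.eig(A)
    beta, Q = np.linalg.eig(B)
    F = f(alpha[:, None], beta[None, :])
    return P @ (F * np.linalg.solve(P, D @ Q)) @ np.linalg.inv(Q)

rng = np.random.default_rng(2)
m, n = 5, 5
A = rng.standard_normal((m, m)) - 8 * np.eye(m)
B = rng.standard_normal((n, n)) - 8 * np.eye(n)
D = rng.standard_normal((m, n))

# For f(alpha, beta) = 1/(alpha+beta) this recovers the Sylvester solution.
X = np.real(bivariate_apply(lambda a, b: 1.0 / (a + b), A, B, D))
res = np.linalg.norm(A @ X + X @ B - D) / np.linalg.norm(D)
```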
Applications: singular value decay
Singular value decay
From the approximation result,
$$\|X - X_r\|_F \le \Big\| \frac{1}{\alpha+\beta} - s_r(\alpha,\beta) \Big\|_{\infty,\,\Omega_A \times \Omega_B} \|D\|_F$$
(the factors $\kappa(P)$, $\kappa(Q)$ equal one here, e.g. for normal $A$, $B$). If $D$ has rank one, then $X_r = \sum_{k=1}^r f_k(A) D g_k(B)$ has rank at most $r$, hence
$$\sigma_{r+1}(X) \le \Big\| \frac{1}{\alpha+\beta} - s_r(\alpha,\beta) \Big\|_{\infty,\,\Omega_A \times \Omega_B} \|D\|_2.$$
There is no restriction on the choice of the functions $f_k$, $g_k$.
Example: the symmetric case
Assume $A$, $B$ symmetric with eigenvalues contained in an interval $[\lambda_{\min}, \lambda_{\max}]$ with $\lambda_{\min} \le \lambda_{\max} < 0$. [Braess 86], [Braess/Hackbusch 05]: let
$$s_r(\alpha,\beta) = \sum_{k=1}^r \omega_k \exp(\gamma_k \alpha) \exp(\gamma_k \beta), \qquad \omega_k < 0,\ \gamma_k > 0,$$
chosen such that
$$\Big\| \frac{1}{\alpha+\beta} - s_r(\alpha,\beta) \Big\|_{\infty,\,[\lambda_{\min},\lambda_{\max}]^2} \le \frac{8}{|\lambda_{\max}|} \exp\Big( \frac{-r\pi^2}{\log(8\,\lambda_{\min}/\lambda_{\max})} \Big).$$
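The optimal Braess/Hackbusch exponential sums are nontrivial to construct; as a crude substitute one can discretize $1/(\alpha+\beta) = -\int_0^\infty e^{t(\alpha+\beta)}\,dt$ by the trapezoidal rule after the substitution $t = e^u$, which already yields a sum of the stated form ($\omega_k < 0$, $\gamma_k > 0$). A sketch under these assumptions (the grid and step size below are ad hoc, not optimal):

```python
import numpy as np

# 1/(alpha+beta) on [lmin, lmax]^2 with lmin <= lmax < 0: substituting
# t = e^u in -int_0^inf e^{t(alpha+beta)} dt and applying the trapezoidal
# rule gives s_r(alpha,beta) = sum_k w_k e^{g_k alpha} e^{g_k beta} with
# w_k = -h e^{u_k} < 0 and g_k = e^{u_k} > 0 (quadrature sums, not optimal).
lmin, lmax = -100.0, -1.0
h = 0.25
u = np.arange(-12.0, 4.0 + 1e-12, h)     # r = 65 quadrature nodes
g = np.exp(u)                            # gamma_k > 0
w = -h * g                               # omega_k < 0

def s_r(alpha, beta):
    return sum(wk * np.exp(gk * alpha) * np.exp(gk * beta)
               for wk, gk in zip(w, g))

grid = np.linspace(lmin, lmax, 200)
al, be = np.meshgrid(grid, grid, indexing="ij")
err_max = np.max(np.abs(1.0 / (al + be) - s_r(al, be)))   # sup error on the square
```

The sup error stays well below $10^{-3}$ here; the optimal sums achieve comparable accuracy with far fewer terms, as quantified by the bound above.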
Example: the symmetric case
If $D$ has rank one:
$$\sigma_{r+1}(X) \le 8\, \|A^{-1}\|_2 \exp\Big( \frac{-r\pi^2}{\log(8\,\kappa(A))} \Big) \|D\|_2.$$
This matches an existing bound in [Sabino 06] for Lyapunov equations. Results for complex $\Lambda(A)$, $\Lambda(B)$ can be obtained by sinc interpolation [Grasedyck/Hackbusch/Khoromskij 02, Grasedyck/K. 09]. Isolated eigenvalues close to zero can be matched by exact polynomial interpolation.
Lyapunov equation: singular value decay
Other existing decay bounds:
[Penzl 00]:
$$\frac{\sigma_{r+1}(X)}{\|X\|_2} \le \prod_{j=0}^{r-1} \left( \frac{\kappa^{(2j+1)/(2r)} - 1}{\kappa^{(2j+1)/(2r)} + 1} \right)^2$$
[Antoulas/Sorensen/Zhou 02]:
$$\frac{\sigma_r(X)}{\sigma_1(X)} \le \prod_{j=1}^{r-1} \left( \frac{\lambda_r - \lambda_j}{\lambda_r + \lambda_j} \right)^2$$
[Grasedyck/Hackbusch/Khoromskij 02]:
$$\frac{\sigma_{r+1}(X)}{\|X\|_2} \le C \exp\big( -c\,r / \log \kappa(A) \big),$$
i.e., exponential decay in $r$ with a rate that deteriorates like $1/\log \kappa(A)$.
Example: $A \in \mathbb{R}^{n \times n}$ the second-difference operator and $D = bb^T$ with $b = [1,\dots,1]^T$. [Figure: singular values of $X$ compared with the GHK bound, Penzl's bound, the ASZ bound, and the exponential bound.]
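This experiment is easy to reproduce qualitatively. A sketch (with an assumed size $n = 100$, which is not stated in the talk) computing the singular values of the Lyapunov solution for the second-difference operator:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, svdvals

n = 100
# 1D second-difference operator (symmetric, negative definite) and D = b b^T.
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
b = np.ones((n, 1))
X = solve_continuous_lyapunov(A, b @ b.T)   # solves A X + X A^T = b b^T

s = svdvals(X)            # singular values, descending
decay = s / s[0]          # normalized: exhibits rapid (exponential) decay
```

The normalized singular values drop by many orders of magnitude within a few dozen indices, consistent with the exponential bound.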
Applications: subspace projection methods
Basic idea of subspace projection methods
Approximate the solution to $AX + XA^T = bb^T$ using a subspace $\mathcal{U} \subseteq \mathbb{R}^n$ that contains
- the Krylov subspace for $A$: $\mathcal{K}_r(A,b) = \mathrm{span}\{b, Ab, A^2b, \dots, A^{r-1}b\}$
- the Krylov subspace for $A^{-1}$: $\mathcal{K}_r(A^{-1},b) = \mathrm{span}\{b, A^{-1}b, \dots, A^{-r+1}b\}$
- Krylov subspaces for $(A-\sigma I)^{-1}$: $\mathcal{K}_r((A-\sigma I)^{-1},b) = \mathrm{span}\{b, (A-\sigma I)^{-1}b, \dots, (A-\sigma I)^{-r+1}b\}$
Convergence of Arnoldi for Lyapunov
2D instationary heat equation on the unit square with homogeneous boundary conditions; finite difference discretization. [Figure: approximation error versus number of Arnoldi steps.] Linear convergence is observed.
Convergence does not capture singular value decay
[Figure: approximation error from the Arnoldi method versus the singular values of $X$.] Example: the SVD of $X$ shows that there is a rank-12 approximation with error $10^{-8}$; Arnoldi yields only a rank-84 approximation for the same accuracy.
[Figure: convergence for the Steel example from the Oberwolfach benchmark collection, comparing Arnoldi, extended Arnoldi, rational Arnoldi with 6 shifts, and rational Arnoldi with 13 shifts.]
Galerkin condition
From now on: $A$ is symmetric and $\Lambda(A) \subset [\lambda_{\min}, \lambda_{\max}]$ with $\lambda_{\min} \le \lambda_{\max} < 0$. The approximation $X_r$ satisfies the Galerkin condition
$$\mathrm{vec}\big(AX_r + X_r A + bb^T\big) \perp \mathcal{U} \otimes \mathcal{U}.$$
This is equivalent to
$$\|X - X_r\|_A = \min_{\mathrm{vec}(Z) \in \mathcal{U} \otimes \mathcal{U}} \|X - Z\|_A$$
in the weighted Frobenius norm $\|C\|_A := \|\mathrm{vec}(C)\|_{-(I \otimes A + A \otimes I)}$.
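The resulting projection method is: build an orthonormal basis $U$ of $\mathcal{U}$, solve the small projected Lyapunov equation, and lift back. A dense sketch (my own minimal implementation, not the talk's code; a QR-based Krylov basis stands in for a proper Arnoldi/Lanczos recurrence):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def krylov_basis(A, b, r):
    """Orthonormal basis of K_r(A, b) via modified Gram-Schmidt."""
    Q = np.zeros((b.shape[0], r))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(1, r):
        w = A @ Q[:, j - 1]
        for i in range(j):
            w = w - (Q[:, i] @ w) * Q[:, i]
        Q[:, j] = w / np.linalg.norm(w)
    return Q

def galerkin_approx(A, b, r):
    """Galerkin approximation X_r from U = K_r(A, b): project onto U,
    solve the small Lyapunov equation, lift back to R^{n x n}."""
    U = krylov_basis(A, b, r)
    H = U.T @ A @ U                     # projected coefficient matrix
    c = U.T @ b
    Y = solve_continuous_lyapunov(H, np.outer(c, c))
    return U @ Y @ U.T

n = 100
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)   # symmetric, stable
b = np.ones(n)
X = solve_continuous_lyapunov(A, np.outer(b, b))          # reference solution

errs = [np.linalg.norm(X - galerkin_approx(A, b, r)) / np.linalg.norm(X)
        for r in (10, 30)]
```

The error shrinks as the subspace grows, at the linear rate quantified on the following slides.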
Convergence of the Arnoldi method
If $\mathcal{U} = \mathcal{K}_r(A,b)$, every admissible $Z$ takes the form
$$Z = \sum_{i,j=0}^{r-1} \gamma_{ij}\, A^i b\, b^T A^j, \qquad \gamma_{ij} \in \mathbb{R}.$$
Hence
$$\|X - X_r\|_A \le \|A\|_2 \min_{\gamma_{ij} \in \mathbb{R}} \Big\| \frac{1}{\alpha+\beta} - \sum_{i,j=0}^{r-1} \gamma_{ij}\, \alpha^i \beta^j \Big\|_{\infty,\,[\lambda_{\min},\lambda_{\max}]^2} \|b\|_2^2.$$
Convergence of the Arnoldi method
Error for separable polynomial approximation of $1/(\alpha+\beta)$:
$$\min_{\gamma_{ij}} \Big\| \frac{1}{\alpha+\beta} - \sum_{i,j=0}^{r-1} \gamma_{ij}\, \alpha^i \beta^j \Big\|_{\infty,\,[\lambda_{\min},\lambda_{\max}]^2} \le \frac{1}{|\lambda_{\max}|} \cdot \frac{\sqrt{\kappa}+1}{\sqrt{\kappa}} \cdot \frac{(\sqrt{\kappa}-1)^r}{(\sqrt{\kappa}+1)^r}$$
with $\kappa = (|\lambda_{\min}| + |\lambda_{\max}|)/(2|\lambda_{\max}|)$. The proof is based on a careful analysis of Chebyshev approximations to exponentials. This yields the convergence bound for the Arnoldi method:
$$\|X - X_r\|_A \le \|A\|_2\, \|A^{-1}\|_2\, \|b\|_2^2 \cdot \frac{\sqrt{\kappa}+1}{\sqrt{\kappa}} \cdot \frac{(\sqrt{\kappa}-1)^r}{(\sqrt{\kappa}+1)^r},$$
with $\kappa \approx \kappa(A)/2$. This matches a bound in [Simoncini/Druskin 09].
Convergence of the Arnoldi method
[Figure: observed convergence together with the convergence bound.]
Convergence of the extended Arnoldi method
[Figure: observed convergence for $\mathcal{U} = \mathrm{span}\big( \mathcal{K}_r(A,b) \cup \mathcal{K}_r(A^{-1},b) \big)$.]
Convergence of the extended Arnoldi method
$$\mathcal{U} = \mathrm{span}\big( \mathcal{K}_r(A,b) \cup \mathcal{K}_r(A^{-1},b) \big) = \{ p(A)b : p \in \mathcal{L}_r \},$$
where $\mathcal{L}_r$ denotes the set of Laurent polynomials
$$\gamma_{-r}\alpha^{-r} + \dots + \gamma_{-1}\alpha^{-1} + \gamma_0 + \gamma_1\alpha + \dots + \gamma_{r-1}\alpha^{r-1}.$$
Hence
$$\|X - X_r\|_A \le \|A\|_2 \min_{\gamma_{ij} \in \mathbb{R}} \Big\| \frac{1}{\alpha+\beta} - \sum_{i,j=-r}^{r-1} \gamma_{ij}\, \alpha^i \beta^j \Big\|_{\infty,\,[\lambda_{\min},\lambda_{\max}]^2} \|b\|_2^2.$$
Convergence of the extended Arnoldi method
Error for separable Laurent polynomial approximation of $1/(\alpha+\beta)$:
$$\le \frac{1}{|\lambda_{\max}|} \cdot \frac{\kappa^{1/4}+1}{\kappa^{1/4}} \cdot \frac{(\kappa^{1/4}-1)^r}{(\kappa^{1/4}+1)^r}$$
with $\kappa = (|\lambda_{\min}| + |\lambda_{\max}|)/(2|\lambda_{\max}|)$. The proof is based on polynomial approximation: if $p$ is a polynomial, then $p(\alpha + \gamma/\alpha)$ is a Laurent polynomial. This yields the convergence bound for the extended Arnoldi method:
$$\|X - X_r\|_A \le \|A\|_2\, \|A^{-1}\|_2\, \|b\|_2^2 \cdot \frac{\kappa^{1/4}+1}{\kappa^{1/4}} \cdot \frac{(\kappa^{1/4}-1)^r}{(\kappa^{1/4}+1)^r},$$
with $\kappa \approx \kappa(A)/2$ [K./Tobler 09].
High-dimensional extensions
A toy PDE
$$-\Delta u = f \ \text{in}\ \Omega, \qquad u|_{\partial\Omega} = 0.$$
Tensorize 1D FE bases $V = \{v_1(x_1),\dots,v_m(x_1)\}$, $W = \{w_1(x_2),\dots,w_n(x_2)\}$:
$$V \otimes W = \Big\{ \sum \alpha_{ij}\, v_i(x_1) w_j(x_2) \Big\}.$$
Galerkin discretization:
$$(K_V \otimes M_W + M_V \otimes K_W)\, u = f,$$
with 1D mass/stiffness matrices $M_V, M_W, K_V, K_W$; $\otimes$ denotes the Kronecker product.
Reshaping
$$u = \begin{bmatrix} u_1 \\ u_2 \\ \vdots \end{bmatrix}, \quad u_j \in \mathbb{R}^m \qquad \Longrightarrow \qquad U = [u_1, u_2, \dots] \quad \text{(MATLAB: U = reshape(u,m,n);)}$$
Then $(K_V \otimes M_W)\, u$ corresponds to $K_V U M_W$: Kronecker products become matrix products after reshaping.
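The same identity can be checked in NumPy, whose `reshape` is row-major, so the vectorization stacks rows rather than MATLAB's columns; in this convention the identity reads $(K \otimes M)\,u = \mathrm{vec}(K U M^T)$:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 4, 3
K = rng.standard_normal((m, m))
M = rng.standard_normal((n, n))
u = rng.standard_normal(m * n)

# Row-major reshape: for U = u.reshape(m, n) one has
# (K kron M) u = vec(K U M^T).  (MATLAB's column-major reshape
# swaps the roles of the two Kronecker factors.)
U = u.reshape(m, n)
lhs = np.kron(K, M) @ u
rhs = (K @ U @ M.T).ravel()
ok = np.allclose(lhs, rhs)
```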
A toy PDE (matrix form)
The same Galerkin discretization can be written as the linear matrix equation
$$K_V U M_W + M_V U K_W = F,$$
with 1D mass/stiffness matrices $M_V, M_W, K_V, K_W$; $u = \mathrm{vec}(U)$, $f = \mathrm{vec}(F)$.
A toy PDE (symmetrized)
Multiplying from both sides by $M_V^{-1/2}$ and $M_W^{-1/2}$ yields the Sylvester equation
$$\big( M_V^{-1/2} K_V M_V^{-1/2} \big) X + X \big( M_W^{-1/2} K_W M_W^{-1/2} \big) = M_V^{-1/2} F M_W^{-1/2},$$
with $X = M_V^{1/2} U M_W^{1/2}$.
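A quick check of this transformation (with hypothetical random spd mass and stiffness matrices standing in for FE matrices, which would be banded in practice): solve the transformed Sylvester equation and verify that the back-transformed $U$ satisfies the original generalized equation.

```python
import numpy as np
from scipy.linalg import sqrtm, solve_sylvester

rng = np.random.default_rng(4)
m, n = 5, 4

def spd(k):
    """Random symmetric positive definite matrix (stand-in for mass/stiffness)."""
    Z = rng.standard_normal((k, k))
    return Z @ Z.T + k * np.eye(k)

MV, KV = spd(m), spd(m)      # hypothetical 1D matrices in x1
MW, KW = spd(n), spd(n)      # hypothetical 1D matrices in x2
F = rng.standard_normal((m, n))

SV = np.linalg.inv(np.real(sqrtm(MV)))   # MV^{-1/2}
SW = np.linalg.inv(np.real(sqrtm(MW)))   # MW^{-1/2}

# Transformed Sylvester equation A X + X B = C with spd A, B.
A = SV @ KV @ SV
B = SW @ KW @ SW
X = solve_sylvester(A, B, SV @ F @ SW)

U = SV @ X @ SW              # back-transform: U = MV^{-1/2} X MW^{-1/2}
res = np.linalg.norm(KV @ U @ MW + MV @ U @ KW - F) / np.linalg.norm(F)
```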
The toy PDE in 3D
$$-\Delta u = f \ \text{in}\ \Omega, \qquad u|_{\partial\Omega} = 0.$$
Tensorize 1D FE bases $V = \{v_1(x_1),\dots,v_m(x_1)\}$, $W = \{w_1(x_2),\dots,w_n(x_2)\}$, $Z = \{z_1(x_3),\dots,z_n(x_3)\}$:
$$V \otimes W \otimes Z = \Big\{ \sum \alpha_{ijk}\, v_i(x_1) w_j(x_2) z_k(x_3) \Big\}.$$
Galerkin discretization:
$$(K_V \otimes M_W \otimes M_Z + M_V \otimes K_W \otimes M_Z + M_V \otimes M_W \otimes K_Z)\, u = f,$$
with 1D mass/stiffness matrices $M_V, M_W, M_Z, K_V, K_W, K_Z$.
Higher-dimensional matrix equations
Kroneckerized Sylvester equation:
$$(A_1 \otimes I + I \otimes A_2)\, x = b.$$
Generalization to dimension $d = 3$:
$$(A_1 \otimes I \otimes I + I \otimes A_2 \otimes I + I \otimes I \otimes A_3)\, x = b.$$
Generalization to arbitrary dimension:
$$Ax = b, \qquad A = \sum_{j=1}^d I \otimes \dots \otimes I \otimes A_j \otimes I \otimes \dots \otimes I.$$
Properties
$$Ax = b, \qquad A = \sum_{j=1}^d I \otimes \dots \otimes A_j \otimes \dots \otimes I.$$
- If $A_j \in \mathbb{R}^{n \times n}$ then $A \in \mathbb{R}^{n^d \times n^d}$: curse of dimensionality!
- $\Lambda(A) = \{ \lambda^{(1)} + \dots + \lambda^{(d)} : \lambda^{(j)} \in \Lambda(A_j) \}$.
- If $\kappa(A_j) \approx \kappa$ then $\kappa(A) \approx \kappa$.
- If $b = b_1 \otimes b_2 \otimes \dots \otimes b_d$ (a rank-1 tensor) and all $A_j$ are spd, the solution $x$ can be well approximated by a short sum of rank-1 tensors [Grasedyck 04, K./Tobler 09].
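The spectral property is easy to verify numerically for small $d$ and $n$. A sketch using upper triangular $A_j$, so that $\Lambda(A_j)$ can be read off the diagonals:

```python
import numpy as np
from functools import reduce
from itertools import product

def kron_sum(mats):
    """A = sum_j I x ... x A_j x ... x I (Kronecker sum of A_1, ..., A_d)."""
    d = len(mats)
    N = int(np.prod([M.shape[0] for M in mats]))
    out = np.zeros((N, N))
    for j in range(d):
        facs = [mats[i] if i == j else np.eye(mats[i].shape[0]) for i in range(d)]
        out += reduce(np.kron, facs)
    return out

rng = np.random.default_rng(5)
# Upper triangular A_j: eigenvalues are the diagonal entries.
mats = [np.triu(rng.standard_normal((3, 3))) for _ in range(3)]

A = kron_sum(mats)
# Lambda(A) = { l1 + l2 + l3 : lj in Lambda(A_j) }.
sums = sorted(sum(lams) for lams in product(*[np.diag(M) for M in mats]))
eigs = sorted(np.linalg.eigvals(A).real)
ok = np.allclose(sums, eigs)
```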
Tensor Krylov subspace method
The Arnoldi method for Lyapunov equations extends to $d > 2$ for $b = b_1 \otimes \dots \otimes b_d$. It works with $\mathcal{K}_r(A_j, b_j)$, a Krylov subspace in each individual direction, and approximates $x$ by $x_r$ satisfying the Galerkin condition
$$Ax_r - b \perp \mathcal{K}_r(A_1,b_1) \otimes \dots \otimes \mathcal{K}_r(A_d,b_d).$$
If all $A_j$ are spd:
$$\|x - x_r\| \lesssim \Big( \frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1} \Big)^r, \qquad \kappa \approx \kappa(A)/d.$$
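A dense toy sketch of this for $d = 3$ (tiny sizes of my own choosing; the actual method uses Lanczos recurrences and never forms the $n^d \times n^d$ matrix — only the small $r^d$ system is assembled, and below the residual tensor is formed purely for checking): project each direction onto $\mathcal{K}_r(A_j, b_j)$, solve the small system, lift back.

```python
import numpy as np
from functools import reduce

def krylov_basis(A, b, r):
    """Orthonormal basis of K_r(A, b) via modified Gram-Schmidt."""
    Q = np.zeros((b.shape[0], r))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(1, r):
        w = A @ Q[:, j - 1]
        for i in range(j):
            w = w - (Q[:, i] @ w) * Q[:, i]
        Q[:, j] = w / np.linalg.norm(w)
    return Q

def kron_sum(mats):
    out = 0
    for j in range(len(mats)):
        facs = [mats[i] if i == j else np.eye(mats[i].shape[0])
                for i in range(len(mats))]
        out = out + reduce(np.kron, facs)
    return out

rng = np.random.default_rng(6)
n, r = 16, 10
A1 = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # spd 1D Laplacian
b1 = rng.random(n) + 1.0                                  # rank-1 rhs factor

# One Krylov basis per direction; Galerkin system of size r^3 instead of n^3.
V = krylov_basis(A1, b1, r)
H = V.T @ A1 @ V
c = V.T @ b1
y = np.linalg.solve(kron_sum([H, H, H]), reduce(np.kron, [c, c, c]))

# Lift back and evaluate the residual via mode products (row-major reshapes).
Y = y.reshape(r, r, r)
X = np.einsum('ia,jb,kc,abc->ijk', V, V, V, Y)
B = np.einsum('i,j,k->ijk', b1, b1, b1)
R = (np.einsum('ia,ajk->ijk', A1, X) + np.einsum('jb,ibk->ijk', A1, X)
     + np.einsum('kc,ijc->ijk', A1, X) - B)
rel_res = np.linalg.norm(R) / np.linalg.norm(B)
```

The residual shrinks at the rate above as $r$ grows, while only $O(rdn)$ basis storage plus the small $r^d$ system are needed.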
CG vs. tensor Krylov

                   Computational complexity    Storage requirement
  CG               O(r^2 n^d)                  O(r n^d)
  Tensor Krylov    O(r^2 d n)                  O(r d n)
[Figure: residual histories for several dimensions d.] Example: Laplace operator on $[0,1]^d$, $n = 1024$. The tensor Krylov method requires 300 iterations and 1h to attain a residual of $10^{-6}$ for $d = 50$.
Conclusions
- Singular value decay and projection methods for Sylvester equations correspond to bivariate function approximation.
- Low tensor rank approximation and projection methods for linear systems with tensor product structure correspond to multivariate function approximation.
See for further details.
Workshop on Matrix Equations
Fri, 9 October 2009, TU Braunschweig, Germany
Organizers: Peter Benner (TU Chemnitz), Heike Faßbender (TU Braunschweig), Lars Grasedyck (MPI Leipzig), Daniel Kressner (ETH Zurich)
Contact:
More informationITERATIVE METHODS BASED ON KRYLOV SUBSPACES
ITERATIVE METHODS BASED ON KRYLOV SUBSPACES LONG CHEN We shall present iterative methods for solving linear algebraic equation Au = b based on Krylov subspaces We derive conjugate gradient (CG) method
More informationM.A. Botchev. September 5, 2014
Rome-Moscow school of Matrix Methods and Applied Linear Algebra 2014 A short introduction to Krylov subspaces for linear systems, matrix functions and inexact Newton methods. Plan and exercises. M.A. Botchev
More informationDirect Numerical Solution of Algebraic Lyapunov Equations For Large-Scale Systems Using Quantized Tensor Trains
Direct Numerical Solution of Algebraic Lyapunov Equations For Large-Scale Systems Using Quantized Tensor Trains Michael Nip Center for Control, Dynamical Systems, and Computation University of California,
More informationAlgorithms for Solving the Polynomial Eigenvalue Problem
Algorithms for Solving the Polynomial Eigenvalue Problem Nick Higham School of Mathematics The University of Manchester higham@ma.man.ac.uk http://www.ma.man.ac.uk/~higham/ Joint work with D. Steven Mackey
More informationSolvability of Linear Matrix Equations in a Symmetric Matrix Variable
Solvability of Linear Matrix Equations in a Symmetric Matrix Variable Maurcio C. de Oliveira J. William Helton Abstract We study the solvability of generalized linear matrix equations of the Lyapunov type
More informationOn the Ritz values of normal matrices
On the Ritz values of normal matrices Zvonimir Bujanović Faculty of Science Department of Mathematics University of Zagreb June 13, 2011 ApplMath11 7th Conference on Applied Mathematics and Scientific
More informationModel Order Reduction of Continuous LTI Large Descriptor System Using LRCF-ADI and Square Root Balanced Truncation
, July 1-3, 15, London, U.K. Model Order Reduction of Continuous LI Large Descriptor System Using LRCF-ADI and Square Root Balanced runcation Mehrab Hossain Likhon, Shamsil Arifeen, and Mohammad Sahadet
More informationarxiv: v2 [math.na] 22 Aug 2018
SOLVING RANK STRUCTURED SYLVESTER AND LYAPUNOV EQUATIONS STEFANO MASSEI, DAVIDE PALITTA, AND LEONARDO ROBOL arxiv:1711.05493v2 [math.na] 22 Aug 2018 Abstract. We consider the problem of efficiently solving
More information4. Multi-linear algebra (MLA) with Kronecker-product data.
ect. 3. Tensor-product interpolation. Introduction to MLA. B. Khoromskij, Leipzig 2007(L3) 1 Contents of Lecture 3 1. Best polynomial approximation. 2. Error bound for tensor-product interpolants. - Polynomial
More informationA orthonormal basis for Radial Basis Function approximation
A orthonormal basis for Radial Basis Function approximation 9th ISAAC Congress Krakow, August 5-9, 2013 Gabriele Santin, joint work with S. De Marchi Department of Mathematics. Doctoral School in Mathematical
More informationLINEAR ALGEBRA 1, 2012-I PARTIAL EXAM 3 SOLUTIONS TO PRACTICE PROBLEMS
LINEAR ALGEBRA, -I PARTIAL EXAM SOLUTIONS TO PRACTICE PROBLEMS Problem (a) For each of the two matrices below, (i) determine whether it is diagonalizable, (ii) determine whether it is orthogonally diagonalizable,
More informationSTRUCTURED CONDITION NUMBERS FOR INVARIANT SUBSPACES
STRUCTURED CONDITION NUMBERS FOR INVARIANT SUBSPACES RALPH BYERS AND DANIEL KRESSNER Abstract. Invariant subspaces of structured matrices are sometimes better conditioned with respect to structured perturbations
More informationTensor networks, TT (Matrix Product States) and Hierarchical Tucker decomposition
Tensor networks, TT (Matrix Product States) and Hierarchical Tucker decomposition R. Schneider (TUB Matheon) John von Neumann Lecture TU Munich, 2012 Setting - Tensors V ν := R n, H d = H := d ν=1 V ν
More informationEE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 4
EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 4 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory April 12, 2012 Andre Tkacenko
More informationModel reduction of large-scale dynamical systems
Model reduction of large-scale dynamical systems Lecture III: Krylov approximation and rational interpolation Thanos Antoulas Rice University and Jacobs University email: aca@rice.edu URL: www.ece.rice.edu/
More informationNumerical Methods in Matrix Computations
Ake Bjorck Numerical Methods in Matrix Computations Springer Contents 1 Direct Methods for Linear Systems 1 1.1 Elements of Matrix Theory 1 1.1.1 Matrix Algebra 2 1.1.2 Vector Spaces 6 1.1.3 Submatrices
More informationConjugate Gradient Method
Conjugate Gradient Method direct and indirect methods positive definite linear systems Krylov sequence spectral analysis of Krylov sequence preconditioning Prof. S. Boyd, EE364b, Stanford University Three
More information1 Sylvester equations
1 Sylvester equations Notes for 2016-11-02 The Sylvester equation (or the special case of the Lyapunov equation) is a matrix equation of the form AX + XB = C where A R m m, B R n n, B R m n, are known,
More informationAMG for a Peta-scale Navier Stokes Code
AMG for a Peta-scale Navier Stokes Code James Lottes Argonne National Laboratory October 18, 2007 The Challenge Develop an AMG iterative method to solve Poisson 2 u = f discretized on highly irregular
More information1. What is the determinant of the following matrix? a 1 a 2 4a 3 2a 2 b 1 b 2 4b 3 2b c 1. = 4, then det
What is the determinant of the following matrix? 3 4 3 4 3 4 4 3 A 0 B 8 C 55 D 0 E 60 If det a a a 3 b b b 3 c c c 3 = 4, then det a a 4a 3 a b b 4b 3 b c c c 3 c = A 8 B 6 C 4 D E 3 Let A be an n n matrix
More informationFEM and sparse linear system solving
FEM & sparse linear system solving, Lecture 9, Nov 19, 2017 1/36 Lecture 9, Nov 17, 2017: Krylov space methods http://people.inf.ethz.ch/arbenz/fem17 Peter Arbenz Computer Science Department, ETH Zürich
More informationMax Planck Institute Magdeburg Preprints
Thomas Mach Computing Inner Eigenvalues of Matrices in Tensor Train Matrix Format MAX PLANCK INSTITUT FÜR DYNAMIK KOMPLEXER TECHNISCHER SYSTEME MAGDEBURG Max Planck Institute Magdeburg Preprints MPIMD/11-09
More informationPreconditioned inverse iteration and shift-invert Arnoldi method
Preconditioned inverse iteration and shift-invert Arnoldi method Melina Freitag Department of Mathematical Sciences University of Bath CSC Seminar Max-Planck-Institute for Dynamics of Complex Technical
More informationThe Lanczos and conjugate gradient algorithms
The Lanczos and conjugate gradient algorithms Gérard MEURANT October, 2008 1 The Lanczos algorithm 2 The Lanczos algorithm in finite precision 3 The nonsymmetric Lanczos algorithm 4 The Golub Kahan bidiagonalization
More informationc 2015 Society for Industrial and Applied Mathematics
SIAM J MATRIX ANAL APPL Vol 36, No, pp 656 668 c 015 Society for Industrial and Applied Mathematics FAST SINGULAR VALUE DECAY FOR LYAPUNOV SOLUTIONS WITH NONNORMAL COEFFICIENTS JONATHAN BAKER, MARK EMBREE,
More informationConjugate Gradient Method
Conjugate Gradient Method Tsung-Ming Huang Department of Mathematics National Taiwan Normal University October 10, 2011 T.M. Huang (NTNU) Conjugate Gradient Method October 10, 2011 1 / 36 Outline 1 Steepest
More information