Structure preserving Krylov-subspace methods for Lyapunov equations
Structure preserving Krylov-subspace methods for Lyapunov equations

Matthias Bollhöfer, André Eppler
Institute Computational Mathematics, TU Braunschweig

MoRePas Workshop, Münster, September 17, 2009
System Reduction for Nanoscale IC Design
Overview

1 Introduction
2
3
4
SyreNe-Project Goals

- Develop and compare methods for system reduction in the design of high-dimensional nanoelectronic ICs (integrated circuits).
- Test these methods in the practice of semiconductor development.

Two complementary approaches:
- reduction of the whole system by a global method
- creation of reduced-order models for single devices and large linear sub-circuits
Generalized projected Lyapunov equations

E X A^T + A X E^T = -P_l B B^T P_l^T,   X = P_r^T X P_r   (1)
E^T Y A + A^T Y E = -P_r^T C^T C P_r,   Y = P_l Y P_l^T   (2)

with E, A \in R^{n \times n}, B, C^T \in R^{n \times n_s}.

- equations arising from the work group of T. Stykel
- E singular, n_s << n
- existence and uniqueness of the solution proved
Generalized Lyapunov equations

E X A^T + A X E^T = -\tilde{B} \tilde{B}^T   (1)
E^T Y A + A^T Y E = -\tilde{C}^T \tilde{C}   (2)

with E, A \in R^{n \times n}, \tilde{B}, \tilde{C}^T \in R^{n \times n_s}.

- equations arising from the work group of T. Stykel
- E singular, n_s << n
- existence and uniqueness of the solution proved
Definitions

Let \mathcal{A} := E \otimes A + A \otimes E be the Lyapunov operator and let vec be the operator R^{n \times n} \to R^{n^2} which stacks the columns of a matrix into a single column vector, x = vec(X).

Rewrite the Lyapunov equations (1), (2) as linear systems

\mathcal{A} x = b with b = vec(-\tilde{B} \tilde{B}^T),   \mathcal{A}^T y = c with c = vec(-\tilde{C}^T \tilde{C}).

Problem: dimension n^2.
Good news: E, A are usually sparse when dealing with circuit equations.
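For small n, the Kronecker representation of the Lyapunov operator can be checked directly. This is an illustrative sketch, not part of the slides; it relies on the identity vec(A X B) = (B^T \otimes A) vec(X), which assumes a column-major vec:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
E, A, X = rng.standard_normal((3, n, n))

# column-major vec, matching vec(A X B) = (B^T kron A) vec(X)
vec = lambda M: M.flatten(order="F")

# Lyapunov operator as an n^2 x n^2 matrix: E (x) A + A (x) E
A_op = np.kron(E, A) + np.kron(A, E)

lhs = A_op @ vec(X)
rhs = vec(A @ X @ E.T + E @ X @ A.T)
assert np.allclose(lhs, rhs)
```

Forming A_op explicitly is only feasible for tiny n; the point of the talk is precisely to avoid the n^2 dimension by working with low-rank factors.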
Structure preservation principle

- the right-hand side is low-rank and symmetric
- these properties transfer to the solution X of (1)
- the iterative solver has to keep that structure in each step
- possible with Krylov-subspace methods: they only need linear combinations of vectors and applications of the Lyapunov operator
- use the factorization X = V Z V^T
Structure preservation - linear combination

Let X_1 = V_1 Z_1 V_1^T and X_2 = V_2 Z_2 V_2^T be low-rank matrices. Then

X_3 = \alpha_1 X_1 + \alpha_2 X_2
    = (V_1  V_2) [ \alpha_1 Z_1 , 0 ; 0 , \alpha_2 Z_2 ] (V_1  V_2)^T
    =: V_3 Z_3 V_3^T

has again a low-rank factorization.

Rank estimate: rank(X_3) <= rank(X_1) + rank(X_2)
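The factored linear combination above can be sketched numerically as follows (illustrative code, not from the slides; the random factors and identity Z blocks are arbitrary test data):

```python
import numpy as np

rng = np.random.default_rng(1)
n, r1, r2 = 50, 3, 4
V1, V2 = rng.standard_normal((n, r1)), rng.standard_normal((n, r2))
Z1, Z2 = np.eye(r1), np.eye(r2)
a1, a2 = 2.0, -0.5

# X3 = a1*V1 Z1 V1^T + a2*V2 Z2 V2^T = V3 Z3 V3^T
V3 = np.hstack([V1, V2])                      # n x (r1 + r2)
Z3 = np.block([[a1 * Z1, np.zeros((r1, r2))],
               [np.zeros((r2, r1)), a2 * Z2]])

X3 = a1 * V1 @ Z1 @ V1.T + a2 * V2 @ Z2 @ V2.T
assert np.allclose(V3 @ Z3 @ V3.T, X3)
assert np.linalg.matrix_rank(X3) <= r1 + r2
```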
Structure preservation - apply Lyapunov operator

Let X = V Z V^T be a low-rank matrix. Then

X_a = \mathcal{A}(X) = E V Z V^T A^T + A V Z V^T E^T
    = (EV  AV) [ 0 , Z ; Z , 0 ] (EV  AV)^T
    =: V_a Z_a V_a^T

so again a low-rank factorization exists.

Rank estimate: rank(X_a) <= 2 rank(X)
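The factored application of the Lyapunov operator can be checked the same way (illustrative sketch with arbitrary random data, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(2)
n, r = 50, 3
E, A = rng.standard_normal((2, n, n))
V = rng.standard_normal((n, r))
Z = np.eye(r)

# Xa = E V Z V^T A^T + A V Z V^T E^T = Va Za Va^T
Va = np.hstack([E @ V, A @ V])                # n x 2r
Za = np.block([[np.zeros((r, r)), Z],
               [Z, np.zeros((r, r))]])

Xa = E @ V @ Z @ V.T @ A.T + A @ V @ Z @ V.T @ E.T
assert np.allclose(Va @ Za @ Va.T, Xa)
```

Only the thin factors EV and AV are ever formed, so one operator application costs two sparse matrix-times-tall-skinny-matrix products instead of anything of size n^2.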
ADI - general properties

- iterative method for solving \mathcal{A} x = b
- needs shift parameters \tau_j; they are essential for the convergence behavior and difficult to compute
- can be applied to solve the Lyapunov equations

E X A^T + A X E^T = -\tilde{B} \tilde{B}^T,   E^T Y A + A^T Y E = -\tilde{C}^T \tilde{C}

with E, A \in R^{n \times n}, \tilde{B}, \tilde{C}^T \in R^{n \times n_s}, starting from X_0 = 0.
ADI for generalized Lyapunov equations

Standard ADI recursion:

(E + \tau_j A) X_{j-1/2} = -\tilde{B} \tilde{B}^T - X_{j-1} (E - \tau_j A)^T   (3)
(E + \tau_j A) X_j = -\tilde{B} \tilde{B}^T - X_{j-1/2}^T (E - \tau_j A)^T   (4)

Use (3) and (4) to obtain

X_j = -2 \tau_j (E + \tau_j A)^{-1} \tilde{B} \tilde{B}^T (E + \tau_j A)^{-T}
      + (E + \tau_j A)^{-1} (E - \tau_j A) X_{j-1} (E - \tau_j A)^T (E + \tau_j A)^{-T}.   (5)

This symmetric sum can be factored as X_j = Z_j Z_j^T.
CF-ADI [Li, White 04]

Cholesky Factor Alternating Direction Implicit iteration: computes the Cholesky factor Z of the solution, X = Z Z^T.

Algorithm:
1 compute shift parameters \tau_1, ..., \tau_j
2 z_1 = \sqrt{-2 \tau_1} (E + \tau_1 A)^{-1} \tilde{B},   Z = [z_1]
3 for i = 2, ..., j:
  z_i = P_{i-1} z_{i-1}, with
  P_i = \sqrt{\tau_{i+1} / \tau_i} [ I - (\tau_{i+1} + \tau_i)(E + \tau_{i+1} A)^{-1} A ],
  Z = [Z  z_i]
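The CF-ADI reformulation can be cross-checked against the dense recursion (5) on a small example. This is an illustrative dense sketch only, not the slides' implementation: it assumes E invertible and real negative shifts, and the three shift values are arbitrary placeholders, whereas in practice good shifts must be computed from spectral information:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 30, 2
E = np.eye(n) + 0.1 * rng.standard_normal((n, n))
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
taus = [-1.0, -0.4, -2.5]          # negative ADI shifts (assumed precomputed)

# CF-ADI: accumulate the columns of the Cholesky-like factor Z, X ~ Z Z^T
z = np.sqrt(-2 * taus[0]) * np.linalg.solve(E + taus[0] * A, B)
Z = z.copy()
for tp, t in zip(taus, taus[1:]):  # tp = tau_{i-1}, t = tau_i
    z = np.sqrt(t / tp) * (z - (t + tp) * np.linalg.solve(E + t * A, A @ z))
    Z = np.hstack([Z, z])

# reference: dense ADI recursion (5), starting from X_0 = 0
X = np.zeros((n, n))
for t in taus:
    F = E + t * A
    FB = np.linalg.solve(F, B)               # F^{-1} B
    G = np.linalg.solve(F, E - t * A)        # F^{-1} (E - tau A)
    X = -2 * t * FB @ FB.T + G @ X @ G.T

assert np.allclose(Z @ Z.T, X)
```

Note that CF-ADI never forms any n x n iterate: after j steps, Z has only j * n_s columns.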
FGMRES - Introduction

- iterative method for solving \mathcal{A} x = b
- allows flexible preconditioning in each step
- calculates an orthonormal basis of the m-dimensional Krylov space

K_m(R_0, \mathcal{A}) = span(R_0, \mathcal{A} R_0, ..., \mathcal{A}^{m-1} R_0)
FGMRES algorithm [Saad 92]

Initialize: choose X_0 and dimension m.

Arnoldi process:
1 compute R_0 = B - \mathcal{A} X_0,   h_{1,0} = ||R_0||,   V_1 = R_0 / h_{1,0}
2 for j = 1, ..., m
  a) compute W_j = M_j^{-1} V_j
  b) compute V_{j+1} = \mathcal{A} W_j
  c) for i = 1, ..., j (modified Gram-Schmidt):
     h_{i,j} = (V_{j+1}, V_i),   V_{j+1} = V_{j+1} - h_{i,j} V_i
     then h_{j+1,j} = ||V_{j+1}||,   V_{j+1} = V_{j+1} / h_{j+1,j}
3 Let W_m := [W_1 ... W_m]. Compute the solution X_m = X_0 + W_m Y_m, where Y_m minimizes ||h_{1,0} e_1 - H_m Y||.

Restart: if not converged, set X_0 = X_m.
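The algorithm above is stated in the matrix-valued Lyapunov setting. As an illustration of the same steps for ordinary vectors, one restart cycle of flexible GMRES might be sketched as follows (the helper name fgmres, the identity preconditioner, and the test problem are all hypothetical, not from the slides):

```python
import numpy as np

def fgmres(A, b, M_solves, x0=None, m=None):
    """One restart cycle of flexible GMRES: one (possibly different)
    preconditioner application M_solves[j] per Arnoldi step."""
    n = len(b)
    m = m or len(M_solves)
    x0 = np.zeros(n) if x0 is None else x0
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    V = np.zeros((n, m + 1)); W = np.zeros((n, m))
    H = np.zeros((m + 1, m))
    V[:, 0] = r0 / beta
    for j in range(m):
        W[:, j] = M_solves[j](V[:, j])       # flexible preconditioning
        w = A @ W[:, j]
        for i in range(j + 1):               # modified Gram-Schmidt
            H[i, j] = w @ V[:, i]
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m + 1); e1[0] = beta
    y, *_ = np.linalg.lstsq(H, e1, rcond=None)  # min ||beta e1 - H y||
    return x0 + W @ y

rng = np.random.default_rng(4)
n = 20
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
precs = [lambda v: v] * n                    # identity "preconditioner"
x = fgmres(A, b, precs, m=n)
assert np.allclose(A @ x, b)
```

In the Lyapunov case, each V_j and W_j is a low-rank factored matrix rather than a vector, and the inner products and norms become trace inner products of factored matrices.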
Rank truncation strategy

1 obtain the starting factorization X = V Z V^T
2 compute a QR factorization V = Q R
3 compute the EVD R Z R^T = U \Sigma U^T
4 truncate the rank to a given relative tolerance (tol_p, tol_r): keep only the first r columns, \hat{U} = U(:, 1:r), \hat{\Sigma} = \Sigma(1:r, 1:r)

X = V Z V^T = Q (R Z R^T) Q^T = Q U \Sigma U^T Q^T \approx Q \hat{U} \hat{\Sigma} \hat{U}^T Q^T = \hat{V} \hat{\Sigma} \hat{V}^T

Result: X \approx \hat{V} \hat{\Sigma} \hat{V}^T with \hat{V} = Q \hat{U}
- \hat{V} has orthonormal columns
- \hat{\Sigma} is diagonal
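Steps 1-4 can be sketched as a small dense routine (the helper name truncate and the single relative tolerance argument are illustrative; the slides distinguish two tolerances tol_p and tol_r):

```python
import numpy as np

def truncate(V, Z, tol=1e-12):
    """Recompress X = V Z V^T to X ~ Vh diag(s) Vh^T of lower rank."""
    Q, R = np.linalg.qr(V)                   # V = Q R, Q orthonormal columns
    S = R @ Z @ R.T                          # small r x r middle matrix
    S = 0.5 * (S + S.T)                      # symmetrize against roundoff
    s, U = np.linalg.eigh(S)                 # EVD of the small matrix
    keep = np.abs(s) > tol * np.abs(s).max() # relative truncation tolerance
    return Q @ U[:, keep], s[keep]

rng = np.random.default_rng(5)
n, r = 40, 10
V = rng.standard_normal((n, r))
V = np.hstack([V, V])                        # redundant columns: rank stays r
Z = np.eye(2 * r)
Vh, s = truncate(V, Z)
assert Vh.shape[1] == r                      # redundancy removed
assert np.allclose(Vh * s @ Vh.T, V @ Z @ V.T)
```

The cost is dominated by the QR factorization of the tall-skinny V; the EVD only touches the small middle matrix, which is why the truncation is affordable as long as the number of columns stays moderate.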
Preconditioning

Properties:
- must preserve structure (important)
- may vary in each step
- should increase the convergence rate

Possible candidate: CF-ADI. Apply one cycle of CF-ADI to

E W_j A^T + A W_j E^T = V_j.
Overview

Number of shifts
Perturbed shifts
Rank truncation
Number of shifts

Test case with rank X = 18.
CF-ADI is more sensitive to the number of shift parameters.
Number of shifts

- a minimum number of shifts is necessary
- increasing the number does not yield further profit
- a convenient range is 10 to 30
- optimal number? (problem dependent)
- FGMRES is less sensitive to the specific number

Conjecture: a lower number of shifts is sufficient compared to pure CF-ADI.
Perturbed shifts

Shifts \tau_i \cdot \delta with \delta \in [0.99, 1.01] randomly chosen; 12 shifts.
CF-ADI is much more sensitive to perturbations of the shift parameters.
Rank truncation

rank X = 18, tol_r = 1e-12, 12 shifts.
- a certain accuracy of tol_p is needed for convergence (problem dependent)
- similar results for tol_r
Summary

Benefits:
- structure preservation (low rank, symmetry) is fulfilled
- the algorithm is less sensitive to perturbations and to the number of ADI shift parameters \tau_j

Problems:
- convergence strongly depends on CF-ADI
- the number of columns of the CF-ADI iterates increases in each step, but the rank only increases slightly
- expensive rank truncation

Remedy: rank compression or stopping criteria within CF-ADI are needed.
Outlook

Possible improvements:
- compress ranks within ADI
- compute a faster low-rank factorization
- improve the rank truncation strategy (parameters tol_p and tol_r)
- use different Krylov-subspace methods
BMBF Verbundprojekt SyreNe

- TU Berlin: Dr. Tatjana Stykel, Dr. Andreas Steinbrecher
- TU Braunschweig: Prof. Dr. Matthias Bollhöfer, Prof. Dr. Heike Faßbender, Dipl.-Math. techn. André Eppler
- TU Chemnitz: Prof. Dr. Peter Benner, Dipl.-Math. techn. Thomas Mach, Dipl.-Math. techn. André Schneider
- Universität Hamburg: Prof. Dr. Michael Hinze, Dipl.-Technomath. Martin Kunkel
- ITWM Kaiserslautern: Dr. Patrick Lang, Dipl.-Math. Oliver Schmidt, Juan Amorocho, M.Sc.
On solving linear systems arising from Shishkin mesh discretizations Petr Tichý Faculty of Mathematics and Physics, Charles University joint work with Carlos Echeverría, Jörg Liesen, and Daniel Szyld October
More informationNumerical Methods for Solving Large Scale Eigenvalue Problems
Peter Arbenz Computer Science Department, ETH Zürich E-mail: arbenz@inf.ethz.ch arge scale eigenvalue problems, Lecture 2, February 28, 2018 1/46 Numerical Methods for Solving Large Scale Eigenvalue Problems
More informationThe Conjugate Gradient Method
The Conjugate Gradient Method Classical Iterations We have a problem, We assume that the matrix comes from a discretization of a PDE. The best and most popular model problem is, The matrix will be as large
More informationBlock Krylov Space Solvers: a Survey
Seminar for Applied Mathematics ETH Zurich Nagoya University 8 Dec. 2005 Partly joint work with Thomas Schmelzer, Oxford University Systems with multiple RHSs Given is a nonsingular linear system with
More informationLinear and Nonlinear Matrix Equations Arising in Model Reduction
International Conference on Numerical Linear Algebra and its Applications Guwahati, January 15 18, 2013 Linear and Nonlinear Matrix Equations Arising in Model Reduction Peter Benner Tobias Breiten Max
More informationKey words. conjugate gradients, normwise backward error, incremental norm estimation.
Proceedings of ALGORITMY 2016 pp. 323 332 ON ERROR ESTIMATION IN THE CONJUGATE GRADIENT METHOD: NORMWISE BACKWARD ERROR PETR TICHÝ Abstract. Using an idea of Duff and Vömel [BIT, 42 (2002), pp. 300 322
More informationModel Reduction for Unstable Systems
Model Reduction for Unstable Systems Klajdi Sinani Virginia Tech klajdi@vt.edu Advisor: Serkan Gugercin October 22, 2015 (VT) SIAM October 22, 2015 1 / 26 Overview 1 Introduction 2 Interpolatory Model
More informationLinear Algebra, part 3. Going back to least squares. Mathematical Models, Analysis and Simulation = 0. a T 1 e. a T n e. Anna-Karin Tornberg
Linear Algebra, part 3 Anna-Karin Tornberg Mathematical Models, Analysis and Simulation Fall semester, 2010 Going back to least squares (Sections 1.7 and 2.3 from Strang). We know from before: The vector
More informationThe Singular Value Decomposition
The Singular Value Decomposition An Important topic in NLA Radu Tiberiu Trîmbiţaş Babeş-Bolyai University February 23, 2009 Radu Tiberiu Trîmbiţaş ( Babeş-Bolyai University)The Singular Value Decomposition
More informationMax Planck Institute Magdeburg Preprints
Peter Benner Patrick Kürschner Jens Saak Real versions of low-rank ADI methods with complex shifts MAXPLANCKINSTITUT FÜR DYNAMIK KOMPLEXER TECHNISCHER SYSTEME MAGDEBURG Max Planck Institute Magdeburg Preprints
More informationPrincipal Component Analysis
Machine Learning Michaelmas 2017 James Worrell Principal Component Analysis 1 Introduction 1.1 Goals of PCA Principal components analysis (PCA) is a dimensionality reduction technique that can be used
More informationTowards parametric model order reduction for nonlinear PDE systems in networks
Towards parametric model order reduction for nonlinear PDE systems in networks MoRePas II 2012 Michael Hinze Martin Kunkel Ulrich Matthes Morten Vierling Andreas Steinbrecher Tatjana Stykel Fachbereich
More informationITERATIVE METHODS BASED ON KRYLOV SUBSPACES
ITERATIVE METHODS BASED ON KRYLOV SUBSPACES LONG CHEN We shall present iterative methods for solving linear algebraic equation Au = b based on Krylov subspaces We derive conjugate gradient (CG) method
More informationChemnitz Scientific Computing Preprints
Peter Benner, Mohammad-Sahadet Hossain, Tatjana Styel Low-ran iterative methods of periodic projected Lyapunov equations and their application in model reduction of periodic descriptor systems CSC/11-01
More informationModel reduction of large-scale dynamical systems
Model reduction of large-scale dynamical systems Lecture III: Krylov approximation and rational interpolation Thanos Antoulas Rice University and Jacobs University email: aca@rice.edu URL: www.ece.rice.edu/
More informationArnoldi Methods in SLEPc
Scalable Library for Eigenvalue Problem Computations SLEPc Technical Report STR-4 Available at http://slepc.upv.es Arnoldi Methods in SLEPc V. Hernández J. E. Román A. Tomás V. Vidal Last update: October,
More informationNumerical Solution of Matrix Equations Arising in Control of Bilinear and Stochastic Systems
MatTriad 2015 Coimbra September 7 11, 2015 Numerical Solution of Matrix Equations Arising in Control of Bilinear and Stochastic Systems Peter Benner Max Planck Institute for Dynamics of Complex Technical
More informationAdaptive rational Krylov subspaces for large-scale dynamical systems. V. Simoncini
Adaptive rational Krylov subspaces for large-scale dynamical systems V. Simoncini Dipartimento di Matematica, Università di Bologna valeria@dm.unibo.it joint work with Vladimir Druskin, Schlumberger Doll
More informationOn Solving Large Algebraic. Riccati Matrix Equations
International Mathematical Forum, 5, 2010, no. 33, 1637-1644 On Solving Large Algebraic Riccati Matrix Equations Amer Kaabi Department of Basic Science Khoramshahr Marine Science and Technology University
More informationThe rational Krylov subspace for parameter dependent systems. V. Simoncini
The rational Krylov subspace for parameter dependent systems V. Simoncini Dipartimento di Matematica, Università di Bologna valeria.simoncini@unibo.it 1 Given the continuous-time system Motivation. Model
More information