Nonlinear Eigenvalue Problems: Interpolatory Algorithms and Transient Dynamics


Nonlinear Eigenvalue Problems: Interpolatory Algorithms and Transient Dynamics
Mark Embree (Virginia Tech), with Serkan Gugercin, Jonathan Baker, Michael Brennan, Alexander Grimm (Virginia Tech)
SIAM Conference on Applied Linear Algebra, Hong Kong, May 2018
Thanks to: International Linear Algebra Society (ILAS), US National Science Foundation DMS-172257

a talk in two parts...

rational interpolation for nlevps
Rational / Loewner techniques for nonlinear eigenvalue problems, motivated by algorithms from model reduction:
- Structure Preserving Rational Interpolation
- Data-Driven Rational Interpolation Matrix Pencils
- Minimal Realization via Rational Contour Integrals

transients for delay equations
Scalar delay equations: a case study for how one can apply pseudospectra techniques to analyze the transient behavior of a dynamical system. A finite dimensional nonlinear problem corresponds to an infinite dimensional linear problem; pseudospectral theory applies to the linear problem, but the choice of norm is important.

nonlinear eigenvalue problems: the final frontier?

problem | equation | typical # eigenvalues
standard eigenvalue problem | (A - λI)v = 0 | n
generalized eigenvalue problem | (A - λE)v = 0 | n
quadratic eigenvalue problem | (K + λD + λ²M)v = 0 | 2n
polynomial eigenvalue problem | (∑_{k=0}^d λ^k A_k)v = 0 | dn
nonlinear eigenvalue problem | (∑_{k=0}^d f_k(λ) A_k)v = 0 | ?
nonlinear eigenvector problem | F(λ, v) = 0 | ?

a basic nonlinear eigenvalue problem

Consider the simple scalar delay differential equation
x′(t) = -x(t - 1).
Substituting the ansatz x(t) = e^{λt} yields the nonlinear eigenvalue problem
T(λ) = 1 + λe^λ = 0.

[Plot: 32 (of the infinitely many) eigenvalues λ_k of T for this scalar (n = 1) equation, in the left half of the complex plane.]

The eigenvalues are determined by the Lambert W function; see, e.g., [Michiels & Niculescu 2007].
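
Since λe^λ = -1 gives λ = W_k(-1) on each branch k of the Lambert W function, the eigenvalues are easy to check numerically. A minimal sketch using SciPy's `lambertw` (the branch range and tolerance here are my choices, not from the talk):

```python
import numpy as np
from scipy.special import lambertw

# Solutions of 1 + lambda*e^lambda = 0 satisfy lambda*e^lambda = -1,
# i.e. lambda_k = W_k(-1): one eigenvalue per branch k of Lambert W.
eigs = np.array([lambertw(-1, k=k) for k in range(-3, 4)])

# Each branch value should make T(lambda) = 1 + lambda*e^lambda vanish.
residuals = np.abs(1 + eigs * np.exp(eigs))
print(residuals.max())
```

The rightmost eigenvalues W_0(-1) and its conjugate, approximately -0.318 ± 1.337i, match the rightmost points of the plotted spectrum.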

nonlinear eigenvalue problems: many resources

Nonlinear eigenvalue problems have classical roots, but now form a fast-moving field with many excellent resources and new algorithms.

Helpful surveys:
Mehrmann & Voss, GAMM [2004]
Voss, Handbook of Linear Algebra [2014]
Güttel & Tisseur, Acta Numerica survey [2017]

Software:
NLEVP test collection [Betcke, Higham, Mehrmann, Schröder, Tisseur 2013]
SLEPc contains NLEVP algorithm implementations [Roman et al.]

Many algorithms based on Newton's method, rational approximation, linearization, contour integration, projection, etc. Incomplete list of contributors: Asakura, Bai, Betcke, Beyn, Effenberger, Güttel, Ikegami, Jarlebring, Kimura, Kressner, Lietaert, Meerbergen, Michiels, Niculescu, Pérez, Sakurai, Tadano, Van Beeumen, Vandereycken, Voss, Yokota, ....

Infinite dimensional nonlinear spectral problems are even more subtle: [Appell, De Pascale, Vignoli 2004] give seven distinct definitions of the spectrum.

Rational Interpolation Algorithms for Nonlinear Eigenvalue Problems

rational interpolation of functions and systems

Rational interpolation problem. Given points {z_j}_{j=1}^{2r} ⊂ C and data {f_j := f(z_j)}_{j=1}^{2r}, find a rational function R(z) = p(z)/q(z) of type (r-1, r-1) such that R(z_j) = f_j.

Given the Lagrange basis functions and nodal polynomial

ℓ_j(z) = ∏_{k=1, k≠j}^r (z - z_k),   ℓ(z) = ∏_{k=1}^r (z - z_k),

write

R(z) = p(z)/q(z) = (∑_{j=1}^r β_j ℓ_j(z)) / (∑_{j=1}^r w_j ℓ_j(z))
     = (∑_{j=1}^r β_j ℓ_j(z)/ℓ(z)) / (∑_{j=1}^r w_j ℓ_j(z)/ℓ(z))
     = (∑_{j=1}^r β_j/(z - z_j)) / (∑_{j=1}^r w_j/(z - z_j))   (barycentric form).

rational interpolation: barycentric perspective

Fix {β_j = f_j w_j}_{j=1}^r to interpolate at z_1, ..., z_r: R(z_j) = f_j.

Determine w_1, ..., w_r to interpolate at z_{r+1}, ..., z_{2r}:

R(z_k) = (∑_{j=1}^r f_j w_j/(z_k - z_j)) / (∑_{j=1}^r w_j/(z_k - z_j)) = f_k
⟺ ∑_{j=1}^r f_j w_j/(z_k - z_j) = ∑_{j=1}^r f_k w_j/(z_k - z_j)
⟺ ∑_{j=1}^r ((f_j - f_k)/(z_j - z_k)) w_j = 0.

Collecting these r conditions (k = r+1, ..., 2r) gives L w = 0 with the Loewner matrix

L = [ (f_j - f_{r+i})/(z_j - z_{r+i}) ]_{i,j=1}^r,   w = [w_1, ..., w_r]^T.

Barycentric rational interpolation algorithm [Antoulas & Anderson 1986]
AAA (Adaptive Antoulas-Anderson) method [Nakatsukasa, Sète, Trefethen 2016]
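
This Loewner null-vector construction is simple to verify numerically. A sketch with invented data (the degree-3 sample function, points, and tolerances are my choices, not from the talk): because the samples come from a low-order rational function, L is rank deficient, and a null vector supplies interpolating barycentric weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: samples of a degree-3 rational function, so that the
# Loewner matrix has a nontrivial null space (rank(L) = 3 < r).
poles = np.array([3.0, -2.5 + 2j, -2.5 - 2j])
res   = np.array([1.0,  0.5 + 1j,  0.5 - 1j])
rho   = lambda z: (res / (z - poles)).sum()

r  = 5
z  = rng.uniform(-1, 1, 2*r) + 1j * rng.uniform(-1, 1, 2*r)   # 2r points
fz = np.array([rho(zj) for zj in z])

# Loewner matrix L[i, j] = (f_j - f_{r+i}) / (z_j - z_{r+i})
L = (fz[None, :r] - fz[r:, None]) / (z[None, :r] - z[r:, None])

# Barycentric weights: a null vector of L (smallest right singular vector).
w = np.linalg.svd(L)[2].conj()[-1]

def R(x):
    """Barycentric evaluation with beta_j = f_j * w_j."""
    c = w / (x - z[:r])
    return (fz[:r] * c).sum() / c.sum()

# The first r points are interpolated by construction of the beta_j;
# L w = 0 enforces interpolation at the second group of points.
err = max(abs(R(zk) - rho(zk)) for zk in z[r:])
print(err)
```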

rational interpolation: state space perspective

The rational interpolant R(z) to f at z_1, ..., z_{2r} can also be formulated in state-space form using Loewner matrix techniques:

R(z) = c (L_s - z L)^{-1} b,

where c = [f_{r+1}, ..., f_{2r}], b = [f_1, ..., f_r]^T, and

L_s = [ (z_j f_j - z_{r+i} f_{r+i})/(z_j - z_{r+i}) ]_{i,j=1}^r   (shifted Loewner matrix),
L   = [ (f_j - f_{r+i})/(z_j - z_{r+i}) ]_{i,j=1}^r   (Loewner matrix).

State space formulation proposed by Mayo & Antoulas [2007]
Natural approach for handling tangential interpolation for vector data
For details, applications, and extensions, see [Antoulas, Lefteriu, Ionita 2017]

approach one: structure preserving rational interpolation

Scenario: T(λ) ∈ C^{n×n} has large dimension n.
Goal: Reduce the dimension of T(λ) but maintain the nonlinear structure. The smaller problem will be more amenable to dense nonlinear eigensolvers.
Method: Rational tangential interpolation of T(λ)^{-1} at r points and directions.

Iteratively Corrected Rational Interpolation method
- Pick r interpolation points {z_j}_{j=1}^r and interpolation directions {w_j}_{j=1}^r.
- Construct a basis for projection (cf. shift-invert Arnoldi):
  U = orth([T(z_1)^{-1}w_1, T(z_2)^{-1}w_2, ..., T(z_r)^{-1}w_r]) ∈ C^{n×r}.
- Form the reduced-dimension nonlinear system: T_r(λ) := U* T(λ) U ∈ C^{r×r}.
- Compute the spectrum of T_r(λ) and use its eigenvalues and eigenvectors to update {z_j}_{j=1}^r and {w_j}_{j=1}^r; repeat.
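
One projection cycle can be sketched as follows (a toy delay problem of my own choosing; the point-update strategy is omitted). Note that T_r keeps the scalar functions intact and compresses only the coefficient matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
n, r = 50, 4

# Toy nonlinear problem T(lam) = lam*I - A - exp(-lam)*I (a delay example)
A = np.diag(-np.arange(1.0, n + 1))                  # symmetric, stable
T = lambda lam: lam * np.eye(n) - A - np.exp(-lam) * np.eye(n)

z = rng.standard_normal(r) + 1j * rng.standard_normal(r)   # points
W = rng.standard_normal((n, r))                            # directions

# Basis for projection: U = orth([T(z_1)^{-1} w_1, ..., T(z_r)^{-1} w_r])
X = np.column_stack([np.linalg.solve(T(z[j]), W[:, j]) for j in range(r)])
U = np.linalg.qr(X)[0]

# Reduced problem T_r(lam) = U* T(lam) U: same scalar functions,
# compressed coefficient matrices.
Tr = lambda lam: U.conj().T @ T(lam) @ U

# Interpolation property: T(z_j)^{-1} w_j = U T_r(z_j)^{-1} U* w_j
j = 0
lhs = np.linalg.solve(T(z[j]), W[:, j])
rhs = U @ np.linalg.solve(Tr(z[j]), U.conj().T @ W[:, j])
print(np.linalg.norm(lhs - rhs))
```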

approach one: structure preserving rational interpolation

The choice of projection subspace Ran(U) delivers the key interpolation property.

Interpolation Theorem. Provided z_j ∉ σ(T) ∪ σ(T_r) for all j = 1, ..., r,
T(z_j)^{-1} w_j = U T_r(z_j)^{-1} U* w_j.

Inspiration: model reduction for nonlinear systems with coprime factorizations [Beattie & Gugercin 2009]; an iteration like the dominant pole algorithm [Martins, Lima, Pinto 1996], [Rommes & Martins 2006]; IRKA [Gugercin, Antoulas, Beattie 2008].

Illustration. As for all orthogonal projection methods:
T(λ) = f_0(λ) A_0 + f_1(λ) A_1 + f_2(λ) A_2
T_r(λ) = f_0(λ) U*A_0U + f_1(λ) U*A_1U + f_2(λ) U*A_2U

The nonlinear functions f_j remain intact: the structure is preserved.
The coefficients A_j ∈ C^{n×n} are compressed to U*A_jU ∈ C^{r×r}.
Contrast: [Lietaert, Pérez, Vandereycken, Meerbergen 2018+] apply AAA approximation to the f_j(λ) and leave the coefficient matrices intact.

approach one: structure preserving rational interpolation

Example 1. T(λ) = λI - A - e^{-λ}I, where A is symmetric with eigenvalues {-1, -2, ..., -n}.

[Plots: eigenvalues of the full T(λ) and the interpolation points {z_j}, initially and after cycles 1 through 4, with the eigenvalues of the reduced T_r(λ) overlaid; a zoom near the rightmost eigenvalues at the final cycle; and residual norms ‖T(z_j)w_j‖ showing convergence of the r/2 rightmost (z_j, w_j) pairs over 8 cycles.]

r = 16 used at each cycle (new points = real eigenvalues of T_r(λ));
initial {z_j} uniformly distributed on [-10i, 10i], {w_j} selected randomly.

approach one: structure preserving rational interpolation

Example 2. T(λ) = λI - A - e^{-λ}I, where A is symmetric with eigenvalues {-1/n, -4/n, ..., -n}: the tight clustering of the rightmost eigenvalues is a challenge.

[Plots: eigenvalues of the full T(λ), eigenvalues of the reduced T_r(λ) at the final cycle, and the final interpolation points {z_j}, with a zoom near the rightmost eigenvalues; residual norms ‖T(z_j)w_j‖ showing convergence of the r/2 rightmost (z_j, w_j) pairs over 8 cycles.]

r = 16 used at each cycle (new points = real eigenvalues of T_r(λ));
initial {z_j} uniformly distributed on [-10i, 10i], {w_j} selected randomly.

approach two: data-driven rational interpolation

Scenario: T(λ) ∈ C^{n×n} has large dimension n.
Goal: Obtain a small linear matrix pencil that interpolates the nonlinear eigenvalue problem. The smaller problem requires no further linearization.
Method: Data-driven rational interpolation of T(λ)^{-1}.

Data-Driven Rational Interpolation Matrix Pencil method
- Specify interpolation data:
  left points, directions: z_1, ..., z_r ∈ C, w_1, ..., w_r ∈ C^n
  right points, directions: z_{r+1}, ..., z_{2r} ∈ C, w_{r+1}, ..., w_{2r} ∈ C^n
- Construct T_r(λ)^{-1} := C_r(A_r - λE_r)^{-1}B_r to tangentially interpolate T(λ)^{-1}.

Tangential Interpolation Theorem. Provided z_j ∉ σ(T) ∪ σ(T_r),
w_j^T T(z_j)^{-1} = w_j^T T_r(z_j)^{-1},  j = 1, ..., r;
T(z_j)^{-1} w_j = T_r(z_j)^{-1} w_j,  j = r+1, ..., 2r.

approach two: data-driven rational interpolation

Given
left points, directions: z_1, ..., z_r ∈ C, w_1, ..., w_r ∈ C^n
right points, directions: z_{r+1}, ..., z_{2r} ∈ C, w_{r+1}, ..., w_{2r} ∈ C^n

define
left interpolation data: f_1 = T(z_1)^{-T} w_1, ..., f_r = T(z_r)^{-T} w_r
right interpolation data: f_{r+1} = T(z_{r+1})^{-1} w_{r+1}, ..., f_{2r} = T(z_{2r})^{-1} w_{2r}

Order-r (linear) model: T_r(z)^{-1} = C_r(A_r - zE_r)^{-1}B_r, with

C_r = [f_{r+1}, ..., f_{2r}],  B_r = [f_1, ..., f_r]^T,
A_r = [ (z_j f_j^T w_{r+i} - z_{r+i} w_j^T f_{r+i})/(z_j - z_{r+i}) ]_{i,j=1}^r   (shifted Loewner),
E_r = [ (f_j^T w_{r+i} - w_j^T f_{r+i})/(z_j - z_{r+i}) ]_{i,j=1}^r   (Loewner).

approach two: data-driven rational interpolation

Rank-r (linear) model: T_r(z)^{-1} = C_r(A_r - zE_r)^{-1}B_r. The nonlinear resolvent T(z)^{-1}, e.g. (A_0 + f_1(z)A_1 + f_2(z)A_2)^{-1}, is approximated through a linear matrix pencil A_r - zE_r.

approach two: data-driven rational interpolation

Example. T(λ) = λI - A - e^{-λ}I, where A is symmetric with eigenvalues {-1, -2, ..., -n}.

[Plots: eigenvalues of the full T(λ) and eigenvalues of the reduced matrix pencil A_r - zE_r, with a zoomed-out view.]

r = 40 interpolation points used, uniform in the interval [-8i, 8i];
Hermite interpolation variant that only uses r distinct interpolation points;
interpolation directions from the smallest singular values of T(z_j).

approach three: Loewner realization via contour integration

Scenario: Seek all eigenvalues of T(λ) ∈ C^{n×n} in a prescribed region Ω of C.
Goal: Use Keldysh's Theorem to isolate the interesting part of T(λ) in Ω.
Method: Contour integration of T(λ)^{-1} against rational test functions. A Loewner matrix will reveal the number of eigenvalues in Ω.

Theorem [Keldysh 1951]. Suppose T(z) has m eigenvalues λ_1, ..., λ_m (counting multiplicity) in the region Ω ⊂ C, all semi-simple. Then
T(z)^{-1} = V(zI - Λ)^{-1}U* + R(z),
where V = [v_1 ⋯ v_m], U = [u_1 ⋯ u_m], Λ = diag(λ_1, ..., λ_m), u_j* T′(λ_j) v_j = 1, and R(z) is analytic in Ω.

Thus T(z)^{-1} = H(z) + R(z), where H(z) := V(zI - Λ)^{-1}U* is a transfer function for a linear system.

approach three: Loewner approximation via contour integration

T(z)^{-1} = V(zI - Λ)^{-1}U* + R(z): inside Ω, the n × n nonlinear (but nice in Ω) system splits into H(z) := V(zI - Λ)^{-1}U*, the transfer function of a linear system of order m whose m poles lie in Ω, plus the analytic remainder R(z).

approach three: Loewner realization via contour integration

T(z)^{-1} = H(z) + R(z), where H(z) := V(zI - Λ)^{-1}U* is a transfer function for a linear system.

A family of algorithms uses the fact that, by the Cauchy integral formula,
(1/(2πi)) ∮_∂Ω f(z) T(z)^{-1} dz = V f(Λ) U*;
see [Asakura, Sakurai, Tadano, Ikegami, Kimura 2009], [Beyn 2012], [Yokota & Sakurai 2013], etc., building upon contour integral eigensolvers for matrix pencils [Sakurai & Sugiura 2003], [Polizzi 2009], etc. These algorithms use f(z) = z^k for k = 0, 1, ... to produce Hankel matrix pencils.

Key observation: If we use f(z) = 1/(z_j - z) for z_j exterior to Ω, we obtain
(1/(2πi)) ∮_∂Ω T(z)^{-1}/(z_j - z) dz = V(z_j I - Λ)^{-1}U* = H(z_j).
Contour integrals yield measurements of the linear system with the desired eigenvalues.
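
For the scalar delay problem T(λ) = 1 + λe^λ, the key observation is easy to reproduce with the trapezoid rule (the contour, sample point, and N below are my choices). In the scalar case u v = 1/T′(λ), so H(z) is a sum of 1/(T′(λ_k)(z - λ_k)) over the enclosed eigenvalues:

```python
import numpy as np
from scipy.special import lambertw

T  = lambda z: 1 + z * np.exp(z)        # scalar delay NEVP
dT = lambda z: (1 + z) * np.exp(z)      # T'(z)

# Omega: disk of radius 2 about 0; it encloses exactly the two rightmost
# eigenvalues, lambda = W_0(-1) and its conjugate W_{-1}(-1).
N  = 200
th = 2 * np.pi * np.arange(N) / N
zc = 2 * np.exp(1j * th)                # trapezoid nodes on the circle
dz = 2j * np.exp(1j * th) * 2 * np.pi / N

zj = 3.0                                # sample point exterior to Omega
Hq = np.sum(T(zc)**-1 / (zj - zc) * dz) / (2j * np.pi)

# Keldysh: H(z) = sum over enclosed eigenvalues of 1/(T'(lam)(z - lam))
lams = np.array([lambertw(-1, 0), lambertw(-1, -1)])
Hk = np.sum(1 / (dT(lams) * (zj - lams)))
print(abs(Hq - Hk))
```

The trapezoid rule converges exponentially here, so even modest N gives H(z_j) to near machine precision.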

approach three: Loewner realization via contour integration

Minimal Realization via Rational Contour Integrals for m eigenvalues
- Let r ≥ m, and select interpolation points and directions:
  left points, directions: z_1, ..., z_r ∈ C \ Ω, w_1, ..., w_r ∈ C^n
  right points, directions: z_{r+1}, ..., z_{2r} ∈ C \ Ω, w_{r+1}, ..., w_{2r} ∈ C^n
- Use contour integrals to compute the left and right interpolation data:
  left interpolation data: f_1 = H(z_1)^T w_1, ..., f_r = H(z_r)^T w_r
  right interpolation data: f_{r+1} = H(z_{r+1}) w_{r+1}, ..., f_{2r} = H(z_{2r}) w_{2r}
  where H(z_j) w_j = (1/(2πi)) ∮_∂Ω T(z)^{-1} w_j/(z_j - z) dz.
- Construct Loewner and shifted Loewner matrices from these data, just as in the Data-Driven Rational Interpolation method:
  C_r = [f_{r+1}, ..., f_{2r}],  A_r = shifted Loewner matrix,  B_r = [f_1, ..., f_r]^T,  E_r = Loewner matrix.
- If r = m, then V(zI - Λ)^{-1}U* = C_r(A_r - zE_r)^{-1}B_r: compute eigenvalues!
  If r > m, use SVD truncation / minimal realization techniques to reduce the dimension; cf. [Mayo & Antoulas 2007].

approach three: Loewner realization via contour integration

Example. T(λ) = λI − A − e^{−λ} I, where A is symmetric with n = 100; eigenvalues of A = {−1, −2, ..., −n}.

[figure: eigenvalues of the full T(λ), with a zoom for N = 25; 20 interpolation points on the segment 2 + [−6i, 6i]; eigenvalues of the minimal (m = 4) matrix pencil; contour of integration (circle). The trapezoid rule uses N = 25, 50, 100, and 200 quadrature points.]

approach three: Loewner realization via contour integration

[figure, left: jth singular value of L for j = 1, ..., 10: four singular values are O(1) and the rest fall off sharply. figure, right: eigenvalue error |λ_j − λ_j^{(N)}| for N = 25, 50, 100, 200, decreasing toward machine precision as N grows.]

4 eigenvalues in Ω ⟹ rank(L) = 4.

Cf. [Beyn 2012], [Güttel & Tisseur 2017] for f(z) = z^k. For rank detection for Loewner matrices, see [Hokanson 2018+].

Transient Dynamics for Dynamical Systems with Delays
a case study of pseudospectral analysis


introduction to transient dynamics

We often care about eigenvalues because we seek insight about dynamics.

Start with the simple scalar system x′(t) = αx(t), with solution x(t) = e^{tα} x(0). If Re α < 0, then x(t) → 0 monotonically as t → ∞.

[figure: |x(t)| decaying monotonically for t ∈ [0, 4]]


introduction to transient dynamics

We often care about eigenvalues because we seek insight about dynamics.

Now consider the n-dimensional system x′(t) = Ax(t) with solution x(t) = e^{tA} x(0). If Re λ < 0 for all λ ∈ σ(A), then x(t) → 0 asymptotically as t → ∞, but it is possible that ∥x(t_0)∥ ≫ ∥x(0)∥ for some t_0 ∈ (0, ∞).

[figure: ∥x(t)∥ for a 2 × 2 stable example matrix A, showing transient growth before eventual decay on t ∈ [0, 4]]
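This phenomenon is easy to reproduce. The sketch below uses a hypothetical stable, nonnormal 2 × 2 matrix (the entries of the slide's example matrix were lost in extraction, so this A is an illustrative stand-in):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical stable, nonnormal example: both eigenvalues are negative,
# but the large off-diagonal coupling produces transient growth.
A = np.array([[-1.0, 10.0],
              [ 0.0, -2.0]])

ts = np.linspace(0.0, 10.0, 401)
norms = [np.linalg.norm(expm(t * A), 2) for t in ts]

print(max(norms))    # > 1: transient growth despite asymptotic stability
print(norms[-1])     # << 1: eventual decay
```

Every eigenvalue of A lies in the left half-plane, yet ∥e^{tA}∥ rises well above 1 before decaying.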

why transients matter

Often the linear dynamical system x′(t) = Ax(t) arises from linear stability analysis of a fixed point of a nonlinear system y′(t) = F(y(t), t). For example, y′(t) = Ay(t) + (1/2) y(t)².

[figure: y(t) and x(t) compared for a 2 × 2 example matrix A]

In this example, linear transient growth feeds the nonlinearity. Such behavior can provide a mechanism for transition to turbulence in fluid flows; see, e.g., [Butler & Farrell 1992], [Trefethen et al. 1993].


detecting the potential for transient growth

One can draw insight about transient growth from the numerical range (field of values) and ε-pseudospectra of A:

σ_ε(A) = {z ∈ C : ∥(zI − A)^{−1}∥ > 1/ε}
       = {z ∈ C : z ∈ σ(A + E) for some E ∈ C^{n×n} with ∥E∥ < ε}

For upper and lower bounds on ∥x(t)∥, see [Trefethen & E. 2005], e.g.,

sup_{t≥0} ∥e^{tA}∥ ≥ sup_{z ∈ σ_ε(A)} (Re z)/ε.

If σ_ε(A) extends more than ε across the imaginary axis, ∥e^{tA}∥ grows transiently.

[figure, left: σ_ε(A) in the complex plane, with log_10 ε on a color scale. figure, right: ∥e^{tA}∥ versus t, together with the lower bound on sup_{t>0} ∥e^{tA}∥ obtained from σ_ε(A).]

Pseudospectra can guarantee that some x(0) induce transient growth.
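The lower bound above can be evaluated on a grid, using ε(z) = 1/∥(zI − A)^{−1}∥ = s_min(zI − A). A minimal sketch for the same hypothetical 2 × 2 matrix as before (grid limits are illustrative):

```python
import numpy as np
from scipy.linalg import expm

# Evaluate sup_{z in sigma_eps(A)} Re(z)/eps on a grid in the right half-plane,
# with eps(z) = s_min(zI - A).  A is a hypothetical stable, nonnormal example.
A = np.array([[-1.0, 10.0],
              [ 0.0, -2.0]])
I = np.eye(2)

bound = 0.0
for x in np.linspace(0.01, 3.0, 60):           # only Re z > 0 contributes
    for y in np.linspace(-3.0, 3.0, 61):
        smin = np.linalg.svd(complex(x, y) * I - A, compute_uv=False)[-1]
        bound = max(bound, x / smin)            # Re(z) * ||resolvent||

growth = max(np.linalg.norm(expm(t * A), 2) for t in np.linspace(0, 10, 401))
print(bound, growth)    # bound <= growth; bound > 1 certifies transient growth
```

The computed bound exceeds 1, so transient growth is guaranteed, and it correctly sits below the actual maximum of ∥e^{tA}∥.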




two ways to look at pseudospectra

Two equivalent definitions give two distinct perspectives.

perturbed eigenvalues: σ_ε(A) = {z ∈ C : z ∈ σ(A + E) for some E ∈ C^{n×n} with ∥E∥ < ε}

σ_ε(A) contains the eigenvalues of all matrices within distance ε of A. Ideal for assessing asymptotic stability of uncertain systems: Is some matrix near A unstable? Why consider all E ∈ C^{n×n}? Structured pseudospectra further restrict E (real, Toeplitz, etc.). [Hinrichsen & Pritchard], [Karow], [Rump]

norms of resolvents: σ_ε(A) = {z ∈ C : ∥(zI − A)^{−1}∥ > 1/ε}

σ_ε(A) is bounded by the 1/ε level sets of the resolvent norm. Ideal for assessing transient behavior of stable systems: ∥e^{tA}∥ ≫ 1 or ∥A^k∥ ≫ 1? Rooted in semigroup theory: based on the solution operator for the dynamical system; the structure of A plays no role.

These perspectives match for x′(t) = Ax(t), but not for more complicated systems.
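The two definitions are linked constructively: if s = s_min(zI − A) with singular vectors u, v, then E = s·u v* satisfies ∥E∥ = s and makes z an exact eigenvalue of A + E. A small sketch on a hypothetical random 4 × 4 matrix:

```python
import numpy as np

# For any point z, the resolvent-norm definition yields a minimal perturbation
# realizing the eigenvalue definition:  E = s_min * u v^*  moves an eigenvalue
# of A exactly to z, with ||E||_2 = s_min = 1/||(zI - A)^{-1}||_2.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
z = 0.5 + 0.5j                               # arbitrary test point

U, s, Vh = np.linalg.svd(z * np.eye(4) - A)
E = s[-1] * np.outer(U[:, -1], Vh[-1])       # s_min * u v^H

# (A + E) v = z v, where v is the right singular vector for s_min
dist = np.min(np.abs(np.linalg.eigvals(A + E) - z))
print(np.linalg.norm(E, 2), dist)            # ||E|| = s_min, dist ≈ 0
```

This is the standard construction behind the equivalence of the two definitions; no smaller perturbation can move an eigenvalue to z.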

scalar delay equations and the nonlinear eigenvalue problem

We shall apply these ideas to explore the potential for transient growth in solutions of stable delay differential equations.

Solutions of scalar systems x′(t) = αx(t) behave monotonically: |x(t)| = e^{t Re α} |x(0)|. What about scalar delay equations?

x′(t) = αx(t) + βx(t − 1)

Using the techniques seen earlier, we associate this system with the NLEVP (λ − α)e^λ = β, which has infinitely many eigenvalues given by branches of the Lambert W function: λ_k = α + W_k(βe^{−α}).


scalar delay equations and the nonlinear eigenvalue problem

x′(t) = αx(t) + βx(t − 1) ⟹ λ_k = α + W_k(βe^{−α}).

[figure: the eigenvalues {λ_k} for α = −3/4, β = −1, forming a chain in the left half-plane]

Re λ_k < 0 for all k ⟹ asymptotic stability
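The Lambert W characterization is easy to check numerically; scipy exposes all branches. A minimal sketch for the parameters of the example above:

```python
import numpy as np
from scipy.special import lambertw

# Branches of the Lambert W function give the eigenvalues of
# x'(t) = a x(t) + b x(t-1):  lambda_k = a + W_k(b e^{-a}).
# Parameters match the example above (alpha = -3/4, beta = -1).
a, b = -0.75, -1.0
lams = np.array([a + lambertw(b * np.exp(-a), k) for k in range(-5, 6)])

# each branch satisfies the NLEVP  (lambda - a) e^lambda = b
resid = np.abs((lams - a) * np.exp(lams) - b)
print(resid.max())        # tiny: every branch solves the NLEVP
print(lams.real.max())    # < 0: asymptotic stability
```

All computed eigenvalues lie strictly in the left half-plane, confirming asymptotic stability for this (α, β) pair.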

stability chart for two-parameter delay equation

Conventional eigenvalue-based stability analysis reveals the (α, β) combinations that yield asymptotically stable solutions.

[figure: stability chart in the (α, β) plane, showing the stable (α, β) pairs and the unstable (α, β) pairs]

Such stability charts are standard tools for studying the stability of parameter-dependent delay systems.

pseudospectra for nonlinear eigenvalue problems

Green & Wagenknecht [2006] and Michiels, Green, Wagenknecht, & Niculescu [2006] define pseudospectra for nonlinear eigenvalue problems, and apply them to delay differential equations. See [Cullum, Ruehli 2001], [Wagenknecht, Michiels, Green 2008], [Bindel, Hood 2013].

Consider the nonlinear eigenvalue problem T(λ)v = 0 with T(λ) = Σ_{j=1}^m f_j(λ) A_j.

For p, q ∈ [1, ∞] and weights w_1, ..., w_m ∈ (0, ∞], define the norm

∥(E_1, ..., E_m)∥_{p,q} = ∥ (w_1 ∥E_1∥_q, ..., w_m ∥E_m∥_q) ∥_p.

Given this way of measuring a perturbation to T(λ), [MGWN 2006] define

σ_ε(T) = {z ∈ C : z ∈ σ( Σ_{j=1}^m f_j(z) (A_j + E_j) ) for some E_1, ..., E_m ∈ C^{n×n} with ∥(E_1, ..., E_m)∥_{p,q} < ε}.

pseudospectra for the scalar delay equation

x′(t) = αx(t) + βx(t − 1),  T(λ) = λ − α − βe^{−λ}.

[figure: MGWN ε-pseudospectra for α = −3 and β = 1/4, with perturbation norm given by q ∈ [1, ∞] and p = ∞, and w_1 = w_2 = 1.]

pseudospectra for the scalar delay equation

x′(t) = −x(t) + x(t − 1): compare T(λ) = λ + 1 with T(λ) = λ + 1 − e^{−λ}.

[figure: MGWN ε-pseudospectra (log_10 ε color scale) for the two problems, with p = ∞: structure affects pseudospectra.]



the solution operator

To better understand transient behavior, just integrate the differential equation:

x′(t) = αx(t) + βx(t − 1),  history: x(t − 1) = u(t) for t ∈ [0, 1].

Integrate to get, for t ∈ [0, 1],

x′(t) = αx(t) + βu(t)
x(t) = e^{tα} x(0) + β ∫_0^t e^{(t−s)α} u(s) ds
     = e^{tα} u(1) + β ∫_0^t e^{(t−s)α} u(s) ds.

This operation maps the history u to the solution x for t ∈ [0, 1]: u ∈ C([0, 1]) ↦ x ∈ C([0, 1]).



the solution operator

Define the solution operator K : C[0, 1] → C[0, 1] via

x(t) = (Ku)(t) = e^{tα} u(1) + β ∫_0^t e^{(t−s)α} u(s) ds,  t ∈ [0, 1].

Write x^{(m)} = x(t)|_{t ∈ [m−1, m]}, and define x^{(0)} := u on t ∈ [−1, 0].
To advance t by 1 unit, apply K: x^{(1)} := Kx^{(0)}, t ∈ [0, 1].
To advance t by 2 units, apply K²: x^{(2)} := Kx^{(1)} = K²x^{(0)}, t ∈ [1, 2].
...
To advance t by m units, apply K^m: x^{(m)} := Kx^{(m−1)} = K^m x^{(0)}, t ∈ [m−1, m].

View the delay system as a discrete-time dynamical system over 1-unit time intervals: x^{(m)} = K^m x^{(0)}.


discretizing the solution operator

We discretize the solution operator using a Chebyshev pseudospectral method based on [Trefethen 2000]; see [Bueler 2007], [Jarlebring 2008]. With Chebyshev points t_0 = 1 > t_1 > ... > t_N = 0 and Lagrange basis polynomials ℓ_k,

x(t_j) ≈ x_j := e^{t_j α} u_0 + β Σ_{k=0}^N w_{j,k} u_k,  w_{j,k} := ∫_0^{t_j} e^{(t_j − s)α} ℓ_k(s) ds.

In matrix form, [x_0; x_1; ...; x_N] = E_N(α) u + β W_N(α) u, where E_N(α) has [e^{t_0 α}; e^{t_1 α}; ...; e^{t_N α}] as its first column and zeros elsewhere (u_0 = u(1) = x(0)), and W_N(α) has entries w_{j,k}.

K_N := E_N(α) + β W_N(α),  x^{(1)} := K_N u,  x^{(m)} := K_N^m u
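A sketch of this construction, with the weights w_{j,k} computed by Gauss–Legendre quadrature rather than in closed form (N and the test parameters are illustrative; this is not the authors' implementation). The sanity checks use α = 0, β = −1, for which Ku is exactly u(1) − ∫_0^t u, and the spectral radius should match e^{Re λ_0} with λ_0 = W_0(−1):

```python
import numpy as np

# Chebyshev discretization K_N = E_N(alpha) + beta*W_N(alpha)
# of the solution operator for x'(t) = alpha x(t) + beta x(t-1).
def solution_operator(alpha, beta, N=16, quad_pts=60):
    t = (1 + np.cos(np.pi * np.arange(N + 1) / N)) / 2   # Chebyshev points, t[0]=1
    # barycentric weights for the Lagrange basis at the nodes t
    c = np.array([1.0 / np.prod(t[k] - np.delete(t, k)) for k in range(N + 1)])

    def lagrange_all(s):
        """values of all Lagrange basis polynomials at points s (off the nodes)"""
        d = 1.0 / (s[:, None] - t[None, :])
        return (d * c) / (d @ c)[:, None]

    xg, wg = np.polynomial.legendre.leggauss(quad_pts)   # Gauss rule on [-1, 1]
    E = np.zeros((N + 1, N + 1))
    W = np.zeros((N + 1, N + 1))
    for j in range(N + 1):
        E[j, 0] = np.exp(t[j] * alpha)                   # u_0 = u(1) = x(0)
        if t[j] > 0:
            s = 0.5 * t[j] * (xg + 1)                    # map Gauss nodes to [0, t_j]
            ws = 0.5 * t[j] * wg
            W[j, :] = (ws * np.exp((t[j] - s) * alpha)) @ lagrange_all(s)
    return E + beta * W, t

K, t = solution_operator(alpha=0.0, beta=-1.0)
# sanity check: for u ≡ 1, x(t) = 1 - t on [0, 1]
print(np.max(np.abs(K @ np.ones(len(t)) - (1 - t))))
# spectral radius matches e^{Re lambda_0}, lambda_0 = W_0(-1)
print(np.max(np.abs(np.linalg.eigvals(K))))              # ≈ 0.7275
```

Applying K repeatedly then evolves the delay equation one unit of time per matrix-vector product, as on the previous slide.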

approaches to transient analysis of delay equations

Jacob Stroh [2006], in a master's thesis advised by Ed Bueler, computes L²-pseudospectra of Chebyshev discretizations of the compact solution operator and considers nonnormality as a function of a time-varying coefficient in the delay term; our approach closely follows this work.

Green & Wagenknecht [2006], in their paper about perturbation-based pseudospectra for delay equations, describe computing the pseudospectra of the generator of the solution semigroup as a way of gauging transient behavior; for relevant semigroup theory, see, e.g., [Engel & Nagel 2000].

Hood & Bindel [2016+] apply Laplace transform / pseudospectral techniques to the solution operator for delay differential equations to obtain upper and lower bounds on transient behavior.

See also the Lyapunov approach to analyzing transient behavior in the 2005 Ph.D. thesis of Elmar Plischke.

The solution operator approach converts a finite-dimensional nonlinear problem into an infinite-dimensional linear problem, akin to the infinite Arnoldi algorithm [Jarlebring, Meerbergen, Michiels 2010, 2012, 2014].

convergence of the eigenvalues of the solution operator

To study convergence, consider α = 0, β = −1: x′(t) = −x(t − 1).

μ_j^{(N)}: the jth largest magnitude eigenvalue of K_N
e^{λ_j}: λ_j is the jth rightmost eigenvalue of the NLEVP

[figure: eigenvalue error |μ_j^{(N)} − e^{λ_j}| versus N = 4, 8, 16, 32, 64, 128, 256 for j = 1, 3, 5, ..., 23: spectral convergence, with errors falling toward machine precision.]

We generally use N = 64 for our computations throughout what follows.

nonconvergence of the L² pseudospectra of the solution operator

Eigenvalues converge, but the L²[0, 1] pseudospectra of K_N do not: the departure from normality increases with N!

[figure: ε-pseudospectra of K_N (log_10 ε from −1 to −3) for N = 4, 16, 64, 256: the pseudospectra continue to grow as N increases.]

the problem with the L² norm

Problem: The L²[0, 1] norm does not measure transient growth of x(t). One can easily find u such that ∥x∥_{L²[0,1]} ≫ ∥u∥_{L²[0,1]}.

Let α = 0, β = −1: x′(t) = −x(t − 1) ⟹ x(t) = u(1) − ∫_0^t u(s) ds.

[figure: solutions x(t) for t ∈ [0, 6] with the spiked history u(t) = exp(100(t − 1)) on the initial interval: the history is small in L², but the solution is not.]

the problem with the L² norm

[figure: ∥x^{(m)}∥_{L²[0,1]} / ∥u∥_{L²[0,1]} versus m = 1, ..., 12 for the spiked history: the ratio rises well above 1 before decaying.]
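A minimal method-of-steps simulation reproduces this effect; the history below is the spiked function from the figure, and the grid resolution is an illustrative choice:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid, trapezoid

# Method-of-steps simulation of x'(t) = -x(t-1)  (alpha = 0, beta = -1),
# with the spiked history u(t) = exp(100(t-1)): u has small L^2 norm, but
# the solution segments do not, so the L^2 ratio grows transiently.
tau = np.linspace(0.0, 1.0, 2001)        # grid on each unit time interval
seg = np.exp(100 * (tau - 1))            # history segment x^{(0)} = u

l2 = lambda f: np.sqrt(trapezoid(f**2, tau))
norms = [l2(seg)]
for m in range(12):
    # next segment: x(interval start) minus the integral of the previous one
    seg = seg[-1] - cumulative_trapezoid(seg, tau, initial=0.0)
    norms.append(l2(seg))

ratios = np.array(norms) / norms[0]
print(ratios.max())                      # >> 1: transient growth in L^2
```

The ratio peaks after only a few intervals, then decays at the rate set by the spectral radius of K.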


pseudospectra and transient growth of matrix powers

Since we care about the largest value |x(t)| can take, we should really study ∥x^{(m)}∥_{L^∞}, and thus the ε-pseudospectrum σ_ε(K_N) defined using the ∞-norm:

σ_ε(K_N) := {z ∈ C : ∥(zI − K_N)^{−1}∥_∞ > 1/ε}
          = {z ∈ C : z ∈ σ(K_N + E) for some E ∈ C^{n×n} with ∥E∥_∞ < ε}.

Even in Banach spaces, pseudospectra give lower bounds on transient growth; see, e.g., [Trefethen & E. 2005]:

sup_m ∥K^m∥ ≥ sup_{z ∈ σ_ε(K)} (|z| − 1)/ε.

If σ_ε(K) extends more than ε outside the unit disk, ∥K^m∥ grows transiently.

Limitations: [Greenbaum & Trefethen 1994], [Ransford et al. 2007, 2009, 2011]
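The discrete-time bound can be evaluated on a grid just like its continuous-time analogue. The sketch below uses a hypothetical stable, nonnormal 2 × 2 matrix with spectral radius 0.9 (not the discretized solution operator K_N from the slides):

```python
import numpy as np

# Lower bound  sup_m ||K^m|| >= (|z| - 1)/eps  with eps = 1/||(zI - K)^{-1}||,
# here in the infinity-norm, for a hypothetical nonnormal example.
K = np.array([[0.9, 5.0],
              [0.0, 0.8]])
I = np.eye(2)

bound = 0.0
for r in np.linspace(1.01, 1.6, 40):         # only |z| > 1 contributes
    for th in np.linspace(0.0, 2 * np.pi, 80):
        z = r * np.exp(1j * th)
        res = np.linalg.norm(np.linalg.inv(z * I - K), np.inf)
        bound = max(bound, (r - 1.0) * res)

# actual transient growth of the matrix powers
M, growth = np.eye(2), 1.0
for _ in range(200):
    M = M @ K
    growth = max(growth, np.linalg.norm(M, np.inf))
print(bound, growth)     # 1 < bound <= growth: guaranteed transient growth
```

The pseudospectral bound certifies growth of ∥K^m∥ well above 1, and sits below the true maximum, as the theory requires.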

stability versus solution operator norm

x′(t) = αx(t) + βx(t − 1)

[figure: the (α, β) plane, showing the stable (α, β) pairs and the unstable (α, β) pairs. Caption: Stable choices of the (α, β) parameters.]

stability versus solution operator norm

x′(t) = αx(t) + βx(t − 1)

[figure: the (α, β) plane with the curve ρ(K) = 1 marked. Caption: Level sets ρ(K) = 0.1, 0.2, ..., 1.0.]

stability versus solution operator norm

x′(t) = αx(t) + βx(t − 1)

[figure: the (α, β) plane with level sets of ∥K∥ from 0.5 up to 4.5. Caption: Level sets ∥K∥ = 0.5, 1.0, ..., 4.5.]


stability versus solution operator norm

x′(t) = αx(t) + βx(t − 1)

[figure: superimposed level sets for ρ(K) and ∥K∥; the annotation marks the sweet spot for transient growth, where ρ(K) is just below 1 but ∥K∥ is large.]

solution matrix pseudospectra (∞-norm)

x′(t) = αx(t) + βx(t − 1)

[figure: ε-pseudospectra of K (log_10 ε from −1 to −3) for α = 1, β = −1: ρ(K) = 1, ∥K∥ = 4.43632.]

solution matrix pseudospectra (∞-norm)

x′(t) = αx(t) + βx(t − 1)

[figure: ε-pseudospectra of K (log_10 ε from −1 to −3) for α = 0.98995, β = −0.99: ρ(K) = 0.99, ∥K∥ = 4.3824.]

solution matrix pseudospectra (∞-norm)

x′(t) = αx(t) + βx(t − 1)

[figure: ε-pseudospectra of K (log_10 ε from −1 to −3) for α = 0.98995, β = −0.9: ρ(K) = 0.9, ∥K∥ = 3.9135.]

solution matrix pseudospectra (∞-norm)

x′(t) = αx(t) + βx(t − 1)

[figure: zoomed ε-pseudospectra near z = 1, comparing (α, β) = (0.98995, −0.99) with ρ(K) = 0.99, ∥K∥ = 4.3824 against (α, β) = (0.98995, −0.9) with ρ(K) = 0.9, ∥K∥ = 3.9135.]

solution operator: transient growth

x′(t) = αx(t) + βx(t − 1)

[figure: solutions x(t), with the initial condition shown on [−1, 0], for the ρ(K) = 0.9 and ρ(K) = 0.99 parameter choices: the closer ρ(K) is to 1, the larger the transient peak.]

solution operator: transient growth

x′(t) = αx(t) + βx(t − 1)

[figure: |x(t)| on logarithmic axes for ρ(K) = 0.9, 0.99, 0.999: larger and longer-lived transient peaks as ρ(K) → 1.]

As α → 1 and β → −1, solutions exhibit arbitrary transient growth, but slowly.



can scalar equations exhibit stronger transients?

Is faster transient growth possible in a scalar equation if we allow multiple synchronized delays?

x′(t) = c_0 x(t) + c_1 x(t − 1) + c_2 x(t − 2) + ... + c_d x(t − d).

Key: Look for solutions of the form x(t) = t^d e^{λt}.

One can show that x(t) = t^d e^{λt} is a solution if and only if c_0, c_1, ..., c_d solve the Vandermonde linear system

[ 1  1   1   ...   1  ] [ c_0          ]   [  λ ]
[ 0  1   2   ...   d  ] [ e^{−λ}  c_1  ]   [ −1 ]
[ 0  1   4   ...   d² ] [ e^{−2λ} c_2  ] = [  0 ]
[ ...                 ] [ ...          ]   [ ...]
[ 0  1  2^d  ...  d^d ] [ e^{−dλ} c_d  ]   [  0 ]
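This system is straightforward to solve and verify. The sketch below takes λ = ln 0.9 (so that e^λ = 0.9, matching the ρ(K) = 0.9 examples), reproduces the d = 1 coefficients, and checks analytically that x(t) = t^d e^{λt} satisfies the multi-delay equation:

```python
import numpy as np

# Solve the Vandermonde system for the coefficients c_k that make
# x(t) = t^d e^{lambda t} an exact solution of
# x'(t) = c_0 x(t) + c_1 x(t-1) + ... + c_d x(t-d).
def delay_coefficients(d, lam):
    k = np.arange(d + 1)
    V = np.vander(k, increasing=True).T      # rows: k^0, k^1, ..., k^d
    rhs = np.zeros(d + 1)
    rhs[0], rhs[1] = lam, -1.0
    b = np.linalg.solve(V, rhs)              # b_k = c_k e^{-k lambda}
    return b * np.exp(k * lam)

lam = np.log(0.9)                            # e^lambda = 0.9
print(np.round(delay_coefficients(1, lam), 4))   # ≈ [0.8946, -0.9]

# verify: x'(t) - sum_k c_k x(t-k) vanishes identically
d = 3
c = delay_coefficients(d, lam)
t = np.linspace(d, d + 5, 101)
x = lambda s: s**d * np.exp(lam * s)
xp = (d * t**(d - 1) + lam * t**d) * np.exp(lam * t)
resid = xp - sum(c[k] * x(t - k) for k in range(d + 1))
print(np.abs(resid).max())                   # ~ machine precision
```

The same λ with larger d reproduces the coefficient families shown on the following slides.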

commensurate delays can give much larger pseudospectra

x′(t) = c_0 x(t) + c_1 x(t − 1) + ... + c_d x(t − d)

[figure: ε-pseudospectra of the solution operator (log_10 ε from −1 to −3) for d = 1, 2, 3, 4, 5; ρ(K) = 0.9 in each case except d = 5, where ρ(K) = 0.934. The pseudospectra grow substantially with d.]

commensurate delays can induce strong transients

x′(t) = c_0 x(t) + c_1 x(t − 1) + c_2 x(t − 2) + ... + c_d x(t − d)

Initial data: x(t) = 1 + 2e^{10t} for t ≤ 0

[figure: x(t) for d = 1, 2, 3, 4, with coefficients
d = 1: c_0 = 0.8946, c_1 = −0.9
d = 2: c_0 = 1.3946, c_1 = −1.8, c_2 = 0.405
d = 3: c_0 = 1.728, c_1 = −2.7, c_2 = 1.215, c_3 = −0.243
d = 4: c_0 = 1.978, c_1 = −3.6, c_2 = 2.43, c_3 = −0.972, c_4 = 0.164
The transient peak grows with d.]

commensurate delays can induce strong transients

x′(t) = c_0 x(t) + c_1 x(t − 1) + ... + c_d x(t − d)

[figure: |x(t)| on logarithmic axes for d = 5 and d = 10: growth by several orders of magnitude within a short time.]

With commensurate delays, solutions to scalar equations can exhibit significant transient growth very quickly in time.

summary

rational interpolation for nlevps
Rational / Loewner techniques motivated by algorithms from model reduction.
Structure-Preserving Rational Interpolation: iteratively improve projection subspaces via interpolation points and directions.
Data-Driven Rational Interpolation Matrix Pencils: reduce the nonlinear problem to a linear matrix pencil with a tangential interpolation property.
Minimal Realization via Rational Contour Integrals: isolate a transfer function for a linear system; recover it via Loewner minimal realization techniques.

transients for delay equations
Solutions to scalar delay equations can exhibit strong transient growth.
Finite-dimensional nonlinear problem → infinite-dimensional linear problem.
Pseudospectral theory applies to the linear problem, but the choice of norm is important.
Chebyshev collocation keeps the discretization matrix size small.
Adding commensurate delays enables a faster rate of initial transient growth.