Nonlinear Eigenvalue Problems: Interpolatory Algorithms and Transient Dynamics
1 Nonlinear Eigenvalue Problems: Interpolatory Algorithms and Transient Dynamics. Mark Embree (Virginia Tech), with Serkan Gugercin, Jonathan Baker, Michael Brennan, Alexander Grimm (Virginia Tech). SIAM Conference on Applied Linear Algebra, Hong Kong, May 2018. Thanks to: International Linear Algebra Society (ILAS), US National Science Foundation DMS.
2 a talk in two parts... Part 1: rational interpolation for NLEVPs. Rational / Loewner techniques for nonlinear eigenvalue problems, motivated by algorithms from model reduction: Structure-Preserving Rational Interpolation; Data-Driven Rational Interpolation Matrix Pencils; Minimal Realization via Rational Contour Integrals. Part 2: transients for delay equations. Scalar delay equations: a case study in how one can apply pseudospectral techniques to analyze the transient behavior of a dynamical system. A finite-dimensional nonlinear problem corresponds to an infinite-dimensional linear problem; pseudospectral theory applies to the linear problem, but the choice of norm is important.
3 nonlinear eigenvalue problems: the final frontier?

problem | typical # eigenvalues
standard eigenvalue problem: (A − λI)v = 0 | n
generalized eigenvalue problem: (A − λE)v = 0 | n
quadratic eigenvalue problem: (K + λD + λ²M)v = 0 | 2n
polynomial eigenvalue problem: (∑_{k=0}^d λ^k A_k)v = 0 | dn
nonlinear eigenvalue problem: (∑_{k=0}^d f_k(λ) A_k)v = 0 | can be infinite
nonlinear eigenvector problem: F(λ, v) = 0 | —
5 a basic nonlinear eigenvalue problem. Consider the simple scalar delay differential equation x′(t) = −x(t − 1). Substituting the ansatz x(t) = e^{λt} yields the nonlinear eigenvalue problem T(λ) = 1 + λe^λ = 0. [Figure: 32 of the infinitely many eigenvalues of T for this scalar (n = 1) equation, plotted in the complex plane; the eigenvalues are determined by branches of the Lambert W function.] See, e.g., [Michiels & Niculescu 2007].
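These eigenvalues are easy to compute numerically. The sketch below is an illustration, not part of the talk: it finds a few roots of T(λ) = 1 + λe^λ by Newton's method, using asymptotic starting guesses λ ≈ −log(y) + iy with y = π/2 + 2πk (an assumption derived from λe^λ = −1 for large |λ|).

```python
import numpy as np

def delay_eigenvalues(num_roots=3, iters=50):
    """Approximate the rightmost eigenvalues of T(lam) = 1 + lam*exp(lam) = 0
    (from x'(t) = -x(t-1)) by Newton's method.

    The starting guesses lam ~ -log(y) + i*y, y = pi/2 + 2*pi*k, are an
    assumption for this sketch, based on the asymptotics of lam*exp(lam) = -1.
    """
    eigs = []
    for k in range(num_roots):
        y = np.pi / 2 + 2 * np.pi * k
        lam = -np.log(max(y, 1.0)) + 1j * y
        for _ in range(iters):
            T = 1 + lam * np.exp(lam)       # T(lam)
            dT = (1 + lam) * np.exp(lam)    # T'(lam)
            lam = lam - T / dT
        eigs.append(lam)
    return np.array(eigs)

eigs = delay_eigenvalues()
residuals = np.abs(1 + eigs * np.exp(eigs))   # |T(lam)| at the computed roots
```

Conjugate eigenvalues come for free since T has real coefficients, and every computed root has negative real part, consistent with the plotted spectrum.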
6 nonlinear eigenvalue problems: many resources. Nonlinear eigenvalue problems have classical roots, but now form a fast-moving field with many excellent resources and new algorithms. Helpful surveys: Mehrmann & Voss, GAMM-Mitteilungen [2004]; Voss, Handbook of Linear Algebra [2014]; Güttel & Tisseur, Acta Numerica survey [2017]. Software: NLEVP test collection [Betcke, Higham, Mehrmann, Schröder, Tisseur 2013]; SLEPc contains NLEVP algorithm implementations [Roman et al.]. Many algorithms are based on Newton's method, rational approximation, linearization, contour integration, projection, etc. Incomplete list of contributors: Asakura, Bai, Betcke, Beyn, Effenberger, Güttel, Ikegami, Jarlebring, Kimura, Kressner, Lietaert, Meerbergen, Michiels, Niculescu, Pérez, Sakurai, Tadano, Van Beeumen, Vandereycken, Voss, Yokota, .... Infinite-dimensional nonlinear spectral problems are even more subtle: [Appell, De Pascale, Vignoli 2004] give seven distinct definitions of the spectrum.
7 Rational Interpolation Algorithms for Nonlinear Eigenvalue Problems
8 rational interpolation of functions and systems. Rational interpolation problem. Given points {z_j}_{j=1}^{2r} ⊂ C and data {f_j = f(z_j)}_{j=1}^{2r}, find a rational function R(z) = p(z)/q(z) of type (r−1, r−1) such that R(z_j) = f_j.

Given the Lagrange basis functions ℓ_j(z) = ∏_{k=1, k≠j}^r (z − z_k) and the nodal polynomial ℓ(z) = ∏_{k=1}^r (z − z_k),

R(z) = p(z)/q(z) = [∑_{j=1}^r β_j ℓ_j(z)] / [∑_{j=1}^r w_j ℓ_j(z)] = [∑_{j=1}^r β_j ℓ_j(z)/ℓ(z)] / [∑_{j=1}^r w_j ℓ_j(z)/ℓ(z)] = [∑_{j=1}^r β_j/(z − z_j)] / [∑_{j=1}^r w_j/(z − z_j)]   (barycentric form).
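The equivalence of the Lagrange-quotient form and the pole (barycentric) form can be checked directly; the nodes and coefficients below are arbitrary choices made only for this illustration.

```python
import numpy as np

# arbitrary nodes and coefficients, chosen only to test the identity
z_nodes = np.array([0.0, 1.0, 2.0, 4.0])
beta = np.array([1.0, -2.0, 0.5, 3.0])
w = np.array([0.3, 1.0, -0.7, 2.0])

def lagrange_form(z):
    # R(z) = sum(beta_j * ell_j(z)) / sum(w_j * ell_j(z)),
    # with ell_j(z) = prod_{k != j} (z - z_k)
    ell = np.array([np.prod(np.delete(z - z_nodes, j))
                    for j in range(len(z_nodes))])
    return (beta @ ell) / (w @ ell)

def pole_form(z):
    # dividing numerator and denominator by l(z) = prod_k (z - z_k)
    # turns ell_j(z)/l(z) into 1/(z - z_j)
    c = 1.0 / (z - z_nodes)
    return (beta @ c) / (w @ c)

diff = abs(lagrange_form(3.3) - pole_form(3.3))
```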
10 rational interpolation: barycentric perspective. With the Lagrange basis ℓ_j(z) = ∏_{k≠j} (z − z_k),

R(z) = p(z)/q(z) = [∑_{j=1}^r β_j/(z − z_j)] / [∑_{j=1}^r w_j/(z − z_j)].

Fix {β_j = f_j w_j}_{j=1}^r to interpolate at z_1, ..., z_r: R(z_j) = f_j.

Determine w_1, ..., w_r to interpolate at z_{r+1}, ..., z_{2r}: for k = r+1, ..., 2r,

R(z_k) = [∑_{j=1}^r f_j w_j/(z_k − z_j)] / [∑_{j=1}^r w_j/(z_k − z_j)] = f_k ⟺ ∑_{j=1}^r f_j w_j/(z_k − z_j) = ∑_{j=1}^r f_k w_j/(z_k − z_j) ⟺ ∑_{j=1}^r [(f_j − f_k)/(z_j − z_k)] w_j = 0.

Collecting these r conditions gives a homogeneous r × r linear system for the weights,

[ (f_j − f_{r+i})/(z_j − z_{r+i}) ]_{i,j=1}^r [w_1; w_2; ...; w_r] = 0,

whose coefficient matrix is the Loewner matrix L: the weights form a null vector of L.

Barycentric rational interpolation algorithm [Antoulas & Anderson 1986]; AAA (Adaptive Antoulas–Anderson) method [Nakatsukasa, Sète, Trefethen 2016].
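A minimal sketch of this construction (an illustration with an assumed test function f, not from the talk): build the Loewner matrix from the data, take a null vector as the barycentric weights, and check that the resulting interpolant reproduces a low-order rational f away from the nodes.

```python
import numpy as np

def loewner_weights(z_left, z_right, f_left, f_right):
    """Barycentric weights w with L w = 0, where
    L[i, j] = (f_left[j] - f_right[i]) / (z_left[j] - z_right[i])."""
    L = (f_left[None, :] - f_right[:, None]) / (z_left[None, :] - z_right[:, None])
    # the smallest right singular vector spans the (numerical) null space
    _, _, Vh = np.linalg.svd(L)
    return Vh[-1].conj()

def bary_eval(z, nodes, fvals, w):
    c = w / (z - nodes)
    return (fvals @ c) / np.sum(c)

f = lambda z: 1.0 / (1.0 + z**2)                  # rational, type (0,2) <= (2,2)
z = 0.5 * np.exp(2j * np.pi * np.arange(6) / 6)   # 2r = 6 sample points (assumed)
zL, zR = z[:3], z[3:]
w = loewner_weights(zL, zR, f(zL), f(zR))
# the type (2,2) interpolant matches f at all 6 points, hence reproduces f
err = abs(bary_eval(0.1 + 0.2j, zL, f(zL), w) - f(0.1 + 0.2j))
```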
16 rational interpolation: state-space perspective. The rational interpolant R(z) to f at z_1, ..., z_{2r} can also be formulated in state-space form using Loewner matrix techniques:

R(z) = c (L_s − zL)^{−1} b,

where c = [f_{r+1}, ..., f_{2r}], b = [f_1, ..., f_r]^T, and

L_s = [ (z_i f_i − z_{r+j} f_{r+j}) / (z_i − z_{r+j}) ]_{i,j=1}^r   (shifted Loewner matrix),
L = [ (f_i − f_{r+j}) / (z_i − z_{r+j}) ]_{i,j=1}^r   (Loewner matrix).

State-space formulation proposed by Mayo & Antoulas [2007]. Natural approach for handling tangential interpolation for vector data. For details, applications, and extensions, see [Antoulas, Lefteriu, Ionita 2017].
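A sketch of the scalar Loewner pencil (the test function and sample points are assumed for illustration): when f is rational of McMillan degree exactly r, the realization reproduces f, and the generalized eigenvalues of the pencil (L_s, L) recover its poles.

```python
import numpy as np

def loewner_pencil(zL, fL, zR, fR):
    """Scalar Loewner / shifted-Loewner pair (rows: left data, cols: right data)."""
    D = zL[:, None] - zR[None, :]
    L = (fL[:, None] - fR[None, :]) / D
    Ls = (zL[:, None] * fL[:, None] - zR[None, :] * fR[None, :]) / D
    return L, Ls

f = lambda z: 1/(z + 2) + 1/(z - 2) + 1/(z + 0.5)   # McMillan degree 3 (assumed)
zL = np.array([1j, 2j, 3j])                          # left sample points
zR = np.array([-1j, -2j, -3j])                       # right sample points
L, Ls = loewner_pencil(zL, f(zL), zR, f(zR))
b = f(zL)                                            # left data (column)
c = f(zR)                                            # right data (row)

z0 = 0.7 + 0.3j
R = c @ np.linalg.solve(Ls - z0 * L, b)              # R(z0) = c (Ls - z0 L)^{-1} b
err = abs(R - f(z0))

# the pencil (Ls, L) is singular exactly at the poles of f
poles = np.linalg.eigvals(np.linalg.solve(L, Ls))
```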
17 approach one: structure preserving rational interpolation. Scenario: T(λ) ∈ C^{n×n} has large dimension n. Goal: reduce the dimension of T(λ) but maintain the nonlinear structure; the smaller problem will be more amenable to dense nonlinear eigensolvers. Method: rational tangential interpolation of T(λ)^{−1} at r points and directions.

Iteratively Corrected Rational Interpolation method:
- Pick r interpolation points {z_j}_{j=1}^r and interpolation directions {w_j}_{j=1}^r.
- Construct a basis for projection (cf. shift-invert Arnoldi): U = orth([T(z_1)^{−1}w_1, T(z_2)^{−1}w_2, ..., T(z_r)^{−1}w_r]) ∈ C^{n×r}.
- Form the reduced-dimension nonlinear problem: T_r(λ) := U* T(λ) U ∈ C^{r×r}.
- Compute the spectrum of T_r(λ), use its eigenvalues and eigenvectors to update {z_j}_{j=1}^r and {w_j}_{j=1}^r, and repeat.
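The projection step is easy to test numerically. The sketch below uses an assumed diagonal delay example T(λ) = λI − A − e^{−λ}I with arbitrary points and random directions (all illustration choices), and verifies the interpolation property T(z_j)^{−1}w_j = U T_r(z_j)^{−1} U* w_j that the projection subspace delivers.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 100, 4
A = np.diag(-np.arange(1.0, n + 1))             # assumed test matrix
T = lambda z: z * np.eye(n) - A - np.exp(-z) * np.eye(n)

# interpolation points and directions -- arbitrary illustrative choices
z_pts = 1j * np.arange(1.0, r + 1)
W = rng.standard_normal((n, r))

# U = orth([T(z_1)^{-1} w_1, ..., T(z_r)^{-1} w_r])
X = np.column_stack([np.linalg.solve(T(z), w) for z, w in zip(z_pts, W.T)])
U, _ = np.linalg.qr(X)

Tr = lambda z: U.conj().T @ T(z) @ U            # structure-preserving projection

# interpolation property: T(z_j)^{-1} w_j = U T_r(z_j)^{-1} U^* w_j
res = max(np.linalg.norm(np.linalg.solve(T(z), w)
                         - U @ np.linalg.solve(Tr(z), U.conj().T @ w))
          for z, w in zip(z_pts, W.T))
```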
22 approach one: structure preserving rational interpolation. The choice of projection subspace Ran(U) delivers the key interpolation property.

Interpolation Theorem. Provided z_j ∉ σ(T) ∪ σ(T_r) for all j = 1, ..., r, T(z_j)^{−1} w_j = U T_r(z_j)^{−1} U* w_j.

Inspiration: model reduction for nonlinear systems with coprime factorizations [Beattie & Gugercin 2009]; iteration like the dominant pole algorithm [Martins, Lima, Pinto 1996], [Rommes & Martins 2006]; IRKA [Gugercin, Antoulas, Beattie 2008].

Illustration. As for all orthogonal projection methods:
T(λ) = f_0(λ)A_0 + f_1(λ)A_1 + f_2(λ)A_2 ↦ T_r(λ) = f_0(λ)U*A_0U + f_1(λ)U*A_1U + f_2(λ)U*A_2U.
The nonlinear functions f_j remain intact: the structure is preserved. The coefficients A_j ∈ C^{n×n} are compressed to U*A_jU ∈ C^{r×r}. Contrast: [Lietaert, Pérez, Vandereycken, Meerbergen 2018+] apply AAA approximation to the f_j(λ) and leave the coefficient matrices intact.
25 approach one: structure preserving rational interpolation. Example 1. T(λ) = λI − A − e^{−λ}I, where A is symmetric with n = 1; eigenvalues of A = {−1, −2, ..., −n}. r = 16 interpolation points are used at each cycle (new points = real eigenvalues of T_r(λ)); the initial {z_j} are uniformly distributed on [−1i, 1i], and the {w_j} are selected randomly. [Figures, slides 25–31: eigenvalues of the full T(λ) and of the reduced T_r(λ) over successive iterations; the interpolation points {z_j}; a zoom on the rightmost eigenvalues at the final cycle; and the convergence of the residuals T(z_j)w_j for the r/2 rightmost (z_j, w_j) pairs over the cycles.]
32 approach one: structure preserving rational interpolation. Example 2. T(λ) = λI − A − e^{−λ}I, where A is symmetric with n = 1; eigenvalues of A = {−1/n, −4/n, ..., −n}. Tight clustering of the rightmost eigenvalues is a challenge. Same setup as Example 1: r = 16 points per cycle (new points = real eigenvalues of T_r(λ)); initial {z_j} uniformly distributed on [−1i, 1i]; {w_j} selected randomly. [Figures, slides 32–33: eigenvalues of the full T(λ) and of the reduced T_r(λ) at the final cycle, with a zoom near the clustered rightmost eigenvalues; the final interpolation points {z_j}; and the convergence of the residuals T(z_j)w_j for the r/2 rightmost (z_j, w_j) pairs.]
34 approach two: data-driven rational interpolation. Scenario: T(λ) ∈ C^{n×n} has large dimension n. Goal: obtain a small linear matrix pencil that interpolates the nonlinear eigenvalue problem; the smaller problem requires no further linearization. Method: data-driven rational interpolation of T(λ)^{−1}.

Data-Driven Rational Interpolation Matrix Pencil method. Specify interpolation data:
left points, directions: z_1, ..., z_r ∈ C, w_1, ..., w_r ∈ C^n;
right points, directions: z_{r+1}, ..., z_{2r} ∈ C, w_{r+1}, ..., w_{2r} ∈ C^n.
Construct T_r(λ)^{−1} := C_r (A_r − λE_r)^{−1} B_r to tangentially interpolate T(λ)^{−1}.

Tangential Interpolation Theorem. Provided z_j ∉ σ(T) ∪ σ(T_r),
w_j^T T(z_j)^{−1} = w_j^T T_r(z_j)^{−1}, j = 1, ..., r;
T(z_j)^{−1} w_j = T_r(z_j)^{−1} w_j, j = r + 1, ..., 2r.
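The tangential interpolation property can be checked numerically from the Loewner construction, using the left data f_j = T(z_j)^{−T}w_j and right data f_{r+j} = T(z_{r+j})^{−1}w_{r+j}. The sketch below uses an assumed diagonal delay example and arbitrary points and directions, all illustration choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 20, 4
A = np.diag(-np.arange(1.0, n + 1))                 # assumed test problem
T = lambda z: z * np.eye(n) - A - np.exp(-z) * np.eye(n)

zL = 1j * np.array([1.0, 2.0, 3.0, 4.0])            # left points (arbitrary)
zR = 1j * np.array([5.0, 6.0, 7.0, 8.0])            # right points (arbitrary)
WL = rng.standard_normal((n, r))                    # left directions
WR = rng.standard_normal((n, r))                    # right directions

fL = np.column_stack([np.linalg.solve(T(z).T, w) for z, w in zip(zL, WL.T)])
fR = np.column_stack([np.linalg.solve(T(z), w) for z, w in zip(zR, WR.T)])

# Loewner (E_r) and shifted Loewner (A_r) matrices, entrywise
Er = np.empty((r, r), dtype=complex)
Ar = np.empty((r, r), dtype=complex)
for i in range(r):
    for j in range(r):
        Er[i, j] = (fL[:, i] @ WR[:, j] - WL[:, i] @ fR[:, j]) / (zL[i] - zR[j])
        Ar[i, j] = (zL[i] * (fL[:, i] @ WR[:, j])
                    - zR[j] * (WL[:, i] @ fR[:, j])) / (zL[i] - zR[j])

Cr = fR          # C_r = [f_{r+1}, ..., f_{2r}]
Br = fL.T        # B_r = [f_1, ..., f_r]^T

# right tangential interpolation: T_r(z_k)^{-1} w_k = T(z_k)^{-1} w_k
res = max(np.linalg.norm(Cr @ np.linalg.solve(Ar - z * Er, Br @ w)
                         - np.linalg.solve(T(z), w))
          for z, w in zip(zR, WR.T))
```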
37 approach two: data-driven rational interpolation. Given left points and directions z_1, ..., z_r ∈ C, w_1, ..., w_r ∈ C^n, and right points and directions z_{r+1}, ..., z_{2r} ∈ C, w_{r+1}, ..., w_{2r} ∈ C^n, define
left interpolation data: f_1 = T(z_1)^{−T} w_1, ..., f_r = T(z_r)^{−T} w_r;
right interpolation data: f_{r+1} = T(z_{r+1})^{−1} w_{r+1}, ..., f_{2r} = T(z_{2r})^{−1} w_{2r}.

Order-r (linear) model: T_r(z)^{−1} = C_r (A_r − zE_r)^{−1} B_r, with
C_r = [f_{r+1}, ..., f_{2r}], B_r = [f_1, ..., f_r]^T,
A_r = [ (z_i f_i^T w_{r+j} − z_{r+j} w_i^T f_{r+j}) / (z_i − z_{r+j}) ]_{i,j=1}^r   (shifted Loewner),
E_r = [ (f_i^T w_{r+j} − w_i^T f_{r+j}) / (z_i − z_{r+j}) ]_{i,j=1}^r   (Loewner).
40 approach two: data-driven rational interpolation. The rank-r (linear) model T_r(z)^{−1} = C_r (A_r − zE_r)^{−1} B_r approximates T(z)^{−1} — e.g., (A_0 + f_1(z)A_1 + f_2(z)A_2)^{−1} — with a linear matrix pencil.
41 approach two: data-driven rational interpolation. Example. T(λ) = λI − A − e^{−λ}I, where A is symmetric with n = 1; eigenvalues of A = {−1, −2, ..., −n}. [Figures: eigenvalues of the full T(λ) and of the reduced matrix pencil A_r − zE_r, with a zoomed-out view.] r = 4 interpolation points used, uniform in the interval [−8i, 8i]; a Hermite interpolation variant that only uses r distinct interpolation points; interpolation directions from singular vectors associated with the smallest singular values of T(z_j).
42 approach three: Loewner realization via contour integration. Scenario: seek all eigenvalues of T(λ) ∈ C^{n×n} in a prescribed region Ω of C. Goal: use Keldysh's Theorem to isolate the interesting part of T(λ) in Ω. Method: contour integration of T(λ)^{−1} against rational test functions; a Loewner matrix will reveal the number of eigenvalues in Ω.

Theorem [Keldysh 1951]. Suppose T(z) has m eigenvalues λ_1, ..., λ_m (counting multiplicity) in the region Ω ⊂ C, all semi-simple. Then
T(z)^{−1} = V(zI − Λ)^{−1}U* + R(z),
where V = [v_1, ..., v_m], U = [u_1, ..., u_m], Λ = diag(λ_1, ..., λ_m), the eigenvectors are normalized so that u_j* T′(λ_j) v_j = 1, and R(z) is analytic in Ω.

Thus T(z)^{−1} = H(z) + R(z), where H(z) := V(zI − Λ)^{−1}U* is the transfer function of a linear system of order m, with its m poles in Ω; T(z)^{−1} itself is nonlinear, but nice in Ω.
45 approach three: Loewner realization via contour integration. T(z)^{−1} = H(z) + R(z), where H(z) := V(zI − Λ)^{−1}U* is the transfer function of a linear system. A family of algorithms uses the fact that, by the Cauchy integral formula,
(1/2πi) ∮_∂Ω f(z) T(z)^{−1} dz = V f(Λ) U*;
see [Asakura, Sakurai, Tadano, Ikegami, Kimura 2009], [Beyn 2012], [Yokota & Sakurai 2013], etc., building upon contour integral eigensolvers for matrix pencils [Sakurai & Sugiura 2003], [Polizzi 2009], etc. These algorithms use f(z) = z^k for k = 0, 1, ... to produce Hankel matrix pencils.

Key observation: if we use f(z) = 1/(z_j − z) for z_j exterior to Ω, we obtain
(1/2πi) ∮_∂Ω T(z)^{−1}/(z_j − z) dz = V(z_j I − Λ)^{−1}U* = H(z_j).
Contour integrals yield measurements of the linear system with the desired eigenvalues.
47 approach three: Loewner realization via contour integration. Minimal Realization via Rational Contour Integrals (for m eigenvalues). Let r ≥ m, and select interpolation points and directions:
left points, directions: z_1, ..., z_r ∈ C \ Ω, w_1, ..., w_r ∈ C^n;
right points, directions: z_{r+1}, ..., z_{2r} ∈ C \ Ω, w_{r+1}, ..., w_{2r} ∈ C^n.
Use contour integrals to compute the left and right interpolation data:
left interpolation data: f_1 = H(z_1)^T w_1, ..., f_r = H(z_r)^T w_r;
right interpolation data: f_{r+1} = H(z_{r+1}) w_{r+1}, ..., f_{2r} = H(z_{2r}) w_{2r},
where each product is obtained from a contour integral, e.g.,
H(z_j) w_j = (1/2πi) ∮_∂Ω T(z)^{−1} w_j / (z_j − z) dz.
Construct the Loewner and shifted Loewner matrices from this data, just as in the Data-Driven Rational Interpolation method: C_r = [f_{r+1}, ..., f_{2r}], B_r = [f_1, ..., f_r]^T, A_r = shifted Loewner matrix, E_r = Loewner matrix.
If r = m, then V(zI − Λ)^{−1}U* = C_r(A_r − zE_r)^{−1}B_r: compute the eigenvalues from the pencil! If r > m, use SVD truncation / minimal realization techniques to reduce the dimension; cf. [Mayo & Antoulas 2007].
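For the scalar delay example T(z) = 1 + z e^z from earlier, this measurement step can be checked directly (the circle radius, exterior point, and quadrature size below are assumed for illustration): the trapezoid rule on the circle reproduces H(z_j) = ∑_k (z_j − λ_k)^{−1}/T′(λ_k), the Keldysh part built from the two eigenvalues inside the contour.

```python
import numpy as np

T = lambda z: 1 + z * np.exp(z)           # scalar delay NLEVP from earlier
dT = lambda z: (1 + z) * np.exp(z)        # T'(z)

# the two eigenvalues inside |z| < 3 (a conjugate pair), via Newton's method
lam = -0.3 + 1.3j
for _ in range(50):
    lam = lam - T(lam) / dT(lam)
eigs_inside = np.array([lam, np.conj(lam)])

zj = 5.0                                   # exterior evaluation point (assumed)
rho, N = 3.0, 400                          # contour radius, quadrature points
zc = rho * np.exp(2j * np.pi * np.arange(N) / N)
# trapezoid rule for (1/2 pi i) * integral of T(z)^{-1}/(zj - z) dz over |z| = rho
H_quad = np.mean(zc / (T(zc) * (zj - zc)))

# Keldysh: same quantity from the eigenvalues and residues 1/T'(lam_k)
H_direct = sum(1 / (dT(l) * (zj - l)) for l in eigs_inside)
err = abs(H_quad - H_direct)
```

The trapezoid rule converges geometrically here, which is why the slide's N = 25, 50, 100, 200 sweep shows rapidly shrinking eigenvalue errors.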
51 approach three: Loewner realization via contour integration. Example. T(λ) = λI − A − e^{−λ}I, where A is symmetric with n = 1; eigenvalues of A = {−1, −2, ..., −n}. [Figures: eigenvalues of the full T(λ); eigenvalues of the minimal (m = 4) matrix pencil; the circular contour of integration; zoomed view for each N.] Interpolation points in 2 + [−6i, 6i]; the trapezoid rule uses N = 25, 50, 100, and 200 quadrature points.
52 approach three: Loewner realization via contour integration. Example. T(λ) = λI − A − e^{−λ}I, where A is symmetric with n = 1; eigenvalues of A = {−1, −2, ..., −n}. [Figures: jth singular value of L versus j, and eigenvalue errors |λ_j − λ_j^{(N)}| versus N.] There are 4 eigenvalues in Ω, and rank(L) = 4. Cf. [Beyn 2012], [Güttel & Tisseur 2017] for f(z) = z^k. For rank detection for Loewner matrices, see [Hokanson 2018+].
53 Transient Dynamics for Dynamical Systems with Delays: a case study of pseudospectral analysis
54 introduction to transient dynamics. We often care about eigenvalues because we seek insight about dynamics. Start with the simple scalar system x′(t) = αx(t), with solution x(t) = e^{tα}x(0). If Re α < 0, then |x(t)| → 0 monotonically as t → ∞. [Figure: monotone decay of x(t).]

Now consider the n-dimensional system x′(t) = Ax(t), with solution x(t) = e^{tA}x(0). If Re λ < 0 for all λ ∈ σ(A), then ‖x(t)‖ → 0 asymptotically as t → ∞, but it is possible that ‖x(t_0)‖ ≫ ‖x(0)‖ for some t_0 ∈ (0, ∞). [Figure: transient growth of ‖x(t)‖ for a 2 × 2 example A.]
58 why transients matter. Often the linear dynamical system x′(t) = Ax(t) arises from linear stability analysis of a fixed point of a nonlinear system y′(t) = F(y(t), t). [Figure: for a 2 × 2 example, ‖y(t)‖ for the nonlinear system compared with the linearized ‖x(t)‖.] In this example, linear transient growth feeds the nonlinearity. Such behavior can provide a mechanism for transition to turbulence in fluid flows; see, e.g., [Butler & Farrell 1992], [Trefethen et al. 1993].
59 detecting the potential for transient growth. One can draw insight about transient growth from the numerical range (field of values) and the ε-pseudospectra of A:
σ_ε(A) = {z ∈ C : ‖(zI − A)^{−1}‖ > 1/ε}
    = {z ∈ C : z ∈ σ(A + E) for some E ∈ C^{n×n} with ‖E‖ < ε}.
For upper and lower bounds on ‖x(t)‖, see [Trefethen & E. 2005]; e.g.,
sup_t ‖e^{tA}‖ ≥ sup_{z ∈ σ_ε(A)} (Re z)/ε.
If σ_ε(A) extends more than ε across the imaginary axis, ‖e^{tA}‖ grows transiently. [Figure: σ_ε(A) and ‖e^{tA}‖, with the lower bound on sup_{t>0} ‖e^{tA}‖ obtained from σ_ε(A) for several values of ε.] Pseudospectra can guarantee that some x(0) induce transient growth.
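These bounds are easy to explore numerically. The sketch below uses a small non-normal matrix (an assumed example, not the one in the talk) whose eigenvalues lie strictly in the left half-plane, and compares the transient peak of ‖e^{tA}‖ with the resolvent-based lower bound sup_t ‖e^{tA}‖ ≥ Re(z)·‖(zI − A)^{−1}‖ for Re z > 0, which follows from the Laplace-transform representation of the resolvent.

```python
import numpy as np

# assumed 2x2 example: stable (both eigenvalues -1) but highly non-normal
A = np.array([[-1.0, 5.0], [0.0, -1.0]])
N = A + np.eye(2)                           # nilpotent part: A = -I + N, N^2 = 0

def etA_norm(t):
    # exact for this A: e^{tA} = e^{-t} (I + t N) because N is nilpotent
    return np.exp(-t) * np.linalg.norm(np.eye(2) + t * N, 2)

def resolvent_norm(z):
    return np.linalg.norm(np.linalg.inv(z * np.eye(2) - A), 2)

alpha = np.linalg.eigvals(A).real.max()     # spectral abscissa: -1 (stable)
peak = max(etA_norm(t) for t in np.linspace(0.0, 5.0, 501))
# lower bound: sup_t ||e^{tA}|| >= Re(z) * ||(zI - A)^{-1}|| for any Re z > 0
bound = max(x * resolvent_norm(x + 1j * y)
            for x in np.linspace(0.1, 2.0, 20)
            for y in np.linspace(-2.0, 2.0, 21))
```

Despite asymptotic stability, the transient peak exceeds the initial norm, and the resolvent bound certifies this growth without ever computing e^{tA}.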
61 two ways to look at pseudospectra. Two equivalent definitions give two distinct perspectives.

Perturbed eigenvalues: σ_ε(A) = {z ∈ C : z ∈ σ(A + E) for some E ∈ C^{n×n} with ‖E‖ < ε}. σ_ε(A) contains the eigenvalues of all matrices within distance ε of A. Ideal for assessing asymptotic stability of uncertain systems: is some matrix near A unstable? Why consider all E ∈ C^{n×n}? Structured pseudospectra further restrict E (real, Toeplitz, etc.) [Hinrichsen & Pritchard], [Karow], [Rump].

Norms of resolvents: σ_ε(A) = {z ∈ C : ‖(zI − A)^{−1}‖ > 1/ε}. σ_ε(A) is bounded by the 1/ε level sets of the resolvent norm. Ideal for assessing transient behavior of stable systems: ‖e^{tA}‖ ≫ 1 or ‖A^k‖ ≫ 1? Rooted in semigroup theory: based on the solution operator of the dynamical system; the structure of A plays no role.

These perspectives match for x′(t) = Ax(t), but not for more complicated systems.
65 scalar delay equations and the nonlinear eigenvalue problem. We shall apply these ideas to explore the potential for transient growth in solutions of stable delay differential equations. Solutions of scalar systems x′(t) = αx(t) behave monotonically: |x(t)| = e^{t Re α}|x(0)|. What about scalar delay equations, x′(t) = αx(t) + βx(t − 1)? Using the techniques seen earlier, we associate this system with the NLEVP
(λ − α)e^λ = β,
with infinitely many eigenvalues given by branches of the Lambert W function:
λ_k = α + W_k(βe^{−α}).
[Figure: the eigenvalues {λ_k} for α = 3/4 and a fixed β. Since Re λ_k < 0 for all k, the system is asymptotically stable.]
68 stability chart for two-parameter delay equation Conventional eigenvalue-based stability analysis reveals the (α, β) combinations that yield asymptotically stable solutions. [Figure: stability chart in the (α, β) plane, separating the stable (α, β) pairs from the unstable (α, β) pairs.] Such stability charts are standard tools for studying the stability of parameter-dependent delay systems.
69 pseudospectra for nonlinear eigenvalue problems Green & Wagenknecht [2006] and Michiels, Green, Wagenknecht, & Niculescu [2006] define pseudospectra for nonlinear eigenvalue problems, and apply them to delay differential equations. See also [Cullum, Ruehli 2001], [Wagenknecht, Michiels, Green 2008], [Bindel, Hood 2013]. Consider the nonlinear eigenvalue problem T(λ)v = 0 with T(λ) = Σ_{j=1}^m f_j(λ) A_j. For p, q ∈ [1, ∞] and weights w_1, ..., w_m ∈ (0, ∞], define the norm ‖(E_1, ..., E_m)‖_{p,q} = ‖ (w_1 ‖E_1‖_q, ..., w_m ‖E_m‖_q) ‖_p. Given this way of measuring a perturbation to T(λ), [MGWN 2006] define σ_ε(T) = { z ∈ C : z ∈ σ( Σ_{j=1}^m f_j(λ)(A_j + E_j) ) for some E_1, ..., E_m ∈ C^{n×n} with ‖(E_1, ..., E_m)‖_{p,q} < ε }.
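For the scalar delay problem this definition can be made concrete: with n = 1 every E_j is a scalar, and minimizing the weighted p-norm of the perturbation subject to the perturbed characteristic equation gives, by Hölder duality, a pointwise formula for the smallest admissible ε. The sketch below (not from the talk) uses the split f(λ) = (λ, 1, e^{−λ}), A = (1, −α, −β), and p = ∞; the split and the function names are assumptions for illustration.

```python
import cmath

def mgwn_epsilon(z, alpha, beta, weights=(1.0, 1.0, 1.0)):
    # Smallest perturbation size making z an eigenvalue of the perturbed
    # scalar problem sum_j f_j(z)(a_j + e_j) = 0, measured with p = inf:
    # by duality, eps(z) = |T(z)| / sum_j |f_j(z)| / w_j.
    T = z - alpha - beta * cmath.exp(-z)
    f = (z, 1.0, cmath.exp(-z))
    return abs(T) / sum(abs(fj) / wj for fj, wj in zip(f, weights))

alpha, beta = 0.0, -1.0
lam0 = -0.3181315052 + 1.3372357014j   # rightmost eigenvalue, W_0(-1)
eps_at_eig = mgwn_epsilon(lam0, alpha, beta)
eps_far = mgwn_epsilon(5.0 + 0.0j, alpha, beta)
```

A point z belongs to σ_ε(T) exactly when mgwn_epsilon(z) < ε, so sweeping z over a grid and contouring this function draws the pseudospectra: an exact eigenvalue needs no perturbation at all, while a point far from the spectrum needs a large one.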
70 pseudospectra for the scalar delay equation x'(t) = αx(t) + βx(t−1), T(λ) = λ − α − βe^{−λ}. [Figure: MGWN ε-pseudospectra for α = 3 and β = 1, 4, with the perturbation norm given by q ∈ [1, ∞] and p = ∞, and weights w_1 = w_2 = 1.]
71 pseudospectra for the scalar delay equation x'(t) = −x(t) + x(t−1): T(λ) = λ + 1 and T(λ) = λ + 1 − e^{−λ}. [Figure: MGWN ε-pseudospectra with p = ∞ for the two problems, on matching logarithmic color scales: structure affects pseudospectra.]
72 the solution operator To better understand transient behavior, just integrate the differential equation: x'(t) = αx(t) + βx(t−1), history: x(t−1) = u(t) for t ∈ [0, 1].
73–74 the solution operator Integrate to get, for t ∈ [0, 1], x'(t) = αx(t) + βu(t), so x(t) = e^{tα} x(0) + β ∫_0^t e^{(t−s)α} u(s) ds = e^{tα} u(1) + β ∫_0^t e^{(t−s)α} u(s) ds. This operation maps the history u to the solution x for t ∈ [0, 1]: u ∈ C([0, 1]) ↦ x ∈ C([0, 1]).
75–77 the solution operator Define the solution operator K : C[0, 1] → C[0, 1] via x(t) = (Ku)(t) = e^{tα} u(1) + β ∫_0^t e^{(t−s)α} u(s) ds, t ∈ [0, 1]. Write x^(m) := x|_{[m−1, m]} and define x^(0) := u on [−1, 0]. To advance t by 1 unit, apply K: x^(1) := Kx^(0) on [0, 1]; to advance t by 2 units, apply K²: x^(2) := Kx^(1) = K²x^(0) on [1, 2]; ...; to advance t by m units, apply K^m: x^(m) := Kx^(m−1) = K^m x^(0) on [m−1, m]. View the delay system as a discrete-time dynamical system over 1-unit time intervals: x^(m) = K^m x^(0).
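Before discretizing, the action of K can be mimicked with simple quadrature. The following sketch (not from the talk) applies K on a uniform grid with the trapezoid rule; for the model problem x'(t) = −x(t−1) with history u ≡ 1 the exact solution is x(t) = 1 − t on [0, 1] and x(1 + s) = −s + s²/2 on [1, 2].

```python
import math

def apply_K(u, alpha, beta, h):
    # (Ku)(t) = e^{t a} u(1) + beta * int_0^t e^{(t-s) a} u(s) ds,
    # with u sampled on a uniform grid of spacing h over [0, 1].
    x = []
    for j in range(len(u)):
        t = j * h
        integral = 0.0
        for k in range(j):   # trapezoid rule on [0, t_j]
            s0, s1 = k * h, (k + 1) * h
            integral += 0.5 * h * (math.exp((t - s0) * alpha) * u[k]
                                   + math.exp((t - s1) * alpha) * u[k + 1])
        x.append(math.exp(t * alpha) * u[-1] + beta * integral)
    return x

h = 1.0 / 400
u = [1.0] * 401                     # history x(t) = 1 on [-1, 0]
x1 = apply_K(u, 0.0, -1.0, h)       # solution on [0, 1]
x2 = apply_K(x1, 0.0, -1.0, h)      # solution on [1, 2]
```

Because the trapezoid rule is exact for the constant and linear integrands arising here, the grid values of x1 and x2 match the exact piecewise solution to rounding error.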
78–79 discretizing the solution operator We discretize the solution operator using a Chebyshev pseudospectral method based on [Trefethen 2000]; see [Bueler 2007], [Jarlebring 2008]. On the Chebyshev points t_0, ..., t_N in [0, 1], x(t_j) ≈ x_j := e^{t_j α} u(1) + β Σ_{k=0}^N w_{j,k} u_k, where w_{j,k} := ∫_0^{t_j} e^{(t_j − s)α} l_k(s) ds and l_k is the kth Lagrange basis polynomial for the grid. Collecting the exponential terms into E_N(α) and the quadrature weights w_{j,k} into W_N(α) gives x = (E_N(α) + β W_N(α)) u. K_N := E_N(α) + β W_N(α), so x^(1) := K_N u and x^(m) := K_N^m u.
80 approaches to transient analysis of delay equations Jacob Stroh [2006], in a master's thesis advised by Ed Bueler, computes L²-pseudospectra of Chebyshev discretizations of the compact solution operator and considers nonnormality as a function of a time-varying coefficient in the delay term; our approach closely follows this work. Green & Wagenknecht [2006], in their paper about perturbation-based pseudospectra for delay equations, describe computing the pseudospectra of the generator of the solution semigroup as a way of gauging transient behavior; for relevant semigroup theory, see, e.g., [Engel & Nagel 2000]. Hood & Bindel [2016+] apply Laplace transform/pseudospectral techniques to the solution operator for delay differential equations to obtain upper/lower bounds on transient behavior. See also the Lyapunov approach to analyzing transient behavior in the 2005 Ph.D. thesis of Elmar Plischke. The solution operator approach converts a finite dimensional nonlinear problem into an infinite dimensional linear problem, akin to the infinite Arnoldi algorithm [Jarlebring, Meerbergen, Michiels 2010, 2012, 2014].
81 convergence of the eigenvalues of the solution operator To study convergence, consider α = 0, β = −1: x'(t) = −x(t−1). μ_j^(N): the jth largest magnitude eigenvalue of K_N; e^{λ_j}: λ_j is the jth rightmost eigenvalue of the NLEVP. [Figure: eigenvalue error |μ_j^(N) − e^{λ_j}| against N for several j.] We generally use N = 64 for our computations throughout what follows.
82 nonconvergence of the L² pseudospectra of the solution operator Eigenvalues converge, but the L²[0, 1] pseudospectra of K_N do not: the departure from normality increases with N! [Figure: L² pseudospectra of K_N for increasing N, up to N = 256.]
83–84 the problem with the L² norm Problem: The L²(0, 1) norm does not measure transient growth of x(t). One can easily find u such that ‖x‖_{L²[0,1]} ≫ ‖u‖_{L²[0,1]}. Let α = 0, β = −1: x'(t) = −x(t−1) ⟹ x(t) = u(1) − ∫_0^t u(s) ds. [Figures: exponential initial conditions u concentrated near t = 1, the resulting solutions x(t), and the growth of the ratio ‖x^(m)‖_{L²[0,1]} / ‖u‖_{L²[0,1]} with m.]
85–86 pseudospectra and transient growth of matrix powers Since we care about the largest value x(t) can take, we should really study ‖x^(m)‖_{L^∞}, and thus the ε-pseudospectrum σ_ε(K_N) defined using the ∞-norm: σ_ε(K_N) := {z ∈ C : ‖(zI − K_N)^{−1}‖_∞ > 1/ε} = {z ∈ C : z ∈ σ(K_N + E) for some E ∈ C^{n×n} with ‖E‖_∞ < ε}. Even in Banach spaces, pseudospectra give lower bounds on transient growth; see, e.g., [Trefethen & E., 2005]: sup_m ‖K^m‖ ≥ sup_{z ∈ σ_ε(K)} (|z| − 1)/ε. If σ_ε(K) extends more than ε outside the unit disk, ‖K^m‖ grows transiently. Limitations: [Greenbaum & Trefethen 1994], [Ransford et al. 2007, 2009, 2011]
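This lower bound is easy to demonstrate on a small matrix. The sketch below (not from the talk; the matrix is an arbitrary nonnormal stand-in for K_N) checks that for any |z| > 1, sup_m ‖K^m‖ ≥ (|z| − 1)·‖(zI − K)^{−1}‖, here in the ∞-norm.

```python
import numpy as np

# A stable (rho = 0.9) but strongly nonnormal matrix, standing in for K_N.
A = np.array([[0.9, 5.0],
              [0.0, 0.8]])

z = 1.05                                   # any point outside the unit disk
R = np.linalg.inv(z * np.eye(2) - A)       # resolvent (zI - A)^{-1}
eps = 1.0 / np.linalg.norm(R, ord=np.inf)  # z sits on the boundary of the
                                           # eps-pseudospectrum for this eps
bound = (abs(z) - 1.0) / eps               # the guaranteed transient growth

growth = max(np.linalg.norm(np.linalg.matrix_power(A, m), ord=np.inf)
             for m in range(300))
```

For this example the bound guarantees growth to about 7, while ‖A^m‖_∞ actually peaks near 14 (around m = 6) before decaying, so the pseudospectral prediction is attained with room to spare.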
87–91 stability versus solution operator norm x'(t) = αx(t) + βx(t−1) [Figures in the (α, β) plane: the stable parameter region; level sets of the spectral radius, ρ(K) = 0.1, 0.2, ..., 1.0; level sets of the operator norm, ‖K‖ = 0.5, 1.0, ..., 4.5; and the two families superimposed, revealing a sweet spot for transient growth where ρ(K) < 1 but ‖K‖ is large.]
92–95 solution matrix pseudospectra (∞-norm) x'(t) = αx(t) + βx(t−1) [Figures: ∞-norm pseudospectra of K, first at the stability boundary α = 1, β = −1 (where ρ(K) = 1), then for nearby stable parameter pairs with ρ(K) = 0.99 and ρ(K) = 0.9; in each case ‖K‖ exceeds ρ(K).]
96–97 solution operator: transient growth x'(t) = αx(t) + βx(t−1) [Figures: solutions x(t) for stable parameter pairs with ρ(K) = 0.9, ρ(K) = 0.99, and values still closer to 1, each showing transient growth before eventual decay.] As α → 1 and β → −1, solutions exhibit arbitrarily large transient growth, but slowly.
98–100 can scalar equations exhibit stronger transients? Is faster transient growth possible in a scalar equation if we allow multiple synchronized delays? x'(t) = c_0 x(t) + c_1 x(t−1) + c_2 x(t−2) + ... + c_d x(t−d). Key: Look for solutions of the form x(t) = t^d e^{λt}. One can show that x(t) = t^d e^{λt} is a solution if and only if c_0, c_1, ..., c_d satisfy a Vandermonde-type linear system in the scaled unknowns c_0, e^{−λ}c_1, e^{−2λ}c_2, ..., e^{−dλ}c_d, with right-hand side built from λ and d, obtained by matching the coefficients of t^j e^{λt} for j = 0, ..., d.
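The matching conditions can be written out explicitly: with y_k = c_k e^{−kλ}, the coefficient of t^d gives λ = c_0 + Σ_k y_k, and for j < d, Σ_{k=1}^d C(d, j)(−k)^{d−j} y_k equals d when j = d − 1 and 0 otherwise. A sketch (not from the talk; function names are illustrative) that solves this system and verifies the resulting delay equation:

```python
import math

def solve_linear(M, b):
    # Tiny Gaussian elimination with partial pivoting.
    n = len(b)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        for r in range(i + 1, n):
            fac = A[r][i] / A[i][i]
            for c in range(i, n + 1):
                A[r][c] -= fac * A[i][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (A[i][n] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

def commensurate_coeffs(d, lam):
    # Coefficients c_0, ..., c_d so that x(t) = t^d e^{lam t} solves
    # x'(t) = c_0 x(t) + c_1 x(t-1) + ... + c_d x(t-d).
    M = [[math.comb(d, j) * (-k) ** (d - j) for k in range(1, d + 1)]
         for j in range(d)]
    b = [float(d) if j == d - 1 else 0.0 for j in range(d)]
    y = solve_linear(M, b)                  # y_k = c_k e^{-k lam}
    c0 = lam - sum(y)
    return [c0] + [y[k - 1] * math.exp(k * lam) for k in range(1, d + 1)]

lam = math.log(0.9)                         # decay rate e^{lam} = 0.9
c = commensurate_coeffs(4, lam)

# Residual of the delay equation at a sample point, with x = t^4 e^{lam t}:
t0 = 7.3
x = lambda t: t ** 4 * math.exp(lam * t)
resid = (4 * t0 ** 3 + lam * t0 ** 4) * math.exp(lam * t0) \
        - sum(c[k] * x(t0 - k) for k in range(5))
```

For d = 1 the same routine reproduces c_0 = λ + 1 ≈ 0.8946 and |c_1| = e^λ = 0.9, matching the coefficients reported later in the talk.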
101 commensurate delays can give much larger pseudospectra x'(t) = c_0 x(t) + c_1 x(t−1) + ... + c_d x(t−d) [Figure: ∞-norm pseudospectra of K for d = 1, 2, 3, 4, 5, each with coefficients scaled so that ρ(K) = 0.9; the pseudospectra grow markedly with d.]
102 commensurate delays can induce strong transients x'(t) = c_0 x(t) + c_1 x(t−1) + c_2 x(t−2) + ... + c_d x(t−d) [Figures: solutions for a fixed initial history and d = 1, ..., 4, with coefficients from the Vandermonde construction, e.g. d = 1: c_0 = 0.8946, c_1 = −0.9; d = 2: c_1 = −1.8; d = 3: c_1 = −2.7, c_3 = −0.243; d = 4: c_1 = −3.96, c_2 = 2.43, c_3 = −0.972, c_4 = 0.164.]
103 commensurate delays can induce strong transients x'(t) = c_0 x(t) + c_1 x(t−1) + ... + c_d x(t−d) [Figure: x(t) for d = 5 and a smaller d, showing much larger and faster transient growth for the larger d.] With commensurate delays, solutions to scalar equations can exhibit significant transient growth very quickly in time.
104 summary rational interpolation for nlevps Rational / Loewner techniques motivated by algorithms from model reduction. Structure Preserving Rational Interpolation: iteratively improve projection subspaces via interpolation points and directions. Data-Driven Rational Interpolation Matrix Pencils: reduce the nonlinear problem to a linear matrix pencil with a tangential interpolation property. Minimal Realization via Rational Contour Integrals: contour integration isolates the transfer function of a linear system, which is then recovered via Loewner minimal realization techniques. transients for delay equations Solutions to scalar delay equations can exhibit strong transient growth. Finite dimensional nonlinear problem → infinite dimensional linear problem. Pseudospectral theory applies to the linear problem, but the choice of norm is important. Chebyshev collocation keeps the discretization matrix size small. Adding commensurate delays enables a faster rate of initial transient growth.
More informationTHE STABLE EMBEDDING PROBLEM
THE STABLE EMBEDDING PROBLEM R. Zavala Yoé C. Praagman H.L. Trentelman Department of Econometrics, University of Groningen, P.O. Box 800, 9700 AV Groningen, The Netherlands Research Institute for Mathematics
More informationDefinite versus Indefinite Linear Algebra. Christian Mehl Institut für Mathematik TU Berlin Germany. 10th SIAM Conference on Applied Linear Algebra
Definite versus Indefinite Linear Algebra Christian Mehl Institut für Mathematik TU Berlin Germany 10th SIAM Conference on Applied Linear Algebra Monterey Bay Seaside, October 26-29, 2009 Indefinite Linear
More informationStability of an abstract wave equation with delay and a Kelvin Voigt damping
Stability of an abstract wave equation with delay and a Kelvin Voigt damping University of Monastir/UPSAY/LMV-UVSQ Joint work with Serge Nicaise and Cristina Pignotti Outline 1 Problem The idea Stability
More informationNumerical Analysis Preliminary Exam 10 am to 1 pm, August 20, 2018
Numerical Analysis Preliminary Exam 1 am to 1 pm, August 2, 218 Instructions. You have three hours to complete this exam. Submit solutions to four (and no more) of the following six problems. Please start
More informationOn the transient behaviour of stable linear systems
On the transient behaviour of stable linear systems D. Hinrichsen Institut für Dynamische Systeme Universität Bremen D-28334 Bremen Germany dh@math.uni-bremen.de A. J. Pritchard Mathematics Institute University
More informationModel reduction via tangential interpolation
Model reduction via tangential interpolation K. Gallivan, A. Vandendorpe and P. Van Dooren May 14, 2002 1 Introduction Although most of the theory presented in this paper holds for both continuous-time
More informationMATH 583A REVIEW SESSION #1
MATH 583A REVIEW SESSION #1 BOJAN DURICKOVIC 1. Vector Spaces Very quick review of the basic linear algebra concepts (see any linear algebra textbook): (finite dimensional) vector space (or linear space),
More informationLECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel
LECTURE NOTES on ELEMENTARY NUMERICAL METHODS Eusebius Doedel TABLE OF CONTENTS Vector and Matrix Norms 1 Banach Lemma 20 The Numerical Solution of Linear Systems 25 Gauss Elimination 25 Operation Count
More informationEvaluation of Chebyshev pseudospectral methods for third order differential equations
Numerical Algorithms 16 (1997) 255 281 255 Evaluation of Chebyshev pseudospectral methods for third order differential equations Rosemary Renaut a and Yi Su b a Department of Mathematics, Arizona State
More informationMATHEMATICAL ENGINEERING TECHNICAL REPORTS. Eigenvector Error Bound and Perturbation for Polynomial and Rational Eigenvalue Problems
MATHEMATICAL ENGINEERING TECHNICAL REPORTS Eigenvector Error Bound and Perturbation for Polynomial and Rational Eigenvalue Problems Yuji NAKATSUKASA and Françoise TISSEUR METR 2016 04 April 2016 DEPARTMENT
More informationDS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.
DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1
More informationNumerical Analysis for Nonlinear Eigenvalue Problems
Numerical Analysis for Nonlinear Eigenvalue Problems D. Bindel Department of Computer Science Cornell University 14 Sep 29 Outline Nonlinear eigenvalue problems Resonances via nonlinear eigenproblems Sensitivity
More informationNovember 18, 2013 ANALYTIC FUNCTIONAL CALCULUS
November 8, 203 ANALYTIC FUNCTIONAL CALCULUS RODICA D. COSTIN Contents. The spectral projection theorem. Functional calculus 2.. The spectral projection theorem for self-adjoint matrices 2.2. The spectral
More informationAnalysis Preliminary Exam Workshop: Hilbert Spaces
Analysis Preliminary Exam Workshop: Hilbert Spaces 1. Hilbert spaces A Hilbert space H is a complete real or complex inner product space. Consider complex Hilbert spaces for definiteness. If (, ) : H H
More informationVISA: Versatile Impulse Structure Approximation for Time-Domain Linear Macromodeling
VISA: Versatile Impulse Structure Approximation for Time-Domain Linear Macromodeling Chi-Un Lei and Ngai Wong Dept. of Electrical and Electronic Engineering The University of Hong Kong Presented by C.U.
More informationToeplitz matrices. Niranjan U N. May 12, NITK, Surathkal. Definition Toeplitz theory Computational aspects References
Toeplitz matrices Niranjan U N NITK, Surathkal May 12, 2010 Niranjan U N (NITK, Surathkal) Linear Algebra May 12, 2010 1 / 15 1 Definition Toeplitz matrix Circulant matrix 2 Toeplitz theory Boundedness
More informationNumerical Methods in Matrix Computations
Ake Bjorck Numerical Methods in Matrix Computations Springer Contents 1 Direct Methods for Linear Systems 1 1.1 Elements of Matrix Theory 1 1.1.1 Matrix Algebra 2 1.1.2 Vector Spaces 6 1.1.3 Submatrices
More informationFull-State Feedback Design for a Multi-Input System
Full-State Feedback Design for a Multi-Input System A. Introduction The open-loop system is described by the following state space model. x(t) = Ax(t)+Bu(t), y(t) =Cx(t)+Du(t) () 4 8.5 A =, B =.5.5, C
More informationarxiv: v3 [math.na] 27 Jun 2017
PSEUDOSPECTRA OF MATRIX PENCILS FOR TRANSIENT ANALYSIS OF DIFFERENTIAL-ALGEBRAIC EQUATIONS MARK EMBREE AND BLAKE KEELER arxiv:6.44v3 [math.na] 7 Jun 7 Abstract. To understand the solution of a linear,
More informationLecture notes: Applied linear algebra Part 2. Version 1
Lecture notes: Applied linear algebra Part 2. Version 1 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 First, some exercises: xercise 0.1 (2 Points) Another least
More information