Low rank differential equations for Hamiltonian matrix nearness problems


13 December 2013

Low rank differential equations for Hamiltonian matrix nearness problems

Nicola Guglielmi · Daniel Kressner · Christian Lubich

Abstract. For a Hamiltonian matrix with purely imaginary eigenvalues, we aim to determine the nearest Hamiltonian matrix such that some or all eigenvalues leave the imaginary axis. Conversely, for a Hamiltonian matrix with all eigenvalues lying off the imaginary axis, we look for a nearest Hamiltonian matrix that has a pair of imaginary eigenvalues. The Hamiltonian matrices can be allowed to be complex or restricted to be real. Such Hamiltonian matrix nearness problems are motivated by applications such as the analysis of passive control systems. They are closely related to the problem of determining extremal points of Hamiltonian pseudospectra. We obtain a characterization of optimal perturbations, which turn out to be of low rank and are attractive stationary points of low-rank differential equations that we derive. We use a two-level approach, where on the inner level we determine extremal points of the Hamiltonian ε-pseudospectrum for a given ε by following the low-rank differential equations into a stationary point, and on the outer level we optimize for ε. This permits us to give fast algorithms, exhibiting quadratic convergence, for solving the considered Hamiltonian matrix nearness problems.

Keywords: Hamiltonian pseudospectrum, passivity radius, algebraic Riccati equations, low-rank dynamics, differential equations on Stiefel manifolds

Nicola Guglielmi
Dipartimento di Matematica Pura ed Applicata, Università degli Studi di L'Aquila, Via Vetoio - Loc. Coppito, L'Aquila, Italy
guglielm@univaq.it

Daniel Kressner
EPFL-SB-MATHICSE-ANCHP, Station 8, CH-1015 Lausanne, Switzerland
daniel.kressner@epfl.ch

Christian Lubich
Mathematisches Institut, Universität Tübingen, Auf der Morgenstelle 10, Tübingen, Germany
lubich@na.uni-tuebingen.de

Mathematics Subject Classification (2000): 15A18, 65K05, 93B36, 93B40, 49N35, 65F15, 93B52, 93C05

1 Introduction

In this paper we propose and study algorithms for the following optimization problems:

(A) Given a Hamiltonian matrix with no eigenvalues on the imaginary axis, find a nearest Hamiltonian matrix having some purely imaginary eigenvalue.

(B) Given a Hamiltonian matrix with all eigenvalues on the imaginary axis, find a nearest Hamiltonian matrix such that arbitrarily close to that matrix there exist Hamiltonian matrices with eigenvalues off the imaginary axis.

The notion of nearest depends on the choice of norm, for which we consider the matrix 2-norm. The Hamiltonian matrices can be allowed to be complex or restricted to be real. Such Hamiltonian matrix nearness problems arise in several important applications: in the solution of algebraic Riccati equations whose Hamiltonian matrix has eigenvalues on the imaginary axis (see, e.g., [And98, Hes09, PaVL81]), in the passivation of linear time-invariant control systems (see, e.g., [ABKMM11]), and in the stability of gyroscopic systems (see, e.g., [HKLP00]).

Both problems (A) and (B) are closely related to the problem of finding extremal (locally leftmost or rightmost) points of the structured pseudospectrum (for this notion see, e.g., [BK04, WA05, Ru06, HP05, TE05, KKK10, Ka11]) of a given matrix A ∈ M with respect to perturbed matrices on a matrix manifold M ⊂ C^{n×n} that are ε-close to A in a norm ‖·‖:

Λ_ε(A, M, ‖·‖) = { λ ∈ C : λ is an eigenvalue of some B ∈ M with ‖B − A‖ ≤ ε }.

We will always consider the matrix 2-norm ‖·‖ = ‖·‖_2 in this paper. In the Hamiltonian case considered here we take M as the real-linear space of Hamiltonian matrices, i.e., those matrices A (of even dimension n = 2d) for which JA is complex Hermitian (or real symmetric), where

J = [ 0  I_d ; −I_d  0 ].

We recall that the eigenvalues of a Hamiltonian matrix lie symmetric to the imaginary axis.

The paper is organized as follows. In Section 2 we characterize extremal complex Hamiltonian perturbations, which correspond to a locally leftmost or rightmost point in the ε-pseudospectrum. In Section 3 we deal with extremal real Hamiltonian perturbations. In both cases, in analogy to [GL11, GL11b, GL13] (see also [GO11]) we derive low-rank matrix differential equations that

3 Low rank ODEs for Hamiltonian matrix nearness problems 3 have the extremizers as attractive stationary points. The ranks are 2 in the complex case and 4 in the real case. Our approach to solving the Hamiltonian matrix nearness problems (A) and (B) consists of a two-level procedure, where on the inner level we determine an extremizer for a fixed perturbation size ε by following the differential equation into a stationary point, and then on the outer level we optimize over ε. While the matrix nearness problems treated here are of the same type as those studied in [ABKMM11], the algorithmic approach of the present paper is complementary. Whereas [ABKMM11] takes an inner approach by following eigenvalues on the imaginary axis and studying the sign characteristic in coalescent eigenvalues, we take an outer approach where we follow eigenvalues that are off the imaginary axis. We will demonstrate that the outer approach proposed here is very efficient in various example problems, but it is not the purpose of this paper to give an in-depth numerical comparison of the two fundamentally different approaches, each of which has its own merits. From an algorithmic point of view we are interested in robust variants of (A) and (B). We therefore introduce a thin vertical strip symmetric with respect to the imaginary axis and compute distances with respect to this strip instead of the imaginary axis itself, as considered also in [ABKMM11]. In Section 5 we present a fast algorithm for solving problem (A) and in Section 6 for problem (B). The first algorithm computes a Hamiltonian perturbation of minimal norm such that some pair of eigenvalues of a Hamiltonian matrix with purely imaginary spectrum is moved outside a vertical strip symmetric with respect to the imaginary axis, of a preassigned arbitrarily small width. The second algorithm computes a Hamiltonian perturbation of minimal norm such that a pair of eigenvalues of a Hamiltonian matrix A with spectrum bounded away from the imaginary axis is taken inside a given small strip that is symmetric with respect to the imaginary axis. In the examples we shall also deal with the following third problem, which is related to problem (B). (C) Given a Hamiltonian matrix with all eigenvalues on the imaginary axis, find a nearest Hamiltonian matrix such that arbitrarily close to that matrix there exist Hamiltonian matrices with all eigenvalues off the imaginary axis. This is associated to passivity measures. In Section 7 we present in some detail two interesting applications: passivation of linear control systems and gyroscopic stability. There are various extensions to the present work, which we do not report here: While we consider only the matrix 2-norm in this paper, a related but different theory and corresponding algorithms can be given also for the Frobenius norm. Analogous matrix nearness problems to this paper can be studied also for symplectic matrices, where the unit circle assumes the role of the imaginary axis for the Hamiltonian case. Another interesting extension is to Hamiltonian matrices with additional (block) structure, as they arise in control systems.

We will make frequent use of the following standard perturbation result for eigenvalues; see, e.g., [Kat95, Section II.1.1]. Here and in the following, a dot denotes differentiation d/dt.

Lemma 1.1 Consider a differentiable n×n matrix-valued function C(t) for t in a neighborhood of 0. Let λ(t) be an eigenvalue of C(t) converging to a simple eigenvalue λ_0 of C_0 = C(0) as t → 0. Let x_0 and y_0 be left and right eigenvectors, respectively, of C_0 corresponding to λ_0, that is, (C_0 − λ_0 I) y_0 = 0 and x_0* (C_0 − λ_0 I) = 0. Then, x_0* y_0 ≠ 0 and λ(t) is differentiable near t = 0 with

λ̇(0) = x_0* Ċ(0) y_0 / (x_0* y_0).

2 Rank-2 dynamics to extremal points in complex Hamiltonian pseudospectra

In this section we study Hamiltonian perturbations of the given Hamiltonian matrix A that correspond to extremal (locally leftmost or rightmost) points in the Hamiltonian ε-pseudospectrum. It is shown that to each extremal point the corresponding extremizer can be chosen of rank at most 2. We then proceed to derive and study a differential equation on the manifold of Hamiltonian rank-2 matrices that has the rank-2 extremizers as attractive stationary points.

2.1 Extremizers

Theorem 2.1 For a Hamiltonian matrix A ∈ C^{n×n} and ε > 0, let λ ∉ iR be a locally rightmost point in the Hamiltonian 2-norm ε-pseudospectrum, implying that there exists a Hamiltonian matrix E ∈ C^{n×n} of unit 2-norm such that λ is an eigenvalue of A + εE. If λ is a simple eigenvalue of A + εE and not an eigenvalue of A, then there exists a unique Hamiltonian matrix E⋆ of rank 2 such that λ is an eigenvalue of A + εE⋆ with the same left and right eigenvectors as for A + εE. Moreover, the nonzero eigenvalues of JE⋆ are +1 and −1.

The proof of Theorem 2.1 shows that, in addition to the unique extremizer E⋆ of rank 2, there also exist extremizers of any rank > 2. A characterization of the rank-2 extremizers is given in the following result.

Theorem 2.2 Assume that E = Jᵀ V diag(1, −1) V*, where V ∈ C^{n×2} has orthonormal columns. Let λ ∉ iR be a simple eigenvalue of A + εE, with left and right eigenvectors x and y, respectively, both of unit norm and with x*y > 0. Let Y = (Jx, y) ∈ C^{n×2}, and let T ∈ C^{2×2} be defined by

T = R P R*  with  R = V* Y ∈ C^{2×2}  and  P = [ 0  1 ; 1  0 ].

Then the following two statements are equivalent:

5 Low rank ODEs for Hamiltonian matrix nearness problems 5 1. Every differentiable path (E(t),λ(t)) (for small t 0) such that E(t) is Hamiltonian with E(t) 2 1 and λ(t) is an eigenvalue of A+εE(t), with E(0) = E and λ(0) = λ, has Re λ(0) ( 0. ) +τ Y and V have the same range, and T = with τ 0 τ 1,τ 2 > 0. 2 We remark that 1. is satisfied at points on the boundary of the Hamiltonian ε-pseudospectrum with a vertical tangent and the pseudospectrum lying locally to the left. This includes in particular locally rightmost points of the pseudospectrum, but includes also locally leftmost boundary points on the right boundary or inflection points with a vertical tangent. Proof (of Theorem 2.1) (a) Let V C n n be a unitary matrix obtained from a QR factorization of Y = (Jx,y): ( ) Y = V R with R C Let E(t), for small t 0, be a continuously differentiable path on the set of Hamiltonian matrices of 2-norm at most 1, with E(0) = E, which we write as with a hermitian matrix Ŝ(t) C n n = E(t) = J T VŜ(t) V ( ) S11 (t) S21(t), S S 21 (t) S 22 (t) 11 (t) C 2 2. We can choose the first 2 columns of the unitary matrix V such that ( ) σ1 0 S 11 (0) =. 0 σ 2 Clearly, σ 1,σ 2 are real and have absolute value at most 1. In the following, we will in fact show that σ 1 = σ 2 = 1 and that σ 1,σ 2 have opposite sign. (b) Let λ(t), for small t 0, be the continuously differentiable path of eigenvalues of A+εE(t) with λ(0) = λ. Since λ is locally rightmost, we have 0 Re λ(0) = ε x y Re(x Ė(0)y). Denoting by A,B = trace(a B) the standard matrix inner product and by A herm = 1 2 (A+A ) the hermitian part of a matrix, we observe 0 Re(x Ė(0)y) = xy,ė(0) = Jxy,JĖ(0) = (Jxy ) herm,jė(0) = 1 2 YPY, V Ŝ(0) V = 1 2 V YPY V, Ŝ(0) = 1 2 RPR,Ṡ11(0) = 1 2 T,Ṡ11(0). (2.1) This inequality holds for every path Ŝ(t) of unit norm, which in turn implies that it holds for every path S 11 (t) of norm at most one.

6 6 Nicola Guglielmi et al. Suppose that one of the diagonal entries of S 11 (0), say σ 2, has absolute value less than one. For the moment, let us additionally assume that σ 1 σ 2. Then we consider the following three different paths of norm at most one, locally for small t 0: S (1) 11 (t) = ( σ1 0 0 σ 2 ±t ) ( ) ( ), S (2) a(t) ±t 11 (t) =, S (3) a(t) it ±t b(t) 11 (t) =, ±it b(t) where in the latter two cases a(t),b(t) are chosen such that the matrices have the eigenvalues σ 1 and σ 2 : a(t) = σ 1 +σ 2 (σ1 σ 2 ) + 2 t 2 4 2, b(t) = σ 1 +σ 2 a(t). In particular, ȧ(0) = ḃ(0) = 0. We then obtain from (2.1) that T,Ṡ(1) 11 (0) = ±t 22 = t 22 = 0, T,Ṡ(2) 11 (0) = ±(t 21 +t 21 ) = Re(t 21 ) = 0, T,Ṡ(3) 11 (0) = ±(t 21 t 21 ) = Im(t 21 ) = 0. In particular, this means that T = RPR is singular and hence R itself is singular. By the definition of R, this would imply that the eigenvectors Jx and y arelinearlydependent.hence,jxwouldbearighteigenvectorofh = A+εE to the eigenvalue λ. On the other hand, since H is Hamiltonian, Jx is a right eigenvalue of H to the eigenvalue λ, because HJx = JH x = J λ x = λ Jx. We would thus have λ = λ, so that λ is on the imaginary axis, in contradiction to our assumption. Therefore, σ 1 = σ 2 = 1. The same conclusion holds when σ 1 = σ 2, using appropriately modified paths. (c) Now suppose that σ 1 and σ 2 have the same sign, so that S 11 (0) = ±I 2. We then choose S 11 (t) = ±e tm with a hermitian positive definite matrix M, so that Ṡ11(0) = M. The inequality 0 T,Ṡ11(0) = T,M would then hold for every positive definite hermitian M, which would imply that T is positive semi-definite (negative semi-definite). This is not possible, as P and therefore also T = RPR is an indefinite nonsingular matrix. In view of this contradiction, S 11 (0) must have two eigenvalues of different sign, +1 and 1. (d) Finally, σ 1 = σ 2 = 1 implies that S 21 (0) = 0, since otherwise one of the first two columns of Ŝ(0) has norm larger than one, in contradiction to Ŝ(0) 2 = 1. This shows that A+εE, with ( ) E = J T S11 (0) 0 ˆV ˆV, 0 0

7 Low rank ODEs for Hamiltonian matrix nearness problems 7 satisfies (A+εE )y = (A+εE)y = λ y and x (A+εE ) = x (A+εE) = λ x. The rank-2 matrix E is unique because the block structure of Ŝ(0) and the orthogonality of the columns of V show that for every other rank-2 reduction of E(t) we have Ey = 0 or EJx = 0, which would imply that λ is an eigenvalue of A, contrary to the assumption. Proof (of Theorem 2.2) 1. implies 2.: We know from the proof of Theorem 2.1 (that proof uses precisely condition 1. rather than the slightly stronger property of λ being locally rightmost) that Y and V have the same range, and we write Y = VR with an invertible 2 2 matrix R. We note that T = RPR is the matrix of which we want to show that it is diagonal with entries of different sign. Let us denote ( ) 1 0 D =. 0 1 We consider the path of matrices of unit 2-norm JE(t) = Ve tz De tz V with a skew-hermitian matrix Z C 2 2 and the corresponding path of eigenvalues λ(t) of A+εE(t) with λ(0) = λ. By 1., we have 0 Re(x Ė(0)y) = YPY,Ė(0) = T,ZD DZ, which holds for every skew-hermitian matrix Z. The matrix ZD DZ is hermitian with zero diagonal. It thus follows that T is diagonal. Since T = RPR is hermitian, indefinite and nonsingular, its diagonal entries must be real and have opposite sign. Using the paths E(t) = Ve tm De tm V with a diagonal positive semi-definite matrix M we then conclude from 0 Re(x Ė(0)y) = T,MD +DM that the(1,1) entryoft is positive andthe(2,2) entryis negative. This proves implies 1.: Let E(t), for small t 0, be a continuously differentiable path of Hamiltonian matrices of 2-norm at most one, with E(0) = J T VDV. We extend V C n 2 to a unitary matrix V C n n and write V = (V,V ). We define the hermitian matrix Ŝ(t) by writing E(t) as E(t) = J T VŜ(t) V. ( ) With the matrix T = RPR = V YPY τ1 0 V satisfying T = with 0 τ 2 τ 1,τ 2 > 0, and with S(t) denoting the upper left 2 2 block of Ŝ(t), we have

8 8 Nicola Guglielmi et al. as in (2.1) Re(x Ė(0)y) = YPY, V Ŝ(0) V ( ) R = P(R,0), Ŝ(0) = RPR,Ṡ(0) = T,Ṡ(0) 0 = τ 1 ṡ 11 (0) τ 2 ṡ 22 (0) 0, since s 11 (0) = 1 and s 22 (0) = 1 imply ṡ 11 (0) 0 and ṡ 22 (0) 0. This yields statement Rank-2 dynamics for the complex case Transfering the approach of [GL11, GL13] to the Hamiltonian case, we search for a differential equation for the perturbation E(t) in A+εE(t) such that the extremizers corresponding to locally rightmost points in the Hamiltonian 2- norm pseudospectrum are attractive stationary points of the differential equation. Since Theorem 2.1 yields the existence of extremizers E of rank 2 with the property that the two nonzero eigenvalues of JE are ±1, we search for a differential equation on the manifold M = {E C n n : E is Hamiltonian of rank 2 and the nonzero eigenvalues of JE are equal to +1 and 1}. We represent matrices in M in a non-unique way as JE = VQV, (2.2) where V C n 2 has orthonormal columns and Q C 2 2 is a hermitian unitary matrix with eigenvalues +1 and 1. Such a Q can be written as a complex Householder matrix Q = I 2 2uu, u C 2, u u = 1. (2.3) For a given choice of V and Q, tangent matrices Ė T E M can then be uniquely written as (cf. [KL07]) JĖ = VQV +V QV +VQ V with V V = 0, u u = 0. (2.4) We have given up a unique representation of E for the benefit of the very useful orthogonality relations for V and u. Our objective is to find differential equations for u(t) and V(t) along which an eigenvalue (most interestingly, the rightmost eigenvalue) λ(t) of A + εe(t) increases monotonically with t and which have attractive stationary points where the extremality condition of Theorem 2.2 is satisfied. Assuming that λ(t) is simple for all t under consideration, we denote the left and right eigenvectors by x(t) and y(t), respectively, both of unit norm and with x(t) y(t) > 0. We let Y(t) = (Jx(t),y(t)). We omit the argument t in the following when its presence is clear from the context.
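To make the representation (2.2)-(2.3) concrete, the following short Python sketch (ours, not from the paper; NumPy only, with names chosen for illustration) builds E = Jᵀ V Q V* with a complex Householder reflector Q = I_2 − 2uu* and checks the properties used above: JE is Hermitian (so E is Hamiltonian), its nonzero eigenvalues are +1 and −1, and ‖E‖_2 = 1.

import numpy as np

def hamiltonian_rank2_perturbation(V, u, J):
    # E = J^T V Q V^* with the complex Householder reflector Q = I_2 - 2 u u^*.
    # V has orthonormal columns (n x 2), u is a unit vector in C^2.
    Q = np.eye(2) - 2.0 * np.outer(u, u.conj())
    return J.T @ (V @ Q @ V.conj().T)

# small self-check on a random instance (d = 3, n = 6)
rng = np.random.default_rng(0)
d = 3; n = 2 * d
J = np.block([[np.zeros((d, d)), np.eye(d)], [-np.eye(d), np.zeros((d, d))]])
V, _ = np.linalg.qr(rng.standard_normal((n, 2)) + 1j * rng.standard_normal((n, 2)))
u = rng.standard_normal(2) + 1j * rng.standard_normal(2); u /= np.linalg.norm(u)

E = hamiltonian_rank2_perturbation(V, u, J)
JE = J @ E
print(np.allclose(JE, JE.conj().T))                  # JE Hermitian, hence E Hamiltonian
print(np.round(np.sort(np.linalg.eigvalsh(JE)), 8))  # nonzero eigenvalues: -1 and +1
print(np.isclose(np.linalg.norm(E, 2), 1.0))         # unit 2-norm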

9 Low rank ODEs for Hamiltonian matrix nearness problems 9 Motivated by the orthogonality relations in (2.4) and the objective that Y and V have the same range when V = 0, we make the ansatz, with as yet unknown f C 2 and G C 2 2, u = (I 2 uu )f V = (I n VV )YG. (2.5) We call a tangent matrix Ė T EM admissible if Ė is of the form (2.4) with (2.5) and satisfies the normalizing constraint that Ė has Frobenius norm at most one. The set of admissible tangent matrices at E M is denoted by A E = {Z T E M : Z is admissible}. Given E(t) M at some t, we want to move in an admissible direction Ė(t) T E(t) M such that Re λ(t) is as large as possible. We thus maximize Re λ = εre(x Ėy)/x y (see Lemma 1.1) over the admissible set A E. Since Ė = 0 is admissible, the maximum value will be nonnegative. In this context we have the following result. (Note that in this lemma the dots are not time derivatives, but just suggestive notational symbols.) Lemma 2.1 Let A be a Hamiltonian matrix, let E M with the factorization (2.2) (2.3), and ε > 0. Let λ be a simple eigenvalue of A+εE with left and right eigenvectors x and y, respectively, both of unit norm and with x y > 0. Then, among all admissible tangent matrices Z A E, Re(x Zy) is maximal for Z = J 1 ( VQV +V QV +VQ V ), where Q = 2( uu +u u ), and u and V are of the form (2.5) with f = 1 4µ Tu, G = 1 2µ PR Q. (2.6) ( ) 0 1 Here P =, T = RPR 1 0 C 2 2 with R = V Y for Y = (Jx,y) C n 2. If u or V are different from zero for any scaling factor µ > 0 then µ is chosen such that Z has unit Frobenius norm. Otherwise, µ can be chosen arbitrarily. Proof Inserting (2.4), we obtain for any Ė A E that Re(x Ėy) = (Jxy ) herm,jė = 1 2 YPY,JĖ = 1 2 YPY, VQV +V QV +VQ V = 1 2 YPR Q, V T, Q QRPY, V = 1 2 T, Q +Re YPR Q, V. We further note that, with the hermitian positive semi-definite 2 2 matrices L = I 2 uu, M = Y (I n VV )Y,

10 10 Nicola Guglielmi et al. we have by (2.5) 1 2 T, Q = T, uu +u u = 2Re Tu, u = 2Re LTu,f Re YPR Q, V = Re MPR Q,G. The normalizing constraint becomes, on inserting (2.4), 1 JĖ 2 F = VQV 2 F + V QV 2 F + VQ V 2 F = Q 2 F +2 V 2 F. Because of (2.5) we obtain Q 2 F = 4 u u + uu 2 F = 8 u 2 2 = 8f Lf V 2 F = trace(g MG), where we used L 2 = L and (I n VV ) 2 = (I n VV ). Summing up, we want to find f C 2 and G C 2 2 that maximize under the quadratic constraint 2Re LTu,f +Re MPR Q,G 1 8f Lf +2trace(G MG). We thus have an optimization problem for z = vec(f,g) of the type Re(c z) max, z Kz 1 (2.7) with a symmetric positive semi-definite matrix K. Note that c is in the range of K, that is, there is z with c = K z. For c 0, the solution of (2.7) is thus given by z = z/µ with µ = z K z > 0. For c = 0, we can still choose z = z/µ = 0 with an arbitrary µ. In our situation, c = ( 2LTu MPR Q ) which yields the result. z = 1 2 ( 1 2 Tu PR Q ) ( ) f = 1 ( 1 G 2µ 2 Tu PR Q Lemma 2.1 leads us to the following system of differential equations for u(t) and V(t) where we rescale time by choosing µ = 1/2 in (2.6) (a different factor µ > 0 just leads to a different scaling of time, dt = µdt, which is irrelevant in the following): ), u = 1 2 (I 2 uu )Tu V = (I n VV )YPY VQ. (2.8) Here Y(t) = (Jx(t),y(t)) contains the left and right eigenvectors x(t) and y(t), of unit norm and with x(t) y(t) > 0, corresponding to a simple eigenvalue λ(t) of A + εe(t) with JE(t) = V(t)Q(t)V(t) ( and ) Q(t) = I 2 2u(t)u(t) with 0 1 u(t) C 2 of unit norm. Moreover, P = and T(t) = (V 1 0 YPY V)(t). We then have the following monotonicity result.

11 Low rank ODEs for Hamiltonian matrix nearness problems 11 Theorem 2.3 Let E(t) = J T V(t)(I 2u(t)u(t) )V(t) with u(t),v(t) satisfying the differential equations (2.8) and with initial values such that u(0) C 2 has unit Euclidean norm and V(0) C n 2 has orthonormal columns. If the eigenvalue λ(t) of A+εE(t) is simple, then Re λ(t) 0 with equality only if u = 0 and V = 0. Proof From the proof of Lemma 2.1 and the choice of the scaling factor µ = 1 2 we have the identity (x y)re λ = Re(x Ėy) = 1 2 T, Q +Re YPR Q, V = 2Re LTu,f +Re MPR Q,G = LTu 2 + MPR Q 2 F = 4 u 2 + V 2 F. Since x y > 0, this yields the stated result. Since Re λ(t) is bounded for all t (by the ε-pseudospectral abscissa) and Re λ(t) 0, we must have Re λ(t) 0 as t. Since 0 < x y 1, the above formula implies that u(t) and V(t) must tend to zero as t. In the following result we make the generic assumption that Y V C 2 2 is invertible. We note that if this were not satisfied, then Jx and y would span an invariant subspace of the unperturbed matrix A (in addition to A+εE). As the following theorem shows in comparison with Theorem 2.2, stationary points of (2.8) are extremizers. Let u be a vector of unit norm orthogonal to u, and let T = (u,u) T(u,u). Theorem 2.4 Suppose that Y V is invertible. If λ / ir, then (u,v) is a stationary point of (2.8) if and only if Y and V have the same range and T is diagonal. ( ) 1 0 We note that the transformation Ṽ = V(u,u) yields JE = Ṽ Ṽ 0 1, and therefore T corresponds to T of Theorem 2.2. T has the same eigenvalues as the hermitian matrix T = RPR, which are of different sign. Proof From (2.8) it is immediate that (V,u) is a stationary point if and only if the range of Y is contained in the range of V and (u ) Tu = 0. Since Y has rank 2 for an eigenvalue λ / ir, this yields the result.
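As an illustration of how the right-hand side of (2.8) can be evaluated in practice, the following Python sketch (ours; NumPy/SciPy; function names are not from the paper) computes the rightmost eigentriple of A + εE, assembles Y = (Jx, y), T = V* Y P Y* V with P = [0 1; 1 0], and returns u̇ and V̇. For a large sparse matrix the dense eigensolver used here would be replaced by an iterative method such as the implicitly restarted Arnoldi method mentioned in Section 4.

import numpy as np
import scipy.linalg as sla

P2 = np.array([[0.0, 1.0], [1.0, 0.0]])

def rightmost_eigentriple(M):
    # Rightmost eigenvalue of M with unit-norm left/right eigenvectors x, y,
    # phase-normalized so that x^H y > 0 (as assumed throughout the paper).
    w, vl, vr = sla.eig(M, left=True, right=True)
    k = np.argmax(w.real)
    x = vl[:, k] / np.linalg.norm(vl[:, k])
    y = vr[:, k] / np.linalg.norm(vr[:, k])
    s = np.vdot(x, y)                    # x^H y
    y = y * (s.conj() / abs(s))          # now x^H y = |s| > 0
    return w[k], x, y

def rank2_rhs(A, eps, J, V, u):
    # Right-hand side (u_dot, V_dot) of the rank-2 system (2.8).
    Q = np.eye(2) - 2.0 * np.outer(u, u.conj())
    E = J.T @ (V @ Q @ V.conj().T)
    lam, x, y = rightmost_eigentriple(A + eps * E)
    Y = np.column_stack((J @ x, y))
    T = V.conj().T @ Y @ P2 @ Y.conj().T @ V
    u_dot = 0.5 * (T @ u - u * np.vdot(u, T @ u))   # (1/2) (I - u u^H) T u
    W = Y @ P2 @ (Y.conj().T @ V) @ Q
    V_dot = W - V @ (V.conj().T @ W)                # (I - V V^H) Y P Y^H V Q
    return u_dot, V_dot, lam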

2.3 An illustrative example

Consider the Hamiltonian matrix A given in (2.9). Its eigenvalues are given by ±0.4060i and two further purely imaginary pairs. In Figure 2.1 we show the boundary of the Hamiltonian ε-pseudospectrum, computed by Structured Eigtool [KKK10], and the path of rightmost eigenvalues of A + εE(t) for ε = 0.1, where E(t) = Jᵀ V(t)(I − 2u(t)u(t)*)V(t)* is computed by solving (2.8).

Fig. 2.1 Hamiltonian pseudospectrum of the matrix A in Example (2.9) for ε = 0.1 and trajectory of the rightmost eigenvalue λ(t) (in blue) associated with the solution of (2.8). Right picture: zoom close to the rightmost point.

We note in the right picture of Figure 2.1 that the trajectory approaches the rightmost point of the Hamiltonian ε-pseudospectrum tangentially, as happens in the unstructured ε-pseudospectrum case. This can be explained by analogous arguments.

13 Low rank ODEs for Hamiltonian matrix nearness problems 13 3 Rank-4 dynamics to extremal points in real Hamiltonian pseudospectra 3.1 Extremizers We have the following real analog of Theorem 2.1. Theorem 3.1 For a Hamiltonian matrix A R n n and ε > 0, let λ / R ir be a locally rightmost point in the real Hamiltonian 2-norm ε-pseudospectrum, implying that there exists a real Hamiltonian matrix E R n n of unit 2-norm such that λ is an eigenvalue of A+εE. If λ is a simple eigenvalue of A+εE and not an eigenvalue of A, then there exists a unique real Hamiltonian matrix E of rank 4 such that λ is an eigenvalue of A+εE with the same left and right eigenvectors as for A + εe. Moreover, the nonzero eigenvalues of JE are +1 and 1, each of multiplicity 2. For λ R we have the rank-2 matrix E of Theorem 2.1, which is real for real λ. The following characterization of the rank-4 real Hamiltonian extremizers is given in the following simultaneous analog of Theorem 2.2 and of [GL13, Theorem 3.2] on extremizers for unstructured real perturbations. We denote by x R and x I the real part and the imaginary part of x, respectively, and similarly by y R and y I the real part and the imaginary part of y. Theorem 3.2 Assume that E = J T Vdiag(1,1, 1, 1)V T, where V R n 4 has orthonormal columns. Let λ / R ir be a simple eigenvalue of A+εE, with left and right eigenvectors x and y, respectively, both of unit norm and with x y > 0. Let Y = (Jx R,y R,Jx I,y I ) R n 4 and T R 4 4 be defined by ( ) ( ) T = RPR T with R = V T Y R 4 4 P and P = with P 0 P 2 = Then the following two statements are equivalent: 1. Every differentiable path (E(t),λ(t)) (for small t 0) such that E(t) is real Hamiltonian with E(t) 2 1 and λ(t) is an eigenvalue of A+εE(t), with E(0) = E and λ(0) = λ, has Re λ(0) ( 0. ) +T Y and V have the same range, and T = with symmetric 0 T 2 positive definite blocks T 1,T 2 R 2 2. Proof (of Theorem 3.1) Let V R n n be an orthogonal matrix obtained from a QR factorization of Y = (Jx R,y R,Jx I,y I ): ( ) Y = V R with R R Let E(t), for small t > 0, be a continuously differentiable path on the set of real Hamiltonian matrices of at most unit 2-norm, with E(0) = E, which we write as E(t) = J T VŜ(t) V T

14 14 Nicola Guglielmi et al. with a symmetric matrix Ŝ(t) R n n = ( ) S11 (t) S21(t) T, S S 21 (t) S 22 (t) 11 (t) R 4 4, We can choose the first 4 columns of the orthogonal matrix V such that S 11 (0) is diagonal. The diagonal entries σ j are real and have absolute value at most 1. Using the same arguments as in the proof of Theorem 2.1 it is shown that σ j = 1 for j = 1,2,3,4 and that two of the σ j are positive and two are negative. This implies that S 21 (0) = 0, which shows that A + εe, with the rank-4 matrix ( ) E = J T S11 (0) 0 ˆV ˆV T, 0 0 satisfies (A+εE )y = (A+εE)y = λ y and x (A+εE ) = x (A+εE) = λ x. By the same argument as in the proof of Theorem 2.1, E is the unique rank-4 matrix with this property. Proof (of Theorem 3.2) 1. implies 2.: We know from the proof of Theorem 3.1 that Y and V have the same range, and we write Y = VR with an invertible 4 4 real matrix R. We denote D = diag(1,1, 1, 1). We consider the path of matrices of unit 2-norm JE(t) = Ve tz De tz V T with a skew-symmetric matrix Z R 4 4 and the corresponding path of eigenvalues λ(t) of A+εE(t) with λ(0) = λ. By 1., we have 0 Re(x T Ė(0)y) = YPY,Ė(0) = T,ZD DZ, which holds for every skew-symmetric matrix Z. The matrix ZD DZ is symmetric and its upper and lower diagonal 2 2 blocks are zero. It thus follows that T is block-diagonal with 2 2 blocks. Moreover, T = RPR T is symmetric and nonsingular. Using the paths E(t) = Ve tm De tm V T with positive semi-definite matrices M that are block diagonal with 2 2 blocks, we then conclude from 0 Re(x Ė(0)y) = T,MD +DM that the upper diagonal block of T is positive definite and the lower block is negative definite; compare [GL13, Section 3] for an analogous situation in the characterization of extremizers among general real ε-perturbations. This proves implies 1.: This is proven by a direct generalization of the corresponding argument in the proof of Theorem 2.2.

15 Low rank ODEs for Hamiltonian matrix nearness problems Rank-4 dynamics for the real case In view of the preceding theorem, we search for a differential equation for matrices of the form JE = VQV T, where V R n 4 has orthonormal columns and Q R 4 4 is a symmetric orthogonal matrix with double eigenvalues +1 and 1, to be determined in the form of a double Householder transformation Q = I 4 2UU T, U R 4 2, U T U = I 2. For a given choice of V and Q, the tangent matrices Ė T EM can then be uniquely written as (cf. [KL07]) JĖ = VQV T +V QV T +VQ V T with V T V = 0, U T U = 0. (3.1) In the same way as in Section 2.2, we derive the following differential equations for V and U: U = 1 2 (I 4 UU T )TU V = (I n VV T )YPY T VQ. (3.2) Let us recall that Y = (Jx R,y R,Jx I,y I ) R n 4 contains the real and imaginary parts of the left and right eigenvectors x and y, of unit norm and with x y > 0, corresponding to an eigenvalue λ of A+εE with JE = VQV T and Q = I 2UU T with U having two orthonormal columns. Further, P R 4 4 and T R 4 4 are defined as in Theorem 3.2. We obtain again the desired monotonicity result. Theorem 3.3 Let E(t) = J T V(t)(I 2U(t)U(t) T )V(t) T with U(t),V(t) satisfying the differential equations (3.2) and with initial values such that U(0) R 4 2 and V(0) R n 4 have orthonormal columns. If λ(t) is a simple eigenvalue of A+εE(t), then Re λ(t) 0, with equality only if U = 0 and V = 0. Let Ũ R4 4 be an orthogonal matrix such that its last two columns are those of U, and let T = ŨTTŨ. The following characterization of the stationary points is then obtained by direct inspection of (3.2), and similar to the proof of Theorem 2.4. Theorem 3.4 Suppose that Y T V is invertible. If λ / R ir, then (U,V) is a stationary point of (3.2) if and only if Y and V have the same range and T is block-diagonal with 2 2 blocks. Note that the transformation Ṽ = VŨ yields JE = Ṽdiag(1,1, 1, 1)Ṽ T, and therefore T corresponds to T of Theorem 3.2. Since T = RPR T, we know that T always has two positive and two negative eigenvalues.
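The real rank-4 representation can be set up analogously to the complex rank-2 case. The following sketch (ours, with names chosen for illustration) builds E = Jᵀ V Q Vᵀ with the double Householder reflector Q = I_4 − 2UUᵀ and checks that JE is symmetric with nonzero eigenvalues +1, +1, −1, −1.

import numpy as np

def hamiltonian_rank4_perturbation(V, U, J):
    # E = J^T V Q V^T with Q = I_4 - 2 U U^T, U in R^{4x2} with orthonormal columns.
    # Q is symmetric orthogonal with double eigenvalues +1 and -1, as required.
    Q = np.eye(4) - 2.0 * U @ U.T
    return J.T @ (V @ Q @ V.T)

rng = np.random.default_rng(1)
d = 4; n = 2 * d
J = np.block([[np.zeros((d, d)), np.eye(d)], [-np.eye(d), np.zeros((d, d))]])
V, _ = np.linalg.qr(rng.standard_normal((n, 4)))
U, _ = np.linalg.qr(rng.standard_normal((4, 2)))

E = hamiltonian_rank4_perturbation(V, U, J)
JE = J @ E
print(np.allclose(JE, JE.T))                          # JE symmetric, hence E Hamiltonian
print(np.round(np.sort(np.linalg.eigvalsh(JE)), 8))   # nonzero eigenvalues: -1, -1, +1, +1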

16 16 Nicola Guglielmi et al. 4 An algorithm to compute locally rightmost points in the Hamiltonian pseudospectrum and their extremizers We discretize the differential equation (2.8) or (3.2) by the explicit Euler method with an adaptively chosen stepsize. At each discretization step, we require according to the monotonicity property of Theorem 2.3 that the real part of the rightmost eigenvalue λ of A + εe is increased, for a given ε > 0. In this way the method determines a sequence (λ n,e n ) such that Reλ n > Reλ n 1, until E n approaches a stationary point. We consider here the case where the matrix A and the perturbations are complex. Given E n = J T V n Q n V n of rank 2 and unit 2-norm, such that Re(λ n ) 0, whereλ n istherightmosteigenvalueofa+εe n,wherev n C n 2 withv nv n = I 2, and u n C 2 is such that u n = 1 and hence Q n = I 2 2u n u n C 2 2 is unitary, Algorithm 1 determines an approximation at time t n+1 = t n +h n. In Algorithm 1, γ > 1 denotes a given scaling factor for the stepsize control and Orth(B) denotes the unitary matrix obtained by orthogonalizing the columns of a given (rectangular) matrix B. Algorithm 1: adaptive Euler step Data: E n,v n,q n,x n,y n,λ n and ρ n (step size predicted by previous step) Result: E n+1,v n+1,q n+1,x n+1,y n+1,λ n+1 and ρ n+1 begin 1 Set h = ρ n, Y n = (Jx n,y n ) and T n = V ny n PY nv n C Compute û n+1 = u n 1 2 hq nt n u n, u n+1 = ûn+1 û n+1 V n+1 = V n +h(i n V n V n)y n PY nv n Q n, V n+1 = Orth ( Vn+1 ) Q n+1 = I 2 2u n+1 u n+1. 3 Set E n+1 = J T V n+1 Q n+1 V n+1 4 Compute the rightmost eigentriple λ, x and ŷ of A+εE n+1 5 if Re( λ) Re(λ n ) then reject the step, reduce the stepsize to h := h/γ and repeat from 2 else accept the step: set h n+1 = h, λ n+1 = λ, x n+1 = x and y n+1 = ŷ 6 if h n+1 = ρ n then increase the stepsize to ρ n+1 := γρ n else set ρ n+1 = ρ n 7 Proceed to next step. For our purposes it is not important that E n+1 gives an accurate approximation of E(t n+1 ), the solution - at time t n+1 - of the ODEs (2.8) or (3.2)

17 Low rank ODEs for Hamiltonian matrix nearness problems 17 with initial datum E n as we need not follow the exact solution. We are only interested in increasing Re(λ n+1 ) with respect to Re(λ n ), as is ensured by Theorem 2.3 for sufficiently small step size h n > 0. We remark that the most expensive step of the algorithm is the computation of a leading eigentriple, which for a large sparse matrix can be done efficiently by using the implicitly restarted Arnoldi method, as implemented in ARPACK [LeSY98]. Initial conditions. We often make use of the following initial condition: Ẽ 0 = J T (Jx 0 y 0) herm, V 0 S 0 V 0 = JẼ0, where x 0 and y 0 are the left and right eigenvectors associated to the rightmost eigenvalue of A and S 0 is diagonal. Then we choose, according to the sign of the (diagonal) entries of S 0, and E 0 = J T V 0 Q 0 V 0. Q 0 = ( ) ±1 0 0 ±1 Terminating at a globally rightmost point. When it is crucial to find a globally (and not just locally) rightmost point in the Hamiltonian ε-pseudospectrum, we run the algorithm with a set of different initial values to reduce the possibility of getting trapped in a point that is only locally rightmost, which cannot be excluded from following a single trajectory and which we have indeed occasionally observed in our numerical experiments. According to Theorems a trajectory could even run into a stationary point for which the corresponding rightmost eigenvalue is a boundary point of the pseudospectrum that has a vertical tangent but is not locally rightmost, such as a locally leftmost boundary point on a non-convex right boundary. In view of the monotonicity of the real part of the eigenvalue along a trajectory (Theorem 2.3), it appears that only exceptional trajectories would run into a stationary point that does not correspond to a locally rightmost point of the pseudospectrum, and in fact we never observed such a situation in our extensive numerical experiments. For conciseness we do not formulate the algorithm for the real Hamiltonian case, which has a structure very similar to Algorithm 1. 5 Algorithms for Problem (A) In this section, we present two methods, making use of Algorithm 1, for addressing a generalization of Problem (A) from the introduction: Given a Hamiltonian matrix A with no eigenvalues on the imaginary axis and a given constant δ > 0, find a nearest (in the 2-norm) Hamiltonian matrix

18 18 Nicola Guglielmi et al. B having some eigenvalue δ-close to the imaginary axis. The algorithm has the goal to compute ε = inf{ε 0 : Λ ε (A,M, 2 ) S δ }, (5.1) where M is the set of Hamiltonian matrices and S δ = {z C : Re(z) δ}. Algorithm 1 is used to determine the extremizer E such that the rightmost eigenvalue of B = A + ε E in the left half-plane has real part δ. Since Algorithm 1 finds locally rightmost points, the algorithms we present here are guaranteed only to find upper bounds on ε ; nevertheless they typically find good approximations for the cases we have tested. 5.1 The left pseudospectral abscissa In order to compute the value of ε defined in (5.1), we make use of the following definition. The left ε-pseudospectral abscissa of a Hamiltonian matrix A is the real part of the right-most point of the Hamiltonian ε-pseudospectrum in the left complex plane C = {z : Re(z) 0}, i.e., α l ε(a) = max{rez : z Λ ε (A,M, ) C }. (5.2) Starting from ε > 0 such that α l ε(a) < 0, we want to compute a root ε of the equation α l ε(a) = δ. (5.3) By the monotonicity of pseudospectral sets, α l ε(a) is monotonically increasing with ε. Under a generic assumption, it is also strictly increasing, and then ε of (5.1) is the unique solution of (5.3). Remark 5.1 Since the function ε α l ε(a) is monotonically increasing, we can easily apply a bisection method in order to compute a zero of Equation (5.3). Having said this, our aim is to obtain a fast method to compute such a zero, which turns out to be possible under some additional assumptions, as we are going to explain. When such assumptions are not satisfied, we can always fall back to the bisection method. In this section we use the following notation: all quantities written as g(ε), like λ(ε), E(ε) and so on, are intended to be associated to stationary points (i.e., local extremizers) of (2.8) (or (3.2)). We make the following generic assumption for all ε near ε, for ε < ε. Assumption 5.1 Let λ(ε) with Reλ(ε) 0 be a locally rightmost point in the Hamiltonian ε-pseudospectrum of A and E(ε) the corresponding extremizer (with E(ε) of unit norm and Hamiltonian), that is λ(ε) is a rightmost eigenvalue of A+εE(ε). Then we assume that λ(ε) is simple and the mapping ε E(ε) is smooth. This clearly implies that also the mapping ε λ(ε) is smooth and the same holds for suitably normalized eigenvectors x(ε) and y(ε) (see e.g. [Kat95, MS88]).

19 Low rank ODEs for Hamiltonian matrix nearness problems 19 Remark 5.2 Assumption 5.1 requires the matrix E(ε), which is obtained from solving the differential equation (2.8) for a given ε, to depend smoothly on ε. Although the Newton method proposed in this section relies on this requirement, we will combine it with a bisection method to guarantee convergence also in cases where the solutions are not smooth. On the other hand, we note that in all numerical experiments we have performed, we have always observed the fast convergence resulting from such a smoothness assumption. This leads us to conjecture that Assumption 5.1 holds true generically. In order to derive an equation for ε to approximate ε, we compute the derivative of the left ε-pseudospectral abscissa, with respect to ε. α l ε(a) = Reλ(ε), Theorem 5.1 Let λ(ε) be a locally rightmost point of the complex or real Hamiltonian ε-pseudospectrum of A, and let E(ε) the corresponding extremizer (with E(ε) of unit norm and Hamiltonian) for ε [ε,ε). Suppose that Reλ(ε) 0 for all ε [ε,ε), and that Assumption 5.1 holds. Let x(ε) and y(ε) be left and right eigenvectors of A + εe(ε) belonging to the eigenvalue λ(ε). We then have dre λ(ε) dε = Re x(ε) E(ε)y(ε) x(ε) y(ε) 0. (5.4) Proof By Lemma 1.1, λ (ε) = x(ε) ( E(ε)+εE (ε) ) y(ε) x(ε), y(ε) where indicatesdifferentiationwithrespecttoε.letλ ε (t),forsmall t,denote the path of eigenvalues of A+εE(ε+t) with λ ε (0) = λ(ε). The maximality property of the real part of the eigenvalue λ(ε) of A+εE(ε) yields and hence 0 = Re dλ ε dt (0) = Re x(ε) E (ε)y(ε) x(ε) y(ε) Re(x(ε) E (ε)y(ε)) = 0. (5.5) This yields the stated formula. The non-negativity of monotonicity of pseudospectral sets. d Reλ(ε) is due to the dε Intuitively, the real part of a locally right-most point λ(ε) C should provide an indication how much ε needs to be increased for λ(ε) to hit the imaginary axis. In order to investigate this, we make the following assumption.

20 20 Nicola Guglielmi et al. Assumption 5.2 Let ε be such that for ε < ε, λ(ε) is a locally rightmost point λ(ε) with Reλ(ε) < 0 and limreλ(ε) = 0. We assume that the eigenvalue εրε λ(ε) of A+εE(ε) has algebraic multiplicity two. Moreover, we assume that λ(ε) is defective or, equivalently, that the zero singular value of A+εE(ε) λ(ε)i is simple. A few comments about Assumption 5.2 are needed. Here and in the following λ(ε), the left and right eigenvectors x(ε) and y(ε), and the extremizer E(ε) are considered as the limits of the corresponding quantities for ε ր ε. Assumption 5.2 ensures that these limits exist, see [ConH80] for the eigenvectors and part (c) of the proof of Theorem 5.2 for E(ε). Assumption 5.2 is generically valid for the set of unstructured matrices that have a multiple eigenvalue, see Remark 2.16 in [DP09]. We conjecture that Assumption 5.2 is also generically valid for the situation under consideration. We emphasize that Assumption 5.2 is useful to justify a fast algorithm for the computation of the distances we are interested in, through the next Theorem 5.2, but it is not essential for the two-level approach taken here. In fact, we have to compute the zero of a monotonically increasing function (see Theorem 5.1), which can be done - although less efficiently - by a bisection-like method, see also Remark 5.1. Theorem 5.2 Consider the setting of Theorem 5.1 with E(ε) being an extremizer having the form stated in Theorem 2.2. Under Assumption 5.2 we have 0 < lim εրε Reλ(ε) Reλ (ε) <. Moreover we have for some positive constant c. Reλ(ε) = c ε ε(1+o(1)), ε ր ε, (5.6) Proof (a) Let us consider the nonnegative function δ(ε) := x(ε) y(ε) > 0 ε [ε,ε), δ(ε) = 0, (recall that by the normalization considered in this paper x(ε) y(ε) > 0 for a simple eigenvalue). In order to prove the result we first study δ(ε) for ε close to ε and show that δ(ε) = O( ε ε). Then we show that Re(λ(ε)) is infinitesimal to the same order of δ(ε) in a neighbourhood of ε. In order to compute the derivative of δ(ε) we make use of the formulas in [MS88] for the derivative of the eigenvectors (x(ε) and y(ε) of unit 2-norm and such that x(ε) y(ε) > 0) associated to a simple eigenvalue of a smooth matrix valued function M(ε), x (ε) = x(ε) M (ε)g(ε)x(ε)x(ε) x(ε) M (ε)g(ε) y (ε) = y(ε) G(ε)M (ε)y(ε)y(ε) G(ε)M (ε)y(ε), (5.7)

21 Low rank ODEs for Hamiltonian matrix nearness problems 21 where G(ε) is the group inverse of M(ε) λ(ε)i. Exploiting the well-known property G(ε)y(ε) = 0, x(ε) G(ε) = 0, these formulas imply δ (ε) = x(ε) M (ε)g(ε)x(ε)δ(ε)+y(ε) G(ε)M (ε)y(ε)δ(ε). (5.8) Using a result in[go11] relating the group inverse and the pseudoinverse of amatrixwithasimplezeroeigenvalue,wehave(b indicatesthepseudoinverse of B) G(ε) = 1 δ(ε) 2Ĝ(ε) Ĝ(ε) = ( δ(ε)i y(ε)x(ε) )( M(ε) λ(ε)i ) ( δ(ε)i y(ε)x(ε) ). (5.9) Let M(ε) = A+εE(ε) so that M (ε) = E(ε)+εE (ε). By Assumption 5.2 we get that (M(ε) λ(ε)i) remains bounded as ε ր ε, and thus Ĝ(ε) = y(ε)x(ε) ( M(ε) λ(ε)i) y(ε)x(ε). Next define N(ε) = M(ε) λ(ε)i and Using (5.9) and (5.5), we obtain κ(ε) = x(ε) N(ε) y(ε). (5.10) x(ε) ( E(ε)+εE (ε))ĝ(ε)x(ε) = κ(ε)x(ε) E(ε)y(ε)+O(δ(ε)) (5.11) ( ) y(ε) Ĝ(ε) E(ε)+εE (ε) y(ε) = κ(ε)x(ε) E(ε)y(ε)+O(δ(ε)). (5.12) We show in parts (b) and (c) of the proof that the right-hand side in these equations is different from zero for ε close to ε. (b) We prove that κ(ε) 0. Since δ(ε) = 0, we have x(ε) y(ε), and since x(ε) spans the nullspace of N(ε) (by the defectivity condition in Assumption 5.2), we obtain y(ε) Ker(N(ε) ) = Range(N(ε)) = z 1 (ε) s.t. N(ε)z 1 (ε) = y(ε). (5.13) Assume, in a proof by contradiction, N(ε) y(ε) x(ε), which means N(ε) y(ε) Range(N(ε)) = z 2 (ε) s.t. N(ε)z 2 (ε) = N(ε) y(ε). (5.14) Premultiplying (5.14) by N(ε) 2 we get (omitting the argument ε in the following) N 3 z 2 = N 2 N y = N NN Nz 1 = NNz 1 = Ny = 0. The null-space of N 3 is two-dimensional by Assumption 5.2 and contains the two nonzero vectors y and N y, since Ny = 0 and N 2 N y = 0. Note that N y 0 because otherwise y would be in the nullspace of N, which is the

22 22 Nicola Guglielmi et al. nullspace of N, which contradicts the above observation that y 0 is in the orthogonal complement of the nullspace of N. Moreover, y and N y are linearly independent, since otherwise the relation y = cn y would yield, on multiplication with N, that 0 = Ny = cnn y = cnn Nz 1 = cnz 1 = cy, which contradicts y 0. Therefore, the null-space of N 3 is spanned by y and N y, and since we have shown that it contains z 2, we obtain z 2 = c 1 y +c 2 N y. Premultiplying this equation with N then gives N y = Nz 2 = c 2 NN Nz 1 = c 2 Nz 1 = c 2 y, which contradicts the linear independence of y and N y. We have thus led the assumption N(ε) y(ε) x(ε) to a contradiction. Therefore, κ(ε) 0. (c) We now show that x(ε) E(ε)y(ε) 1 for ε ε. First we note that, for the eigenvalue λ(ε) ir, the left and right eigenvalues are related by y(ε) = Jx(ε). The characterization 2. of Theorem 2.2 yields that for ε < ε, where V(ε) diagonalizes JE(ε), Y(ε) = (Jx(ε),y(ε)) = V(ε)R(ε), JE(ε) = V(ε) ( ) 1 0 V(ε), 0 1 and R C 2 2. For ε ε we thus obtain ( ) ρ ρ R(ε) = with ρ 2 + σ 2 = 1. σ σ Since again by 2. of Theorem 2.2 we have for ε < ε ( ) ( ) 0 1 R(ε) R(ε) τ1 (ε) 0 = with τ τ 2 (ε) 1 (ε),τ 2 (ε) > 0, for ε ε we must have ( ) ρ 2 2 ρσ ρσ σ 2 = ( ) τ1 (ε) 0 0 τ 2 (ε) with τ 1 (ε),τ 2 (ε) 0, which implies σ = 0 and hence ( ) ρ ρ R(ε) = 0 0 with ρ 2 = 1.

23 Low rank ODEs for Hamiltonian matrix nearness problems 23 This yields which implies V(ε) y(ε) = ( ) ρ, 0 x(ε) E(ε)y(ε) = y(ε) JE(ε)y(ε) = y(ε) V(ε) (d) Inserting (5.11) and (5.12) into (5.8) we get δ (ε) = 2κ(ε) x(ε) E(ε)y(ε) δ(ε) ( ) 1 0 V(ε) y(ε) = (1+O(δ(ε))). (5.15) Equation (5.15) implies that lim εրε δ(ε)δ (ε) is finite and is different from zero. The solution of the initial value problem δ (ε) = d (1+o(1)), δ(ε) = 0 (5.16) δ(ε) where d is a positive constant (this is due to the fact that δ decreases in a left neighbourhood of ε), is Observing that (see Theorem 5.1) δ(ε) = 2d ε ε ( 1+o(1) ). (5.17) Reλ (ε) = 1 2Reκ(ε) δ (ε) ( 1+O(δ(ε)) ) with Reλ(ε) = 0 and Reκ(ε) bounded away from zero in a neighbourhood of ε, we get (5.6) and conclude the proof. 5.2 A Newton bisection method Recalling that we have introduced the notation α l ε(a) = Reλ(ε) (see (5.2)), according to Theorem 5.2 we have that the function ε α l ε(a) has a graph like that shown in Figure 5.2. For conciseness we omit the dependence on A in the coded algorithms and denote it simply by α l ε. We consider a given δ > 0. In view of Theorem 5.1, it seems very natural to make use of a Newton iteration, ε n+1 = ε n ( α l ε n +δ ) x(ε n ) y(ε n ) Re(x(ε n ) E(ε n )y(ε n )), n = 0,1,... possibly coupled with bisection (see Algorithm 2 where we denote by dα l ε the derivative of α l ε with respect to ε stated in Theorem 5.1). The method yields quadratic convergence to ε when this is a simple root.
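The following Python sketch (ours) outlines this outer Newton iteration with a bisection safeguard, in the spirit of Algorithm 2 below. The routine inner_extremizer is a placeholder for the inner level (Algorithm 1): it is assumed to return the locally rightmost value α = Re λ(ε) in the left half-plane together with the unit-norm extremizer E(ε) and its left/right eigenvectors.

import numpy as np

def newton_bisection(A, delta, inner_extremizer, eps0, tol=1e-10, kmax=50):
    # Outer iteration for Problem (A): find eps with alpha_eps^l(A) = -delta.
    # inner_extremizer(A, eps) is assumed to return (alpha, E, x, y) with
    # alpha = Re(lambda(eps)), E(eps) of unit norm, and x^H y > 0.
    eps, eps_lb, eps_ub = eps0, 0.0, np.inf
    for _ in range(kmax):
        alpha, E, x, y = inner_extremizer(A, eps)
        f = alpha + delta                        # seek a root of f(eps) = alpha_eps^l(A) + delta
        if abs(f) <= tol:
            break
        if alpha >= 0.0:                         # pseudospectrum reaches the axis: no derivative info
            eps_ub = min(eps_ub, eps)
            eps = 0.5 * (eps_lb + eps_ub)        # pure bisection step
            continue
        if f > 0.0:                              # -delta < alpha < 0: eps is too large
            eps_ub = min(eps_ub, eps)
        else:                                    # alpha < -delta: eps is too small
            eps_lb = max(eps_lb, eps)
        dalpha = (x.conj() @ E @ y).real / (x.conj() @ y).real   # derivative (5.4), Theorem 5.1
        eps_next = eps - f / dalpha              # Newton step
        if not (eps_lb < eps_next < eps_ub):     # safeguard: fall back to bisection
            eps_next = 0.5 * (eps_lb + eps_ub) if np.isfinite(eps_ub) else 2.0 * eps
        eps = eps_next
    return eps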

24 24 Nicola Guglielmi et al. Algorithm 2: Newton-bisection method for Problem (A) Data: Matrix A Hamiltonian, k max (max number of iterations), tol (tolerance) ε lb and ε ub (starting values for the lower and upper bounds for ε ) Result: ε (upper bound for the measure) begin 1 Set λ(0) rightmost eigenvalue of A in the left complex plane, x(0) and y(0) the corresponding left and right eigenvectors Set ε 1 = 0 2 Initialize E(ε 1 ) (according to the setting) Compute α l ε 1, dα l ε 1 3 Set ε 0 = α l ε 1 /dα l ε 1 4 Set k = 0 5 Initialize lower and upper bounds: ε lb = 0, ε ub = + while α l ε k α l ε k 1 < tol do 6 Compute α l ε k, E(ε k ) by solving the ode (2.8) with initial datum E(ε k 1 ) 7 Update upper and lower bounds ε lb, ε ub 8 if α l ε k = 0 then Set ε ub = min(ε ub,ε k ) Compute ε k+1 = (ε lb +ε ub )/2 (bisection step) else Set ε lb = max(ε lb,ε k ) 9 Compute dα l ε k 10 Compute ε k+1 = ε k α l ε k /dα l ε k (Newton step) if k = k max then Halt else Set k = k Return ε = ε k This works very well when δ is not too small (see the following Example 5.1) since when δ is very small, that is when ε is very close to ε, the loss of differentiability of the function Re(λ(ε)) at ε leads to difficulties in the Newton iteration and results in a bisection-like behaviour of the method. A good choice for ε 0 is the unstructured distance from the imaginary axis, that is the norm of the smallest unstructured matrix F such that A+F has a purely imaginary eigenvalue. Such a distance can be effectively computed by combining the method proposed in [GO11, GL11] with a Newton iteration. Example 5.1 Consider the Hamiltonian matrix i 2 2+2i 1 i i 1 i 1+2i 2 2i 0 1+i A = 2+i 1 i 1+2i 1+i 1 i i 4 i 2+i i 1 1 i 1 i 1 i 1+2i 0 3 i 1+2i 1+2i

Fig. 5.1 Hamiltonian 2-norm pseudospectra of the matrix A in Example 5.1 for ε = … (left picture) and for ε = ε_10 from Table 5.2 (right picture).

(i) We first consider the choice δ = 0.5 and obtain the results shown in Table 5.1.

Table 5.1 Computed values of ε_k and α^l_{ε_k} for Example 5.1 with δ = 0.5.

The convergence appears to be quadratic, as expected, and ε_4 provides a very accurate estimate.

(ii) Then we consider the case of a much smaller δ. In this case the method behaves like bisection, as reported in Table 5.2, where we indicate by 0 a situation in which the considered ε is larger than ε̃, meaning that ε lies in the part of the domain where α^l_ε(A) is identically zero. Table 5.2 shows that after 10 iterations the solution is not yet fully accurate (compare with the following Table 5.3). In order to obtain faster convergence when δ is small, in the next section we propose a method based on an interpolation idea that exploits Theorem 5.2 (see Figure 5.2).

5.3 An algorithm for small δ

Let the distance δ > 0 be given, with δ small, and let ε̃ denote the value of ε at which the Hamiltonian pseudospectrum of A touches the imaginary axis. Note that ε̃ is the limit of ε* (see (5.1)) as δ → 0+.

Table 5.2 Computed values of ε_k and α^l_{ε_k} for Example 5.1 with the small value of δ considered in part (ii).

Fig. 5.2 The function α^l_ε(A) in a neighbourhood of ε̃. For ε ≥ ε̃ it becomes identically zero. The computational problem consists in finding the value ε* such that α^l_{ε*}(A) = −δ, as shown in the picture.

Close to the imaginary axis we make use of Theorem 5.2 and use the expansion (which is meaningful only when ε ≈ ε̃)

ε̃ − ε ≈ γ α^l_ε(A)^2,  for ε < ε̃,   (5.18)

with γ > 0. Given ε_k, we use Theorem 5.1 and estimate γ and ε̃ by γ_k and ε̃_k using the formulae (the first of which is obtained by differentiating (5.18) with respect to ε)

γ_k = − x(ε_k)* y(ε_k) / ( 2 α^l_{ε_k}(A) Re( x(ε_k)* E(ε_k) y(ε_k) ) ),   (5.19)

ε̃_k = ε_k + γ_k ( α^l_{ε_k} )^2.   (5.20)
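Under the assumptions above, the quantities (5.19)-(5.20) can be computed directly from the output of the inner iteration at ε_k. The following small Python sketch (ours, with hypothetical argument names) does this, using the derivative formula of Theorem 5.1; the resulting γ_k and ε̃_k then feed the outer update for small δ.

import numpy as np

def small_delta_estimates(eps_k, alpha_k, E_k, x_k, y_k):
    # Estimates (5.19)-(5.20): gamma_k approximates the constant gamma in the
    # square-root expansion (5.18), and eps_tilde_k approximates eps_tilde, the
    # value of eps at which the pseudospectrum touches the imaginary axis.
    # Inputs are the inner-level output at eps_k: alpha_k = alpha^l_{eps_k}(A) < 0,
    # the unit-norm extremizer E_k, and eigenvectors x_k, y_k with x_k^H y_k > 0.
    dalpha_k = (x_k.conj() @ E_k @ y_k).real / (x_k.conj() @ y_k).real   # Theorem 5.1
    gamma_k = -1.0 / (2.0 * alpha_k * dalpha_k)                          # (5.19)
    eps_tilde_k = eps_k + gamma_k * alpha_k**2                           # (5.20)
    return gamma_k, eps_tilde_k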


18.06 Problem Set 10 - Solutions Due Thursday, 29 November 2007 at 4 pm in 86 Problem Set - Solutions Due Thursday, 29 November 27 at 4 pm in 2-6 Problem : (5=5+5+5) Take any matrix A of the form A = B H CB, where B has full column rank and C is Hermitian and positive-definite

More information

Math Linear Algebra II. 1. Inner Products and Norms

Math Linear Algebra II. 1. Inner Products and Norms Math 342 - Linear Algebra II Notes 1. Inner Products and Norms One knows from a basic introduction to vectors in R n Math 254 at OSU) that the length of a vector x = x 1 x 2... x n ) T R n, denoted x,

More information

COMP 558 lecture 18 Nov. 15, 2010

COMP 558 lecture 18 Nov. 15, 2010 Least squares We have seen several least squares problems thus far, and we will see more in the upcoming lectures. For this reason it is good to have a more general picture of these problems and how to

More information

Key words. eigenvalue problem, real pseudospectra, spectral abscissa, subspace methods, crisscross methods, robust stability

Key words. eigenvalue problem, real pseudospectra, spectral abscissa, subspace methods, crisscross methods, robust stability CRISS-CROSS TYPE ALGORITHMS FOR COMPUTING THE REAL PSEUDOSPECTRAL ABSCISSA DING LU AND BART VANDEREYCKEN Abstract. The real ε-pseudospectrum of a real matrix A consists of the eigenvalues of all real matrices

More information

Lecture Notes 1: Vector spaces

Lecture Notes 1: Vector spaces Optimization-based data analysis Fall 2017 Lecture Notes 1: Vector spaces In this chapter we review certain basic concepts of linear algebra, highlighting their application to signal processing. 1 Vector

More information

Lecture 2: Linear Algebra Review

Lecture 2: Linear Algebra Review EE 227A: Convex Optimization and Applications January 19 Lecture 2: Linear Algebra Review Lecturer: Mert Pilanci Reading assignment: Appendix C of BV. Sections 2-6 of the web textbook 1 2.1 Vectors 2.1.1

More information

5.3 The Power Method Approximation of the Eigenvalue of Largest Module

5.3 The Power Method Approximation of the Eigenvalue of Largest Module 192 5 Approximation of Eigenvalues and Eigenvectors 5.3 The Power Method The power method is very good at approximating the extremal eigenvalues of the matrix, that is, the eigenvalues having largest and

More information

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2 EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 2 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory April 5, 2012 Andre Tkacenko

More information

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces. Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

More information

Chapter 7. Canonical Forms. 7.1 Eigenvalues and Eigenvectors

Chapter 7. Canonical Forms. 7.1 Eigenvalues and Eigenvectors Chapter 7 Canonical Forms 7.1 Eigenvalues and Eigenvectors Definition 7.1.1. Let V be a vector space over the field F and let T be a linear operator on V. An eigenvalue of T is a scalar λ F such that there

More information

18.06 Problem Set 8 - Solutions Due Wednesday, 14 November 2007 at 4 pm in

18.06 Problem Set 8 - Solutions Due Wednesday, 14 November 2007 at 4 pm in 806 Problem Set 8 - Solutions Due Wednesday, 4 November 2007 at 4 pm in 2-06 08 03 Problem : 205+5+5+5 Consider the matrix A 02 07 a Check that A is a positive Markov matrix, and find its steady state

More information

NOTES ON BILINEAR FORMS

NOTES ON BILINEAR FORMS NOTES ON BILINEAR FORMS PARAMESWARAN SANKARAN These notes are intended as a supplement to the talk given by the author at the IMSc Outreach Programme Enriching Collegiate Education-2015. Symmetric bilinear

More information

w T 1 w T 2. w T n 0 if i j 1 if i = j

w T 1 w T 2. w T n 0 if i j 1 if i = j Lyapunov Operator Let A F n n be given, and define a linear operator L A : C n n C n n as L A (X) := A X + XA Suppose A is diagonalizable (what follows can be generalized even if this is not possible -

More information

c 2007 Society for Industrial and Applied Mathematics

c 2007 Society for Industrial and Applied Mathematics SIAM J. OPTIM. Vol. 18, No. 1, pp. 106 13 c 007 Society for Industrial and Applied Mathematics APPROXIMATE GAUSS NEWTON METHODS FOR NONLINEAR LEAST SQUARES PROBLEMS S. GRATTON, A. S. LAWLESS, AND N. K.

More information

Singular Value Decomposition

Singular Value Decomposition Chapter 6 Singular Value Decomposition In Chapter 5, we derived a number of algorithms for computing the eigenvalues and eigenvectors of matrices A R n n. Having developed this machinery, we complete our

More information

(a) If A is a 3 by 4 matrix, what does this tell us about its nullspace? Solution: dim N(A) 1, since rank(a) 3. Ax =

(a) If A is a 3 by 4 matrix, what does this tell us about its nullspace? Solution: dim N(A) 1, since rank(a) 3. Ax = . (5 points) (a) If A is a 3 by 4 matrix, what does this tell us about its nullspace? dim N(A), since rank(a) 3. (b) If we also know that Ax = has no solution, what do we know about the rank of A? C(A)

More information

Least Squares Optimization

Least Squares Optimization Least Squares Optimization The following is a brief review of least squares optimization and constrained optimization techniques. I assume the reader is familiar with basic linear algebra, including the

More information

Singular Value Decomposition

Singular Value Decomposition Chapter 5 Singular Value Decomposition We now reach an important Chapter in this course concerned with the Singular Value Decomposition of a matrix A. SVD, as it is commonly referred to, is one of the

More information

MATH 5524 MATRIX THEORY Problem Set 5

MATH 5524 MATRIX THEORY Problem Set 5 MATH 554 MATRIX THEORY Problem Set 5 Posted Tuesday April 07. Due Tuesday 8 April 07. [Late work is due on Wednesday 9 April 07.] Complete any four problems, 5 points each. Recall the definitions of the

More information

Properties of Matrices and Operations on Matrices

Properties of Matrices and Operations on Matrices Properties of Matrices and Operations on Matrices A common data structure for statistical analysis is a rectangular array or matris. Rows represent individual observational units, or just observations,

More information

ME 234, Lyapunov and Riccati Problems. 1. This problem is to recall some facts and formulae you already know. e Aτ BB e A τ dτ

ME 234, Lyapunov and Riccati Problems. 1. This problem is to recall some facts and formulae you already know. e Aτ BB e A τ dτ ME 234, Lyapunov and Riccati Problems. This problem is to recall some facts and formulae you already know. (a) Let A and B be matrices of appropriate dimension. Show that (A, B) is controllable if and

More information

The Singular Value Decomposition and Least Squares Problems

The Singular Value Decomposition and Least Squares Problems The Singular Value Decomposition and Least Squares Problems Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo September 27, 2009 Applications of SVD solving

More information

Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games

Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games Alberto Bressan ) and Khai T. Nguyen ) *) Department of Mathematics, Penn State University **) Department of Mathematics,

More information

A Brief Outline of Math 355

A Brief Outline of Math 355 A Brief Outline of Math 355 Lecture 1 The geometry of linear equations; elimination with matrices A system of m linear equations with n unknowns can be thought of geometrically as m hyperplanes intersecting

More information

Numerical Methods. Elena loli Piccolomini. Civil Engeneering. piccolom. Metodi Numerici M p. 1/??

Numerical Methods. Elena loli Piccolomini. Civil Engeneering.  piccolom. Metodi Numerici M p. 1/?? Metodi Numerici M p. 1/?? Numerical Methods Elena loli Piccolomini Civil Engeneering http://www.dm.unibo.it/ piccolom elena.loli@unibo.it Metodi Numerici M p. 2/?? Least Squares Data Fitting Measurement

More information

Knowledge Discovery and Data Mining 1 (VO) ( )

Knowledge Discovery and Data Mining 1 (VO) ( ) Knowledge Discovery and Data Mining 1 (VO) (707.003) Review of Linear Algebra Denis Helic KTI, TU Graz Oct 9, 2014 Denis Helic (KTI, TU Graz) KDDM1 Oct 9, 2014 1 / 74 Big picture: KDDM Probability Theory

More information

Lecture 6 Positive Definite Matrices

Lecture 6 Positive Definite Matrices Linear Algebra Lecture 6 Positive Definite Matrices Prof. Chun-Hung Liu Dept. of Electrical and Computer Engineering National Chiao Tung University Spring 2017 2017/6/8 Lecture 6: Positive Definite Matrices

More information

Least Squares Based Self-Tuning Control Systems: Supplementary Notes

Least Squares Based Self-Tuning Control Systems: Supplementary Notes Least Squares Based Self-Tuning Control Systems: Supplementary Notes S. Garatti Dip. di Elettronica ed Informazione Politecnico di Milano, piazza L. da Vinci 32, 2133, Milan, Italy. Email: simone.garatti@polimi.it

More information

ECE 275A Homework #3 Solutions

ECE 275A Homework #3 Solutions ECE 75A Homework #3 Solutions. Proof of (a). Obviously Ax = 0 y, Ax = 0 for all y. To show sufficiency, note that if y, Ax = 0 for all y, then it must certainly be true for the particular value of y =

More information

NOTES ON LINEAR ODES

NOTES ON LINEAR ODES NOTES ON LINEAR ODES JONATHAN LUK We can now use all the discussions we had on linear algebra to study linear ODEs Most of this material appears in the textbook in 21, 22, 23, 26 As always, this is a preliminary

More information

How to Find Matrix Modifications Keeping Essentially Unaltered a Selected Set of Eigenvalues

How to Find Matrix Modifications Keeping Essentially Unaltered a Selected Set of Eigenvalues How to Find Matrix Modifications Keeping Essentially Unaltered a Selected Set of Eigenvalues S. Noschese and L. Pasquini 1 Introduction In this paper we consider a matrix A C n n and we show how to construct

More information

Numerical Methods in Matrix Computations

Numerical Methods in Matrix Computations Ake Bjorck Numerical Methods in Matrix Computations Springer Contents 1 Direct Methods for Linear Systems 1 1.1 Elements of Matrix Theory 1 1.1.1 Matrix Algebra 2 1.1.2 Vector Spaces 6 1.1.3 Submatrices

More information

Definition 5.1. A vector field v on a manifold M is map M T M such that for all x M, v(x) T x M.

Definition 5.1. A vector field v on a manifold M is map M T M such that for all x M, v(x) T x M. 5 Vector fields Last updated: March 12, 2012. 5.1 Definition and general properties We first need to define what a vector field is. Definition 5.1. A vector field v on a manifold M is map M T M such that

More information

Last Time. Social Network Graphs Betweenness. Graph Laplacian. Girvan-Newman Algorithm. Spectral Bisection

Last Time. Social Network Graphs Betweenness. Graph Laplacian. Girvan-Newman Algorithm. Spectral Bisection Eigenvalue Problems Last Time Social Network Graphs Betweenness Girvan-Newman Algorithm Graph Laplacian Spectral Bisection λ 2, w 2 Today Small deviation into eigenvalue problems Formulation Standard eigenvalue

More information

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A =

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = 30 MATHEMATICS REVIEW G A.1.1 Matrices and Vectors Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = a 11 a 12... a 1N a 21 a 22... a 2N...... a M1 a M2... a MN A matrix can

More information

University of Maryland Department of Computer Science TR-5009 University of Maryland Institute for Advanced Computer Studies TR April 2012

University of Maryland Department of Computer Science TR-5009 University of Maryland Institute for Advanced Computer Studies TR April 2012 University of Maryland Department of Computer Science TR-5009 University of Maryland Institute for Advanced Computer Studies TR-202-07 April 202 LYAPUNOV INVERSE ITERATION FOR COMPUTING A FEW RIGHTMOST

More information

The Singular Value Decomposition

The Singular Value Decomposition The Singular Value Decomposition An Important topic in NLA Radu Tiberiu Trîmbiţaş Babeş-Bolyai University February 23, 2009 Radu Tiberiu Trîmbiţaş ( Babeş-Bolyai University)The Singular Value Decomposition

More information

FIXED POINT ITERATIONS

FIXED POINT ITERATIONS FIXED POINT ITERATIONS MARKUS GRASMAIR 1. Fixed Point Iteration for Non-linear Equations Our goal is the solution of an equation (1) F (x) = 0, where F : R n R n is a continuous vector valued mapping in

More information

Lecture notes: Applied linear algebra Part 1. Version 2

Lecture notes: Applied linear algebra Part 1. Version 2 Lecture notes: Applied linear algebra Part 1. Version 2 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 1 Notation, basic notions and facts 1.1 Subspaces, range and

More information

Problems in Linear Algebra and Representation Theory

Problems in Linear Algebra and Representation Theory Problems in Linear Algebra and Representation Theory (Most of these were provided by Victor Ginzburg) The problems appearing below have varying level of difficulty. They are not listed in any specific

More information

On The Belonging Of A Perturbed Vector To A Subspace From A Numerical View Point

On The Belonging Of A Perturbed Vector To A Subspace From A Numerical View Point Applied Mathematics E-Notes, 7(007), 65-70 c ISSN 1607-510 Available free at mirror sites of http://www.math.nthu.edu.tw/ amen/ On The Belonging Of A Perturbed Vector To A Subspace From A Numerical View

More information

Eigenvalues and eigenvectors

Eigenvalues and eigenvectors Chapter 6 Eigenvalues and eigenvectors An eigenvalue of a square matrix represents the linear operator as a scaling of the associated eigenvector, and the action of certain matrices on general vectors

More information

MATH 304 Linear Algebra Lecture 23: Diagonalization. Review for Test 2.

MATH 304 Linear Algebra Lecture 23: Diagonalization. Review for Test 2. MATH 304 Linear Algebra Lecture 23: Diagonalization. Review for Test 2. Diagonalization Let L be a linear operator on a finite-dimensional vector space V. Then the following conditions are equivalent:

More information

Symmetric Matrices and Eigendecomposition

Symmetric Matrices and Eigendecomposition Symmetric Matrices and Eigendecomposition Robert M. Freund January, 2014 c 2014 Massachusetts Institute of Technology. All rights reserved. 1 2 1 Symmetric Matrices and Convexity of Quadratic Functions

More information

The Steepest Descent Algorithm for Unconstrained Optimization

The Steepest Descent Algorithm for Unconstrained Optimization The Steepest Descent Algorithm for Unconstrained Optimization Robert M. Freund February, 2014 c 2014 Massachusetts Institute of Technology. All rights reserved. 1 1 Steepest Descent Algorithm The problem

More information

Lecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min.

Lecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min. MA 796S: Convex Optimization and Interior Point Methods October 8, 2007 Lecture 1 Lecturer: Kartik Sivaramakrishnan Scribe: Kartik Sivaramakrishnan 1 Conic programming Consider the conic program min s.t.

More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

On Eigenvalues of Laplacian Matrix for a Class of Directed Signed Graphs

On Eigenvalues of Laplacian Matrix for a Class of Directed Signed Graphs On Eigenvalues of Laplacian Matrix for a Class of Directed Signed Graphs Saeed Ahmadizadeh a, Iman Shames a, Samuel Martin b, Dragan Nešić a a Department of Electrical and Electronic Engineering, Melbourne

More information

Computational Methods. Eigenvalues and Singular Values

Computational Methods. Eigenvalues and Singular Values Computational Methods Eigenvalues and Singular Values Manfred Huber 2010 1 Eigenvalues and Singular Values Eigenvalues and singular values describe important aspects of transformations and of data relations

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors /88 Chia-Ping Chen Department of Computer Science and Engineering National Sun Yat-sen University Linear Algebra Eigenvalue Problem /88 Eigenvalue Equation By definition, the eigenvalue equation for matrix

More information

. = V c = V [x]v (5.1) c 1. c k

. = V c = V [x]v (5.1) c 1. c k Chapter 5 Linear Algebra It can be argued that all of linear algebra can be understood using the four fundamental subspaces associated with a matrix Because they form the foundation on which we later work,

More information

Chapter Two Elements of Linear Algebra

Chapter Two Elements of Linear Algebra Chapter Two Elements of Linear Algebra Previously, in chapter one, we have considered single first order differential equations involving a single unknown function. In the next chapter we will begin to

More information

On the Local Convergence of an Iterative Approach for Inverse Singular Value Problems

On the Local Convergence of an Iterative Approach for Inverse Singular Value Problems On the Local Convergence of an Iterative Approach for Inverse Singular Value Problems Zheng-jian Bai Benedetta Morini Shu-fang Xu Abstract The purpose of this paper is to provide the convergence theory

More information

Matrix Mathematics. Theory, Facts, and Formulas with Application to Linear Systems Theory. Dennis S. Bernstein

Matrix Mathematics. Theory, Facts, and Formulas with Application to Linear Systems Theory. Dennis S. Bernstein Matrix Mathematics Theory, Facts, and Formulas with Application to Linear Systems Theory Dennis S. Bernstein PRINCETON UNIVERSITY PRESS PRINCETON AND OXFORD Contents Special Symbols xv Conventions, Notation,

More information

Iterative Methods for Solving A x = b

Iterative Methods for Solving A x = b Iterative Methods for Solving A x = b A good (free) online source for iterative methods for solving A x = b is given in the description of a set of iterative solvers called templates found at netlib: http

More information

Linear Algebra Review. Vectors

Linear Algebra Review. Vectors Linear Algebra Review 9/4/7 Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa (UCSD) Cogsci 8F Linear Algebra review Vectors

More information

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems Robert M. Freund February 2016 c 2016 Massachusetts Institute of Technology. All rights reserved. 1 1 Introduction

More information

Then x 1,..., x n is a basis as desired. Indeed, it suffices to verify that it spans V, since n = dim(v ). We may write any v V as r

Then x 1,..., x n is a basis as desired. Indeed, it suffices to verify that it spans V, since n = dim(v ). We may write any v V as r Practice final solutions. I did not include definitions which you can find in Axler or in the course notes. These solutions are on the terse side, but would be acceptable in the final. However, if you

More information

MTH 2032 SemesterII

MTH 2032 SemesterII MTH 202 SemesterII 2010-11 Linear Algebra Worked Examples Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education December 28, 2011 ii Contents Table of Contents

More information

Singular Value Decomposition (SVD) and Polar Form

Singular Value Decomposition (SVD) and Polar Form Chapter 2 Singular Value Decomposition (SVD) and Polar Form 2.1 Polar Form In this chapter, we assume that we are dealing with a real Euclidean space E. Let f: E E be any linear map. In general, it may

More information

Lecture notes: Applied linear algebra Part 2. Version 1

Lecture notes: Applied linear algebra Part 2. Version 1 Lecture notes: Applied linear algebra Part 2. Version 1 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 First, some exercises: xercise 0.1 (2 Points) Another least

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

On rank one perturbations of Hamiltonian system with periodic coefficients

On rank one perturbations of Hamiltonian system with periodic coefficients On rank one perturbations of Hamiltonian system with periodic coefficients MOUHAMADOU DOSSO Université FHB de Cocody-Abidjan UFR Maths-Info., BP 58 Abidjan, CÔTE D IVOIRE mouhamadou.dosso@univ-fhb.edu.ci

More information

University of Houston, Department of Mathematics Numerical Analysis, Fall 2005

University of Houston, Department of Mathematics Numerical Analysis, Fall 2005 3 Numerical Solution of Nonlinear Equations and Systems 3.1 Fixed point iteration Reamrk 3.1 Problem Given a function F : lr n lr n, compute x lr n such that ( ) F(x ) = 0. In this chapter, we consider

More information

Structured singular values for Bernoulli matrices

Structured singular values for Bernoulli matrices Int. J. Adv. Appl. Math. and Mech. 44 2017 41 49 ISSN: 2347-2529 IJAAMM Journal homepage: www.ijaamm.com International Journal of Advances in Applied Mathematics and Mechanics Structured singular values

More information

Jim Lambers MAT 610 Summer Session Lecture 2 Notes

Jim Lambers MAT 610 Summer Session Lecture 2 Notes Jim Lambers MAT 610 Summer Session 2009-10 Lecture 2 Notes These notes correspond to Sections 2.2-2.4 in the text. Vector Norms Given vectors x and y of length one, which are simply scalars x and y, the

More information

The speed of Shor s R-algorithm

The speed of Shor s R-algorithm IMA Journal of Numerical Analysis 2008) 28, 711 720 doi:10.1093/imanum/drn008 Advance Access publication on September 12, 2008 The speed of Shor s R-algorithm J. V. BURKE Department of Mathematics, University

More information

1 Linear Algebra Problems

1 Linear Algebra Problems Linear Algebra Problems. Let A be the conjugate transpose of the complex matrix A; i.e., A = A t : A is said to be Hermitian if A = A; real symmetric if A is real and A t = A; skew-hermitian if A = A and

More information

4 Film Extension of the Dynamics: Slowness as Stability

4 Film Extension of the Dynamics: Slowness as Stability 4 Film Extension of the Dynamics: Slowness as Stability 4.1 Equation for the Film Motion One of the difficulties in the problem of reducing the description is caused by the fact that there exists no commonly

More information

MATRICES ARE SIMILAR TO TRIANGULAR MATRICES

MATRICES ARE SIMILAR TO TRIANGULAR MATRICES MATRICES ARE SIMILAR TO TRIANGULAR MATRICES 1 Complex matrices Recall that the complex numbers are given by a + ib where a and b are real and i is the imaginary unity, ie, i 2 = 1 In what we describe below,

More information