University of Maryland Department of Computer Science TR-5009 University of Maryland Institute for Advanced Computer Studies TR April 2012

University of Maryland Department of Computer Science TR-5009
University of Maryland Institute for Advanced Computer Studies TR
April 2012

LYAPUNOV INVERSE ITERATION FOR COMPUTING A FEW RIGHTMOST EIGENVALUES OF LARGE GENERALIZED EIGENVALUE PROBLEMS

HOWARD C. ELMAN AND MINGHAO WU

Abstract. In linear stability analysis of a large-scale dynamical system, we need to compute the rightmost eigenvalues for a series of large generalized eigenvalue problems. Existing iterative eigenvalue solvers are not robust when no estimate of the rightmost eigenvalues is available. In this study, we show that such an estimate can be obtained from Lyapunov inverse iteration applied to a special eigenvalue problem of Lyapunov structure. We also show that Lyapunov inverse iteration will always converge in only two steps if the Lyapunov equation in the first step is solved accurately enough. Furthermore, we generalize the analysis to a deflated version of this Lyapunov eigenvalue problem and propose an algorithm that computes a few rightmost eigenvalues for the eigenvalue problems arising from linear stability analysis. Numerical experiments demonstrate the robustness of the algorithm.

1. Introduction. This paper introduces an efficient algorithm for computing a few rightmost eigenvalues of generalized eigenvalue problems. We are concerned with problems of the form

    J(α)x = µMx    (1.1)

arising from linear stability analysis (see []) of the dynamical system

    Mu_t = f(u, α).    (1.2)

M ∈ R^{n×n} is called the mass matrix, and the parameter-dependent matrix J(α) ∈ R^{n×n} is the Jacobian matrix ∂f/∂u(u(α), α), where u(α) is the steady-state solution to (1.2) at α, i.e., f(u(α), α) = 0. Let the solution path be the set S = {(u, α) : f(u, α) = 0}. We seek the critical point (u_c, α_c) associated with transition to instability on S. While the method developed in this study works for any dynamical system of the form (1.2), our primary interest is in those arising from spatial discretization of 2- or 3-dimensional time-dependent partial differential equations (PDEs). Therefore, we assume n to be large and J(α), M to be sparse throughout this paper.

The conventional method of locating the critical parameter value α_c is to monitor the rightmost eigenvalues of (1.1) while marching along S using numerical continuation (see []). In the stable regime of S, the eigenvalues µ of (1.1) all lie to the left of the imaginary axis. As (u, α) approaches the critical point, the rightmost eigenvalue of (1.1) moves towards the imaginary axis; at (u_c, α_c), the rightmost eigenvalue of (1.1) has real part zero; and finally, in the unstable regime, some eigenvalues of (1.1) have positive real parts. The continuation usually starts from a point (u_0, α_0) in the stable regime of S, and the critical point is detected when the real part of the rightmost eigenvalue of (1.1) becomes nonnegative. Consequently, robustness and efficiency of the eigenvalue solver for the rightmost eigenvalues of (1.1) are crucial for the performance of this method. Direct eigenvalue

This work was supported in part by the U.S. Department of Energy under grant DEFG0204ER2569 and the U.S. National Science Foundation under grant DMS537.
Department of Computer Science and Institute for Advanced Computer Studies, University of Maryland, College Park, MD (elman@cs.umd.edu).
Applied Mathematics & Statistics, and Scientific Computation Program, Department of Mathematics, University of Maryland, College Park, MD (mwu@math.umd.edu).

2 solvers such as the QR and QZ algorithms see [20] compute all the eigenvalues of., but they are too expensive for large n. Existing iterative eigenvalue solvers [20] are able to compute a small set k n of eigenvalues of. near a given shift or target σ C efficiently. For example, they work well when k eigenvalues of. with smallest modulus are sought, in which case σ = 0. One issue with such methods is that there is no robust way to determine a good choice of σ when we have no idea where the target eigenvalues may be. In the computation of the rightmost eigenvalues, the most commonly used heuristic choice for σ is zero, i.e., we compute k eigenvalues of. with smallest modulus and hope that the rightmost one is one of them. When the rightmost eigenvalue is real, zero is a good choice. However, such an approach is not robust when the rightmost eigenvalues consist of a complex conjugate pair: the rightmost pair can be far away from zero and it is not clear how big k should be to ensure that they are found. Such examples can be found in the numerical experiments of this study. Meerbergen and Spence [5] proposed the Lyapunov inverse iteration method, which estimates the critical parameter α c without computing the rightmost eigenvalues of.. Assume u 0, α 0 is in the stable regime of S and is also in the neighborhood of the critical point u c, α c. Let λ c = α c α 0 and A = J α 0. Then the Jacobian matrix J α c at the critical point can be approximated by A + λ c B where B = dj dα α 0. It is shown in [5] that λ c is the eigenvalue with smallest modulus of the eigenvalue problem AZM T + MZA T + λbzm T + MZB T = 0.3 of Lyapunov structure and that λ c can be computed by a matrix version of inverse iteration. Estimates of the rightmost eigenvalues of. at α c can be obtained as by-products. Elman et al. [8] refined the Lyapunov inverse iteration proposed in [5] to make it more robust and efficient and examined its performance on challenging test problems arising from fluid dynamics. Various implementation issues were discussed, including the use of inexact inner iterations, the impact of the choice of iterative method used to solve the Lyapunov equations, and the effect of eigenvalue distribution on performance. Numerical experiments demonstrated the robustness of their algorithm. The method proposed in [8, 5], although it allows us to estimate the critical value of the parameter without computing the rightmost eigenvalues of., only works in the neighborhood of the critical point u c, α c. In [8], for instance, the critical parameter value α c of all numerical examples is known a priori, so that we can pick a point u 0, α 0 close to u c, α c and apply Lyapunov inverse iteration with confidence. In reality, α c is unknown and we start from a point u 0, α 0 in the stable regime of S that may be distant from the critical point. In this scenario, the method of [8, 5] cannot be used to estimate α c, since J α c cannot be approximated by A + λ c B. However, quantitative information about how far away u 0, α 0 is from u c, α c can still be obtained by estimating the distance between the rightmost eigenvalue of. 
at α_0 and the imaginary axis: if the rightmost eigenvalue is far away from the imaginary axis, then it is reasonable to assume that (u_0, α_0) is far away from the critical point as well, and therefore we should march along S using numerical continuation until we are close enough to (u_c, α_c); otherwise, we can assume that (u_0, α_0) is already in the neighborhood of the critical point and the method of [8, 5] can be applied to estimate α_c. The goal of this paper is to develop a robust method to compute a few rightmost eigenvalues of (1.1) in the stable regime of S.

The plan of the paper is as follows. In section 2, we show that the distance between the imaginary axis and the rightmost eigenvalue of (1.1) is the eigenvalue with smallest modulus of an eigenvalue problem similar in structure to (1.3). As a result, this eigenvalue

3 can be computed efficiently by Lyapunov inverse iteration. In section 3, we present numerical results for several examples arising from fluid dynamics, and provide an analysis of the fast convergence of Lyapunov inverse iteration observed in our experiments. Based on the analysis, we also propose a modified algorithm that guarantees convergence in only two iterations. In section 4, we show that the analysis in sections 2 and 3 can be generalized to a deflated version of the Lyapunov eigenvalue problem, which leads to an algorithm for computing k k n rightmost eigenvalues of.. Details of the implementation of this algorithm are discussed in section 5. Finally, we make some concluding remarks in section Computing the distance between the rightmost eigenvalues and the imaginary axis. Let u 0, α 0 be any point in the stable regime of S and assume M is nonsingular in.. Let µ j, x j x j 2 =, j =, 2,..., n be the eigenpairs of. at α 0, where the real parts of µ j, Reµ j, are in decreasing order, i.e., 0 > Reµ Reµ 2... Reµ n. Then the distance between the rightmost eigenvalues and the imaginary axis is Reµ. Let A = J α 0 and S = A M. To compute this distance, we first observe that Reµ is the eigenvalue with smallest modulus of the n 2 n 2 generalized eigenvalue problem z = λ 0 z 2. where = S I + I S and 0 = 2S S I is the identity matrix of order n. We proceed in two steps to prove this assertion. First, we show that Reµ is an eigenvalue of 2.. Theorem 2.. The eigenvalues of 2. are λ i,j = 2 µ i + µ j, i, j =, 2,..., n. For any pair i, j, there are eigenvectors associated with λ i,j given by z i,j = x i x j and z j,i = x j x i. Proof. Since µ j, x j j =, 2,..., n are the eigenpairs of Ax = µmx, µ j, x j are the eigenpairs of S. We first prove that the eigenvalues of 2. are {λ i,j } n i,j=. Let J be the Jordan normal form of S and P be an invertible matrix such that S = P JP. Then 0 P P = 2 P JP P JP P JP I + I P JP P P = 2 P J P P J P P J P + P P J = 2 P P J P J P P J P + P P J [ = P P I J + J I ]. 2 This implies that 2. and 2 I J + J I have the same eigenvalues. Due to the special structure of the Jordan normal form J, I J + J I is an upper triangular matrix whose diagonal entries are {µ i + µ j } n i,j=. Consequently, the eigenvalues of 2 I J + J I are { 2 µ i + µ j } n i,j= = {λ i,j} n i,j=. Therefore, the eigenvalues of 2. are {λ i,j} n i,j= as well. Second, we show that z i,j is an eigenvector of 2. associated with the eigenvalue λ i,j. For any pair i, j i, j =, 2,..., n, x i x j = S I + I Sx i x j = Sx i x j + x i Sx j = + x i x j, µ i µ j and 0 x i x j = 2S Sx i x j = 2Sx i Sx j = 2 µ i µ j x i x j. 3

4 Therefore, z i,j = µ i + µ j µiµ j 2 0z i,j = λ i,j 0 z i,j. Similarly, we can show that z j,i = λ i,j 0 z j,i If µ is real, then Reµ = µ = 2 µ + µ = λ, ; if µ is not real i.e., µ = µ 2 and x = x 2, then Reµ = 2 µ + µ = 2 µ + µ 2 = λ,2 = λ 2,. In both cases, by Theorem 2., Reµ is an eigenvalue of 2.. We next show that Reµ is the eigenvalue of 2. with smallest modulus. Theorem 2.2. Assume all the eigenvalues of Ax = µmx lie in the left half of the complex plane. Then the eigenvalue of 2. with smallest modulus is Reµ. Proof. Let µ j = a j + ib j. Then 0 > a a 2... a n. If the rightmost eigenvalue of Ax = µmx is real, then Reµ = λ,, and since 0 > a a 2... a n, it follows that λ, 2 = 4 a + a 2 4 [ ai + a j 2 + b i + b j 2] = λ i,j 2 for any pair i, j. Alternatively, if the rightmost eigenvalues of Ax = µmx consist of a complex conjugate pair, then a = a 2, b = b 2, Reµ = λ,2 = λ 2,, and similarly, λ,2 2 = λ 2, 2 = 4 [ a + a 2 + b b 2] 4 [ ai + a j 2 + b i + b j 2] = λ i,j 2 for any pair i, j. In both cases, Reµ is the eigenvalue of 2. with smallest modulus. Simple example: Consider a 4 4 example of Ax = µmx whose eigenvalues are µ,2 = ± 5i, µ 3 = 2 and µ 4 = 3 see Figure 2.a. The eigenvalues of the corresponding 6 6 eigenvalue problem 2. are plotted in Figure 2.b and listed in Table 2.. As seen in Figure 2.b, λ, = 5i, λ,2 = λ 2, = and λ 2,2 = + 5i are the leftmost eigenvalues of 2., and λ,2 = λ 2, are the eigenvalues of 2. with smallest modulus. Table 2.: The eigenvalues of 2. corresponding to the 4 4 example λ, = 5i λ 2, = λ,2 = λ 2,2 = + 5i λ 3, = λ,3 =.5 2.5i λ 3,2 = λ 2,3 = i λ 3,3 = 2 λ 4, = λ,4 = 2 2.5i λ 4,2 = λ 2,4 = i λ 4,3 = λ 3,4 = 2.5 λ 4,4 = 3 Assume Ax = µmx has a complete set of eigenvectors {x j } n j=. Then 2. also has a complete set of eigenvectors {z i,j } n i,j=. By Theorem 2.2, the distance between the imaginary axis and the rightmost eigenvalues, Reµ, can be found by inverse iteration applied to 2.. Unfortunately, this approach is not suitable for large n because it involves solving linear systems of order n 2. In [8, 5], an n 2 n 2 eigenvalue problem similar in structure to 2. is dealt with by rewriting an equation of Kronecker sums into an equation of Lyapunov form, i.e.,.3. Here, similarly, we can rewrite 2. into SZ + ZS T + λ2szs T = Any eigenpair λ, z of 2. is related to a solution λ, Z of 2.2, which we also refer to as an eigenpair of 2.2, by z = vecz. By Theorem 2. and the relation between 2. and 2.2, λ i,j, Z i,j i, j =, 2,..., n are the eigenpairs of 2.2 where Z i,j = x j x T i ; in addition, by Theorem 2.2, 4

−Re(µ_1) is the eigenvalue of (2.2) with smallest modulus.

[Fig. 2.1: The spectrum of Ax = µMx (a) and the spectrum of (2.1) (b) for the 4×4 example; double eigenvalues are marked.]

Furthermore, under certain conditions, −Re(µ_1) is an eigenvalue of (2.2) whose associated eigenvector is real, symmetric and of low rank. Assume the following:

(a1) for any 1 < i ≤ n, if Re(µ_i) = Re(µ_1), then µ_i = µ̄_1;
(a2) µ_1 is a simple eigenvalue of Ax = µMx.

Consequently, if µ_1 is real, −Re(µ_1) is a simple eigenvalue of (2.1) with the eigenvector z_{1,1} = x_1 ⊗ x_1; otherwise, −Re(µ_1) is a double eigenvalue of (2.1) with the eigenvectors z_{1,2} = x_1 ⊗ x̄_1 and z_{2,1} = x̄_1 ⊗ x_1. When the eigenvectors of (2.2) are restricted to the subspace of C^{n×n} consisting of symmetric matrices Z, then by Theorem 2.3 from [5], −Re(µ_1) has a unique (up to a scalar multiplier), real and symmetric eigenvector: x_1 x_1^T if µ_1 is real, or x_1 x_1^* + x̄_1 x_1^T if µ_1 is not real (where x_1^* denotes the conjugate transpose of x_1). Therefore, we can apply Lyapunov inverse iteration (see [8, 5]) to (2.2) to find −Re(µ_1), the eigenvalue of (2.2) with smallest modulus:

Algorithm 1 (Lyapunov inverse iteration for (2.2)).
1. Given V_0 ∈ R^{n×1} with ‖V_0‖_2 = 1 and d_0 = 1.
2. For l = 1, 2, ...
   2.1. Rank reduction: compute S̃ = V_{l−1}^T S V_{l−1} and solve for the eigenvalue λ̃ of
            S̃ Z̃ + Z̃ S̃^T + λ̃ (2 S̃ Z̃ S̃^T) = 0    (2.3)
        with smallest modulus and its eigenvector Z̃ = Ṽ D̃ Ṽ^T, where Ṽ ∈ R^{d_{l−1}×r}, D̃ ∈ R^{r×r} with ‖D̃‖_F = 1, and r = 1 (l = 1) or 2 (l > 1).
   2.2. Set λ_l = λ̃ and Z_l = V_l D̃ V_l^T, where V_l = V_{l−1} Ṽ.
   2.3. If (λ_l, Z_l) is accurate enough, then stop.
   2.4. Else, solve for Y_l from
            S Y_l + Y_l S^T = 2 S Z_l S^T    (2.4)
        in factored form: Y_l = V_l D_l V_l^T, where V_l ∈ R^{n×d_l} is orthonormal and D_l ∈ R^{d_l×d_l}.

When l = 1, (2.3) is a scalar equation and its eigenpair is (λ̃, Z̃) = (−1/S̃, 1). As the iteration proceeds, the iterate (λ_l, Z_l) = (λ_l, V_l D̃ V_l^T) will converge to (−Re(µ_1), V D V^T), where ‖D‖_F = 1 and V = x_1 if µ_1 is real, or V ∈ R^{n×2} is an orthonormal matrix whose columns span {x_1, x̄_1} if µ_1 is not real. Besides estimates of −Re(µ_1), we can also obtain from Algorithm 1 estimates of (µ_1, x_1) by solving the small (1×1 or 2×2) eigenvalue problem

    (V_l^T S V_l) y = θ y    (2.5)

and taking µ_l = 1/θ and x_l = V_l y. As V_l converges to V, (µ_l, x_l) will converge to (µ_1, x_1).

At each iteration of Algorithm 1, a large-scale Lyapunov equation (2.4) needs to be solved. We can rewrite (2.4) as

    S Y_l + Y_l S^T = P_l C_l P_l^T    (2.6)

(see [8] for details), where P_l is orthonormal and of rank 1 (l = 1) or 2 (l > 1). The solution Y_l to (2.6) is real and symmetric and frequently has low rank (see [, 6]). Since S is large, direct methods such as those of [2, 2] are not suitable. An iterative method that solves Lyapunov equations with a large coefficient matrix and a low-rank right-hand side is needed. Krylov-type methods for (2.6), such as the standard Krylov subspace method [3, 7], the Extended Krylov Subspace Method (EKSM) [8] and the Rational Krylov Subspace Method (RKSM) [6, 7], construct approximate solutions of the form Y_l^approx = W X W^T, where W is an orthonormal matrix whose columns span the Krylov subspace and X is the solution to the small, projected Lyapunov equation

    (W^T S W) X + X (W^T S W)^T = (W^T P_l) C_l (W^T P_l)^T,

which can be obtained using direct methods. For example, the standard Krylov subspace method [3, 7] builds the mp-dimensional Krylov subspace

    K_m(S, P_l) = span{P_l, S P_l, ..., S^{m−1} P_l}    (2.7)

where m is the number of block Arnoldi steps and p is the block size (i.e., the rank of P_l). The main cost of solving (2.6) using Krylov-type methods is mp linear solves with coefficient matrix aA + bM, where the values of the scalars a, b depend on the Krylov method used.

In step 2.1 (rank reduction) of Algorithm 1, although it may look like computing the reduced-rank matrix S̃ = V_{l−1}^T S V_{l−1} requires another d_{l−1} linear solves with coefficient matrix A, in fact, if a Krylov-type method is used to solve (2.6), S̃ can be obtained from the Arnoldi decomposition computed by the Krylov-type method at no additional cost when l > 1. Assume the standard Krylov method is used to solve (2.6). In the (l−1)st iteration of Algorithm 1, it computes the Arnoldi decomposition

    S V_{l−1} = V_{l−1} H_m + W_{m+1} H_{m+1,m} E_m^T    (2.8)

and the approximate solution Y_{l−1}^approx = V_{l−1} D_{l−1} V_{l−1}^T, where the columns of V_{l−1} form an orthonormal basis for the Krylov subspace K_m(S, P_{l−1}) and are orthogonal to W_{m+1}. In addition, H_m ∈ R^{mr×mr} is block upper Hessenberg, W_{m+1} ∈ R^{n×r} is orthonormal, H_{m+1,m} ∈ R^{r×r}, and E_m holds the last r columns of the identity matrix of order mr, where r = 1 or 2. This implies that in the l-th iteration of Algorithm 1, S̃ is simply H_m, which has been computed already. The Arnoldi decomposition computed by EKSM or RKSM has a form that is more complicated than (2.8); nonetheless, it produces the matrix S̃ needed for the next iteration as well.
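To make the steps of Algorithm 1 concrete, here is a dense Python/NumPy sketch applied to the 4×4 example of section 2 (eigenvalues −1 ± 5i, −2, −3, with M = I). It is not the report's implementation: the small problem (2.3) is solved by brute force through its Kronecker form, and the Lyapunov equation in step 2.4 is solved exactly with scipy.linalg.solve_continuous_lyapunov rather than with a Krylov-type method, so it only illustrates the structure for small n. The helper names smallest_lyap_eig and lyap_inverse_iteration are ours, not the paper's.

```python
import numpy as np
from scipy.linalg import eig, solve_continuous_lyapunov

def smallest_lyap_eig(Ss):
    """Smallest-modulus eigenpair (lam, Z) of  Ss Z + Z Ss^T + lam*(2 Ss Z Ss^T) = 0,
    found by brute force via the Kronecker form (2.1); feasible only for small Ss."""
    d = Ss.shape[0]
    D1 = np.kron(Ss, np.eye(d)) + np.kron(np.eye(d), Ss)
    D0 = -2.0 * np.kron(Ss, Ss)
    lam, W = eig(D1, D0)
    j = np.argmin(np.abs(lam))
    Z = W[:, j].reshape(d, d, order="F")
    Z = (Z + Z.T) / 2.0                       # the target eigenvector is symmetric ...
    Z = Z.real if np.linalg.norm(Z.real) >= np.linalg.norm(Z.imag) else Z.imag  # ... and real
    return lam[j].real, Z / np.linalg.norm(Z)

def lyap_inverse_iteration(S, tol=1e-8, maxit=10, seed=0):
    """Dense sketch of Algorithm 1 (Lyapunov inverse iteration for (2.2))."""
    n = S.shape[0]
    V = np.random.default_rng(seed).standard_normal((n, 1))
    V /= np.linalg.norm(V)
    for l in range(1, maxit + 1):
        lam, Zt = smallest_lyap_eig(V.T @ S @ V)           # 2.1 rank reduction, eq. (2.3)
        ew, ev = np.linalg.eigh(Zt)
        keep = np.abs(ew) > 1e-8 * np.abs(ew).max()
        Vl = V @ ev[:, keep]                               # 2.2  V_l = V_{l-1} V~ (r = 1 or 2)
        Z = Vl @ np.diag(ew[keep]) @ Vl.T                  #      Z_l = V_l D~ V_l^T
        R = S @ Z + Z @ S.T + 2.0 * lam * (S @ Z @ S.T)    # 2.3 residual of (2.2)
        if np.linalg.norm(R, "fro") < tol:
            return lam, Vl, l
        Y = solve_continuous_lyapunov(S, 2.0 * S @ Z @ S.T)  # 2.4 exact stand-in for a Krylov solve
        w, Q = np.linalg.eigh((Y + Y.T) / 2.0)             # factored form Y_l = V_l D_l V_l^T
        V = Q[:, np.abs(w) > 1e-12 * np.abs(w).max()]
    return lam, Vl, maxit

# 4x4 example of section 2: mu = -1 +/- 5i, -2, -3, M = I, so S = A^{-1}.
A = np.zeros((4, 4))
A[:2, :2] = [[-1.0, 5.0], [-5.0, -1.0]]
A[2:, 2:] = np.diag([-2.0, -3.0])
S = np.linalg.inv(A)

lam, Vl, iters = lyap_inverse_iteration(S)
print(lam, iters)                                          # expect 1.0 (= -Re(mu_1)), 2 iterations
print(np.sort_complex(1.0 / np.linalg.eigvals(Vl.T @ S @ Vl)))   # eq. (2.5): expect -1 +/- 5i
```

Because the Lyapunov equation is solved exactly here, the iteration stops after two steps, consistent with the behaviour analyzed in section 3.4.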

7 3. Numerical experiments. In this section, we test Algorithm on several problems arising from fluid dynamics. Note that when.2 comes from a standard e.g., finite element discretization of the imcompressible Navier-Stokes equations, the mass matrix M is singular, leading to infinite eigenvalues of. and singular S = A M. As in [8], we use the shifted, nonsingular mass matrix proposed in [4], which maps the infinite eigenvalues of. to finite ones away from the imaginary axis and leaves the finite eigenvalues of. unchanged. From here on, M refers to this shifted mass matrix. 3.. Example : driven-cavity flow. Linear stability analysis of this flow is studied in many papers, for example, [9]. The Q 2 -Q mixed finite element discretization with a mesh of the Navier-Stokes equations gives rise to a generalized eigenvalue problem. of order n = 9539, where the parameter α is the Reynolds number denoted by Re of the flow. The Reynolds number of this flow is defined to be Re = ν, where ν is the kinematic viscosity. Figure 3.a depicts the path traced out by the eight rightmost eigenvalues of. for Re = 2000, 4000, 6000, 7800, at which the steady-state solution to.2 is stable. As the Reynolds number increases, the following trend can be observed: the eight rightmost eigenvalues all move towards the imaginary axis, and they become more clustered as they approach the imaginary axis. In addition, although the rightmost eigenvalue starts off being real, one conjugate pair of complex eigenvalues whose imaginary parts are about ±3i move faster towards the imaginary axis than the other eigenvalues and eventually they become the rightmost. They first cross the imaginary axis at Re 7929, causing instability in the steady-state solution of.2 see [8]. Finding the conjugate pair of rightmost eigenvalues of. at a high Reynolds number for example, at Re = 7800 can be difficult. Suppose we are trying to find the rightmost eigenvalues at Re = 7800 by conventional methods, such as computing k eigenvalues of. with smallest modulus using the Implicitly Restarted Arnoldi IRA method [9]. If we use the Matlab function eigs which implements the IRA method with its default setting, then k has to be as large as 250, since there are many eigenvalues that have smaller modulus than the rightmost pair. This leads to at least 500 linear solves with coefficient matrix A, and in practice, many more. More importantly, note that the decision k = 250 is made based on a priori knowledge of where the rightmost eigenvalues lie. In general, we cannot identify a good value for k that guarantees that the rightmost eigenvalues will be found. For four various Reynolds numbers between 2000 and 7800, we apply Algorithm with RKSM as the Lyapunov solver to calculate the distance between the rightmost eigenvalues of. and the imaginary axis. The results are reported in Table 3.2 see Table 3. for notation. The initial guess V 0 is chosen to be a random vector of unit norm in R n, the stopping criterion for the eigenvalue residual is R l F < 0 8, and the stopping criterion for the Lyapunov solve is R l F < 0 9 P l C l Pl T F = 0 9 C l F. Note that both residual norms R l F and R l F are cheap to compute see [8] for details. Therefore, the main cost of each iteration is about d l linear solves of order n. All linear systems are solved using direct methods. As shown in Table 3.2, the distances between the rightmost eigenvalues of. and the imaginary axis at Re = 2000, 4000, 6000, 7800 are , , , , respectively. 
We also obtain estimates of the rightmost eigenvalue of (1.1) at the four Reynolds numbers: , , , and i. We note two trends seen in these results. First, surprisingly, for all the Reynolds numbers considered, Algorithm 1 converges to the desired tolerance ‖R_l‖_F < 10^{-8} in only 2 iterations.
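The remark above that the residual norms are cheap to evaluate can be illustrated as follows. For an iterate in factored form Z_l = V D V^T with r = 1 or 2 columns in V, the eigenvalue residual R_l = S Z_l + Z_l S^T + 2λ_l S Z_l S^T equals W K W^T with W = [V, SV] and a small 2r×2r matrix K, so its Frobenius norm can be read off from a thin QR factorization of W. This is one possible way to organize the computation (the report cites [8] for the formulas it actually uses), and the helper name eig_residual_fro is hypothetical.

```python
import numpy as np

def eig_residual_fro(V, SV, D, lam):
    """||S Z + Z S^T + 2 lam S Z S^T||_F for Z = V D V^T, given SV = S @ V,
    without forming any n-by-n matrix."""
    r = V.shape[1]
    W = np.hstack([V, SV])                       # residual = W K W^T with W = [V, SV]
    K = np.zeros((2 * r, 2 * r))
    K[:r, r:] = D
    K[r:, :r] = D
    K[r:, r:] = 2.0 * lam * D
    Rw = np.linalg.qr(W, mode="reduced")[1]      # ||W K W^T||_F = ||Rw K Rw^T||_F
    return np.linalg.norm(Rw @ K @ Rw.T, "fro")

# consistency check against the dense residual on a small example
rng = np.random.default_rng(1)
n, r, lam = 60, 2, 0.4
S = rng.standard_normal((n, n))
V = np.linalg.qr(rng.standard_normal((n, r)))[0]
D = np.array([[0.7, 0.1], [0.1, -0.3]])
Z = V @ D @ V.T
dense = np.linalg.norm(S @ Z + Z @ S.T + 2.0 * lam * S @ Z @ S.T, "fro")
print(abs(dense - eig_residual_fro(V, S @ V, D, lam)))   # ~ machine precision
```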

[Fig. 3.1: (a) The eight rightmost eigenvalues for driven-cavity flow at Reynolds numbers Re = 2000, 4000, 6000, 7800. (b) The 300 eigenvalues with smallest modulus at Re = 7800, with the rightmost eigenvalues marked.]

Table 3.1: Notation for Algorithm 1
  λ_l : the estimate of −Re(µ_1), i.e., the eigenvalue of (2.2) with smallest modulus
  Z_l : the estimated eigenvector of (2.2) associated with −Re(µ_1)
  µ_l : the estimated rightmost eigenvalue of Ax = µMx computed from (2.5)
  Y_l^approx : the approximate solution to the Lyapunov equation (2.6)
  R_l : S Z_l + Z_l S^T + λ_l (2 S Z_l S^T), the residual of the Lyapunov eigenvalue problem (2.2)
  R̃_l : S Y_l^approx + Y_l^approx S^T − P_l C_l P_l^T, the residual of the Lyapunov equation (2.6)
  d_l : dimension of the Krylov subspace, i.e., rank of Y_l^approx

That is, only the first Lyapunov equation

    S Y_1 + Y_1 S^T = P_1 C_1 P_1^T    (3.1)

needs to be solved, where P_1 ∈ R^{n×1} and C_1 ∈ R^{1×1}. Second, as the Reynolds number increases, it becomes more expensive to solve the Lyapunov equation to the same order of accuracy (‖R̃_l‖_F < 10^{-9}‖C_l‖_F), since Krylov subspaces of increasing dimension are needed (56, 24, 307 and 366 for the four Reynolds numbers). We also tested Algorithm 1 using the standard Krylov method [7] to solve the Lyapunov systems. To solve (3.1) to the same accuracy, this method requires subspaces of dimension 525, 64, 770 and 896 for the four Reynolds numbers, which are much larger than those required by RKSM (see Figure 3.2 for comparison). As a result, the standard method requires many more linear solves.
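The projection framework shared by these Krylov-type Lyapunov solvers is easy to sketch. The code below builds the standard (polynomial) Krylov subspace K_m(S, p) by the Arnoldi process and solves the projected equation with a direct method, as described at the end of section 2; RKSM and EKSM differ only in the subspace they construct. The test matrix is an artificial, provably stable stand-in, not one of the report's discretizations, and krylov_lyap is our name for the routine.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def krylov_lyap(S, p, c, m):
    """Galerkin approximation Y ~ W X W^T to  S Y + Y S^T = c * p p^T
    from the standard Krylov subspace K_m(S, p)."""
    n = p.size
    W = np.zeros((n, m))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(p)
    W[:, 0] = p / beta
    for j in range(m):                               # Arnoldi with modified Gram-Schmidt
        w = S @ W[:, j]
        for i in range(j + 1):
            H[i, j] = W[:, i] @ w
            w -= H[i, j] * W[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if j + 1 < m:
            W[:, j + 1] = w / H[j + 1, j]
    Hm = H[:m, :m]                                   # equals W^T S W in exact arithmetic
    rhs = np.zeros((m, m))
    rhs[0, 0] = c * beta**2                          # equals W^T (c p p^T) W
    X = solve_continuous_lyapunov(Hm, rhs)           # small projected Lyapunov equation
    return W, X

# artificial, provably stable test matrix: all eigenvalues have real part exactly -1
rng = np.random.default_rng(2)
n = 400
B = 0.05 * rng.standard_normal((n, n))
S = -np.eye(n) + (B - B.T)
p = rng.standard_normal(n)
for m in (10, 20, 40):
    W, X = krylov_lyap(S, p, 1.0, m)
    Y = W @ X @ W.T
    print(m, np.linalg.norm(S @ Y + Y @ S.T - np.outer(p, p)))   # residual decreases as m grows
```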

9 Table 3.2: Algorithm applied to Example Lyapunov solver: RKSM l λ l µ l R l F R l F d l Re= e e e- Re= e e e-0 Re= e e e-0 Re= e e i e Example 2: flow over an obstacle. For linear stability analysis of this flow, see [8]. The Q 2 -Q mixed finite element discretization with a mesh of the Navier-Stokes equations gives rise to a generalized eigenvalue problem. of order n = 952. Figure 3.3a depicts the path traced out by the six rightmost eigenvalues of. for Re = 00, 200, 300, 350 in the stable regime, and Figure 3.3b shows the 300 eigenvalues of. with smallest modulus at Re = 350. In this example, the Reynolds number Re = 2 ν. As for the previous example, as the Reynolds number increases, the six rightmost eigenvalues all move towards the imaginary axis, and the rightmost eigenvalue changes from being real at Re = 00 to complex at Re = 200, 300, 350. The rightmost pair of eigenvalues of. cross the imaginary axis and the steady-state solution to.2 loses its stability at Re 373. We again apply Algorithm to estimate the distance between the rightmost eigenvalues of. and the imaginary axis for the four Reynolds numbers mentioned above. The results are reported in Table 3.3. The stopping criteria for both Algorithm and the Lyapunov solve 2.6 remain unchanged, i.e., R l F < 0 8 and R l F < 0 9 C l F. For all four Reynolds numbers, Algorithm converges rapidly. In fact, we will show in section 3.4 that if the Lyapunov equation 3. is solved more accurately, Algorithm will converge in two iterations in all four cases as observed in the previous example. Again we compare the performance of the standard Krylov method and RKSM in solving 3.. As for the cavity flow, the Krylov method needs a significantly larger subspace than RKSM to compute a solution of the same accuracy see Figure Example 3: double-diffusive convection problem. This is a model of the effects of convection and diffusion on two solutions in a box heated at one boundary see Chapter 8 of [2]. The governing equations use Boussinesq approximation and are given in [3] and [5]. Linear stability analysis of this problem is considered in [0]. The imaginary parts of the rightmost eigenvalues of. near the critical point u c, α c have fairly large magnitude, and as a result, the rightmost eigenvalues are further away from zero than many of the real eigenvalues close to the imaginary axis. Conventional methods, such as IRA with a zero shift, tend to converge to the real eigenvalues close to the imaginary axis instead of the rightmost pair. We consider an artificial version Ax = µx of this problem, where A is tridiagonal of order 9

[Fig. 3.2: Comparison of the standard Krylov method and RKSM for solving (3.1) in Example 1; each panel plots the Lyapunov residual norm against the dimension of the Krylov subspace, for Re = 2000 (a), 4000 (b), 6000 (c) and 7800 (d).]

n = 10,000 with eigenvalues µ_{1,2} = −0.05 ± 25i and µ_j = j 0. for all 3 ≤ j ≤ n. The 300 eigenvalues of A with smallest modulus are plotted in Figure 3.5 (left). A similar problem is studied in [4]. If we use the Matlab function eigs with zero shift to compute its rightmost eigenvalues, at least 25 eigenvalues of A have to be computed to ensure that µ_{1,2} will be found. This approach requires a minimum of 502 linear solves under the default setting of eigs, and again in practice many more will be needed. We apply Algorithm 1 to this problem with the same stopping criteria for the inner and outer iterations as in the previous two examples and the results are reported in Table 3.4. It converges in just 3 iterations, requiring 90 linear solves to solve the two Lyapunov equations to the desired accuracy. As in the previous examples, RKSM needs a Krylov subspace of significantly smaller dimension than the standard Krylov method (see Figure 3.5, right).
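The difficulty this example is designed to expose is easy to reproduce. The snippet below builds a matrix in the same spirit: a conjugate pair close to the imaginary axis but far from the origin, plus many real eigenvalues of much smaller modulus. The numerical values are illustrative stand-ins (the report's exact values are garbled in this transcription), and SciPy's eigs plays the role of the Matlab eigs call discussed above: with the heuristic shift σ = 0 it returns only small real eigenvalues unless k is taken quite large.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 500
a, b = -0.05, 2.5                       # illustrative values for the rightmost pair a +/- b*i
block = np.array([[a, b], [-b, a]])
reals = -0.1 * np.arange(1, n - 1)      # real eigenvalues -0.1, -0.2, ... of much smaller modulus
A = sp.block_diag([block, sp.diags(reals)], format="csc")

k = 10
vals, _ = spla.eigs(A, k=k, sigma=0.0)  # k eigenvalues of smallest modulus (shift-invert at 0)
print(np.sort_complex(vals))            # only small real eigenvalues are returned

# the rightmost pair has modulus ~2.5, so roughly 25 real eigenvalues precede it
# in modulus; k must exceed that before the zero-shift heuristic can find it
print(abs(complex(a, b)))
```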

11 imaginary axis imaginary axis real axis a real axis b Fig. 3.3: a The six rightmost eigenvalues for flow over an obstacle at different Reynolds numbers : Re = 00, : Re = 200, : Re = 300, : Re = 350. b The 300 eigenvalues with smallest modulus at Re = 350 : the rightmost eigenvalues Analysis of the convergence of Algorithm. In the numerical experiments, we have shown that Algorithm converges rapidly. In particular, it converges in just two iterations in many cases, for example, the driven-cavity flow at Re = 2000, 4000, 6000, In other words, only one Lyapunov solve 3. is needed to obtain an eigenvalue estimate of desired accuracy in these cases. The analysis below provides some insight into this fast convergence. We introduce some notation to be used in the analysis. Assuming. and therefore 2. have complete sets of eigenvectors, let {λ k, z k } n2 k= denote the eigenpairs of 2., where z k 2 = and {λ k } have increasing moduli, i.e., λ k λ k2 if k < k 2. By Theorem 2., for each k n 2, there exist i, j n such that λ k = λ i,j = 2 µ i + µ j and z k = z i,j = x i x j. 2 Let Z k = x j x T i, i.e., vecz k = z k, so that {λ k, Z k } n2 k= are the eigenpairs of 2.2. In addition, let L p = {λ, λ 2,..., λ p } contain the p eigenvalues of 2. with smallest modulus, and let E p = {µ i, µ i2,..., µ id } be the smallest subset of eigenvalues of Ax = µmx that satisfies the following: for any λ k L p, there exist µ is, µ it E p such that λ k = λ is,i t. Let X p = {x i, x i2,..., x id } hold the eigenvectors of Ax = µmx associated with E p. For a concrete example, consider again the 4 4 example in section 2. From Table 2., L 7 = {λ,2, λ 2,, λ 3,3, λ,3, λ 3,, λ 2,3, λ 3,2 }, E 7 = {µ, µ 2, µ 3 }, and X 7 = {x, x 2, x 3 }. We first look at standard inverse iteration applied to 2.. Let the starting guess be z = v v R n2, where v R n is a random vector of unit norm. Since {z k } n2 k= are linearly independent, z can be written as n 2 k= ξ kz k with ξ k C assume ξ k 0 for any k. Note that the coefficients {ξ k } n2 k= have the following properties: if z k = z k2, then ξ k = ξ k2 ; moreover, if z k = x i x j and 2 Both sets of symbols {λ i,j, z i,j } n i,j= and {λ k, z k } n2 k= denote the eigenpairs of 2.. The double subscripts indicate the special structure of the eigenpairs, whereas the single subscripts arrange the eigenvalues in ascending order of their moduli. Our choice between the two notations depends on the context.

12 Table 3.3: Algorithm applied to Example 2 Lyapunov solver: RKSM l λ l µ l R l F R l F d l Re= e e e e e e e e e e e e e-9 Re= e e i e e i.30869e e i.47390e-9 Re= e e i e e i 2.888e-0 Re= e e i e e i.46747e- Table 3.4: Algorithm applied to Example 3 Lyapunov solver: RKSM l λ l µ l R l F R l F d l e e i e e i.5600e-3 z k2 = x j x i for some pair i, j, then ξ k = ξ k2. The first property is due to the fact that z is real, and the second one is a result of the special tensor structure of z. In the first step of inverse iteration, we solve the linear system The solution to 3.2 is y = 0z = n 2 ξ k k= y = 0 z. 3.2 λ k z k. Let y p be a truncated approximation of y consisting of its p dominant components, i.e., y p = p ξ k k= λ k z k for some p n 2. Next, we consider Algorithm applied to 2.2. Let the starting guess be Z = vv T, where v is the vector that determines the starting vector for standard inverse iteration. At step 2.4 of the first iteration, we solve the Lyapunov equation 3. where vec P C P T = vec 2SZ S T = 2

13 standard Krylov RKSM standard Krylov RKSM R F 0 4 R F the dimension of the Krylov subspace a Re = the dimension of the Krylov subspace b Re = standard Krylov RKSM standard Krylov RKSM R F 0 4 R F the dimension of the Krylov subspace the dimension of the Krylov subspace c Re = 300 d Re = 350 Fig. 3.4: Comparison of the standard Krylov method and RKSM for solving 3. in Example 2 0 z see 2.4 and vecy = y, i.e., Y = n 2 k= ξ k λ k Z k. 3.3 Let Y p denote the truncation of Y that satisfies vec Y p = yp, i.e., Y p = p ξ k k= λ k Z k. Assume p is chosen such that if λ i,j L p, then λ i,j, λ j,i L p as well. Under this assumption and by properties of the coefficients {ξ k } n2 k=, Y p p is real and symmetric and can be written as Y = UGU T where U R n d is an orthonormal matrix whose columns span X p and G R d d is symmetric. Recall from section 2 that the target eigenpair of 2.2 sought by Algorithm is λ, VDV T, where λ = Reµ, and V R n r with r = or 2 is x if µ is real or an orthonormal matrix 3

14 standard Krylov RKSM imaginary axis R F real axis the dimension of the Krylov subspace Fig. 3.5: Left: the 300 eigenvalues with smallest modulus : the rightmost eigenvalues. Right: Comparison of the standard Krylov method and RKSM for solving 3. in Example 3 whose columns span {x, x } if µ is not real. Since λ, VDV T is an eigenpair of 2.2, S VDV T + VDV T S T + λ 2S VDV T S T = For any p, x, x X p. Thus, U can be taken to have the form U = [V, V ] where V T V = 0. Assume that step 2.4 of the first iteration of Algorithm 2 produces an approximate solution to 3. of the form Y p by some means that is, the approximate solution consists of the p dominant terms of 3.3 where p satisfies the assumption above. Then at step 2. rank reduction of the second iteration, we solve the d d projected eigenvalue problem U T SU Z + Z U T SU T + λ 2 U T SU Z U T SU T = for the eigenvalue with smallest modulus, λ, and its associated real, symmetric and rank-r eigenvector Z r = or 2. We then obtain an estimate of the target eigenpair of 2.2, namely λ, U Z U T. It can be shown that this estimate is in fact exact, that is, λ, U Z U T = λ, VDV T. Theorem 3.. If [ ] D D = 0, d d then λ, D is an eigenpair of 3.5. Proof. Left-multiply 3.4 by U T and right-multiply by U: U T SVDV T U + U T VDV T S T U + λ 2U T SVDV T S T U = 0. Since U = [V, V ], it follows that VDV T = UD U T. Therefore, U T SU D + D U T S T U + λ 2 U T SU D U T S T U = 0. 4
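The mechanism behind Theorem 3.1 and Proposition 3.2 (stated below) can be observed directly on a small example: solve the first Lyapunov equation (3.1) accurately (here exactly), keep the p dominant directions of the solution, and solve the projected problem (3.5) for its smallest-modulus eigenvalue. The estimate settles at −Re(µ_1) as soon as the dominant subspace captures x_1 and its conjugate, which is why Algorithm 2 needs only one sufficiently accurate Lyapunov solve. The matrix below is an illustrative stand-in, not one of the report's test problems, and smallest_lyap_eigval is our helper name.

```python
import numpy as np
from scipy.linalg import eig, solve_continuous_lyapunov

def smallest_lyap_eigval(Ss):
    """Smallest-modulus eigenvalue of  Ss Z + Z Ss^T + lam*(2 Ss Z Ss^T) = 0  (small Ss)."""
    d = Ss.shape[0]
    D1 = np.kron(Ss, np.eye(d)) + np.kron(np.eye(d), Ss)
    lam = eig(D1, -2.0 * np.kron(Ss, Ss), right=False)
    return lam[np.argmin(np.abs(lam))].real

# illustrative stable problem with M = I: rightmost pair mu_{1,2} = -0.2 +/- 3i
n = 12
A = np.zeros((n, n))
A[:2, :2] = [[-0.2, 3.0], [-3.0, -0.2]]
A[2:, 2:] = np.diag(-np.arange(1.0, n - 1.0))
S = np.linalg.inv(A)

# one accurate solve of the first Lyapunov equation (3.1), with Z_1 = v v^T
v = np.random.default_rng(0).standard_normal((n, 1))
v /= np.linalg.norm(v)
Y = solve_continuous_lyapunov(S, 2.0 * S @ (v @ v.T) @ S.T)

# project onto the p dominant directions of Y (the role of the truncation Y^p and of U)
w, V = np.linalg.eigh((Y + Y.T) / 2.0)
idx = np.argsort(-np.abs(w))
for p in (2, 4, 8, n):
    U = V[:, idx[:p]]
    print(p, smallest_lyap_eigval(U.T @ S @ U))   # approaches 0.2 = -Re(mu_1)
```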

15 Proposition 3.2. The eigenvalue of 3.5 with smallest modulus is λ = λ. Proof. Let S = U T SU. Since the columns of U span X p, the eigenvalues of S are µ i, µ i 2,..., µ i d. Let = S I + I S and 0 = 2 S S. By Theorem 2., the eigenvalues of the d 2 d 2 eigenvalue problem z = λ 0 z 3.6 are λ s,t = 2 µ i s + µ it for any s, t d. Since µ, µ E p, Theorem 2.2 implies that the eigenvalue of 3.6 with smallest modulus is λ = λ, the eigenvalue of 2. with smallest modulus. Since 3.5 and 3.6 have the same eigenvalues, the eigenvalue of 3.5 with smallest modulus is λ as well. Recall that we solve the Lyapunov equation 3. using an iterative solver such as RKSM, which produces a real, symmetric approximate solution Y approx. By Theorem 3. and Proposition 3.2, if Y approx = Y p, then after the rank-reduction step in the second iteration of Algorithm, we obtain the exact eigenpair λ, VDV T of 2.2 that we are looking for. In reality, it is unlikely that the approximate solution Y approx we compute will be exactly Y p p. However, since Y consists of the p dominant terms of the exact solution 3.3, if Y approx is accurate enough, then Y approx Y p for some p. This analysis suggests that the eigenvalue residual norm R 2 F can be made arbitrarily small as long as the residual norm of the Lyapunov system R F is small enough. Therefore, we propose the following modified version of Algorithm given τ lyap, τ eig > 0: Algorithm 2 Modified Algorithm. Given V 0 R n with V 0 2 =. Set l = and firsttry = true. 2. Rank reduction: compute S = Vl T SV l and solve for the eigenvalue λ of 2.3 with smallest modulus and its eigenvector Z = Ṽ DṼ T. 3. Set λ l, Z l T = λ, V l DV l where V l = V l Ṽ, and compute R l F. 4. While R l F > τ eig : 4. if firsttry compute an approximate solution Y approx = V D V T to 3. such that R F < τ lyap C F ; set l = 2 and firsttry = false; 4.2 else solve 3. more accurately and update V ; 4.3 repeat steps 2 and 3 to compute λ l, Z l and R l F. In this algorithm, if λ 2, Z 2 is not an accurate enough eigenpair R 2 F τ eig, this is fixed by improving the accuracy of the Lyapunov system 3.. The discussion above shows that this will be enough to produce an accurate eigenpair. Moreover, it is possible to get an improved solution to 3. by augmenting the solution we have in hand. Assume that at step 4. of Algorithm 2, we compute an approximate solution Y approx = V D V T to 3. where the columns of V span the Krylov subspace K m S, P see 2.7 for definition, and then obtain an iterate λ 2, Z 2 in steps 2 and 3. If R 2 F τ eig, we perform one more block Arnoldi step to extend the existing Krylov subspace to K m+ S, P, obtain a new approximate solution Y approx = V D V T to 3. where the columns of V now span the augmented Krylov subspace K m+ S, P, and check convergence 5

16 in steps 2 and 3 again. We keep extending the Krylov subspace at our disposal until the outer iteration converges to the desired tolerance τ eig. We test Algorithm 2 on Example 2 and Example 3, where Algorithm converged in more than two iterations see Tables 3.3 and 3.4. As in the previous experiments, we choose τ lyap = 0 9 and τ eig = 0 8. The results are reported in Tables 3.5 and 3.6, from which it can be seen that if 3. is solved accurately enough, Lyapunov inverse iteration converges to the desired tolerance in only two iterations, as observed for the driven-cavity flow see Table 3.2. In other words, it requires only one Lyapunov solve 3.. By comparing Tables 3.4 and 3.6, for example, we can see that in order to compute an accurate enough approximate solution Y approx to 3., the dimension of the Krylov subspace used must be increased from 40 to 43. Table 3.5: Algorithm 2 applied to Example 2 Lyapunov solver: RKSM l λ l µ l R l F R l F d l Re= e e e-9 Re= e e i e-9 Re= e e i e-9 Re= e e i e-9 Table 3.6: Algorithm 2 applied to Example 3 Lyapunov solver: RKSM l λ l µ l R l F R l F d l e e i e-9 4. Computing k rightmost eigenvalues. In section 2, we showed that when all the eigenvalues of. lie in the left half of the complex plane, the distance between the rightmost eigenvalues and the imaginary axis, Reµ, is the eigenvalue of 2.2 with smallest modulus. As a result, this eigenvalue can be computed by Lyapunov inverse iteration, which also gives us estimates of the rightmost eigenvalues of.. In section 3, various numerical experiments demonstrate the robustness and efficiency of the Lyapunov inverse iteration applied to 2.2. In particular, we showed in section 3.4 that if the first Lyapunov equation 3. is solved accurately enough, then Lyapunov inverse iteration will converge in only two steps. As seen in sections 3. and 3.2, when we march along the solution path S, it may be the case that an eigenvalue that is not the rightmost moves towards the imaginary rapidly, becomes the rightmost eigenvalue at some point and even- 6

17 tually crosses the imaginary axis first, causing instability in the steady-state solution. Therefore, besides the rightmost eigenvalues, it is helpful to monitor a few other eigenvalues in the rightmost part of the spectrum as well. In this section, we show how Lyapunov inverse iteration can be applied repeatedly in combination with deflation to compute k rightmost eigenvalues of., where < k n. We continue to assume that we are at a point u 0, α 0 in the stable regime of the solution path S and that the eigenvalue problem Ax = µmx with A = J α 0 has a complete set of eigenvectors {x i } n i=. For any i k, we also assume the following as in assumptions a and a2 in section 2: a if Reµ j = Reµ i and j i, then µ j = µ i ; a2 µ i is a simple eigenvalue. Let E t = {µ, µ 2,..., µ t } be the set containing t rightmost eigenvalues of Ax = µmx and X t = [x, x 2,..., x t ] C n t be the matrix that holds the t corresponding eigenvectors. Here t is chosen such that t < k and if µ i E t, then µ i E t as well. We will show that given X t, we can use the methodology described in section 2 to find Reµ t+, that is, that Reµ t+ is the eigenvalue with smallest modulus of a certain n 2 n 2 eigenvalue problem with a Kronecker structure like that of 2., and it can be computed using Lyapunov inverse iteration. Lemma 4.. Assume all the eigenvalues of Ax = µmx lie in the left half of the complex plane. Then in the subset {λ i,j } i,j>t of all the eigenvalues of 2., the one with smallest modulus is Reµ t+. Proof. If µ t+ is real, then Reµ t+ = λ t+,t+. If µ t+ is not real, by assumptions a, a2 and the choice of t, µ t+2 = µ t+, which implies that Reµ t+ = λ t+,t+2 = λ t+2,t+. The rest of the proof is very similar to that of Theorem 2.2. Consequently, if we can formulate a problem with a Kronecker structure like that of 2. whose eigenvalues are {λ i,j } i,j>t, then Reµ t+ can be computed by Lyapunov inverse iteration applied to this problem. We will show how such a problem can be concocted and establish some of its properties that are similar to those of 2.. Let Θ t be the diagonal matrix whose diagonal elements are µ, µ 2,..., µ t, so that SX t = X t Θ t. Since Ax = µmx has a complete set of eigenvectors, there exists an orthonormal matrix Q t R n t such that X t = Q t G t, where G t C t t is nonsingular. Let Ŝ = I Q t Q T t S, = Ŝ I + I Ŝ, and 0 = 2Ŝ Ŝ. We claim that the distance between µ t+ and the imaginary axis, Reµ t+, is the eigenvalue of z = λ 0 z, z Range 0 4. with smallest modulus. To prove this claim, we first study the eigenpairs of Ŝ. Lemma 4.2. The matrix I Q t Q T t where Q t is defined above and I R n n is the identity matrix has the following properties:. I Q t Q T t Qt = 0; 2. I Q t Q T i t = I Qt Q T t for any integer i ; 3. I Q t Q T i t S I Qt Q T j t = I Qt Q T t S for any integers i, j. Proof. The first two properties hold for any orthonormal matrix and the proof is omitted here. To prove the third property, we first show that I Q t Q T t S I Qt Q T t = I Qt Q T t S. Since 7

18 SX t = X t Θ t and X t = Q t G t, SQ t Q T t I Qt Q T t S I Qt Q T t = Q t G t Θ t G t Q T t G t is invertible. Thus, = I Qt Q T t S I Qt Q T t SQt Q T t = I Q t Q T t S I Qt Q T t = I Q t Q T t S Qt G t Θ t G t Q T t by the first property. This together with the second property establishes the third property. Lemma 4.3. Let θ i = 0 for i t and θ i = µ i for i > t. Let x i = x i for i t and x i = I Q t Q T t xi for i > t. Then θi, x i i =, 2,..., n are the eigenpairs of Ŝ. Proof. Let g i be the i th column of G t. If i t, x i = Q t g i, thus Ŝx i = I Q t Q T t SQt g i = I Q t Q T t Qt G t Θ t G t g i = 0 by the first property in Lemma 4.2. If i > t, Ŝ I Q t Q T t xi = I Q t Q T t S I Qt Q T t xi = I Q t Q T t Sxi = I Qt Q T t µ i by the third property in Lemma 4.2. Knowing the eigenpairs of Ŝ, we can find the eigenpairs of 0 and with no difficulty. Lemma 4.4. The eigenvalues of are. 0, if i, j t; 2. µ i, if i > t and j t; 3. µ j, if i t and j > t; 4. µ i + µ j, if i, j > t. The eigenvalues of 0 are. 0, if i t or j t; 2 2. µ iµ j, if i, j > t. Moreover, for each eigenvalue of 0 or, there are eigenvectors associated with it given by ẑ i,j = x i x j and ẑ j,i = x j x i. Proof. See the proof of Theorem 2.. Under the assumption that Ax = µmx has a complete set of eigenvectors, 0 also has a complete set of eigenvectors {ẑ i,j } n i,j=. By Lemma 4.4, Range 0 = span {ẑ i,j } i,j>t. Theorem 4.5. The eigenvalues of 4. are {λ i,j } i,j>t. For any λ i,j with i, j > t, there are eigenvectors ẑ i,j and ẑ j,i associated with it. Proof. The proof follows immediately from Lemma 4.4 and the proof of Theorem 2.. Theorem 4.6. Assume all the eigenvalues of Ax = µmx lie in the left half of the complex plane. Then the eigenvalue of 4. with smallest modulus is Reµ t+. Proof. By Theorem 4.5, it suffices to show that Reµ t+ λ i,j for any i, j > t, which is true by Lemma 4.. If we can restrict the search space of eigenvectors to Range 0 z to compute Reµ t+. Let to z = λ xi 0, we can apply inverse iteration P t = {Z C n n Z = I Q t Q T t X I Qt Q T t where X C n n }. 8
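These properties of the deflated operator can be checked numerically. The snippet below uses a small illustrative matrix (not one of the report's examples), deflates the rightmost conjugate pair using a real orthonormal basis of span{x_1, x̄_1}, and verifies Lemma 4.3 and, for the real eigenvalue µ_3, that Ẑ = x̂_3 x̂_3^T satisfies the matrix form (4.2) introduced just below with λ = −Re(µ_3).

```python
import numpy as np

# illustrative matrix: rightmost pair mu_{1,2} = -0.5 +/- 2i, then mu_3 = -1, ..., mu_8 = -6
n = 8
A = np.zeros((n, n))
A[:2, :2] = [[-0.5, 2.0], [-2.0, -0.5]]
A[2:, 2:] = np.diag(-np.arange(1.0, 7.0))
S = np.linalg.inv(A)                         # M = I, so S = A^{-1} M

mu, x = np.linalg.eig(A)
order = np.argsort(-mu.real)                 # sort by decreasing real part
mu, x = mu[order], x[:, order]

# deflate t = 2: Q is a real orthonormal basis of span{x_1, conj(x_1)}
Q = np.linalg.qr(np.column_stack([x[:, 0].real, x[:, 0].imag]))[0]
S_hat = (np.eye(n) - Q @ Q.T) @ S

# Lemma 4.3: eigenvalues of S_hat are 0 (t times) and 1/mu_i for i > t
print(np.sort_complex(np.linalg.eigvals(S_hat)))
print(np.sort_complex(np.r_[0.0, 0.0, 1.0 / mu[2:]]))

# Theorem 4.5 and the matrix form (4.2): Z = x_hat_3 x_hat_3^T is an eigenvector for lam = -Re(mu_3)
x3 = x[:, 2].real                            # mu_3 = -1 is real, so its eigenvector is real
xh3 = x3 - Q @ (Q.T @ x3)
Z = np.outer(xh3, xh3)
lam = -mu[2].real
print(np.linalg.norm(S_hat @ Z + Z @ S_hat.T + 2.0 * lam * S_hat @ Z @ S_hat.T))   # ~ 0
```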

19 Since Range 0 = span { x i x j } i,j>t = span { I Q t Q T t xi I Q t Q T } t xj i,j>t, if Z P t, then z = vecz Range 0, and vice versa. Therefore, 4. can be rewritten in the form of a matrix equation, ŜZ + ZŜT + λ 2ŜZŜT = 0, Z P t. 4.2 By Theorem 4.6, Reµ t+ is the eigenvalue of 4.2 with smallest modulus. As in section 2, under certain conditions, we can show that Reµ t+ is an eigenvalue of 4.2 with a unique, real, symmetric and low-rank eigenvector. Let P s t = { Z P t Z = Z } T be the subspace of P t consisting of symmetric matrices. As a result of assumptions a and a2, when the eigenspace of 4.2 is restricted to P s t, Reµ t+ is an eigenvalue of 4.2 that has the unique up to a scalar multiplier, real and symmetric eigenvector I Qt Q T t xt+ x T t+ I Qt Q T t or I Qt Q T t xt+ x t+ + x t+ x T t+ I Qt Q T t. Therefore, if we can restrict the search space for the target eigenvector of 4.2 to P s t, Lyapunov inverse iteration can be applied to 4.2 to compute Reµ t+. Moreover, the analysis of section 3.4 which applies to 2.2 can be generalized directly to 4.2. This means that to compute Reµ t+, it suffices to find an accurate solution to ŜY + Y Ŝ T = 2ŜZ ŜT 4.3 in P s t. In general, solutions to 4.3 are not unique: any matrix of the form Y + Q t XQ T t where X C n n is also a solution, since ŜQ t = 0 by Lemma 4.3. However, in the designated search space P s t, the solution to 4.3 is indeed unique. In addition, we can obtain estimates for the eigenpair µ t+, x t+ of Ŝ in the same way we compute estimates for µ, x in section 2 see 2.5. The analysis above leads to the following algorithm for computing k rightmost eigenvalues of Ax = µmx: Algorithm 3 compute k rightmost eigenvalues of Ax = µmx. Initialization: t = 0, E t =, X t =, Q t = 0, and Ŝ = I Q t Q T t S. 2. While t < k: 2.. Solve 4.2 for the eigenvalue with smallest modulus, Reµ t+, and its corresponding eigenvector in P s t Compute an estimate µ approx t+, x approx t+ for µt+, x t Update: if µ approx t+ is real: E t+ { E t, µ approx t+ }, Xt+ [ ] Xt, x approx t+, t t + ; else: E t+2 { E t, µ approx t+, conj µ approx } t+, Xt+2 t t + 2. [ Xt, x approx t+, conj approx x ] t+, 2.4. Compute the thin QR factorization of Xt : [Q, R] = qr Xt, 0, and let Q t = Q, Ŝ = I Q t Q T t S. 9
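The following dense Python sketch of Algorithm 3 anticipates the implementation strategy of section 5: the Lyapunov equation (3.1) is solved once (here exactly), and each deflation step works with the projected factor (I − Q_tQ_t^T)V_1 through a thin SVD and the small projected Lyapunov eigenvalue problem. The helper smallest_lyap_eig repeats the brute-force Kronecker solver from the sketch in section 2, rightmost_eigenvalues is our name for the driver, and the test matrix is an illustrative stand-in; a practical implementation would use RKSM and the factored-form bookkeeping described in the report.

```python
import numpy as np
from scipy.linalg import eig, solve_continuous_lyapunov

def smallest_lyap_eig(Ss):
    """Smallest-modulus eigenpair (lam, Z) of  Ss Z + Z Ss^T + lam*(2 Ss Z Ss^T) = 0."""
    d = Ss.shape[0]
    D1 = np.kron(Ss, np.eye(d)) + np.kron(np.eye(d), Ss)
    lam, W = eig(D1, -2.0 * np.kron(Ss, Ss))
    j = np.argmin(np.abs(lam))
    Z = W[:, j].reshape(d, d, order="F")
    Z = (Z + Z.T) / 2.0
    Z = Z.real if np.linalg.norm(Z.real) >= np.linalg.norm(Z.imag) else Z.imag
    return lam[j].real, Z / np.linalg.norm(Z)

def rightmost_eigenvalues(A, M, k, seed=0):
    """Dense sketch of Algorithm 3: one solve of (3.1), then repeated deflation."""
    n = A.shape[0]
    S = np.linalg.solve(A, M)
    v = np.random.default_rng(seed).standard_normal((n, 1))
    v /= np.linalg.norm(v)
    Y1 = solve_continuous_lyapunov(S, 2.0 * S @ (v @ v.T) @ S.T)   # eq. (3.1)
    w, V1 = np.linalg.eigh((Y1 + Y1.T) / 2.0)
    V1 = V1[:, np.abs(w) > 1e-12 * np.abs(w).max()]                # factor of Y_1 (role of V_1)
    mus, X = [], np.zeros((n, 0), dtype=complex)
    while len(mus) < k:
        # step 2.4: orthonormal basis Q_t of the computed eigenvectors; the projector is real
        Q = np.linalg.qr(X, mode="reduced")[0] if X.shape[1] else np.zeros((n, 0))
        P = np.eye(n) - (Q @ Q.conj().T).real
        # step 2.1 via section 5: thin SVD of (I - Q_t Q_t^T) V_1, then the projected problem
        Uu, sv, _ = np.linalg.svd(P @ V1, full_matrices=False)
        U = Uu[:, sv > 1e-10 * sv[0]]
        lam_t, Zs = smallest_lyap_eig(U.T @ (P @ (S @ U)))         # S_h = U^T S_hat U
        # step 2.2: estimate (mu_{t+1}, x_{t+1}) as in (2.5)
        ew, ev = np.linalg.eigh(Zs)
        Wt = U @ ev[:, np.abs(ew) > 1e-8 * np.abs(ew).max()]       # rank 1 (real) or 2 (pair)
        theta, g = np.linalg.eig(Wt.T @ (P @ (S @ Wt)))
        mu_new, x_new = 1.0 / theta, Wt @ g
        # step 2.3: append the new eigenvalue(s); a complex value brings its conjugate with it
        for i in range(mu_new.size):
            mus.append(mu_new[i])
            X = np.hstack([X, x_new[:, i:i + 1]])
        print(f"t = {len(mus) - mu_new.size}: lambda_t = {lam_t:.6f}")
    return np.array(mus[:k])

# illustrative test problem: rightmost eigenvalues -0.5 +/- 2i, -1.5 +/- 1i, -2, -3, ...
A = np.zeros((12, 12))
A[:2, :2] = [[-0.5, 2.0], [-2.0, -0.5]]
A[2:4, 2:4] = [[-1.5, 1.0], [-1.0, -1.5]]
A[4:, 4:] = np.diag(-np.arange(2.0, 10.0))
print(rightmost_eigenvalues(A, np.eye(12), k=5))
```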

20 At each iteration of Algorithm 3, we compute the t + st rightmost eigenvalue µ t+, or the t+ st and t+2 nd rightmost eigenvalues µ t+, µ t+. The iteration terminates when k rightmost eigenvalues have been found. In this algorithm, we need to compute the eigenvalue with smallest modulus for several Lyapunov eigenvalue problems 4.2 corresponding to different values of t. One way to do this is to simply apply Algorithm 2 to each of these problems. In the next section, we will discuss the way step 2. of Algorithm 3 is implemented, which is much more efficient. Note that the technique for computing Reµ t+ t > 0 introduced in this section is based on the assumption that Q t, whose columns form an orthonormal basis for {x j } t j=, is given. In Algorithm 3, Q t is taken to be a matrix whose columns form an orthonormal basis for the columns of X t. Such an approach is justified in the next section as well. We apply this algorithm to compute a few rightmost eigenvalues for some cases of the examples considered in section 3. The results for step 2. in each iteration of Algorithm 3 are reported in Table 4.2 see Table 4. for notation. For example, consider the driven-cavity flow at Re = From Table 4.2, we can find the eight rightmost eigenvalues of Ax = µmx: µ,2 = ± i, µ 3 = , µ 4,5 = ± i, µ 6,7 = ±.78863i, and µ 8 = see Figure 3.a. Table 4.: Notation for Algorithm 3 Symbol Definition λ t the estimate of Reµ t+, i.e., the eigenvalue of 4.2 with smallest modulus Z t the estimated eigenvector of 4.2 associated with Reµ t+ µ approx t+ the estimated t + st rightmost eigenvalue of Ax = µmx, i.e., µ t+ ŜZ R t + Z t Ŝ T + λ t 2 ŜZ t Ŝ T, the residual of the Lyapunov eigenvalue t problem Implementation details of Algorithm 3. In the previous section, we proposed an algorithm that finds k rightmost eigenvalues of. by computing the eigenvalue with smallest modulus for a series of Lyapunov eigenvalue problems 4.2 corresponding to different values of t. In this section, more details of how to implement this algorithm efficiently will be discussed. 5.. Efficient solution of the Lyapunov eigenvalue problems. We first make a preliminary observation. Proposition 5.. The unique solution to 4.3 in P s t is Y = I Q t Q T t Y I Qt Q T t, where Y is the solution to 3.. Proof. The proof is straightforward with the help of Lemma 4.2. By Proposition 5., if we know the solution Y to 3., we can formally write down the unique solution Y to 4.3 in P s t. In practice, we do not know Y ; instead, we solve 3. using an iterative solver such as RKSM, which produces an approximate solution Y approx. The following proposition shows that we can obtain from Y approx an approximate solution to 4.3 that is essentially as accurate a solution to 4.3 as Y approx is to 3.. Proposition 5.2. Let Y approx = I Q t Q T t Y approx I Qt Q T t and R = ŜYapprox + Ŝ T + 2ŜZŜT. Then R F 4 R F, where R = SY approx + Y approx S T + 2SZ S T. Y approx 20

21 Table 4.2: Algorithm 3 applied to Examples, 2 and 3 t λ t µ approx t+ R t F t λ t µ approx t+ R t F Example Re=6000, k = 8 Example Re=7800, k = e i e i e e i.57820e i.0620e e i 6.832e i e e-0 Example 2 Re=300, k = 6 Example 2 Re=350, k = i e i e e e e e e e e e-07 Example 3, k = i e e e e e-06 Proof. Using Lemma 4.2, we can show easily that R = I Q t Q T t R I Qt Q T t. Therefore, R F I Qt Q T t 2 F R F + Q t 2 F 2 R F = 4 R F. This analysis suggests the following strategy for step 2. of Algorithm 3: when t = 0, we compute Reµ by applying Algorithm 2 to 2.2, in which a good approximate solution Y approx to 3. is computed; in any subsequent iteration where 0 < t k, instead of applying Lyapunov inverse iteration again to 4.2, we simply get the approximate solution Y approx to 4.3 specified in Proposition 5.2, from which Reµ t+ can be computed. Details of this approach are described below. Recall that Y approx computed by an iterative solver such as RKSM is of the form V D V T, is orthonormal and d = rankv n. We first rewrite Y approx in a sim- where V R n d ilar form UΣW T D W ΣU T, where UΣW T is the thin singular value decomposition SVD of I Qt Q T t V. Then we can compute an estimate for Reµ t+ in the same way we compute estimates for Reµ in Algorithm or 2. That is, we solve the small, projected Lyapunov eigenvalue problem S h Z + Z ST h + λ 2 S h Z ST h = 0 5. for its eigenvalue with smallest modulus, where S h = U T ŜU. Recall that the matrix S = V T SV in 2.3 can be obtained with no additional cost from the Arnoldi decomposition for instance,

University of Maryland Department of Computer Science TR-4975 University of Maryland Institute for Advanced Computer Studies TR March 2011

University of Maryland Department of Computer Science TR-4975 University of Maryland Institute for Advanced Computer Studies TR March 2011 University of Maryland Department of Computer Science TR-4975 University of Maryland Institute for Advanced Computer Studies TR-2011-04 March 2011 LYAPUNOV INVERSE ITERATION FOR IDENTIFYING HOPF BIFURCATIONS

More information

Finding Rightmost Eigenvalues of Large, Sparse, Nonsymmetric Parameterized Eigenvalue Problems

Finding Rightmost Eigenvalues of Large, Sparse, Nonsymmetric Parameterized Eigenvalue Problems Finding Rightmost Eigenvalues of Large, Sparse, Nonsymmetric Parameterized Eigenvalue Problems AMSC 663-664 Final Report Minghao Wu AMSC Program mwu@math.umd.edu Dr. Howard Elman Department of Computer

More information

Efficient iterative algorithms for linear stability analysis of incompressible flows

Efficient iterative algorithms for linear stability analysis of incompressible flows IMA Journal of Numerical Analysis Advance Access published February 27, 215 IMA Journal of Numerical Analysis (215) Page 1 of 21 doi:1.193/imanum/drv3 Efficient iterative algorithms for linear stability

More information

FINDING RIGHTMOST EIGENVALUES OF LARGE SPARSE NONSYMMETRIC PARAMETERIZED EIGENVALUE PROBLEMS

FINDING RIGHTMOST EIGENVALUES OF LARGE SPARSE NONSYMMETRIC PARAMETERIZED EIGENVALUE PROBLEMS FINDING RIGHTMOST EIGENVALUES OF LARGE SPARSE NONSYMMETRIC PARAMETERIZED EIGENVALUE PROBLEMS Department of Mathematics University of Maryland, College Park Advisor: Dr. Howard Elman Department of Computer

More information

A reflection on the implicitly restarted Arnoldi method for computing eigenvalues near a vertical line

A reflection on the implicitly restarted Arnoldi method for computing eigenvalues near a vertical line A reflection on the implicitly restarted Arnoldi method for computing eigenvalues near a vertical line Karl Meerbergen en Raf Vandebril Karl Meerbergen Department of Computer Science KU Leuven, Belgium

More information

Applied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic

Applied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic Applied Mathematics 205 Unit V: Eigenvalue Problems Lecturer: Dr. David Knezevic Unit V: Eigenvalue Problems Chapter V.4: Krylov Subspace Methods 2 / 51 Krylov Subspace Methods In this chapter we give

More information

Numerical Methods I Eigenvalue Problems

Numerical Methods I Eigenvalue Problems Numerical Methods I Eigenvalue Problems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 October 2nd, 2014 A. Donev (Courant Institute) Lecture

More information

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A =

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = 30 MATHEMATICS REVIEW G A.1.1 Matrices and Vectors Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = a 11 a 12... a 1N a 21 a 22... a 2N...... a M1 a M2... a MN A matrix can

More information

Matrix Algorithms. Volume II: Eigensystems. G. W. Stewart H1HJ1L. University of Maryland College Park, Maryland

Matrix Algorithms. Volume II: Eigensystems. G. W. Stewart H1HJ1L. University of Maryland College Park, Maryland Matrix Algorithms Volume II: Eigensystems G. W. Stewart University of Maryland College Park, Maryland H1HJ1L Society for Industrial and Applied Mathematics Philadelphia CONTENTS Algorithms Preface xv xvii

More information

Large Scale Data Analysis Using Deep Learning

Large Scale Data Analysis Using Deep Learning Large Scale Data Analysis Using Deep Learning Linear Algebra U Kang Seoul National University U Kang 1 In This Lecture Overview of linear algebra (but, not a comprehensive survey) Focused on the subset

More information

Iterative Methods for Solving A x = b

Iterative Methods for Solving A x = b Iterative Methods for Solving A x = b A good (free) online source for iterative methods for solving A x = b is given in the description of a set of iterative solvers called templates found at netlib: http

More information

Math 504 (Fall 2011) 1. (*) Consider the matrices

Math 504 (Fall 2011) 1. (*) Consider the matrices Math 504 (Fall 2011) Instructor: Emre Mengi Study Guide for Weeks 11-14 This homework concerns the following topics. Basic definitions and facts about eigenvalues and eigenvectors (Trefethen&Bau, Lecture

More information

Lecture 3: Inexact inverse iteration with preconditioning

Lecture 3: Inexact inverse iteration with preconditioning Lecture 3: Department of Mathematical Sciences CLAPDE, Durham, July 2008 Joint work with M. Freitag (Bath), and M. Robbé & M. Sadkane (Brest) 1 Introduction 2 Preconditioned GMRES for Inverse Power Method

More information

4.8 Arnoldi Iteration, Krylov Subspaces and GMRES

4.8 Arnoldi Iteration, Krylov Subspaces and GMRES 48 Arnoldi Iteration, Krylov Subspaces and GMRES We start with the problem of using a similarity transformation to convert an n n matrix A to upper Hessenberg form H, ie, A = QHQ, (30) with an appropriate

More information

arxiv: v1 [math.na] 5 May 2011

arxiv: v1 [math.na] 5 May 2011 ITERATIVE METHODS FOR COMPUTING EIGENVALUES AND EIGENVECTORS MAYSUM PANJU arxiv:1105.1185v1 [math.na] 5 May 2011 Abstract. We examine some numerical iterative methods for computing the eigenvalues and

More information

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH V. FABER, J. LIESEN, AND P. TICHÝ Abstract. Numerous algorithms in numerical linear algebra are based on the reduction of a given matrix

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

Course Notes: Week 1

Course Notes: Week 1 Course Notes: Week 1 Math 270C: Applied Numerical Linear Algebra 1 Lecture 1: Introduction (3/28/11) We will focus on iterative methods for solving linear systems of equations (and some discussion of eigenvalues

More information

Eigenvalues and eigenvectors

Eigenvalues and eigenvectors Chapter 6 Eigenvalues and eigenvectors An eigenvalue of a square matrix represents the linear operator as a scaling of the associated eigenvector, and the action of certain matrices on general vectors

More information

Algorithms that use the Arnoldi Basis

Algorithms that use the Arnoldi Basis AMSC 600 /CMSC 760 Advanced Linear Numerical Analysis Fall 2007 Arnoldi Methods Dianne P. O Leary c 2006, 2007 Algorithms that use the Arnoldi Basis Reference: Chapter 6 of Saad The Arnoldi Basis How to

More information

EIGENVALUE PROBLEMS. EIGENVALUE PROBLEMS p. 1/4

EIGENVALUE PROBLEMS. EIGENVALUE PROBLEMS p. 1/4 EIGENVALUE PROBLEMS EIGENVALUE PROBLEMS p. 1/4 EIGENVALUE PROBLEMS p. 2/4 Eigenvalues and eigenvectors Let A C n n. Suppose Ax = λx, x 0, then x is a (right) eigenvector of A, corresponding to the eigenvalue

More information

Eigenvalue Problems. Eigenvalue problems occur in many areas of science and engineering, such as structural analysis

Eigenvalue Problems. Eigenvalue problems occur in many areas of science and engineering, such as structural analysis Eigenvalue Problems Eigenvalue problems occur in many areas of science and engineering, such as structural analysis Eigenvalues also important in analyzing numerical methods Theory and algorithms apply

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 4 Eigenvalue Problems Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction

More information

Linear Algebra Review. Vectors

Linear Algebra Review. Vectors Linear Algebra Review 9/4/7 Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa (UCSD) Cogsci 8F Linear Algebra review Vectors

More information

1 Last time: least-squares problems

1 Last time: least-squares problems MATH Linear algebra (Fall 07) Lecture Last time: least-squares problems Definition. If A is an m n matrix and b R m, then a least-squares solution to the linear system Ax = b is a vector x R n such that

More information

Rational Krylov methods for linear and nonlinear eigenvalue problems

Rational Krylov methods for linear and nonlinear eigenvalue problems Rational Krylov methods for linear and nonlinear eigenvalue problems Mele Giampaolo mele@mail.dm.unipi.it University of Pisa 7 March 2014 Outline Arnoldi (and its variants) for linear eigenproblems Rational

More information

Throughout these notes we assume V, W are finite dimensional inner product spaces over C.

Throughout these notes we assume V, W are finite dimensional inner product spaces over C. Math 342 - Linear Algebra II Notes Throughout these notes we assume V, W are finite dimensional inner product spaces over C 1 Upper Triangular Representation Proposition: Let T L(V ) There exists an orthonormal

More information

ECS231 Handout Subspace projection methods for Solving Large-Scale Eigenvalue Problems. Part I: Review of basic theory of eigenvalue problems

ECS231 Handout Subspace projection methods for Solving Large-Scale Eigenvalue Problems. Part I: Review of basic theory of eigenvalue problems ECS231 Handout Subspace projection methods for Solving Large-Scale Eigenvalue Problems Part I: Review of basic theory of eigenvalue problems 1. Let A C n n. (a) A scalar λ is an eigenvalue of an n n A

More information

Krylov Subspaces. Lab 1. The Arnoldi Iteration

Krylov Subspaces. Lab 1. The Arnoldi Iteration Lab 1 Krylov Subspaces Lab Objective: Discuss simple Krylov Subspace Methods for finding eigenvalues and show some interesting applications. One of the biggest difficulties in computational linear algebra

More information

6.4 Krylov Subspaces and Conjugate Gradients

6.4 Krylov Subspaces and Conjugate Gradients 6.4 Krylov Subspaces and Conjugate Gradients Our original equation is Ax = b. The preconditioned equation is P Ax = P b. When we write P, we never intend that an inverse will be explicitly computed. P

More information

ON THE GLOBAL KRYLOV SUBSPACE METHODS FOR SOLVING GENERAL COUPLED MATRIX EQUATIONS

ON THE GLOBAL KRYLOV SUBSPACE METHODS FOR SOLVING GENERAL COUPLED MATRIX EQUATIONS ON THE GLOBAL KRYLOV SUBSPACE METHODS FOR SOLVING GENERAL COUPLED MATRIX EQUATIONS Fatemeh Panjeh Ali Beik and Davod Khojasteh Salkuyeh, Department of Mathematics, Vali-e-Asr University of Rafsanjan, Rafsanjan,

More information

Preconditioned inverse iteration and shift-invert Arnoldi method

Preconditioned inverse iteration and shift-invert Arnoldi method Preconditioned inverse iteration and shift-invert Arnoldi method Melina Freitag Department of Mathematical Sciences University of Bath CSC Seminar Max-Planck-Institute for Dynamics of Complex Technical

More information

Elementary linear algebra

Elementary linear algebra Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The

More information

Matrices. Chapter Definitions and Notations

Matrices. Chapter Definitions and Notations Chapter 3 Matrices 3. Definitions and Notations Matrices are yet another mathematical object. Learning about matrices means learning what they are, how they are represented, the types of operations which

More information

Lecture notes on Quantum Computing. Chapter 1 Mathematical Background

Lecture notes on Quantum Computing. Chapter 1 Mathematical Background Lecture notes on Quantum Computing Chapter 1 Mathematical Background Vector states of a quantum system with n physical states are represented by unique vectors in C n, the set of n 1 column vectors 1 For

More information

Last Time. Social Network Graphs Betweenness. Graph Laplacian. Girvan-Newman Algorithm. Spectral Bisection

Last Time. Social Network Graphs Betweenness. Graph Laplacian. Girvan-Newman Algorithm. Spectral Bisection Eigenvalue Problems Last Time Social Network Graphs Betweenness Girvan-Newman Algorithm Graph Laplacian Spectral Bisection λ 2, w 2 Today Small deviation into eigenvalue problems Formulation Standard eigenvalue

More information

Review problems for MA 54, Fall 2004.

Review problems for MA 54, Fall 2004. Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on

More information

Properties of Matrices and Operations on Matrices

Properties of Matrices and Operations on Matrices Properties of Matrices and Operations on Matrices A common data structure for statistical analysis is a rectangular array or matris. Rows represent individual observational units, or just observations,

More information

A Divide-and-Conquer Method for the Takagi Factorization

A Divide-and-Conquer Method for the Takagi Factorization A Divide-and-Conquer Method for the Takagi Factorization Wei Xu 1 and Sanzheng Qiao 1, Department of Computing and Software, McMaster University Hamilton, Ont, L8S 4K1, Canada. 1 xuw5@mcmaster.ca qiao@mcmaster.ca

More information

The quadratic eigenvalue problem (QEP) is to find scalars λ and nonzero vectors u satisfying

The quadratic eigenvalue problem (QEP) is to find scalars λ and nonzero vectors u satisfying I.2 Quadratic Eigenvalue Problems 1 Introduction The quadratic eigenvalue problem QEP is to find scalars λ and nonzero vectors u satisfying where Qλx = 0, 1.1 Qλ = λ 2 M + λd + K, M, D and K are given

More information

Computational Methods. Eigenvalues and Singular Values

Computational Methods. Eigenvalues and Singular Values Computational Methods Eigenvalues and Singular Values Manfred Huber 2010 1 Eigenvalues and Singular Values Eigenvalues and singular values describe important aspects of transformations and of data relations

More information

14 Singular Value Decomposition

14 Singular Value Decomposition 14 Singular Value Decomposition For any high-dimensional data analysis, one s first thought should often be: can I use an SVD? The singular value decomposition is an invaluable analysis tool for dealing

More information

M.A. Botchev. September 5, 2014

M.A. Botchev. September 5, 2014 Rome-Moscow school of Matrix Methods and Applied Linear Algebra 2014 A short introduction to Krylov subspaces for linear systems, matrix functions and inexact Newton methods. Plan and exercises. M.A. Botchev

More information

5.3 The Power Method Approximation of the Eigenvalue of Largest Module

5.3 The Power Method Approximation of the Eigenvalue of Largest Module 192 5 Approximation of Eigenvalues and Eigenvectors 5.3 The Power Method The power method is very good at approximating the extremal eigenvalues of the matrix, that is, the eigenvalues having largest and

More information

Applied Mathematics 205. Unit II: Numerical Linear Algebra. Lecturer: Dr. David Knezevic

Applied Mathematics 205. Unit II: Numerical Linear Algebra. Lecturer: Dr. David Knezevic Applied Mathematics 205 Unit II: Numerical Linear Algebra Lecturer: Dr. David Knezevic Unit II: Numerical Linear Algebra Chapter II.3: QR Factorization, SVD 2 / 66 QR Factorization 3 / 66 QR Factorization

More information

1. What is the determinant of the following matrix? a 1 a 2 4a 3 2a 2 b 1 b 2 4b 3 2b c 1. = 4, then det

1. What is the determinant of the following matrix? a 1 a 2 4a 3 2a 2 b 1 b 2 4b 3 2b c 1. = 4, then det What is the determinant of the following matrix? 3 4 3 4 3 4 4 3 A 0 B 8 C 55 D 0 E 60 If det a a a 3 b b b 3 c c c 3 = 4, then det a a 4a 3 a b b 4b 3 b c c c 3 c = A 8 B 6 C 4 D E 3 Let A be an n n matrix

More information

Contents. Preface for the Instructor. Preface for the Student. xvii. Acknowledgments. 1 Vector Spaces 1 1.A R n and C n 2

Contents. Preface for the Instructor. Preface for the Student. xvii. Acknowledgments. 1 Vector Spaces 1 1.A R n and C n 2 Contents Preface for the Instructor xi Preface for the Student xv Acknowledgments xvii 1 Vector Spaces 1 1.A R n and C n 2 Complex Numbers 2 Lists 5 F n 6 Digression on Fields 10 Exercises 1.A 11 1.B Definition

More information

Linear Solvers. Andrew Hazel

Linear Solvers. Andrew Hazel Linear Solvers Andrew Hazel Introduction Thus far we have talked about the formulation and discretisation of physical problems...... and stopped when we got to a discrete linear system of equations. Introduction

More information

Iterative methods for Linear System

Iterative methods for Linear System Iterative methods for Linear System JASS 2009 Student: Rishi Patil Advisor: Prof. Thomas Huckle Outline Basics: Matrices and their properties Eigenvalues, Condition Number Iterative Methods Direct and

More information

Knowledge Discovery and Data Mining 1 (VO) ( )

Knowledge Discovery and Data Mining 1 (VO) ( ) Knowledge Discovery and Data Mining 1 (VO) (707.003) Review of Linear Algebra Denis Helic KTI, TU Graz Oct 9, 2014 Denis Helic (KTI, TU Graz) KDDM1 Oct 9, 2014 1 / 74 Big picture: KDDM Probability Theory

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

NORMS ON SPACE OF MATRICES

NORMS ON SPACE OF MATRICES NORMS ON SPACE OF MATRICES. Operator Norms on Space of linear maps Let A be an n n real matrix and x 0 be a vector in R n. We would like to use the Picard iteration method to solve for the following system

More information

MATH 583A REVIEW SESSION #1

MATH 583A REVIEW SESSION #1 MATH 583A REVIEW SESSION #1 BOJAN DURICKOVIC 1. Vector Spaces Very quick review of the basic linear algebra concepts (see any linear algebra textbook): (finite dimensional) vector space (or linear space),

More information

NONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction

NONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction NONCOMMUTATIVE POLYNOMIAL EQUATIONS Edward S Letzter Introduction My aim in these notes is twofold: First, to briefly review some linear algebra Second, to provide you with some new tools and techniques

More information

Computation of eigenvalues and singular values Recall that your solutions to these questions will not be collected or evaluated.

Computation of eigenvalues and singular values Recall that your solutions to these questions will not be collected or evaluated. Math 504, Homework 5 Computation of eigenvalues and singular values Recall that your solutions to these questions will not be collected or evaluated 1 Find the eigenvalues and the associated eigenspaces

More information

Solving Ill-Posed Cauchy Problems in Three Space Dimensions using Krylov Methods

Solving Ill-Posed Cauchy Problems in Three Space Dimensions using Krylov Methods Solving Ill-Posed Cauchy Problems in Three Space Dimensions using Krylov Methods Lars Eldén Department of Mathematics Linköping University, Sweden Joint work with Valeria Simoncini February 21 Lars Eldén

More information

Lecture 6: Geometry of OLS Estimation of Linear Regession

Lecture 6: Geometry of OLS Estimation of Linear Regession Lecture 6: Geometry of OLS Estimation of Linear Regession Xuexin Wang WISE Oct 2013 1 / 22 Matrix Algebra An n m matrix A is a rectangular array that consists of nm elements arranged in n rows and m columns

More information

COMP 558 lecture 18 Nov. 15, 2010

COMP 558 lecture 18 Nov. 15, 2010 Least squares We have seen several least squares problems thus far, and we will see more in the upcoming lectures. For this reason it is good to have a more general picture of these problems and how to

More information

On the Modification of an Eigenvalue Problem that Preserves an Eigenspace

On the Modification of an Eigenvalue Problem that Preserves an Eigenspace Purdue University Purdue e-pubs Department of Computer Science Technical Reports Department of Computer Science 2009 On the Modification of an Eigenvalue Problem that Preserves an Eigenspace Maxim Maumov

More information

Review of some mathematical tools

Review of some mathematical tools MATHEMATICAL FOUNDATIONS OF SIGNAL PROCESSING Fall 2016 Benjamín Béjar Haro, Mihailo Kolundžija, Reza Parhizkar, Adam Scholefield Teaching assistants: Golnoosh Elhami, Hanjie Pan Review of some mathematical

More information

Lecture notes: Applied linear algebra Part 1. Version 2

Lecture notes: Applied linear algebra Part 1. Version 2 Lecture notes: Applied linear algebra Part 1. Version 2 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 1 Notation, basic notions and facts 1.1 Subspaces, range and

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information

Practice Exam. 2x 1 + 4x 2 + 2x 3 = 4 x 1 + 2x 2 + 3x 3 = 1 2x 1 + 3x 2 + 4x 3 = 5

Practice Exam. 2x 1 + 4x 2 + 2x 3 = 4 x 1 + 2x 2 + 3x 3 = 1 2x 1 + 3x 2 + 4x 3 = 5 Practice Exam. Solve the linear system using an augmented matrix. State whether the solution is unique, there are no solutions or whether there are infinitely many solutions. If the solution is unique,

More information

Chapter 3 Transformations

Chapter 3 Transformations Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases

More information

The Eigenvalue Problem: Perturbation Theory

The Eigenvalue Problem: Perturbation Theory Jim Lambers MAT 610 Summer Session 2009-10 Lecture 13 Notes These notes correspond to Sections 7.2 and 8.1 in the text. The Eigenvalue Problem: Perturbation Theory The Unsymmetric Eigenvalue Problem Just

More information

Index. for generalized eigenvalue problem, butterfly form, 211

Index. for generalized eigenvalue problem, butterfly form, 211 Index ad hoc shifts, 165 aggressive early deflation, 205 207 algebraic multiplicity, 35 algebraic Riccati equation, 100 Arnoldi process, 372 block, 418 Hamiltonian skew symmetric, 420 implicitly restarted,

More information

Inexact inverse iteration with preconditioning

Inexact inverse iteration with preconditioning Department of Mathematical Sciences Computational Methods with Applications Harrachov, Czech Republic 24th August 2007 (joint work with M. Robbé and M. Sadkane (Brest)) 1 Introduction 2 Preconditioned

More information

Singular value decomposition

Singular value decomposition Singular value decomposition The eigenvalue decomposition (EVD) for a square matrix A gives AU = UD. Let A be rectangular (m n, m > n). A singular value σ and corresponding pair of singular vectors u (m

More information

Krylov Subspace Methods that Are Based on the Minimization of the Residual

Krylov Subspace Methods that Are Based on the Minimization of the Residual Chapter 5 Krylov Subspace Methods that Are Based on the Minimization of the Residual Remark 51 Goal he goal of these methods consists in determining x k x 0 +K k r 0,A such that the corresponding Euclidean

More information

Lecture 10 - Eigenvalues problem

Lecture 10 - Eigenvalues problem Lecture 10 - Eigenvalues problem Department of Computer Science University of Houston February 28, 2008 1 Lecture 10 - Eigenvalues problem Introduction Eigenvalue problems form an important class of problems

More information

Efficient Augmented Lagrangian-type Preconditioning for the Oseen Problem using Grad-Div Stabilization

Efficient Augmented Lagrangian-type Preconditioning for the Oseen Problem using Grad-Div Stabilization Efficient Augmented Lagrangian-type Preconditioning for the Oseen Problem using Grad-Div Stabilization Timo Heister, Texas A&M University 2013-02-28 SIAM CSE 2 Setting Stationary, incompressible flow problems

More information

Alternative correction equations in the Jacobi-Davidson method

Alternative correction equations in the Jacobi-Davidson method Chapter 2 Alternative correction equations in the Jacobi-Davidson method Menno Genseberger and Gerard Sleijpen Abstract The correction equation in the Jacobi-Davidson method is effective in a subspace

More information

Bare-bones outline of eigenvalue theory and the Jordan canonical form

Bare-bones outline of eigenvalue theory and the Jordan canonical form Bare-bones outline of eigenvalue theory and the Jordan canonical form April 3, 2007 N.B.: You should also consult the text/class notes for worked examples. Let F be a field, let V be a finite-dimensional

More information

FEM and sparse linear system solving

FEM and sparse linear system solving FEM & sparse linear system solving, Lecture 9, Nov 19, 2017 1/36 Lecture 9, Nov 17, 2017: Krylov space methods http://people.inf.ethz.ch/arbenz/fem17 Peter Arbenz Computer Science Department, ETH Zürich

More information

Econ Slides from Lecture 7

Econ Slides from Lecture 7 Econ 205 Sobel Econ 205 - Slides from Lecture 7 Joel Sobel August 31, 2010 Linear Algebra: Main Theory A linear combination of a collection of vectors {x 1,..., x k } is a vector of the form k λ ix i for

More information

Elementary Linear Algebra

Elementary Linear Algebra Matrices J MUSCAT Elementary Linear Algebra Matrices Definition Dr J Muscat 2002 A matrix is a rectangular array of numbers, arranged in rows and columns a a 2 a 3 a n a 2 a 22 a 23 a 2n A = a m a mn We

More information

ABSTRACT OF DISSERTATION. Ping Zhang

ABSTRACT OF DISSERTATION. Ping Zhang ABSTRACT OF DISSERTATION Ping Zhang The Graduate School University of Kentucky 2009 Iterative Methods for Computing Eigenvalues and Exponentials of Large Matrices ABSTRACT OF DISSERTATION A dissertation

More information

Solution of eigenvalue problems. Subspace iteration, The symmetric Lanczos algorithm. Harmonic Ritz values, Jacobi-Davidson s method

Solution of eigenvalue problems. Subspace iteration, The symmetric Lanczos algorithm. Harmonic Ritz values, Jacobi-Davidson s method Solution of eigenvalue problems Introduction motivation Projection methods for eigenvalue problems Subspace iteration, The symmetric Lanczos algorithm Nonsymmetric Lanczos procedure; Implicit restarts

More information

Krylov subspace projection methods

Krylov subspace projection methods I.1.(a) Krylov subspace projection methods Orthogonal projection technique : framework Let A be an n n complex matrix and K be an m-dimensional subspace of C n. An orthogonal projection technique seeks

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

Math 4A Notes. Written by Victoria Kala Last updated June 11, 2017

Math 4A Notes. Written by Victoria Kala Last updated June 11, 2017 Math 4A Notes Written by Victoria Kala vtkala@math.ucsb.edu Last updated June 11, 2017 Systems of Linear Equations A linear equation is an equation that can be written in the form a 1 x 1 + a 2 x 2 +...

More information

Quantum Computing Lecture 2. Review of Linear Algebra

Quantum Computing Lecture 2. Review of Linear Algebra Quantum Computing Lecture 2 Review of Linear Algebra Maris Ozols Linear algebra States of a quantum system form a vector space and their transformations are described by linear operators Vector spaces

More information

A DIVIDE-AND-CONQUER METHOD FOR THE TAKAGI FACTORIZATION

A DIVIDE-AND-CONQUER METHOD FOR THE TAKAGI FACTORIZATION SIAM J MATRIX ANAL APPL Vol 0, No 0, pp 000 000 c XXXX Society for Industrial and Applied Mathematics A DIVIDE-AND-CONQUER METHOD FOR THE TAKAGI FACTORIZATION WEI XU AND SANZHENG QIAO Abstract This paper

More information

JACOBI S ITERATION METHOD

JACOBI S ITERATION METHOD ITERATION METHODS These are methods which compute a sequence of progressively accurate iterates to approximate the solution of Ax = b. We need such methods for solving many large linear systems. Sometimes

More information

The amount of work to construct each new guess from the previous one should be a small multiple of the number of nonzeros in A.

The amount of work to construct each new guess from the previous one should be a small multiple of the number of nonzeros in A. AMSC/CMSC 661 Scientific Computing II Spring 2005 Solution of Sparse Linear Systems Part 2: Iterative methods Dianne P. O Leary c 2005 Solving Sparse Linear Systems: Iterative methods The plan: Iterative

More information

8. Diagonalization.

8. Diagonalization. 8. Diagonalization 8.1. Matrix Representations of Linear Transformations Matrix of A Linear Operator with Respect to A Basis We know that every linear transformation T: R n R m has an associated standard

More information

(v, w) = arccos( < v, w >

(v, w) = arccos( < v, w > MA322 Sathaye Notes on Inner Products Notes on Chapter 6 Inner product. Given a real vector space V, an inner product is defined to be a bilinear map F : V V R such that the following holds: For all v

More information

8 A pseudo-spectral solution to the Stokes Problem

8 A pseudo-spectral solution to the Stokes Problem 8 A pseudo-spectral solution to the Stokes Problem 8.1 The Method 8.1.1 Generalities We are interested in setting up a pseudo-spectral method for the following Stokes Problem u σu p = f in Ω u = 0 in Ω,

More information

Chapter 5 Eigenvalues and Eigenvectors

Chapter 5 Eigenvalues and Eigenvectors Chapter 5 Eigenvalues and Eigenvectors Outline 5.1 Eigenvalues and Eigenvectors 5.2 Diagonalization 5.3 Complex Vector Spaces 2 5.1 Eigenvalues and Eigenvectors Eigenvalue and Eigenvector If A is a n n

More information

Numerical Linear Algebra Homework Assignment - Week 2

Numerical Linear Algebra Homework Assignment - Week 2 Numerical Linear Algebra Homework Assignment - Week 2 Đoàn Trần Nguyên Tùng Student ID: 1411352 8th October 2016 Exercise 2.1: Show that if a matrix A is both triangular and unitary, then it is diagonal.

More information

Fast solvers for steady incompressible flow

Fast solvers for steady incompressible flow ICFD 25 p.1/21 Fast solvers for steady incompressible flow Andy Wathen Oxford University wathen@comlab.ox.ac.uk http://web.comlab.ox.ac.uk/~wathen/ Joint work with: Howard Elman (University of Maryland,

More information

A Robust Preconditioned Iterative Method for the Navier-Stokes Equations with High Reynolds Numbers

A Robust Preconditioned Iterative Method for the Navier-Stokes Equations with High Reynolds Numbers Applied and Computational Mathematics 2017; 6(4): 202-207 http://www.sciencepublishinggroup.com/j/acm doi: 10.11648/j.acm.20170604.18 ISSN: 2328-5605 (Print); ISSN: 2328-5613 (Online) A Robust Preconditioned

More information

From Matrix to Tensor. Charles F. Van Loan

From Matrix to Tensor. Charles F. Van Loan From Matrix to Tensor Charles F. Van Loan Department of Computer Science January 28, 2016 From Matrix to Tensor From Tensor To Matrix 1 / 68 What is a Tensor? Instead of just A(i, j) it s A(i, j, k) or

More information

A linear algebra proof of the fundamental theorem of algebra

A linear algebra proof of the fundamental theorem of algebra A linear algebra proof of the fundamental theorem of algebra Andrés E. Caicedo May 18, 2010 Abstract We present a recent proof due to Harm Derksen, that any linear operator in a complex finite dimensional

More information

Bare minimum on matrix algebra. Psychology 588: Covariance structure and factor models

Bare minimum on matrix algebra. Psychology 588: Covariance structure and factor models Bare minimum on matrix algebra Psychology 588: Covariance structure and factor models Matrix multiplication 2 Consider three notations for linear combinations y11 y1 m x11 x 1p b11 b 1m y y x x b b n1

More information

w T 1 w T 2. w T n 0 if i j 1 if i = j

w T 1 w T 2. w T n 0 if i j 1 if i = j Lyapunov Operator Let A F n n be given, and define a linear operator L A : C n n C n n as L A (X) := A X + XA Suppose A is diagonalizable (what follows can be generalized even if this is not possible -

More information

Chapter Two Elements of Linear Algebra

Chapter Two Elements of Linear Algebra Chapter Two Elements of Linear Algebra Previously, in chapter one, we have considered single first order differential equations involving a single unknown function. In the next chapter we will begin to

More information

Lecture 2: Linear Algebra Review

Lecture 2: Linear Algebra Review EE 227A: Convex Optimization and Applications January 19 Lecture 2: Linear Algebra Review Lecturer: Mert Pilanci Reading assignment: Appendix C of BV. Sections 2-6 of the web textbook 1 2.1 Vectors 2.1.1

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

ECS130 Scientific Computing Handout E February 13, 2017

ECS130 Scientific Computing Handout E February 13, 2017 ECS130 Scientific Computing Handout E February 13, 2017 1. The Power Method (a) Pseudocode: Power Iteration Given an initial vector u 0, t i+1 = Au i u i+1 = t i+1 / t i+1 2 (approximate eigenvector) θ

More information