
Rational Krylov Decompositions: Theory and Applications

Mario Berljafa

2017

MIMS EPrint:

Manchester Institute for Mathematical Sciences
School of Mathematics
The University of Manchester

Reports available from:
And by contacting: The MIMS Secretary, School of Mathematics, The University of Manchester, Manchester, M13 9PL, UK

ISSN

RATIONAL KRYLOV DECOMPOSITIONS: THEORY AND APPLICATIONS

A thesis submitted to the University of Manchester
for the degree of Doctor of Philosophy
in the Faculty of Science & Engineering

2017

Mario Berljafa
School of Mathematics


Contents

List of Tables
List of Figures
List of Algorithms
List of RKToolbox Examples
Abstract
Declaration
Copyright Statement
Publications
Acknowledgements

1 Introduction & background
1.1 Introduction
1.2 Background material
1.3 Polynomial Krylov methods

2 Rational Krylov spaces and RADs
2.1 The rational Arnoldi algorithm
2.2 Rational Arnoldi decompositions
2.3 A rational implicit Q theorem
2.4 Complex poles for real-valued matrices
2.5 Matrix pencils and nonstandard inner products
2.6 RKToolbox corner

3 Rational Krylov subspace extraction
3.1 Approximate eigenpairs
3.2 Functions of matrices times a vector

4 Continuation pairs and parallelisation
4.1 Continuation pairs
4.2 Near-optimal continuation pairs
4.3 Parallel rational Arnoldi algorithm
4.4 Numerical experiments

5 Generalised rational Krylov decompositions
5.1 Rational Krylov decompositions
5.2 Connection with polynomial Krylov spaces
5.3 RKToolbox corner

6 Rational Krylov fitting
6.1 The RKFIT algorithm
6.2 Numerical experiments (with l = 1)
6.3 Other rational approximation algorithms
6.4 Tuning degree parameters m and k
6.5 Extensions and complete algorithm
6.6 Numerical experiments (with l > 1)
6.7 RKToolbox corner

7 Working with rational functions
7.1 Evaluation, pole and root finding
7.2 Basic arithmetic operations
7.3 Obtaining the partial fraction basis
7.4 RKToolbox corner

8 Conclusions

Bibliography


List of Tables

4.1 Numerical results for the transient electromagnetics problems
Numerical quantities for the 3D waveguide example
Default RKFIT parameters


List of Figures

2.1 Sketch illustrating the proof of Theorem
Nonzero pattern of the reduced pencil from a quasi-RAD
Approximate eigenvalues for a symmetric matrix
Polynomial Arnoldi approximants to $A^{-1/2}b$
Rational Arnoldi approximants to $A^{-1/2}b$
Adaptive rational Arnoldi approximants to $A^{-1/2}b$
Evaluating the quality of the near-optimal continuation strategy
Near-optimal continuation strategy on a nonnormal matrix
Executing the parallel rational Arnoldi algorithm
Canonical continuation matrices
Numerical results for the transient electromagnetics examples
Numerical quantities for the 3D waveguide example
CPU timings for the 3D waveguide example
Explicit pole placement on an example
Transforming a quasi-RAD into a polynomial RAD
RKFIT: Fitting an artificial frequency response
RKFIT: Square root of a symmetric matrix
RKFIT: Exponential of a nonnormal matrix
RKFIT: Degree reduction for a rational function
RKFIT: Degree reduction for a non-rational function
RKFIT: MIMO dynamical system
RKFIT: Pole optimization for exponential integration
Chebyshev type 2 filter


List of Algorithms

1.1 Polynomial Arnoldi algorithm
Rational Arnoldi algorithm (rat_krylov)
Real-valued rational Arnoldi algorithm (rat_krylov)
Rational Arnoldi with automated pole selection for $f(A)b$
Parallel rational Arnoldi for distributed memory architectures
RAD structure recovery (util_recover_rad)
Implicit pole placement (move_poles_impl)
RAD poles reordering (util_reorder_poles)
Explicit pole placement (move_poles_expl)
(Quasi-)RAD to polynomial RAD (util_hh2th)
High-level description of RKFIT
Vector fitting
Rational Krylov Fitting (rkfit)
Evaluating an RKFUN (rkfun.feval)
Conversion to partial fraction form (rkfun.residue)


List of RKToolbox Examples

2.1 Constructing RADs
Generating and extending an RAD
Polynomial Arnoldi algorithm
Moving poles implicitly and roots of orthogonal rational functions
Moving poles explicitly (to my birth date)
Moving poles implicitly to infinity
Using RKFIT
Computing with RKFUNs
Chapter heading
MATLAB implementation of Algorithm


The University of Manchester
Mario Berljafa
Doctor of Philosophy
Rational Krylov Decompositions: Theory and Applications
January 9, 2017

Numerical methods based on rational Krylov spaces have become an indispensable tool of scientific computing. In this thesis we study rational Krylov spaces by considering rational Krylov decompositions: matrix relations which, under certain conditions, are associated with these spaces. We investigate the algebraic properties of such decompositions and present an implicit Q theorem for rational Krylov spaces.

We derive standard and harmonic Ritz extraction strategies for approximating the eigenpairs of a matrix and for approximating the action of a matrix function onto a vector. While these topics have been considered previously, our approach does not require the last pole to be infinite, which makes the extraction procedure computationally more efficient.

Typically, the computationally most expensive component of the rational Arnoldi algorithm for computing a rational Krylov basis is the solution of a large linear system of equations at each iteration. We explore the option of solving several linear systems simultaneously, thus constructing the rational Krylov basis in parallel. If this is not done carefully, the basis being orthogonalized may become poorly conditioned, leading to numerical instabilities in the orthogonalization process. We introduce the new concept of continuation pairs, which gives rise to a near-optimal parallelization strategy that allows us to control the growth of the condition number of this nonorthogonal basis. As a consequence we obtain a more accurate and reliable parallel rational Arnoldi algorithm. The computational benefits are illustrated using our high performance C++ implementation.

We develop an iterative algorithm for solving nonlinear rational least squares problems. The difficulty lies in finding the poles of a rational function. For this purpose, at each iteration a rational Krylov decomposition is constructed and a modified linear problem is solved in order to relocate the poles. Our numerical results indicate that the algorithm, called RKFIT, is well suited for model order reduction of linear time-invariant dynamical systems and for optimisation problems related to exponential integration. Furthermore, we derive a strategy for the degree reduction of the approximant obtained by RKFIT.

The rational function obtained by RKFIT is represented with the aid of a scalar rational Krylov decomposition and an additional coefficient vector. A function represented in this form is called an RKFUN. We develop efficient methods for the evaluation, pole and root finding, and for performing basic arithmetic operations with RKFUNs.

Lastly, we discuss RKToolbox, a rational Krylov toolbox for MATLAB, which implements all our algorithms and is freely available online. RKToolbox also features an extensive guide and a growing number of examples. In particular, most of our numerical experiments are easily reproducible by downloading the toolbox and running the corresponding example files in MATLAB.


Declaration

No portion of the work referred to in the thesis has been submitted in support of an application for another degree or qualification of this or any other university or other institute of learning.


Copyright Statement

i. The author of this thesis (including any appendices and/or schedules to this thesis) owns certain copyright or related rights in it (the "Copyright"), and s/he has given The University of Manchester certain rights to use such Copyright, including for administrative purposes.

ii. Copies of this thesis, either in full or in extracts and whether in hard or electronic copy, may be made only in accordance with the Copyright, Designs and Patents Act 1988 (as amended) and regulations issued under it or, where appropriate, in accordance with licensing agreements which the University has from time to time. This page must form part of any such copies made.

iii. The ownership of certain Copyright, patents, designs, trade marks and other intellectual property (the "Intellectual Property") and any reproductions of copyright works in the thesis, for example graphs and tables ("Reproductions"), which may be described in this thesis, may not be owned by the author and may be owned by third parties. Such Intellectual Property and Reproductions cannot and must not be made available for use without the prior written permission of the owner(s) of the relevant Intellectual Property and/or Reproductions.

iv. Further information on the conditions under which disclosure, publication and commercialisation of this thesis, the Copyright and any Intellectual Property and/or Reproductions described in it may take place is available in the University IP Policy, in any relevant Thesis restriction declarations deposited in the University Library, in The University Library's regulations and in The University's Policy on Presentation of Theses.


Publications

The material in Sections , Section 5.1, and part of the material in Sections is based on the paper:

[10] M. Berljafa and S. Güttel, Generalized rational Krylov decompositions with an application to rational approximation, SIAM J. Matrix Anal. Appl., 36 (2015). (This paper won a 2016 SIAM Student Paper Prize.)

Chapter 4 is based on the paper:

[12] M. Berljafa and S. Güttel, Parallelization of the rational Arnoldi algorithm, MIMS EPrint 2016.32, Manchester Institute for Mathematical Sciences, The University of Manchester, UK, 2016. Submitted for publication.

Chapter 6 is based on the paper:

[11] M. Berljafa and S. Güttel, The RKFIT algorithm for nonlinear rational approximation, MIMS EPrint , Manchester Institute for Mathematical Sciences, The University of Manchester, UK. Submitted for publication.

The RKToolbox corner sections in Chapters 2, 5-7 are, in part, based on the technical report:

[9] M. Berljafa and S. Güttel, A Rational Krylov Toolbox for MATLAB, MIMS EPrint , Manchester Institute for Mathematical Sciences, The University of Manchester, UK. (Last updated September 2015.)

Sections , Chapter 3, Section 5.2 and Chapter 7 (review existing and) present results not included in the above list.


Acknowledgements

I sincerely thank my supervisor Stefan Güttel for his guidance and unrelenting patience throughout these three years. I also acknowledge Françoise Tisseur for her useful advice throughout the years. Finally, I wish to thank Massimiliano Fasi and Ana Šušnjara, as well as the examiners Bernhard Beckermann and Nicholas Higham, for their valuable comments and corrections, which significantly improved this thesis.


1 Introduction & background

1.1 Introduction

Published in 1984, Axel Ruhe's paper "Rational Krylov sequence methods for eigenvalue computation" presents a new class of algorithms based on rational functions of the matrix [86]. In essence, the author suggests replacing, within iterative eigenvalue algorithms, the space $\operatorname{span}\{b, Ab, \ldots, A^{m-1}b\}$, where $A \in \mathbb{C}^{N,N}$ and $b \in \mathbb{C}^N$, with the more general space $\operatorname{span}\{\psi_1(A)b, \psi_2(A)b, \ldots, \psi_m(A)b\}$, where $\psi_1, \psi_2, \ldots, \psi_m$ are arbitrary functions. He soon realises that, besides polynomials, the only computationally feasible choice is rational functions. The paper received almost no attention in the following decade, and this lack of interest is probably due to two main factors. First, the paper reports no numerical experiments, so the competitiveness and reach of the method remained unclear. Moreover, adequate guidance for choosing the best, or at least good, rational functions was not provided; this second problem remains an active area of current (and future) research. Fortunately, Ruhe himself reconsidered the method, and his subsequent work [87, 88, 89], published in 1994, laid the foundation for the theory of rational Krylov methods as we know it today. His initial investigation of the topic culminated in the 1998 paper [90], and by that time other researchers had started contributing to the theory and application of rational Krylov methods; see, e.g., [24, 37, 73].

Originally devised for the solution of large sparse eigenvalue problems, these methods have proved themselves a key tool for an increasing number of applications over the last two decades.

Examples of rational Krylov applications can be found in model order reduction [34, 37, 49, 51, 71], computation of the action of matrix functions on vectors [7, 31, 33, 35, 40, 56], solution of matrix equations [8, 32, 75], nonlinear eigenvalue problems [59, 67, 91, 109], and nonlinear rational least squares fitting [10, 11]. The use of rational functions is justified by their approximation properties, which are often superior to linear schemes such as polynomial interpolation, in particular when approximating functions near singularities or on unbounded regions of the complex plane; see, e.g., [18, 105].

Computationally, the most costly part of rational Krylov methods is the solution of shifted linear systems of the form $(A - \xi_j I)x_j = b_j$ for $x_j$, for many indices $j$, where the matrix $A$ and the vectors $b_j$ are given ($I$ denotes the identity matrix). The parameters $\xi_j \in \mathbb{C}$ are called poles of the rational Krylov space, and the success of rational Krylov methods heavily depends on their choice. If good poles are available, using just a few of them may suffice to solve the problem at hand. Otherwise, the solution of a large number of shifted linear systems may be needed, rendering the process computationally unfeasible. Finding good pole parameters is highly non-trivial and problem-dependent.

Despite the large number of applications, rational Krylov methods are not yet fully understood. One of our main contributions is the development of a new theory of rational Arnoldi decompositions, which provides a better understanding of rational Krylov spaces, and ultimately allows rational Krylov methods themselves to be used, in an inverse manner, to find near-optimal pole parameters in certain applications.

The rational Arnoldi algorithm used to construct an orthonormal basis for a rational Krylov space with a matrix $A$ leads to a decomposition of the form $AV_{m+1}K_m = V_{m+1}H_m$, called a rational Arnoldi decomposition (RAD). The range $\mathcal{R}(V_{m+1})$ of $V_{m+1}$ is the rational Krylov space in question. We provide a better understanding of rational Krylov spaces and the interplay of their defining parameters (starting vector $b$ and poles $\xi_j$) by studying such, and related, decompositions. Specifically, in Chapter 2 we describe the complete set of RADs associated with rational Krylov spaces, and present a new rational implicit Q theorem about the uniqueness of RADs. In practice, the rational implicit Q theorem is useful as it allows certain transformations of RADs to be performed at a reduced computational cost. Such transformations consist of two steps: first, the transformation is applied to the reduced pencil $(H_m, K_m)$ instead of the operator $A$; second, the RAD structure is recovered and reinterpreted.

Concrete examples and applications are discussed in Chapters 5 and 6. Furthermore, we consider the variant of the rational Arnoldi algorithm for real-valued matrices with complex-conjugate poles, which constructs real-valued decompositions of a form similar to RADs [87]. The presentation of [87] is extended and formalised, and an implicit Q theorem for the obtained quasi-RADs is proposed. Finally, we discuss decompositions of the form $AV_{m+1}K_m = BV_{m+1}H_m$, which correspond to rational Krylov spaces related to a matrix pencil $(A, B)$ instead of a single matrix $A$. In particular, we show how to reduce them to RADs, so that the established theory can be transferred directly. The use of nonstandard inner products is included in Chapter 2 as well.

In Chapter 3 we review known strategies, based on projections, for extracting information from RADs, and develop new ones, highlighting their potential benefit. Specifically, for an RAD of the form $AV_{m+1}K_m = V_{m+1}H_m$ one can, for instance, approximate some of the eigenvalues of $A$ with some of the eigenvalues of the smaller matrix $V_{m+1}^*AV_{m+1}$, while $f(A)b$ may be approximated by $V_{m+1}f(V_{m+1}^*AV_{m+1})V_{m+1}^*b$, which requires the computation of the potentially much smaller matrix function $f(V_{m+1}^*AV_{m+1})$. Forming the projected matrix $V_{m+1}^*AV_{m+1}$ at each iteration $m$ of the rational Arnoldi algorithm may, however, be computationally too expensive. If $V_{m+1}$ is orthonormal and the $m$th pole $\xi_m = \infty$ is infinite, then the last row of $K_m$ is zero. Consequently, the RAD reduces to $AV_m\widehat K_m = V_{m+1}H_m$, where $\widehat K_m$ and $\widehat H_m$ denote the leading $m$-by-$m$ blocks of $K_m$ and $H_m$, and thus $V_m^*AV_m = \widehat H_m\widehat K_m^{-1}$, which allows us to bypass the explicit projection $V_m^*AV_m$. As this is applicable only when the last, $m$th, pole is infinite, the authors in [58] have considered adding and removing a temporary infinite pole after each iteration of the rational Arnoldi algorithm. We suggest new formulas that do not depend in such a manner on the poles. For instance, we show that $f(A)b$ may be approximated by $(V_{m+1}K_m)\,f(K_m^\dagger H_m)\,(V_{m+1}K_m)^\dagger b$, independently of any of the poles or their order of appearance.
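To make this last formula concrete, here is a minimal MATLAB sketch. The assumptions are labelled in the comments: for self-containedness the RAD is generated by the polynomial Arnoldi process (all poles infinite, so that $K_m$ has a particularly simple form), and the matrix, the vector and the choice $f = \exp$ are arbitrary illustrations, not tied to any experiment in this thesis. Any other RAD could be substituted for V, K and H without changing the extraction lines.

% Sketch of the pole-independent extraction f(A)b ~ (V*K) f(K^+ H) (V*K)^+ b.
% Assumption: the RAD is built here by polynomial Arnoldi (poles at infinity).
N = 200; m = 20;
A = gallery('tridiag', N);             % sample symmetric matrix
b = ones(N, 1);
V = b/norm(b); H = zeros(m+1, m);
for j = 1:m                            % polynomial Arnoldi iteration
    w = A*V(:, j);
    H(1:j, j) = V(:, 1:j)'*w;          % classical Gram-Schmidt
    w = w - V(:, 1:j)*H(1:j, j);
    H(j+1, j) = norm(w);
    V(:, j+1) = w/H(j+1, j);
end
K = [eye(m); zeros(1, m)];             % K_m of an RAD with poles at infinity
W  = V*K;                              % the matrix V_{m+1} K_m
Am = pinv(K)*H;                        % projected matrix K_m^+ H_m
fAb = W*(funm(Am, @exp)*(pinv(W)*b));  % ~ expm(A)*b, no pole enters here
err = norm(fAb - expm(full(A))*b)/norm(expm(full(A))*b)  % small rel. error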

Rational functions can be decomposed into partial fractions, and this simple property makes rational Krylov methods highly parallelisable: several basis vectors spanning the rational Krylov space can be computed at once. Unfortunately, the basis constructed in this way may easily become ill-conditioned [98, 99]. Chapter 4 is devoted to the study of the influence of internal parameters when constructing an RAD, with the aim of monitoring the condition number of the basis. We also provide a high performance C++ implementation which shows the benefits of the parallelisation.

Finally, in Chapter 6 we consider the problem of approximating, in a least squares sense, $f(A)b$ by $r(A)b$, where $r$ is a rational function. This is a nonlinear optimisation problem, since the poles of $r$ are unknown. We propose an iterative algorithm, called rational Krylov fitting (RKFIT), for its solution. At each iteration an RAD is constructed and a modified linear problem is solved in order to relocate the poles of $r$ to new (hopefully better) ones. The relocation of poles itself is studied in Chapter 5, and it is based on the rational implicit Q theorem. These theoretical observations lead to the notion of rational Krylov decompositions, a more general class of decompositions than RADs which, from a practical point of view, allow us to monitor the various transformations arising in the RKFIT algorithm. A distinct feature of our RKFIT algorithm is the degree reduction strategy, which allows for further fine tuning once a solution $r$ is obtained. We test RKFIT on model order reduction and exponential integration problems and show that the new approach is superior to some existing methods.

The rational function $r$ obtained by RKFIT is represented with the aid of a scalar RAD and an additional coefficient vector. A function represented in this form is called a rational Krylov function (RKFUN). In Chapter 7 we show how to use RKFUNs in order to, for instance, evaluate $r(z)$ or perform basic arithmetic operations.

Alongside our theoretical contributions, we discuss RKToolbox, a rational Krylov toolbox for MATLAB, which implements all our algorithms and is freely available for download; see also [9]. The main features of the toolbox are the rat_krylov and rkfit functions and the RKFUN class. The function rat_krylov, for instance, provides a flexible implementation of the rational Arnoldi algorithm. There are 18 different ways to call rat_krylov, and, furthermore, several parameters can be adjusted. Typing help rat_krylov at the MATLAB command line provides all the details. RKToolbox also features a large collection of utility functions, basic unit testing, an extensive guide and a growing number of examples. In particular, most of our numerical experiments are easily reproducible by downloading the toolbox and running the corresponding example files in MATLAB. The usage of the main features of the toolbox is explained in the RKToolbox corner sections which conclude most of the forthcoming chapters.
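As a first taste, a minimal call might look as follows. This is a hedged sketch: the three-output signature of rat_krylov used here is the basic one described in the toolbox guide [9], and the matrix, starting vector and poles are arbitrary choices for illustration.

% Minimal rat_krylov call (requires RKToolbox on the MATLAB path).
A  = gallery('tridiag', 100);          % sample matrix
b  = ones(100, 1);                     % starting vector
xi = [-1, inf, -1i, 1i];               % poles; inf gives a polynomial step
[V, K, H] = rat_krylov(A, b, xi);      % orthonormal RAD: A*V*K = V*H
norm(A*V*K - V*H, 'fro')               % RAD residual, of order eps
norm(V'*V - eye(size(V, 2)), 'fro')    % orthonormality of the basis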

In the remainder of the chapter we review standard results from (numerical) linear algebra needed for our developments. General results are considered in Section 1.2, while in Section 1.3 we focus on polynomial Krylov methods.

1.2 Background material

In this section we review some of the fundamental definitions and matrix properties that we use throughout the thesis; others are introduced when needed. We stress that this is a brief review, and refer the interested reader to [42, 60, 64, 100] for a thorough discussion of these topics.

Matrices and vectors. We shall often denote matrices with uppercase Latin letters, while for their elements we use the corresponding lowercase Latin letters with indices indicating the row and column in which they reside. For instance,

$$A = [a_{ij}] = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1N} \\ a_{21} & a_{22} & \cdots & a_{2N} \\ \vdots & \vdots & & \vdots \\ a_{N1} & a_{N2} & \cdots & a_{NN} \end{bmatrix} \in \mathbb{C}^{N,N}.$$

With $A^T$ we denote the transpose of $A$, i.e., the matrix whose element in position $(i, j)$ is the $(j, i)$ element $a_{ji}$ of $A$. Analogously, with $A^* = \overline{A}^T$ we denote the conjugate transpose of $A$, where $\overline{A}$ denotes element-wise conjugation. With

$$I_N = \operatorname{diag}(1, 1, \ldots, 1)$$

we denote the identity matrix. The subscript $N$ may be removed if the dimension of the matrix is clear from the context. The $k$th column of $I_N$ is denoted by $e_k$ and referred to as a canonical vector. With $O$ we shall denote a zero matrix of any size, while for vectors only we may also use $0$. We say that a square matrix $A \in \mathbb{C}^{N,N}$ is upper (lower) triangular if $a_{ij} = 0$ ($a_{ji} = 0$) whenever $i > j$. A triangular matrix $A$ is called strictly triangular if $a_{jj} = 0$ for all $j$. If $A$ is both upper and lower triangular, we say that it is a diagonal matrix.

On the other hand, we say that a rectangular matrix $A \in \mathbb{C}^{N,M}$ is upper (lower) trapezoidal if $a_{ij} = 0$ ($a_{ji} = 0$) whenever $i > j$.

Eigenvalues and eigenvectors. Let $A \in \mathbb{C}^{N,N}$. If $(\lambda, x) \in \mathbb{C} \times \mathbb{C}^N$ with $x \neq 0$ satisfies

$$Ax = \lambda x, \qquad (1.1)$$

then $\lambda$ is called an eigenvalue of $A$ and $x$ its corresponding eigenvector. Any matrix $A \in \mathbb{C}^{N,N}$ has $N$ eigenvalues, not necessarily mutually distinct, and they are the zeros of the characteristic polynomial $\chi_A(z) = \det(zI - A)$ of $A$. Here, $\det : \mathbb{C}^{N,N} \to \mathbb{C}$ denotes the determinant of the matrix; see, e.g., [64, p. 8]. We denote the set containing all the eigenvalues of $A$ by $\Lambda(A) = \{z \in \mathbb{C} : \det(zI - A) = 0\}$.

The matrix $A$ can be expressed in the Jordan canonical form

$$Z^{-1}AZ = J = \operatorname{diag}(J_1, J_2, \ldots, J_l), \quad \text{with} \quad J_k = J_k(\lambda_k) = \begin{bmatrix} \lambda_k & 1 & & \\ & \lambda_k & \ddots & \\ & & \ddots & 1 \\ & & & \lambda_k \end{bmatrix} \in \mathbb{C}^{n_k,n_k}, \qquad (1.2)$$

where $Z$ is nonsingular, the $\lambda_k$ are the eigenvalues of $A$, and $n_1 + n_2 + \cdots + n_l = N$. The matrix $J_k$ is called a Jordan block. The Jordan canonical form is typically useful from a theoretical viewpoint. Since the Jordan form is not continuous, and is thus numerically unstable, when designing numerical algorithms one usually resorts to the so-called Schur form $Q^*AQ = T$, where $Q \in \mathbb{C}^{N,N}$ is a unitary matrix and $T$ is upper triangular. A matrix $Q \in \mathbb{C}^{N,N}$ is called unitary if $QQ^* = I$. Note that $\Lambda(A) = \{t_{jj}\}_{j=1}^N$, and $Q$ can be chosen so that the elements on the diagonal of $T$ appear in any order.

Generalised eigenvalues and eigenvectors. Let $A, B \in \mathbb{C}^{N,N}$. The pair $(A, B)$ is called a pencil. If $(\lambda, x) \in \mathbb{C} \times \mathbb{C}^N$ with $x \neq 0$ satisfies the equation

$$Ax = \lambda Bx, \qquad (1.3)$$

then $\lambda$ is called a generalised eigenvalue of $(A, B)$ and $x$ its corresponding generalised eigenvector. The set of all generalised eigenvalues of $(A, B)$ is denoted by $\Lambda(A, B) = \{z \in \mathbb{C} : \det(A - zB) = 0\}$.
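Both canonical forms are directly available in MATLAB. The following sketch, on arbitrary random test matrices, computes a complex Schur form, reorders the diagonal of T (illustrating that the eigenvalues can be made to appear in any prescribed order), and computes a generalised Schur form via the QZ algorithm.

% Schur form and generalised Schur (QZ) form with built-in MATLAB calls.
A = randn(6) + 1i*randn(6); B = randn(6) + 1i*randn(6);  % arbitrary examples
[Q, T] = schur(A, 'complex');             % A = Q*T*Q', T upper triangular
lam = diag(T);                            % Lambda(A) read off the diagonal
[Q2, T2] = ordschur(Q, T, abs(lam) > 1);  % selected eigenvalues moved first
norm(A - Q2*T2*Q2', 'fro')                % still a Schur form of A
[AA, BB, Qz, Z] = qz(A, B);               % Qz*A*Z = AA, Qz*B*Z = BB
gen_eigs = diag(AA)./diag(BB)             % Lambda(A,B) when no s_jj vanishes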

Clearly, $\Lambda(A, I) = \Lambda(A)$. The analogue for pencils of the Schur form for a matrix is the generalised Schur form $(T, S) = (Q^*AZ, Q^*BZ)$, where $Q, Z \in \mathbb{C}^{N,N}$ are unitary and $T, S \in \mathbb{C}^{N,N}$ are upper triangular. If for some $j$ both $t_{jj}$ and $s_{jj}$ are zero, then $\Lambda(A, B) = \mathbb{C}$. Otherwise, we have $\Lambda(A, B) = \{t_{jj}/s_{jj} : s_{jj} \neq 0\}$. When $A, B \in \mathbb{R}^{N,N}$, the generalised real Schur form $(T, S) = (Q^TAZ, Q^TBZ)$, where $Q, Z \in \mathbb{R}^{N,N}$ are orthogonal, $T$ is upper quasi-triangular and $S$ is upper triangular, may be of interest instead of the generalised Schur form. A matrix $Q \in \mathbb{R}^{N,N}$ is said to be orthogonal if $QQ^T = I$, while $T = [T_{ij}] \in \mathbb{R}^{N,N}$ is said to be upper quasi-triangular if it is block upper triangular with the diagonal blocks $T_{jj}$ of size either 1-by-1 or 2-by-2.

Functions of matrices. Let $A \in \mathbb{C}^{N,N}$ have the Jordan canonical form (1.2). We say that the function $f$ is defined on the spectrum of $A$ if the values $f^{(j)}(\lambda_k)$, for $j = 0, 1, \ldots, n_k - 1$ and $k = 1, 2, \ldots, l$, exist. If $f$ is defined on the spectrum of $A$, then

$$f(A) := Zf(J)Z^{-1} = Z\operatorname{diag}(f(J_1), f(J_2), \ldots, f(J_l))Z^{-1},$$

where

$$f(J_k) = \begin{bmatrix} f(\lambda_k) & f'(\lambda_k) & \cdots & \dfrac{f^{(n_k-1)}(\lambda_k)}{(n_k-1)!} \\ & f(\lambda_k) & \ddots & \vdots \\ & & \ddots & f'(\lambda_k) \\ & & & f(\lambda_k) \end{bmatrix} \in \mathbb{C}^{n_k,n_k}.$$

There exist other, equivalent, definitions of $f(A)$. For our purposes, we state the definition of $f(A)$ related to Hermite interpolation. Note that the minimal polynomial of $A$ is defined as the unique monic polynomial $\psi$ of lowest degree such that $\psi(A) = O$. By considering the Jordan canonical form (1.2) we can see that

$$\psi(z) = \prod_{j=1}^{s} (z - \lambda_j)^{\nu_j}, \qquad (1.4)$$

where $\lambda_1, \lambda_2, \ldots, \lambda_s$ are the distinct eigenvalues of $A$ and $\nu_j$ is the dimension of the largest Jordan block in which $\lambda_j$ appears. Finally, if $f$ is defined on the spectrum of $A$, and (1.4) is the minimal polynomial of $A$, then

$$f(A) := p(A),$$

where $p$ is the unique polynomial of degree less than $\deg\psi$ such that $f^{(j)}(\lambda_k) = p^{(j)}(\lambda_k)$ for $j = 0, 1, \ldots, \nu_k - 1$ and $k = 1, 2, \ldots, s$. The polynomial $p$ is called the Hermite interpolating polynomial. For a proof of the equivalence between the two definitions see, e.g., [60, Theorem 1.12].
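As a small worked example of the Jordan-block formula, take a single 3-by-3 Jordan block and $f = \exp$, for which every derivative equals $e^\lambda$; the explicit upper triangular formula can then be checked against MATLAB's expm. The value of lambda is an arbitrary choice.

% f(J) on a single Jordan block J_3(lambda) with f = exp.
lambda = 0.5;
J = lambda*eye(3) + diag([1 1], 1);           % Jordan block J_3(lambda)
Fexact = exp(lambda)*[1 1 1/2; 0 1 1; 0 0 1]; % [f f' f''/2; 0 f f'; 0 0 f]
norm(expm(J) - Fexact)                        % agreement to machine precision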

LU factorisation. If zero is not an eigenvalue of $A \in \mathbb{C}^{N,N}$, then $A$ is said to be nonsingular and there exists a unique matrix $A^{-1} \in \mathbb{C}^{N,N}$ such that $AA^{-1} = A^{-1}A = I$. The matrix $A^{-1}$ is called the inverse of $A$. A common task in numerical linear algebra is to solve a linear system of equations $Ax = b$, where $A$ is nonsingular and $b \in \mathbb{C}^N$ is a given vector. The sought-after vector $x \in \mathbb{C}^N$ is given by $x = A^{-1}b$, and can be computed by forming the LU factorisation $A = LU$ of $A$, if it exists. Here, $L$ is a unit lower triangular matrix, i.e., it is lower triangular with all diagonal elements equal to one, and the matrix $U$ is upper triangular. In practice, the factorisation $PA = LU$, where $P$ is a permutation matrix, that is, an orthogonal matrix with elements equal to either zero or one, is more often used, since it always exists if $A$ is nonsingular and, moreover, it enjoys better numerical properties. If $PA = LU$, then $x = U^{-1}[L^{-1}(Pb)]$ can be formed by permuting the elements of $b$, followed by forward substitution and then back substitution; see, e.g., [42, Section 3.1].

QR factorisation. Let $A \in \mathbb{C}^{N,M}$. The factorisation $A = QR$, with unitary $Q \in \mathbb{C}^{N,N}$ and upper trapezoidal $R \in \mathbb{C}^{N,M}$, is called the QR factorisation of $A$. If $N > M$ and $Q = [Q_1 \; Q_2]$ with $Q_1 \in \mathbb{C}^{N,M}$, then $Q_1$ is called an orthonormal matrix. If, furthermore, $R = [R_1^T \; O]^T$ with $R_1 \in \mathbb{C}^{M,M}$, then $A = Q_1R_1$ is called the thin QR factorisation of $A$.

Singular value decomposition. Let $A \in \mathbb{C}^{N,M}$. The decomposition $A = U\Sigma V^*$, with $\Sigma = \operatorname{diag}(\sigma_1, \sigma_2, \ldots, \sigma_p) \in \mathbb{R}^{N,M}$ and $p = \min\{N, M\}$, where $U$ and $V$ are unitary and $\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_p \geq 0$, is called the singular value decomposition of $A$. The scalars $\sigma_j$ are called the singular values of $A$. The columns of $U$ and $V$ are the left and right singular vectors of $A$, respectively. The rank of $A$ is equal to the number $r$ of nonzero singular values of $A$. The pseudoinverse $A^\dagger$ of $A$ is defined as $A^\dagger = V\operatorname{diag}(\sigma_1^{-1}, \sigma_2^{-1}, \ldots, \sigma_r^{-1}, 0, \ldots, 0)U^*$.
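In MATLAB, the permuted LU solve and the SVD-based pseudoinverse of this section read as follows; the example data are arbitrary, and the two backslashes perform the forward and back substitutions.

% Solving A*x = b via P*A = L*U, and the pseudoinverse via the thin SVD.
A = randn(500); b = randn(500, 1);   % arbitrary nonsingular example
[L, U, P] = lu(A);                   % P*A = L*U, L unit lower triangular
x = U \ (L \ (P*b));                 % permute, forward, then back substitution
norm(A*x - b)                        % small residual
M = randn(500, 80);                  % rectangular, (almost surely) full rank
[Us, S, Vs] = svd(M, 'econ');        % thin SVD, singular values on diag(S)
norm(pinv(M) - Vs*diag(1./diag(S))*Us', 'fro')  % pseudoinverse via the SVD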

1.3 Polynomial Krylov methods

We now provide a brief overview of polynomial Krylov methods, the predecessor of rational Krylov methods. More detailed expositions can be found in, e.g., [76, 94, 95]. Let $A \in \mathbb{C}^{N,N}$ be a matrix and $b \in \mathbb{C}^N$ a nonzero starting vector. For any $m \in \mathbb{N}_0$, the polynomial Krylov space of order $m + 1$ for $(A, b)$ is defined as

$$\mathcal{K}_{m+1}(A, b) := \operatorname{span}\{b, Ab, A^2b, \ldots, A^mb\}.$$

There exists a uniquely defined integer $1 \leq d \equiv d(A, b) \leq N$ such that

$$\mathcal{K}_1(A, b) \subsetneq \mathcal{K}_2(A, b) \subsetneq \cdots \subsetneq \mathcal{K}_{d-1}(A, b) \subsetneq \mathcal{K}_d(A, b) = \mathcal{K}_{d+1}(A, b).$$

We call $d(A, b)$ the invariance index for $(A, b)$. We shall typically assume that $m < d(A, b)$, so that $\mathcal{K}_{m+1}(A, b)$ is of full dimension $m + 1$ and is isomorphic to $\mathcal{P}_m$; i.e., any $w \in \mathcal{K}_{m+1}(A, b)$ corresponds to a polynomial $p \in \mathcal{P}_m$ satisfying $w = p(A)b$, and vice versa.

Polynomial Arnoldi algorithm. With the polynomial Arnoldi algorithm given in Algorithm 1.1, one can compute an orthonormal basis $\{v_1, v_2, \ldots, v_{m+1}\}$ for $\mathcal{K}_{m+1}(A, b)$. The starting vector $b$ is normalised to give $v_1$ in line 1, and then a new direction $Av_j$ is added to the basis; cf. line 3. The Gram-Schmidt procedure is employed in lines 4-5 to orthonormalise the newly added vector. The process is repeated for $j = 1, 2, \ldots, m$. By introducing $V_{m+1} = [v_1 \; v_2 \; \ldots \; v_{m+1}] \in \mathbb{C}^{N,m+1}$ and $H_m = [h_1 \; h_2 \; \ldots \; h_m] \in \mathbb{C}^{m+1,m}$, whose $j$th column collects the coefficients $h_j$ and $h_{j+1,j}$ from lines 4-5, padded with trailing zeros, we obtain the decomposition $AV_m = V_{m+1}H_m$. Here, $V_{m+1}$ is orthonormal while $H_m$ is an unreduced upper Hessenberg matrix. Recall that a matrix $H_m \in \mathbb{C}^{m+1,m}$ is called upper Hessenberg if all the elements below the first subdiagonal are zero, i.e., if $i > j + 1$ implies $h_{ij} = 0$. Further, we say that $H_m$ is unreduced if none of the elements on the first subdiagonal are zero, i.e., $h_{j+1,j} \neq 0$.

Implicit Q theorem. Let us now recall the implicit Q theorem (see, e.g., [42, 102]), which plays an important role in the practical application of the polynomial Arnoldi algorithm.

Algorithm 1.1 Polynomial Arnoldi algorithm.
Input: $A \in \mathbb{C}^{N,N}$, $b \in \mathbb{C}^N$, and $m < d(A, b)$.
Output: Decomposition $AV_m = V_{m+1}H_m$, with $V_{m+1}^*V_{m+1} = I_{m+1}$.
1. Set $v_1 := b/\|b\|_2$.
2. for $j = 1, 2, \ldots, m$ do
3.   Compute $w_{j+1} := Av_j$.
4.   Orthogonalize $\widehat v_{j+1} := w_{j+1} - V_jh_j$, where $h_j := V_j^*w_{j+1}$.
5.   Normalize $v_{j+1} := \widehat v_{j+1}/h_{j+1,j}$, where $h_{j+1,j} := \|\widehat v_{j+1}\|_2$.
6. end for

Theorem 1.1. Let $Q \in \mathbb{C}^{N,N}$ be a unitary matrix, and let $Q^*AQ = H$ be an unreduced upper Hessenberg matrix. Then the first column of $Q$ determines uniquely, up to unimodular scaling, the other columns of $Q$.

One of the applications of the implicit Q theorem is the efficient implementation of the shifted QR iteration (see, e.g., [42, 102]) for the decomposition $AV_m = V_{m+1}H_m$, which may accelerate the convergence of specific Ritz values. Instead of applying the shifted QR iteration to $A$, the theorem allows it to be applied to the typically much smaller matrix $H_m$ in an implicit two-step process. First, we change the leading vector $V_{m+1}e_1$ of $V_{m+1}$ by applying a suitable transformation, replacing the pair $(V_{m+1}, H_m)$ with $(V_{m+1}G, G^{-1}H_m)$; second, we recover the upper Hessenberg structure of $G^{-1}H_m$ without affecting the leading column of $V_{m+1}G$. This is further discussed, for the more general rational case, at the end of Section 5.1.

Gram-Schmidt procedure. The Gram-Schmidt procedure used in Algorithm 1.1 is often referred to as classical Gram-Schmidt, and in finite precision arithmetic it may cause numerical instabilities. A more robust approach is that of the modified Gram-Schmidt procedure, where, instead of line 4, we have:

for $k = 1, 2, \ldots, j$ do
  Compute $h_{kj} := v_k^*w_{j+1}$, and update $w_{j+1} := w_{j+1} - h_{kj}v_k$.
end for

In this case line 5 reduces to: Normalize $v_{j+1} := w_{j+1}/h_{j+1,j}$, where $h_{j+1,j} := \|w_{j+1}\|_2$. Furthermore, it is common to perform the orthogonalization, with both methods, twice. Interesting discussions and analyses of this topic can be found in, e.g., [13, 14, 38, 39, 47].
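A direct MATLAB transcription of Algorithm 1.1 may be sketched as follows (to be saved as poly_arnoldi.m). The comments map to the line numbers of the algorithm; for brevity the classical Gram-Schmidt variant is kept, without the reorthogonalisation one would add in practice.

function [V, H] = poly_arnoldi(A, b, m)
% Polynomial Arnoldi algorithm (Algorithm 1.1, classical Gram-Schmidt):
% returns A*V(:,1:m) = V*H with orthonormal V (N-by-(m+1)), H upper Hessenberg.
V = b/norm(b);                      % line 1: normalise starting vector
H = zeros(m+1, m);
for j = 1:m                         % line 2
    w = A*V(:, j);                  % line 3: new direction
    H(1:j, j) = V(:, 1:j)'*w;       % line 4: orthogonalise (CGS);
    w = w - V(:, 1:j)*H(1:j, j);    %   MGS would subtract one v_k at a time
    H(j+1, j) = norm(w);            % line 5: normalise
    V(:, j+1) = w/H(j+1, j);
end
end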

In our forthcoming discussions we shall keep the presentation as in Algorithm 1.1, but one should be aware that a more sophisticated implementation is needed in practice.

Solving linear systems and eigenproblems. The polynomial Arnoldi algorithm may be used for solving large and sparse or structured linear systems of equations. If $AV_m = V_{m+1}H_m$ and $\widehat H_m := [I_m \; 0]H_m$ denotes the leading $m$-by-$m$ part of $H_m$, then $x_m := V_m\widehat H_m^{-1}V_m^*b$ provides an approximation to $A^{-1}b$, provided that $A$ and $\widehat H_m$ are nonsingular. This procedure is known as the full orthogonalization method (FOM). An alternative is the generalised minimal residual method (GMRES), where $x_m := V_mH_m^\dagger V_{m+1}^*b \approx A^{-1}b$ is used instead. Moreover, some of the eigenvalues of $\widehat H_m$ may provide good approximations to eigenvalues of $A$. In applications, these are typically the eigenvalues having larger module.
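Given the decomposition from the poly_arnoldi sketch above, both approximants can be extracted in a couple of lines; this is a hedged illustration on an arbitrary well-conditioned test matrix, exploiting $V_m^*b = \|b\|_2e_1$ and the least-squares behaviour of the MATLAB backslash for tall systems.

% FOM and GMRES approximations to A\b from A*V(:,1:m) = V*H.
N = 400; m = 30;
A = gallery('tridiag', N) + speye(N);   % nonsingular test matrix
b = ones(N, 1);
[V, H] = poly_arnoldi(A, b, m);         % sketch defined earlier
beta = norm(b); e1 = zeros(m+1, 1); e1(1) = beta;
x_fom   = V(:, 1:m)*(H(1:m, :) \ (beta*eye(m, 1)));  % square part of H
x_gmres = V(:, 1:m)*(H \ e1);           % least squares with the full H
[norm(A*x_fom - b), norm(A*x_gmres - b)]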

Therefore, by replacing $A$ with $(A - \xi I)^{-1}$ in the polynomial Arnoldi method, one may obtain good approximations to the eigenvalues of $A$ close to any $\xi \in \mathbb{C}$. This is referred to as the shift-and-invert Arnoldi algorithm, and the rational Krylov method of Ruhe [86, 87, 88, 89, 90], which we cover in Chapter 2, generalises it by allowing the parameter $\xi$ to change from one iteration to the next. Because of this connection to the polynomial Arnoldi algorithm, we shall refer to the rational Krylov method as the rational Arnoldi algorithm.

2 Rational Krylov spaces and related decompositions

In this chapter we study various algebraic properties of rational Krylov spaces, using as starting point a rational Arnoldi decomposition

$$AV_{m+1}K_m = V_{m+1}H_m, \qquad (2.1)$$

where $A \in \mathbb{C}^{N,N}$ is a given matrix and the matrices $V_{m+1} \in \mathbb{C}^{N,m+1}$ and $K_m, H_m \in \mathbb{C}^{m+1,m}$ are of maximal column rank. The rational Arnoldi algorithm by Ruhe [89, 87, 90] naturally generates decompositions of the form (2.1), in which case it is known (by construction) that the columns of $V_{m+1}$ form an (orthonormal) basis of a rational Krylov space. Different choices of the so-called continuation combinations in the rational Arnoldi algorithm give rise to different decompositions, but all of them correspond to the same rational Krylov space. We answer the converse question of when a decomposition (2.1) is associated with a rational Krylov space, and, furthermore, discuss its uniqueness. The goal is to provide fundamental properties, important for the developments of forthcoming chapters, of decompositions (2.1) related to rational Krylov spaces.

The outline of this chapter is as follows: in Section 2.1 we review the rational Arnoldi algorithm and derive the related decomposition (2.1). The notion of a rational Arnoldi decomposition is formally introduced in Section 2.2. We relate these decompositions to the poles and the starting vector of a rational Krylov space and establish some of their properties. Section 2.3 provides a rational implicit Q theorem about the uniqueness of such decompositions, while Section 2.4 is devoted to a variant of (2.1) with all the matrices being real-valued. Rational Krylov spaces were initially proposed for the purpose of solving large sparse generalised eigenvalue problems [86, 89, 87, 90]; in Section 2.5 we consider the possibility of working with a pencil $(A, B)$, with $A, B \in \mathbb{C}^{N,N}$, instead of $A$ only.

This leads to decompositions of the form $AV_{m+1}K_m = BV_{m+1}H_m$, and naturally opens the question of considering nonstandard inner products. Finally, in Section 2.6 we show how to use the RKToolbox to construct decompositions of the form (2.1), highlighting the flexibility and freedom the RKToolbox provides while keeping the exposition concise.

2.1 The rational Arnoldi algorithm

Let $A \in \mathbb{C}^{N,N}$ be a matrix, $b \in \mathbb{C}^N$ a nonzero starting vector, and let $q_m \in \mathcal{P}_m$ be a nonzero polynomial that has no roots in $\Lambda(A)$, with $m \in \mathbb{N}_0$. The rational Krylov space of order $m$ for $(A, b, q_m)$ is defined as [86, 89]

$$\mathcal{Q}_{m+1}(A, b, q_m) := q_m(A)^{-1}\mathcal{K}_{m+1}(A, b). \qquad (2.2)$$

The roots of $q_m$ are the poles of $\mathcal{Q}_{m+1}(A, b, q_m)$. Note that $q_m(A)$ is nonsingular, since no root of $q_m$ is an eigenvalue of $A$, and therefore $\mathcal{Q}_{m+1}(A, b, q_m)$ is well defined. Clearly, $\mathcal{Q}_{m+1}(A, b, q_m)$ is independent of nonzero scaling of $b$ and/or $q_m$. Further, the spaces $\mathcal{Q}_{m+1}(A, b, q_m)$ and $\mathcal{K}_{m+1}(A, b)$ are of the same dimension for all $m$. Therefore $\mathcal{Q}_{m+1}(A, b, q_m)$ is $A$-variant if and only if $m + 1 < d(A, b)$. We shall often denote the poles of the rational Krylov space by $\{\xi_j\}_{j=1}^m$ and thus may also use the notation $\mathcal{Q}_{m+1}(A, b, \{\xi_j\}_{j=1}^m) = \mathcal{Q}_{m+1}(A, b, q_m)$. If $\deg(q_m) < m$, then $m - \deg(q_m)$ of the poles are set to infinity. In this case we refer to infinity as a formal (multiple) root of $q_m$. To handle both finite and infinite poles in a unifying way we may also use the representation $\xi = \mu/\nu$, for an adequate choice of scalars $\mu, \nu \in \mathbb{C}$.

The rational Arnoldi algorithm [89, 90] constructs an orthonormal basis $V_{m+1}$ for (2.2) in a Gram-Schmidt fashion, as described in Algorithm 2.2. In line 1 we normalise the starting vector $b$. The main part of the algorithm consists of lines 2-11, where an orthonormal basis for $\mathcal{Q}_{m+1}(A, b, q_m)$ is constructed iteratively. In line 3 we select a continuation pair $(\eta_j/\rho_j, t_j)$, which is used in line 4 to expand the space $\mathcal{R}(V_j)$.

Definition 2.1. We call $(\eta_j/\rho_j, t_j) \in \overline{\mathbb{C}} \times \mathbb{C}^j$ with $t_j \neq 0$ a continuation pair of order $j$. The value $\eta_j/\rho_j$ is its continuation root, and $t_j$ its continuation vector.

Algorithm 2.2 Rational Arnoldi algorithm. RKToolbox: rat_krylov
Input: $A \in \mathbb{C}^{N,N}$, $b \in \mathbb{C}^N$, poles $\{\mu_j/\nu_j\}_{j=1}^m \subset \overline{\mathbb{C}} \setminus \Lambda(A)$, with $m < d(A, b)$.
Output: Decomposition $AV_{m+1}K_m = V_{m+1}H_m$, with $V_{m+1}^*V_{m+1} = I_{m+1}$.
1. Set $v_1 := b/\|b\|_2$.
2. for $j = 1, 2, \ldots, m$ do
3.   Choose an admissible continuation pair $(\eta_j/\rho_j, t_j) \in \overline{\mathbb{C}} \times \mathbb{C}^j$.
4.   Compute $w_{j+1} := (\nu_jA - \mu_jI)^{-1}(\rho_jA - \eta_jI)V_jt_j$.
5.   Orthogonalize $\widehat v_{j+1} := w_{j+1} - V_jc_j$, where $c_j := V_j^*w_{j+1}$.
6.   Normalize $v_{j+1} := \widehat v_{j+1}/c_{j+1,j}$, where $c_{j+1,j} := \|\widehat v_{j+1}\|_2$.
7.   Set $k_j := \nu_jc_j - \rho_jt_j$ and $h_j := \mu_jc_j - \eta_jt_j$, where $t_j = [t_j^T \; 0]^T$ and $c_j = [c_j^T \; c_{j+1,j}]^T$ are extended to length $j + 1$.
8. end for

The notion of a continuation vector has already been used in the literature, though not consistently. For instance, in [90] the author refers to $V_jt_j$ as the continuation vector, while in [73] the term is used to denote $(\rho_jA - \eta_jI)V_jt_j$. The terminology of continuation combinations is adopted in [10, 109, 90] for the vectors $t_j$. With the notion of a continuation pair we want to stress that the two components are equally important; see Chapter 4.

The Möbius transformation $(\nu_jA - \mu_jI)^{-1}(\rho_jA - \eta_jI)$ with fixed pole $\mu_j/\nu_j$ and the chosen (continuation) root $\eta_j/\rho_j \neq \mu_j/\nu_j$ is applied to $V_jt_j$ in order to produce $w_{j+1}$. The continuation pair must be such that $w_{j+1} \notin \mathcal{R}(V_j)$, as otherwise we cannot expand the space. Such admissible continuation pairs exist as long as $j < d(A, b)$; a thorough discussion of the selection of continuation pairs is included in Chapter 4. For now it is sufficient to add that (admissible) continuation pairs correspond to linear parameters and do not affect the space (in exact arithmetic, at least). Lines 5-6 correspond to the Gram-Schmidt process, where $w_{j+1}$ is orthogonalised against $v_1, v_2, \ldots, v_j$ to produce the unit 2-norm vector $v_{j+1}$. From lines 4-6 we deduce

$$w_{j+1} = V_{j+1}c_j = (\nu_jA - \mu_jI)^{-1}(\rho_jA - \eta_jI)V_jt_j, \qquad (2.3a)$$

and hence

$$(\nu_jA - \mu_jI)V_{j+1}c_j = (\rho_jA - \eta_jI)V_jt_j. \qquad (2.3b)$$

Rearranging the terms with and without $A$ we obtain

$$AV_{j+1}(\nu_jc_j - \rho_jt_j) = V_{j+1}(\mu_jc_j - \eta_jt_j), \qquad (2.4)$$

which justifies the notation

$$k_j := \nu_jc_j - \rho_jt_j, \qquad h_j := \mu_jc_j - \eta_jt_j, \qquad (2.5)$$

used in line 7. Note that $h_{j+1,j} = \mu_jc_{j+1,j}$ and $k_{j+1,j} = \nu_jc_{j+1,j}$, with $c_{j+1,j} \neq 0$. Hence, $h_{j+1,j}/k_{j+1,j}$ is equal to the $j$th pole.
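The loop body of Algorithm 2.2 can be sketched in a few lines of MATLAB for the simple continuation pair $(\eta_j/\rho_j, t_j) = (-1/0, e_j)$, for which line 4 reduces to the single shifted solve $w_{j+1} = (A - \xi_jI)^{-1}v_j$. This toy code handles finite poles only, uses arbitrary example data, and does no safeguarding; rat_krylov is the robust implementation.

% Minimal rational Arnoldi sketch: continuation pairs (-1/0, e_j), finite poles.
N = 100; m = 5;
A = gallery('tridiag', N); I = speye(N);
xi = [-1, -2, -3, -4, -5];              % finite poles mu_j/nu_j with nu_j = 1
b = ones(N, 1); V = b/norm(b);
K = zeros(m+1, m); H = zeros(m+1, m);
for j = 1:m
    w = (A - xi(j)*I) \ V(:, j);        % line 4 with (rho*A - eta*I) = I
    c = V(:, 1:j)'*w;                   % line 5: Gram-Schmidt coefficients
    w = w - V(:, 1:j)*c;
    c(j+1, 1) = norm(w);                % line 6
    V(:, j+1) = w/c(j+1);
    K(1:j+1, j) = c;                    % line 7: k_j = nu_j*c_j - rho_j*t_j = c_j
    H(1:j+1, j) = xi(j)*c;              %         h_j = mu_j*c_j - eta_j*t_j
    H(j, j) = H(j, j) + 1;              %             = xi_j*c_j + e_j
end
norm(A*V*K - V*H, 'fro')                % RAD residual, of order eps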

Concatenating (2.4) for $j = 1, 2, \ldots, m$ provides

$$AV_{m+1}K_m = V_{m+1}H_m, \qquad (2.6)$$

with the $j$th column of $H_m \in \mathbb{C}^{m+1,m}$ being $[h_j^T \; 0^T]^T \in \mathbb{C}^{m+1}$, and analogously for the matrix $K_m$. It is convenient to consider (2.6) even if $m = 0$, in which case one can think of the pencil $(H_m, K_m)$ as being of size 1-by-0, and we only have the matrix $A$ and the normalised starting vector $v_1$. This corresponds to the initial stage of Algorithm 2.2, i.e., right after line 1.

The rational Arnoldi algorithm is a generalisation of the polynomial and shift-and-invert Arnoldi algorithms, and the latter two can be recovered with specific choices of poles and continuation pairs, as the following two examples demonstrate.

Example 2.2. Let $\mu_j/\nu_j \equiv 1/0$ and $(\eta_j/\rho_j, t_j) \equiv (0/{-1}, e_j)$, for $j = 1, 2, \ldots, m$. Then in line 4 of Algorithm 2.2 we compute $w_{j+1} = AV_je_j = Av_j$. Furthermore, the formulas simplify to $k_j = e_j$ and $h_j = c_j$. Overall, we retrieve $AV_{m+1}K_m = AV_m = V_{m+1}H_m$, the same as with the polynomial Arnoldi algorithm; cf. Section 1.3.

Example 2.3. Recall the shift-and-invert Arnoldi decomposition $(A - \sigma I)^{-1}V_m = V_{m+1}C_m$. Multiplying it from the left with $(A - \sigma I)$ and rearranging the terms, we obtain (2.6) with $K_m = C_m$ and $H_m = \sigma C_m + I_m$ (with $I_m$ padded by a zero row). This can be obtained with Algorithm 2.2 by setting $\mu_j/\nu_j \equiv \sigma/1$ and $(\eta_j/\rho_j, t_j) \equiv (-1/0, e_j)$, for all iterations $j = 1, 2, \ldots, m$.

The polynomial Arnoldi algorithm repeatedly uses a pole at infinity, while the shift-and-invert Arnoldi algorithm uses the same finite pole $\sigma$ at each iteration. The rational Arnoldi algorithm allows the poles to change from one iteration to the next. The success of rational Krylov methods heavily depends on these parameters. If good poles are available, only a few may suffice to solve the problem at hand. Otherwise, the solution of a large number of shifted linear systems may be needed to construct the space, thus rendering the process computationally unfeasible. Finding good pole parameters is highly non-trivial and problem-dependent. We discuss the selection of poles in Chapters 4-6.
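With rat_krylov, the two examples above correspond to nothing more than two different pole sequences; the call signature is the one assumed from the toolbox guide [9], and the matrices are arbitrary.

% Examples 2.2 and 2.3 as rat_krylov pole choices.
A = gallery('tridiag', 100); b = ones(100, 1);
[Vp, Kp, Hp] = rat_krylov(A, b, inf(1, 8));        % all poles infinite:
                                                   %   polynomial Arnoldi
sigma = -0.5;
[Vs, Ks, Hs] = rat_krylov(A, b, sigma*ones(1, 8)); % one repeated finite pole:
                                                   %   shift-and-invert Arnoldi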

2.2 Rational Arnoldi decompositions

In the following we aim to establish a correspondence between rational Krylov spaces and matrix decompositions of the form (2.6). As a consequence, we are able to study the algebraic properties of rational Krylov spaces using these decompositions.

Definition 2.4. Let $K_m, H_m \in \mathbb{C}^{m+1,m}$ be upper Hessenberg matrices. We say that the pencil $(H_m, K_m)$ is an unreduced upper Hessenberg pencil if $|h_{j+1,j}| + |k_{j+1,j}| \neq 0$ for all $j = 1, 2, \ldots, m$.

We are now ready to introduce the notion of a rational Arnoldi decomposition, which is a generalisation of the decompositions generated by Ruhe's rational Arnoldi algorithm [89, 90]. Although these decompositions have been considered before, ours is the most general definition (cf. Theorem 2.10 below). Other approaches typically exclude the possibility of having poles at both zero and infinity by requiring $H_m$ to be unreduced; see, e.g., [24, 55, 90]. The introduction of unreduced pencils allows us to bypass this restriction.

Definition 2.5. Let $A \in \mathbb{C}^{N,N}$. A relation of the form (2.6) is called a rational Arnoldi decomposition (RAD) of order $m$ if $V_{m+1} \in \mathbb{C}^{N,m+1}$ is of full column rank, $(H_m, K_m)$ is an unreduced upper Hessenberg pencil of size $(m+1)$-by-$m$, and none of the quotients $\{h_{j+1,j}/k_{j+1,j}\}_{j=1}^m$, called poles of the decomposition, is in $\Lambda(A)$. The columns of $V_{m+1}$ are called the basis of the RAD and they span the space of the RAD. If $V_{m+1}$ is orthonormal, we say that (2.6) is an orthonormal RAD.

The terminology of basis and space of an RAD is inspired by [101, 103], where decompositions related to the polynomial Arnoldi algorithm are studied. It is noteworthy that both $H_m$ and $K_m$ in the RAD (2.6) are of full rank, which follows from the following lemma (for $\beta = 0$ and $\alpha = 0$, respectively).

Lemma 2.6. Let (2.6) be an RAD, and let $\alpha, \beta \in \mathbb{C}$ be such that $|\alpha| + |\beta| \neq 0$. The matrix $\alpha H_m - \beta K_m$ is of full column rank $m$.

Proof. Consider auxiliary scalars $\widetilde\alpha = 1$ and any $\widetilde\beta \in \mathbb{C}$ such that $\widetilde\alpha h_{j+1,j} - \widetilde\beta k_{j+1,j} \neq 0$ for $j = 1, 2, \ldots, m$. Multiplying the RAD (2.6) by $\widetilde\alpha$ and subtracting $\widetilde\beta V_{m+1}K_m$ from both sides gives

$$(\widetilde\alpha A - \widetilde\beta I)V_{m+1}K_m = V_{m+1}(\widetilde\alpha H_m - \widetilde\beta K_m). \qquad (2.7)$$

The choice of $\widetilde\alpha$ and $\widetilde\beta$ is such that $\widetilde\alpha H_m - \widetilde\beta K_m$ is an unreduced upper Hessenberg matrix, and as such of full column rank $m$.

In particular, the right-hand side of (2.7) is of full column rank $m$. Thus the left-hand side, and in particular $K_m$, is of full column rank. This proves the statement for the case $\alpha = 0$. For the case $\alpha \neq 0$, consider $\widetilde\alpha = \alpha$ and $\widetilde\beta = \beta$ in (2.7). If $\alpha H_m - \beta K_m$ is unreduced, then it is of full column rank and the statement follows. If, however, $\alpha H_m - \beta K_m$ is not unreduced, then we have $\alpha h_{j+1,j} - \beta k_{j+1,j} = 0$ for at least one index $j \in \{1, 2, \ldots, m\}$. Equivalently, $\beta/\alpha = h_{j+1,j}/k_{j+1,j}$; that is, $\beta/\alpha$ equals the $j$th pole of (2.6) and hence $\alpha A - \beta I$ is nonsingular. Finally, since $V_{m+1}$ and $K_m$ are of full column rank, the left-hand side of (2.7) is of full column rank. It follows that $\alpha H_m - \beta K_m$ is of full column rank as well, and the proof is complete.

Furthermore, any RAD (2.6) can be transformed into an orthonormal RAD using the thin QR factorisation $V_{m+1} = Q_{m+1}R_{m+1}$. Setting $\widehat V_{m+1} = Q_{m+1}$, $\widehat K_m = R_{m+1}K_m$, and $\widehat H_m = R_{m+1}H_m$, we obtain the decomposition

$$A\widehat V_{m+1}\widehat K_m = \widehat V_{m+1}\widehat H_m, \qquad (2.8)$$

satisfying $\mathcal{R}(\widehat V_{j+1}) = \mathcal{R}(V_{j+1})$ and $h_{j+1,j}/k_{j+1,j} = \widehat h_{j+1,j}/\widehat k_{j+1,j}$ for all $j = 1, 2, \ldots, m$.

Definition 2.7. The RADs (2.6) and (2.8) are called equivalent if they span the same space and have the same poles.

Note that we do not impose equal ordering of the poles for two RADs to be equivalent. Additionally, it follows from Lemma 2.8 below that equivalent RADs have the same starting vector, up to nonzero scaling. We shall often assume, for convenience, the RAD to be orthonormal. We now show that the poles of a rational Krylov space are uniquely determined by the starting vector, and vice versa.

Lemma 2.8. Let $\mathcal{Q}_{m+1}(A, b, q_m)$ be a given $A$-variant rational Krylov space. Then the poles of $\mathcal{Q}_{m+1}(A, b, q_m)$ are uniquely determined by $\mathcal{R}(b)$, or, equivalently, the starting vector of $\mathcal{Q}_{m+1}(A, b, q_m)$ is uniquely, up to scaling, determined by the (formal) roots of the polynomial $q_m$.

Proof. We first show that for a given $A$-variant polynomial Krylov space $\mathcal{K}_{m+1}(A, q)$, all vectors $w \in \mathcal{K}_{m+1}(A, q)$ that satisfy $\mathcal{K}_{m+1}(A, q) = \mathcal{K}_{m+1}(A, w)$ are of the form $w = \alpha q$, for a nonzero scalar $\alpha \in \mathbb{C}$. Assume, to the contrary, that there exists a polynomial $p_j$ with $1 \leq \deg(p_j) = j \leq m$ such that $w = p_j(A)q$. Then $A^{m+1-j}w \in \mathcal{K}_{m+1}(A, w)$, but for the same vector we have $A^{m+1-j}w = A^{m+1-j}p_j(A)q \notin \mathcal{K}_{m+1}(A, q)$. This is a contradiction to $\mathcal{K}_{m+1}(A, q) = \mathcal{K}_{m+1}(A, w)$.

To show that the poles are uniquely determined by the starting vector $b$, assume that $\mathcal{Q}_{m+1}(A, b, q_m) = \mathcal{Q}_{m+1}(A, b, \widehat q_m)$. Using the definition of a rational Krylov space (2.2), this is equivalent to $\mathcal{K}_{m+1}(A, q_m(A)^{-1}b) = \mathcal{K}_{m+1}(A, \widehat q_m(A)^{-1}b)$. Multiplying the latter with $q_m(A)\widehat q_m(A) = \widehat q_m(A)q_m(A)$ from the left provides the equivalent $\mathcal{K}_{m+1}(A, \widehat q_m(A)b) = \mathcal{K}_{m+1}(A, q_m(A)b)$. This space is $A$-variant, hence by the above argument we know that $\widehat q_m(A)b = \alpha q_m(A)b$, for a nonzero scalar $\alpha \in \mathbb{C}$. This vector is an element of $\mathcal{K}_{m+1}(A, b)$, which is isomorphic to $\mathcal{P}_m$. Therefore $\widehat q_m = \alpha q_m$, and hence $q_m$ and $\widehat q_m$ have identical roots. Similarly one shows that if $\mathcal{Q}_{m+1}(A, b, q_m) = \mathcal{Q}_{m+1}(A, \widehat b, q_m)$, then $b = \alpha\widehat b$ with $\alpha \neq 0$.

The rational Arnoldi algorithm generates RADs of the form (2.6), in which case it is known (by construction) that $\mathcal{R}(V_{m+1})$ is a rational Krylov space. In Theorem 2.10 below we show that the converse also holds: for every rational Krylov space $\mathcal{Q}_{m+1}(A, b, q_m)$ there exists an RAD (2.6) spanning $\mathcal{Q}_{m+1}(A, b, q_m)$ and, conversely, if such a decomposition exists it spans a rational Krylov space. In particular, this shows that our Definition 2.5 indeed describes the complete set of RADs associated with rational Krylov spaces. To proceed it is convenient to write the polynomial $q_m$ in factored form, and to label separately all the leading factors:

$$q_0(z) :\equiv 1, \qquad q_j(z) = \prod_{l=1}^{j}\left(h_{l+1,l} - k_{l+1,l}z\right), \quad j = 1, 2, \ldots, m, \qquad (2.9)$$

with some scalars $\{h_{l+1,l}, k_{l+1,l}\}_{l=1}^m \subset \mathbb{C}$ such that $\xi_l = h_{l+1,l}/k_{l+1,l}$. Since (2.2) is independent of the scaling of $q_m$, any choice of the scalars $h_{l+1,l}$ and $k_{l+1,l}$ is valid as long as their ratio is $\xi_l$. When we make use of (2.9) without specifying the order of appearance of the poles, we mean any order. The fact that $q_j \mid q_{j+1}$ gives rise to a sequence of nested rational Krylov spaces, as we now show.

Proposition 2.9. Let $\mathcal{Q}_{m+1}(A, b, q_m)$ be a rational Krylov space of order $m$, and let (2.9) hold. Then

$$\mathcal{Q}_1 \subset \mathcal{Q}_2 \subset \cdots \subset \mathcal{Q}_{m+1}, \qquad (2.10)$$

where $\mathcal{Q}_{j+1} = \mathcal{Q}_{j+1}(A, b, q_j)$ for $j = 0, 1, \ldots, m$.

Proof. Let $l \in \{1, 2, \ldots, m\}$. We need to show that $\mathcal{Q}_l \subset \mathcal{Q}_{l+1}$. Let $v \in \mathcal{Q}_l$ be arbitrary. By the definition of $\mathcal{Q}_l$, there exists a polynomial $p_{l-1} \in \mathcal{P}_{l-1}$ such that $v = q_{l-1}(A)^{-1}p_{l-1}(A)b$. Then $p_l \in \mathcal{P}_l$ defined by $p_l(z) := (h_{l+1,l} - k_{l+1,l}z)\,p_{l-1}(z)$ is such that $v = q_l(A)^{-1}p_l(A)b$, which shows that $v \in \mathcal{Q}_{l+1}$.


More information

Matrices, Moments and Quadrature, cont d

Matrices, Moments and Quadrature, cont d Jim Lambers CME 335 Spring Quarter 2010-11 Lecture 4 Notes Matrices, Moments and Quadrature, cont d Estimation of the Regularization Parameter Consider the least squares problem of finding x such that

More information

Elementary linear algebra

Elementary linear algebra Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The

More information

Quadratic Matrix Polynomials

Quadratic Matrix Polynomials Research Triangularization Matters of Quadratic Matrix Polynomials February 25, 2009 Nick Françoise Higham Tisseur Director School of of Research Mathematics The University of Manchester School of Mathematics

More information

Numerical Methods - Numerical Linear Algebra

Numerical Methods - Numerical Linear Algebra Numerical Methods - Numerical Linear Algebra Y. K. Goh Universiti Tunku Abdul Rahman 2013 Y. K. Goh (UTAR) Numerical Methods - Numerical Linear Algebra I 2013 1 / 62 Outline 1 Motivation 2 Solving Linear

More information

Conceptual Questions for Review

Conceptual Questions for Review Conceptual Questions for Review Chapter 1 1.1 Which vectors are linear combinations of v = (3, 1) and w = (4, 3)? 1.2 Compare the dot product of v = (3, 1) and w = (4, 3) to the product of their lengths.

More information

Applied Mathematics 205. Unit II: Numerical Linear Algebra. Lecturer: Dr. David Knezevic

Applied Mathematics 205. Unit II: Numerical Linear Algebra. Lecturer: Dr. David Knezevic Applied Mathematics 205 Unit II: Numerical Linear Algebra Lecturer: Dr. David Knezevic Unit II: Numerical Linear Algebra Chapter II.3: QR Factorization, SVD 2 / 66 QR Factorization 3 / 66 QR Factorization

More information

Chasing the Bulge. Sebastian Gant 5/19/ The Reduction to Hessenberg Form 3

Chasing the Bulge. Sebastian Gant 5/19/ The Reduction to Hessenberg Form 3 Chasing the Bulge Sebastian Gant 5/9/207 Contents Precursers and Motivation 2 The Reduction to Hessenberg Form 3 3 The Algorithm 5 4 Concluding Remarks 8 5 References 0 ntroduction n the early days of

More information

. = V c = V [x]v (5.1) c 1. c k

. = V c = V [x]v (5.1) c 1. c k Chapter 5 Linear Algebra It can be argued that all of linear algebra can be understood using the four fundamental subspaces associated with a matrix Because they form the foundation on which we later work,

More information

Lecture 3: QR-Factorization

Lecture 3: QR-Factorization Lecture 3: QR-Factorization This lecture introduces the Gram Schmidt orthonormalization process and the associated QR-factorization of matrices It also outlines some applications of this factorization

More information

1. General Vector Spaces

1. General Vector Spaces 1.1. Vector space axioms. 1. General Vector Spaces Definition 1.1. Let V be a nonempty set of objects on which the operations of addition and scalar multiplication are defined. By addition we mean a rule

More information

Practical Linear Algebra: A Geometry Toolbox

Practical Linear Algebra: A Geometry Toolbox Practical Linear Algebra: A Geometry Toolbox Third edition Chapter 12: Gauss for Linear Systems Gerald Farin & Dianne Hansford CRC Press, Taylor & Francis Group, An A K Peters Book www.farinhansford.com/books/pla

More information

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column

More information

Linear Algebra. The analysis of many models in the social sciences reduces to the study of systems of equations.

Linear Algebra. The analysis of many models in the social sciences reduces to the study of systems of equations. POLI 7 - Mathematical and Statistical Foundations Prof S Saiegh Fall Lecture Notes - Class 4 October 4, Linear Algebra The analysis of many models in the social sciences reduces to the study of systems

More information

Matrix functions and their approximation. Krylov subspaces

Matrix functions and their approximation. Krylov subspaces [ 1 / 31 ] University of Cyprus Matrix functions and their approximation using Krylov subspaces Matrixfunktionen und ihre Approximation in Krylov-Unterräumen Stefan Güttel stefan@guettel.com Nicosia, 24th

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 1: Course Overview & Matrix-Vector Multiplication Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 20 Outline 1 Course

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 19: More on Arnoldi Iteration; Lanczos Iteration Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical Analysis I 1 / 17 Outline 1

More information

5.3 The Power Method Approximation of the Eigenvalue of Largest Module

5.3 The Power Method Approximation of the Eigenvalue of Largest Module 192 5 Approximation of Eigenvalues and Eigenvectors 5.3 The Power Method The power method is very good at approximating the extremal eigenvalues of the matrix, that is, the eigenvalues having largest and

More information

AM 205: lecture 8. Last time: Cholesky factorization, QR factorization Today: how to compute the QR factorization, the Singular Value Decomposition

AM 205: lecture 8. Last time: Cholesky factorization, QR factorization Today: how to compute the QR factorization, the Singular Value Decomposition AM 205: lecture 8 Last time: Cholesky factorization, QR factorization Today: how to compute the QR factorization, the Singular Value Decomposition QR Factorization A matrix A R m n, m n, can be factorized

More information

Eigenvalues, Eigenvectors, and Diagonalization

Eigenvalues, Eigenvectors, and Diagonalization Week12 Eigenvalues, Eigenvectors, and Diagonalization 12.1 Opening Remarks 12.1.1 Predicting the Weather, Again Let us revisit the example from Week 4, in which we had a simple model for predicting the

More information

Fitting an artificial frequency response

Fitting an artificial frequency response Fitting an artificial frequency response Mario Berljafa Stefan Güttel February 215 Contents 1 Introduction 1 2 Problem setup 1 3 Testing RKFIT 2 4 The rkfun class 3 5 Some other choices for the initial

More information

Linear Algebra: Matrix Eigenvalue Problems

Linear Algebra: Matrix Eigenvalue Problems CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given

More information

Matrix Algorithms. Volume II: Eigensystems. G. W. Stewart H1HJ1L. University of Maryland College Park, Maryland

Matrix Algorithms. Volume II: Eigensystems. G. W. Stewart H1HJ1L. University of Maryland College Park, Maryland Matrix Algorithms Volume II: Eigensystems G. W. Stewart University of Maryland College Park, Maryland H1HJ1L Society for Industrial and Applied Mathematics Philadelphia CONTENTS Algorithms Preface xv xvii

More information

Linear Algebra Primer

Linear Algebra Primer Linear Algebra Primer David Doria daviddoria@gmail.com Wednesday 3 rd December, 2008 Contents Why is it called Linear Algebra? 4 2 What is a Matrix? 4 2. Input and Output.....................................

More information

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A =

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = 30 MATHEMATICS REVIEW G A.1.1 Matrices and Vectors Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = a 11 a 12... a 1N a 21 a 22... a 2N...... a M1 a M2... a MN A matrix can

More information

Mathematical Methods wk 2: Linear Operators

Mathematical Methods wk 2: Linear Operators John Magorrian, magog@thphysoxacuk These are work-in-progress notes for the second-year course on mathematical methods The most up-to-date version is available from http://www-thphysphysicsoxacuk/people/johnmagorrian/mm

More information

Eigenvalues and eigenvectors

Eigenvalues and eigenvectors Chapter 6 Eigenvalues and eigenvectors An eigenvalue of a square matrix represents the linear operator as a scaling of the associated eigenvector, and the action of certain matrices on general vectors

More information

Math 520 Exam 2 Topic Outline Sections 1 3 (Xiao/Dumas/Liaw) Spring 2008

Math 520 Exam 2 Topic Outline Sections 1 3 (Xiao/Dumas/Liaw) Spring 2008 Math 520 Exam 2 Topic Outline Sections 1 3 (Xiao/Dumas/Liaw) Spring 2008 Exam 2 will be held on Tuesday, April 8, 7-8pm in 117 MacMillan What will be covered The exam will cover material from the lectures

More information

Chap 3. Linear Algebra

Chap 3. Linear Algebra Chap 3. Linear Algebra Outlines 1. Introduction 2. Basis, Representation, and Orthonormalization 3. Linear Algebraic Equations 4. Similarity Transformation 5. Diagonal Form and Jordan Form 6. Functions

More information

Eigenvalues, Eigenvectors, and Diagonalization

Eigenvalues, Eigenvectors, and Diagonalization Week12 Eigenvalues, Eigenvectors, and Diagonalization 12.1 Opening Remarks 12.1.1 Predicting the Weather, Again View at edx Let us revisit the example from Week 4, in which we had a simple model for predicting

More information

A Brief Outline of Math 355

A Brief Outline of Math 355 A Brief Outline of Math 355 Lecture 1 The geometry of linear equations; elimination with matrices A system of m linear equations with n unknowns can be thought of geometrically as m hyperplanes intersecting

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

6.4 Krylov Subspaces and Conjugate Gradients

6.4 Krylov Subspaces and Conjugate Gradients 6.4 Krylov Subspaces and Conjugate Gradients Our original equation is Ax = b. The preconditioned equation is P Ax = P b. When we write P, we never intend that an inverse will be explicitly computed. P

More information

1 Last time: least-squares problems

1 Last time: least-squares problems MATH Linear algebra (Fall 07) Lecture Last time: least-squares problems Definition. If A is an m n matrix and b R m, then a least-squares solution to the linear system Ax = b is a vector x R n such that

More information

Course Notes: Week 1

Course Notes: Week 1 Course Notes: Week 1 Math 270C: Applied Numerical Linear Algebra 1 Lecture 1: Introduction (3/28/11) We will focus on iterative methods for solving linear systems of equations (and some discussion of eigenvalues

More information

University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm

University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm Name: The proctor will let you read the following conditions

More information

ELE/MCE 503 Linear Algebra Facts Fall 2018

ELE/MCE 503 Linear Algebra Facts Fall 2018 ELE/MCE 503 Linear Algebra Facts Fall 2018 Fact N.1 A set of vectors is linearly independent if and only if none of the vectors in the set can be written as a linear combination of the others. Fact N.2

More information

Bare-bones outline of eigenvalue theory and the Jordan canonical form

Bare-bones outline of eigenvalue theory and the Jordan canonical form Bare-bones outline of eigenvalue theory and the Jordan canonical form April 3, 2007 N.B.: You should also consult the text/class notes for worked examples. Let F be a field, let V be a finite-dimensional

More information

Linear Algebra Review

Linear Algebra Review Chapter 1 Linear Algebra Review It is assumed that you have had a course in linear algebra, and are familiar with matrix multiplication, eigenvectors, etc. I will review some of these terms here, but quite

More information

LU Factorization. LU factorization is the most common way of solving linear systems! Ax = b LUx = b

LU Factorization. LU factorization is the most common way of solving linear systems! Ax = b LUx = b AM 205: lecture 7 Last time: LU factorization Today s lecture: Cholesky factorization, timing, QR factorization Reminder: assignment 1 due at 5 PM on Friday September 22 LU Factorization LU factorization

More information

Stat 159/259: Linear Algebra Notes

Stat 159/259: Linear Algebra Notes Stat 159/259: Linear Algebra Notes Jarrod Millman November 16, 2015 Abstract These notes assume you ve taken a semester of undergraduate linear algebra. In particular, I assume you are familiar with the

More information

Barycentric interpolation via the AAA algorithm

Barycentric interpolation via the AAA algorithm Barycentric interpolation via the AAA algorithm Steven Elsworth Stefan Güttel November 217 Contents 1 Introduction 1 2 A simple scalar example 1 3 Solving a nonlinear eigenproblem 2 4 References 6 1 Introduction

More information

22.3. Repeated Eigenvalues and Symmetric Matrices. Introduction. Prerequisites. Learning Outcomes

22.3. Repeated Eigenvalues and Symmetric Matrices. Introduction. Prerequisites. Learning Outcomes Repeated Eigenvalues and Symmetric Matrices. Introduction In this Section we further develop the theory of eigenvalues and eigenvectors in two distinct directions. Firstly we look at matrices where one

More information

On the solution of large Sylvester-observer equations

On the solution of large Sylvester-observer equations NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS Numer. Linear Algebra Appl. 200; 8: 6 [Version: 2000/03/22 v.0] On the solution of large Sylvester-observer equations D. Calvetti, B. Lewis 2, and L. Reichel

More information

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces. Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

More information

Review of some mathematical tools

Review of some mathematical tools MATHEMATICAL FOUNDATIONS OF SIGNAL PROCESSING Fall 2016 Benjamín Béjar Haro, Mihailo Kolundžija, Reza Parhizkar, Adam Scholefield Teaching assistants: Golnoosh Elhami, Hanjie Pan Review of some mathematical

More information

Properties of Matrices and Operations on Matrices

Properties of Matrices and Operations on Matrices Properties of Matrices and Operations on Matrices A common data structure for statistical analysis is a rectangular array or matris. Rows represent individual observational units, or just observations,

More information

Numerical Methods I Eigenvalue Problems

Numerical Methods I Eigenvalue Problems Numerical Methods I Eigenvalue Problems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 October 2nd, 2014 A. Donev (Courant Institute) Lecture

More information

MTH Linear Algebra. Study Guide. Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education

MTH Linear Algebra. Study Guide. Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education MTH 3 Linear Algebra Study Guide Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education June 3, ii Contents Table of Contents iii Matrix Algebra. Real Life

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

Krylov Space Methods. Nonstationary sounds good. Radu Trîmbiţaş ( Babeş-Bolyai University) Krylov Space Methods 1 / 17

Krylov Space Methods. Nonstationary sounds good. Radu Trîmbiţaş ( Babeş-Bolyai University) Krylov Space Methods 1 / 17 Krylov Space Methods Nonstationary sounds good Radu Trîmbiţaş Babeş-Bolyai University Radu Trîmbiţaş ( Babeş-Bolyai University) Krylov Space Methods 1 / 17 Introduction These methods are used both to solve

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors Contents Eigenvalues and Eigenvectors. Basic Concepts. Applications of Eigenvalues and Eigenvectors 8.3 Repeated Eigenvalues and Symmetric Matrices 3.4 Numerical Determination of Eigenvalues and Eigenvectors

More information

LECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel

LECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel LECTURE NOTES on ELEMENTARY NUMERICAL METHODS Eusebius Doedel TABLE OF CONTENTS Vector and Matrix Norms 1 Banach Lemma 20 The Numerical Solution of Linear Systems 25 Gauss Elimination 25 Operation Count

More information

Solution of eigenvalue problems. Subspace iteration, The symmetric Lanczos algorithm. Harmonic Ritz values, Jacobi-Davidson s method

Solution of eigenvalue problems. Subspace iteration, The symmetric Lanczos algorithm. Harmonic Ritz values, Jacobi-Davidson s method Solution of eigenvalue problems Introduction motivation Projection methods for eigenvalue problems Subspace iteration, The symmetric Lanczos algorithm Nonsymmetric Lanczos procedure; Implicit restarts

More information

The Lanczos and conjugate gradient algorithms

The Lanczos and conjugate gradient algorithms The Lanczos and conjugate gradient algorithms Gérard MEURANT October, 2008 1 The Lanczos algorithm 2 The Lanczos algorithm in finite precision 3 The nonsymmetric Lanczos algorithm 4 The Golub Kahan bidiagonalization

More information

Numerical Linear Algebra Homework Assignment - Week 2

Numerical Linear Algebra Homework Assignment - Week 2 Numerical Linear Algebra Homework Assignment - Week 2 Đoàn Trần Nguyên Tùng Student ID: 1411352 8th October 2016 Exercise 2.1: Show that if a matrix A is both triangular and unitary, then it is diagonal.

More information

NONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction

NONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction NONCOMMUTATIVE POLYNOMIAL EQUATIONS Edward S Letzter Introduction My aim in these notes is twofold: First, to briefly review some linear algebra Second, to provide you with some new tools and techniques

More information

Charles University Faculty of Mathematics and Physics DOCTORAL THESIS. Krylov subspace approximations in linear algebraic problems

Charles University Faculty of Mathematics and Physics DOCTORAL THESIS. Krylov subspace approximations in linear algebraic problems Charles University Faculty of Mathematics and Physics DOCTORAL THESIS Iveta Hnětynková Krylov subspace approximations in linear algebraic problems Department of Numerical Mathematics Supervisor: Doc. RNDr.

More information

Numerical Linear Algebra

Numerical Linear Algebra Numerical Linear Algebra Decompositions, numerical aspects Gerard Sleijpen and Martin van Gijzen September 27, 2017 1 Delft University of Technology Program Lecture 2 LU-decomposition Basic algorithm Cost

More information

Program Lecture 2. Numerical Linear Algebra. Gaussian elimination (2) Gaussian elimination. Decompositions, numerical aspects

Program Lecture 2. Numerical Linear Algebra. Gaussian elimination (2) Gaussian elimination. Decompositions, numerical aspects Numerical Linear Algebra Decompositions, numerical aspects Program Lecture 2 LU-decomposition Basic algorithm Cost Stability Pivoting Cholesky decomposition Sparse matrices and reorderings Gerard Sleijpen

More information

On prescribing Ritz values and GMRES residual norms generated by Arnoldi processes

On prescribing Ritz values and GMRES residual norms generated by Arnoldi processes On prescribing Ritz values and GMRES residual norms generated by Arnoldi processes Jurjen Duintjer Tebbens Institute of Computer Science Academy of Sciences of the Czech Republic joint work with Gérard

More information

MAA507, Power method, QR-method and sparse matrix representation.

MAA507, Power method, QR-method and sparse matrix representation. ,, and representation. February 11, 2014 Lecture 7: Overview, Today we will look at:.. If time: A look at representation and fill in. Why do we need numerical s? I think everyone have seen how time consuming

More information

Rational Krylov methods for linear and nonlinear eigenvalue problems

Rational Krylov methods for linear and nonlinear eigenvalue problems Rational Krylov methods for linear and nonlinear eigenvalue problems Mele Giampaolo mele@mail.dm.unipi.it University of Pisa 7 March 2014 Outline Arnoldi (and its variants) for linear eigenproblems Rational

More information

Unit 2, Section 3: Linear Combinations, Spanning, and Linear Independence Linear Combinations, Spanning, and Linear Independence

Unit 2, Section 3: Linear Combinations, Spanning, and Linear Independence Linear Combinations, Spanning, and Linear Independence Linear Combinations Spanning and Linear Independence We have seen that there are two operations defined on a given vector space V :. vector addition of two vectors and. scalar multiplication of a vector

More information

On the influence of eigenvalues on Bi-CG residual norms

On the influence of eigenvalues on Bi-CG residual norms On the influence of eigenvalues on Bi-CG residual norms Jurjen Duintjer Tebbens Institute of Computer Science Academy of Sciences of the Czech Republic duintjertebbens@cs.cas.cz Gérard Meurant 30, rue

More information

Computational Methods CMSC/AMSC/MAPL 460. Eigenvalues and Eigenvectors. Ramani Duraiswami, Dept. of Computer Science

Computational Methods CMSC/AMSC/MAPL 460. Eigenvalues and Eigenvectors. Ramani Duraiswami, Dept. of Computer Science Computational Methods CMSC/AMSC/MAPL 460 Eigenvalues and Eigenvectors Ramani Duraiswami, Dept. of Computer Science Eigen Values of a Matrix Recap: A N N matrix A has an eigenvector x (non-zero) with corresponding

More information

Boolean Inner-Product Spaces and Boolean Matrices

Boolean Inner-Product Spaces and Boolean Matrices Boolean Inner-Product Spaces and Boolean Matrices Stan Gudder Department of Mathematics, University of Denver, Denver CO 80208 Frédéric Latrémolière Department of Mathematics, University of Denver, Denver

More information