This is a postprint version of the following published document:

Size: px
Start display at page:

Download "This is a postprint version of the following published document:"

Transcription

1 This is a postprint version of the following published document: Gonzalez-Rodriguez, P., Moscoso, M., & Kindelan, M. (05). Laurent expansion of the inverse of perturbed, singular matrices. Journal of Computational Physics, 99, pp DOI:0.06/j.jcp Elsevier 05 This work is licensed under a Creative Commons Attribution-NonCommercial- NoDerivatives 4.0 International License.

2 Laurent expansion of the inverse of perturbed, singular matrices Pedro Gonzalez-Rodriguez, Miguel Moscoso, Manuel Kindelan a b s t r a c t In this paper we describe a numerical algorithm to compute the Laurent expansion of the inverse of singularly perturbed matrices. The algorithm is based on the resolvent formalism used in complex analysis to study the spectrum of matrices. The input of the algorithm are the matrix coefficients of the power series expansion of the perturbed matrix. The matrix coefficients of the Laurent expansion of the inverse are computed using recursive analytical formulae. We show that the computational complexity of the proposed algorithm grows algebraically with the size of the matrix, but exponentially with the order of the singularity. We apply this algorithm to several matrices that arise in applications. We make special emphasis to interpolation problems with radial basis functions. Keywords: Laurent series, Inverse, Radial basis functions, Interpolation.. Introduction In many situations one has to compute the solution of perturbed equations A(δ) x = b, where the unperturbed matrix A(0) is singular. The perturbed matrix A(δ) may or may not be singular for δ 0. Some important applications where this problem arises are finite difference methods for the solution of singular perturbation problems [9,0], differential equations with singular constant coefficients [7], least square problems [6], asymptotic linear programming [,], and Markov chains [9,,8]. In this paper, we consider the problem of inverting analytically perturbed matrices A(δ) = A 0 + δ k A k, () k= where A 0 = A(0) is singular. If the inverse A (δ) exists in some punctured disc around δ = 0, we compute its Laurent series expansion in the form A (δ) = δ k H k, () k= s where H s 0, and s is the order of the singularity. If the inverse A (δ) does not exist, we compute the Laurent series of the Drazin inverse [5]. Once the coefficients H k of the Laurent series have been obtained, the inverse can be computed accurately for any value of δ, no matter how small. In fact, Avrachenkov and Lasserre [4] proposed an iterative algorithm for inverting perturbed operators using analytic perturbation theory for linear operators (see the books [0,6] for details). * Corresponding author at: Universidad Carlos III de Madrid, Avenida de la Universidad 30, 89 Leganés, Spain. Fax: addresses: pgonzale@ing.uc3m.es (P. Gonzalez-Rodriguez), moscoso@math.uc3m.es (M. Moscoso), kinde@ing.uc3m.es (M. Kindelan).

3 This work has a twofold aim. On the one hand, we describe in detail a numerical algorithm to implement this procedure. The order s of the singularity is obtained as a by-product of the algorithm. We present a detailed analysis of the computational complexity and show that it grows exponentially with s. On the other hand, we discuss what we think is a novel and very interesting application. We apply this algorithm to interpolation problems with Radial Basis Functions (RBF) in the limit when they become flat as the perturbation parameter δ approaches zero. In fact, it is well known [,3,5,7] that in this limit RBF interpolation converges to polynomial interpolation. With the proposed approach, i.e., by computing the Laurent expansion of the inverse of the RBF matrix, we can explicitly derive the exact form of the interpolating polynomial obtained in the limit of infinitely flat basis functions. We have already applied the algorithm described here to the solution of partial differential equations with the RBF method [8]. The RBF matrix is a real symmetric matrix and, therefore, we are particularly interested in this type of matrices. In fact, the computation of the widely used Moore Penrose pseudoinverse involves the inverse of a symmetric matrix. In addition to RBF interpolation problems, we will also show examples from different applications where the matrices to be inverted are not real symmetric and/or where the inverse A (δ) does not exist. The algorithm described here can be applied to any square matrix with the only restriction that the zero-eigenvalues are semi-simple. This means that their algebraic and geometric multiplicities agree, which is the case for symmetric matrices. The paper is organized as follows. Section contains the main mathematical concepts and definitions used throughout the paper. In Section 3 we consider the case of regularly perturbed matrices. In Section 4 we consider the case of singularly perturbed matrices. Section 5 describes the numerical algorithm in detail and analyzes the computational complexity. Section 6 presents results obtained with matrices that arise in RBF interpolation and in other applications. Section 7 contains the main conclusions of the paper.. Basic concepts and definitions Here, we introduce some mathematical concepts and results that will be used throughout the paper. In particular, we briefly review the resolvent formalism which applies concepts from complex analysis to the study of the spectrum of operators. In this section, we consider the case of unperturbed operators that do not depend on any parameter. These operators can be invertible or not. Any matrix A can be written as the spectral sum p A = λ i P i + D i, (3) i=0 where P i and D i are the eigenprojection and the eigennilpotent associated with the eigenvalue λ i, respectively. They have the properties { { P P i P j = j if i = j D, and P 0 if i j i D j = j if i = j. (4) 0 if i j For ease of presentation, and because our main application is RBF interpolation, we will consider real symmetric matrices in what follows. We will give the modifications for more general matrices when needed. A symmetric matrix is diagonalizable and, therefore, the nilpotents D i are zero. Hence, the spectral representation of an N N symmetric matrix with eigenvalues λ,..., λ p (not necessarily all simple) takes the form p A = λ i P i. (5) i=0 Furthermore, if A is symmetric, the projections P i onto the λ i -eigenspaces E i, i = 0,,..., p, are orthogonal projections. 
If all the eigenvalues are different from 0 and, therefore, A is invertible, the spectral decomposition of A is given by p A P i =. (6) λ i i= Throughout this paper we use the index i = 0only for the eigenvalue λ 0 = 0. All other eigenvalues are numbered starting from i =. We now define the matrix-valued function R(λ) = (A λi), called the resolvent of A. The resolvent is an analytic function of the complex variable λ in the complementary set of the spectrum σ (A) of A. The eigenvalues λ j of the matrix A are the poles of the resolvent. If zero is not an eigenvalue of A and, therefore A is invertible, R(0) is the inverse of A. In this case, the spectral decomposition of the resolvent can be written as p P i R(λ) = λ i λ. (8) i= (7)

4 It is easy to check (8) by using (4) and (5). Indeed, ( p ) p P (A λi)r(λ) = (λ i λ)p i j p = P i = I. (9) λ j λ i= j= i= Since the resolvent is an analytic function, we may use contour integration. If C j is a positively-oriented contour enclosing the eigenvalue λ j but not other eigenvalues of A, then R(λ)dλ = π i π i C j C j p i=0 P i λ i λ dλ = π i C j P j λ j λ dλ = P j. (0) Therefore, the contour integral of the resolvent around the eigenvalue λ j of A gives the eigenprojection P j = R(λ)dλ, π i C j () for the eigenvalue λ j. Given any N N complex matrix A of index k, C N admits the direct decomposition C N = range(a k ) ker(a k ). The index of a matrix is the smallest integer k for which range(a k+ ) = range(a k ). The index is 0if and only if A is nonsingular. If A is singular, the index of A is equal to the multiplicity of the zero eigenvalue as a root of the minimal polynomial, i.e. the size of the largest block with zero diagonal in its Jordan form. The eigenprojection P 0 corresponding to the eigenvalue 0is the idempotent matrix such that range(p 0 ) = ker(a k ) and ker(p 0 ) = range(a k ). In other words, P 0 is the projection onto ker(a k ) along range(a k ). Thus, if A is symmetric, P 0 is an orthogonal projection (if A is symmetric, the index is and range(a) = row(a)). Also A and A (if it exists) can be written in terms of the resolvent. It is straightforward to show that A = λr(λ)dλ, and A = R(λ) dλ, () π i π i λ C C where C is a positively-oriented contour enclosing all the eigenvalues of A. If A is not invertible, λ 0 = 0is an eigenvalue and, therefore, λ = 0is a pole of the resolvent. If λ = 0is a simple pole, then we can write R(λ) = P 0 λ + Q (λ), where Q (λ) = p P i i= λ i λ is the analytic part of R(λ) which admits the power expansion Q (λ) = A # + λ p (A # ) p+, p= (3) (4) ( ) n for small λ. In (4), we have used the fact that λ Q (λ) = n!q (λ) n+ (see page 37 of [0]). Explicit formulas for P 0 and A # Q (0) are P 0 = R(λ)dλ and (5) π i A # = R(λ) λ dλ, (6) π i where is a positively-oriented contour enclosing λ 0 = 0but no other eigenvalues of A. Equation (6) is obtained from (3) using Cauchy s residue theorem. This equation will be essential throughout the paper. If A is symmetric or has index, A # is the inverse of A restricted to the range of A. In this case, A # is called the group inverse. For more general matrices, A # is the inverse of A restricted to the range of A k and is denoted Drazin inverse, A D. 3. Regular perturbation case In this section, we consider the case in which an operator depends on a parameter δ and admits a power series of the form A(δ) = A 0 + δ A + δ A (7) 3

5 Furthermore, we consider the case in which the operator (7) is regularly perturbed. By a regular perturbation we mean that the dimension of the kernel of A(δ) with δ 0is the same as the dimension of the kernel of A(0). Note that because the operator depends on a parameter δ, so do the eigenvalues, projection operators, and the inverse A # (δ). In what follows, the goal is to compute the matrix coefficients of the power series of A # (δ). As before, the resolvent of A(δ) is defined as R(λ, δ) = (A(δ) λi). (8) Now, R(λ, δ) is an analytical function in the two variables λ and δ in all domains that do not contain an eigenvalue λ(δ) of the perturbed matrix A(δ). Therefore, we can write R(λ, δ) as a power series in δ with coefficients R k (λ) depending on λ. That is, R(λ, δ) = R(λ, 0) + δ k R k (λ). (9) k= Formula (9) is called the second Neumann series for the resolvent, and it is uniformly convergent for sufficiently small δ and λ C\σ (A), where σ (A) denotes the spectrum of A. The coefficients R k (λ) in (9) are given by (see Appendix A) R k (λ) = k ( ) p p= ν +ν + +ν p =k R(λ)A ν R(λ)A ν...a νp R(λ). (0) In (0), we have denoted R(λ) = R(λ, 0). Because the kernel of a regularly perturbed operator A(δ) does not change when δ changes from 0to a value different than 0, we can consider a contour that only encloses the eigenvalue λ 0 = 0, but not other eigenvalues of A(δ). Therefore, (6) becomes A # (δ) = π i R(λ, δ) dλ λ () for all δ. Substitution of the power series (9) in (), and integrating term by term we obtain the power series A # (δ) = A # 0 + δ n A n #. n= () The coefficients A n # are given by A # 0 = A# (0) and A n # = π i R n (λ) dλ, (3) λ with R n (λ) given in (0). Following [4], the matrices A n #, n =,,..., can be expressed as sums of matrix products as (see Appendix B) n A n # = ( ) p p= ν + +ν p =n μ + +μ p+ =p+ S μ A ν S μ...a νp S μp+, (4) where S 0 P 0, S k (A # 0 )k for k =,,..., ν i > 0 and μ i 0. The main point here is that the matrix coefficients A n # can be computed in terms of P 0 and A # 0 only. 4. Singular perturbation case Now, we consider a singularly perturbed matrix A(δ). Expansion (7) is a singular perturbation if there is a splitting of the zero eigenvalue of the unperturbed matrix A 0 when δ 0. This is the case of interest in many applications, such as RBF interpolation, because the matrices are such that A 0 is singular but A(δ) is not for δ 0. In this section we show that, by using a reduction process, a singular perturbation problem can be transformed into a regular one [0]. The key is to construct the inverse of the operator restricted to the subspace determined by the total projection P A0 (δ) onto the total eigenspace associated to the set of eigenvalues of A(δ) that depart from zero at δ = 0 (P A0 (δ) is a sum of eigenprojections). The group of the perturbed eigenvalues λ i (δ) such that λ i (δ) 0as δ 0is denoted as the 0-group. We define the operator A p (δ) = A(δ)P A0 (δ) = P A0 (δ)a(δ) = P A0 (δ)a(δ)p A0 (δ), (5) 4

6 which admits the Taylor series A p (δ) = δ k à k. k= Now, the problem of finding the inverse of A(δ) restricted to range(i P A0 (δ)) is equivalent to the one of finding the inverse of A p (δ). Note that P A0 (δ) commutes with A(δ). Hence, range(p A0 (δ)) and range(i P A0 (δ)) are invariant under A(δ) and, therefore, the vector space C n can be decomposed into the direct sum of these subspaces, so C n = range(i P A0 (δ)) range(p A0 (δ)). Thus, A # (δ) = (A(δ)[I P A0 (δ)]+a(δ)p A0 (δ)) # = (A(δ)[I P A0 (δ)]) # + (A(δ)P A0 (δ)) # = A g (δ) + δ ( δ A(δ)P A0(δ)) #, (7) where A g (δ) is the inverse of the matrix restricted to the complementary subspace of range(p A0 ), and is given by () (4). Hence, it remains to compute the inverse of A(δ) restricted to range(p A0 (δ)). That is, the problem is reduced to one of finding the inverse of the operator B(δ) = δ A(δ)P A0(δ) = δ P A0(δ)A(δ) = δ P A0(δ)A(δ)P A0 (δ) = δ k à k+. (8) Let us study in more detail the behavior of the eigenvalues of A(δ) that split from zero at δ = 0. In general, they constitute one or several branches of analytic functions of δ that depart from λ 0 = 0 at different speeds. Let r be the slowest rate at which the eigenvalues grow with delta. Then, this branch of eigenvalues grows as λ(δ) = αδ r + o(δ r ), where r is a positive rational number [0]. The operator (8) is regularly perturbed if r =. The process of reducing the problem for A(δ) to the problem for B(δ) is called the reduction process. It follows that the slowest growth rate of the eigenvalues of the new operator (8) is reduced to r. For convenience, we rewrite the Taylor series for B(δ) as B(δ) = B 0 + δ n B n. (30) n= Thus, the unperturbed operator is now B 0 = à = P A0 (0)A P A0 (0) (see expansions (7) and (6)). Notice that the operator B(δ) can be calculated using the Cauchy integral formula B(δ) = λr(λ, δ)dλ, (3) π iδ where is a contour enclosing only the eigenvalues that depart from zero at δ = 0. Using the second Neumann series for the resolvent (9) with coefficients given in (0), we get B 0 = P A0 (0)A P A0 (0) and n+ B n = ( ) p p= ν + +ν p =n+ μ + +μ p+ =p k=0 (6) (9) S μ A ν S μ...a νp S μp+, (3) with S 0 P 0, S k (A # 0 )k for k =,,..., ν i > 0 and μ i 0. Once the Taylor series for B(δ) is known, we can apply to B(δ) the same algorithm as to A(δ) to find B # (δ) = B # 0 + δ n B n # n= (33) using (4) with S 0 P B0, and S k (B # 0 )k instead. The reduction process finishes if r =. Otherwise, we can apply the reduction process to the operator B(δ). Thus, we compute the matrix coefficients C n of the new matrix C(δ) = δ B(δ)P B0(δ) = + δ n C n, (34) n= where P B0 (δ) is the total projection corresponding to the 0-group of B(δ). To find the matrices C n we use (3) with B νi instead of A νi and S k = (B # 0 )k. When the reduction process is applied a finite number of times r = s, we obtain a regularly 5

7 perturbed operator Z(δ) and the process comes to an end. When the process finishes, we get an expression for the Laurent series of A in the form A (δ) = + δ C # (δ) + δ B# (δ) + A # (δ). (35) The matrices H k in the equation () can be expressed in terms of the A #, B #,... For example, for a singularity of order s = we obtain H = C # 0 H = B # 0 + C # H 0 = A # 0 + B# + C # (36) Once H k with k = s, s +,..., 0are found, the regular terms H, H,... in () can be easily obtained using the recursive formula [4] m+p p H m+ = H j A i+ j+ Hm i, m = 0,,.... (37) i=0 j=0 Notice that, given any matrix A, we can always calculate the Laurent series of the Moore Penrose pseudoinverse A (δ) = (A T (δ)a(δ)) # A T (δ) where A T (δ)a(δ) is symmetric. Furthermore, if the inverse A (δ) exists, then A (δ) = A (δ). 5. Algorithm Here, we describe in detail the algorithm to compute the first M terms of the Laurent series expansion of the perturbed inverse (), given the power series of a matrix (). The matrix coefficients H k in () are computed using (36) and (37). If the order s of the singularity is known, we apply the reduction process s times (from i = to s). Thus, we start with i = and use matrices A n to compute matrices A n # using (4), and matrices B n using (3). For i =, we use matrices B n to compute matrices B n # and matrices C n. We repeat this process s times. Notice from equation (36) that we need to compute A n # up to n = 0, B# n up to n =, C# n up to n =. Thus, if we use i as an index for the different letters (i = for A, i = for B, and so on), then the maximum subindex that we have to use in (4) to compute letter(i) n # is n = i. The maximum index that we have to use in equation (3) to compute letter(i) n is n = s i +. Iteration i of the reduction process is described below. It computes matrices letter(i) n # and letter(i + ) n given matrices letter(i) n following the next steps:. Compute the pseudo-inverse letter(i) # 0 of the matrix letter(i) 0.. Compute S 0 = P 0, where P 0 is the projection onto the null space of letter(i) Compute S k = (letter(i) # 0 )k, k =, 3,..., s i Compute letter(i) # : a. First loop: compute letter(i) n # from n = to i using (4). i. Second loop: sum from p = to n. A. Compute the ν s: the l ν combinations of p natural numbers that sum up to n. B. Compute the μ s: the l μ combinations of p + non-negative integers that sum up to p +. C. Sum all the products ( ) p S μ letter(i) ν S μ...letter(i) νp S μp+. 5. Compute letter(i + ): a. Compute letter(i + ) 0 = P 0 letter(i) P 0. b. First loop: compute letter(i + ) n from n = to s + (i + ) using (3). i. Second loop: sum from p = to n +. A. Compute the ν s: the l ν combinations of p natural numbers that sum up to n +. B. Compute the μ s: the l μ combinations of p + non-negative integers that sum up to p. C. Sum all the products ( ) p S μ letter(i) ν S μ...letter(i) νp S μp+ and multiply by from p = to n +. After the reduction process has been completed for i = to s, steps 4 of the algorithm have to be carried out again with i = s + in order to compute letter(s + ) # which is needed to compute the Laurent series of the inverse (35). Then, the singular terms of the Laurent series of the inverse (H i from i = s to i = 0) are computed using (36), and the regular terms (H i from i = to i = M) are computed using (37). In many cases, the order of the singularity s in () is not known a priori and, therefore, it has to be determined during the iterative process. 
To this end, we start by assuming a first order singularity (s = ) and compute the Laurent series of the inverse using the algorithm just described. Then, we increase s by and compute the new Laurent series. For each value of s we compute 0 i= s H i A i I. When this norm is smaller than a certain tolerance we stop the process. In this way, we determine both the Laurent series of the inverse and the order of the singularity. 6

8 Table Number of matrix products and computational time. s N p N Time (sec) Symbolic (sec) Steps and in the algorithm require some attention, as the projections onto the null spaces may be sensitive to small numerical errors. To this end, we obtain a truncated singular value decomposition of letter(i) 0 = U V T, where is a diagonal matrix whose diagonal entries are the singular values larger than a certain threshold (we use threshold = 0 0 ). Then, the pseudoinverse is given by letter(i) # 0 = V U T. If the matrix is symmetric or has index, we compute the orthogonal projection P 0 onto the kernel of letter(i) 0 through the projection matrix SS, where S is the matrix whose columns are the right singular vectors corresponding to the singular values smaller than the threshold. If the matrix has index k >, we compute the oblique projection P 0 onto the kernel of letter(i) 0 along the range(letter(i) k 0 ) as follows. We define the matrices M and N whose columns are basis of the kernel of letter(i) 0 and the orthogonal complement of the image of letter(i) 0. Then, the projection P 0 = M(N T M) N T. In the above algorithm, the most computationally expensive step is 4aiC, as it involves a very large number of matrix products (see Section 5.). In terms of memory requirements, storage of the μ s in step 5aiA B is the most demanding part of the algorithm. The maximum size of the matrix where we store all the combinations of μ s to compute letter(i + ) n grows as (4s )!/((s )!(s)!) (s + ). For example, for s = 7 the size is , so we have to store integer numbers. 5.. Numerical complexity The most time-consuming part of this algorithm is the computation of all the matrix products to calculate B. The number of products depends on the order of the singularity s as s n+ ( ) n!(p )! N p = #products to calculate B = (n p + )!((p )!) 3 n= p= This quantity grows approximately as (8.8636) s. If we want to calculate the number of flops to calculate B we have to multiply N p by N 3 where N N is the dimension of the square matrix A. After calculating B you also have to calculate B #, C, etc. but this is fast compared with B. Table shows the number of matrix products, N p, to calculate B up to s = 9. The fourth column of the table shows the computational time for computing the Laurent series of the inverse for a perturbed matrix of size N N on an Intel Core i7 GHz processor using a Matlab code not optimized. Notice that the computational time grows exponentially with the order of the singularity, s, and algebraically with the size of the matrix, O(N 3 ). Our method can be considered as semi-analytical since the output of the method is a Laurent series for the inverse in terms of powers of the parameter δ, but the implementation just described involves a very large amount of numerical computations. Thus, the computational times shown in Table should be compared with the computational times corresponding to the other alternative to compute the Laurent series of the inverse, namely, the use of a symbolic language. Column five of Table shows the computational times using Mathematica as symbolic language (we use the command Inverse[A] where A is the power series of the matrix (7)). Notice that as N grows our algorithm becomes much faster than the symbolic language. In fact, for matrices of size greater than 3, Mathematica is not able to compute the inverse while, with our algorithm, we can compute the inverses of rather large matrices, as long as the order of the singularity is smaller than seven. In Ref. 
[3] a detailed analysis of the computational complexity involved in computing the Laurent series of the inverse using a symbolic language and a comparison with the computational complexity of the reduction process is carried out. 6. Numerical examples 6.. RBF interpolation As a first example, we compute the Laurent series of the inverse of the RBF interpolation matrix in D with three equispaced centers using multiquadrics as RBFs [8]. We use this example to illustrate in detail how the numerical algorithm 7

9 described in Section 5 can be used to derive the Laurent series of the inverse of a nearly singular matrix. For this simple example, the D RBF interpolation matrix with equispaced nodes is given by the symmetric matrix + δ + 4δ A(δ) = + δ + δ. (38) + 4δ + δ The parameter δ is the product of the inter-nodal distance and the shape parameter of the RBF multiquadrics that controls their flatness (for details, see, for example, [8]). The Taylor series of this matrix is given by A(δ) + δ 0 / 0 /8 / 0 / + δ /8 0 /8 + δ 3 0 /6 4 /6 0 /6 + / 0 /8 0 4 / / / δ 4 5/8 0 5/8 + δ 5 7/56 0 7/ (39) 0 5/ /56 0 It is apparent that A(δ) becomes singular in the limit δ 0. Moreover, the leading order terms of the eigenvalues are 3, δ, and δ /3. Hence, the singularity is of order s =, and we have to execute the algorithm described in Section 5 two times, first for i = and then for i =. The matrix coefficients in (39) are the input data for the algorithm. For i = (letter() = A), the algorithm yields matrices A # and B j j. We start by computing matrices A # 0 and P 0 according to steps and of the algorithm. We obtain /9 /9 /9 /3 /3 /3 A # 0 = /9 /9 /9 /9 /9 /9, and P 0 = /3 /3 /3 /3 /3 /3. (40) In step 3 we compute the matrices (A # 0 )k, k =, 3, 4. Step 4 is not executed since the upper limit of the outer loop 4a is i =. Matrices (A # 0 )k, k =, 3are used in step 5 to compute matrix B 0 (step 5a) and matrices B, B and B 3 (step 5b). Note that for i =, B = letter(i) and the upper limit of loop 5b is s + i = 3. Thus, we obtain B(δ) = 0 8/9 /9 0/9 44/7 7/7 64/ δ /9 4/9 /9 + δ 7/7 8/7 7/ /9 /9 8/9 64/7 7/7 44/7 04/7 7/4 66/7 + δ 3 7/4 68/7 7/ (4) 66/7 7/4 04/7 Once B(δ) is obtained, we repeat the same algorithm for i = (B = letter()). We compute matrices B # and C l l, l = 0,,,..., in terms of matrices B j in (4). Matrices B # 0 and P 0 are obtained according to steps and resulting in /4 0 /4 / 0/ B # 0 = 0 0 0, P 0 = 0 0. (4) /4 0 /4 / 0/ In step 3, we compute the matrices (B # 0 )k for k =, and 3. These matrices are used in step 4 to compute /4 0 /4 B # = /4 0 /4 Matrices (B # 0 )k with k = and 3are also used in step 5 to compute in step 5a, and C and C in step 5b (the upper limit of loop 5b is s + i = ). Thus, we obtain /9 /9 /9 0/7 7/7 0/7 3/7 7/4 3/7 C(δ) = /9 4/9 /9 + δ 7/7 8/7 7/7 + δ 7/4 68/7 7/4. /9 /9 /9 0/7 7/7 0/7 3/7 7/4 3/7 Finally, we have to execute steps and 4, with i = s + = 3, in order to compute matrices C # 0 (step ) and C #, C # We obtain /4 / /4 / 5/4 / 5/36 5/44 5/36 C # (δ) = / / + δ 5/4 3 5/4 + δ 5/44 /9 5/ /4 / /4 / 5/4 / 5/36 5/44 5/36 (step 4). 8

10 Fig.. Solid line: infinite norm of the inverse of matrix A(δ) (38) as a function of δ. Dashed line: infinite norm of the error of the inverse of A computed using its Laurent series (43). Equation (36) is then used to compute the singular part of the Laurent series of the inverse whose matrix coefficients are given by H = /4 / /4 / / /4 / /4, H = 3/4 5/4 /4 5/4 3 5/4 /4 5/4 3/4, H 0 = 0 /6 / /6 0 /6 / /6 0 Equation (37) is used to obtain the regular part of the Laurent series of the inverse. In this way, we find that A (δ) = /4 / /4 δ / / + 3/4 5/4 /4 0 /6 / 5/4 3 5/4 + /6 0 /6 + δ /4 / /4 /4 5/4 3/4 / /6 0 /4 /3 3/4 3/4 485/56 7/4 + δ /3 /3 + δ 485/ / (43) 3/4 /3 /4 7/4 485/56 3/4 Fig. shows with a solid line the infinite norm of the inverse of matrix A(δ) (38) computed numerically. Notice, that for δ smaller than 0 8 matrix A becomes ill-conditioned and round-off errors prevent the numerical computation of the inverse. However, the Laurent series can be used to accurately compute the inverse for any value of δ, no matter how small (dashed line). The Laurent series of the inverse obtained in (43) can be used, for example, to find the RBF interpolation weights that result from the multiplication of A (δ) by the values of the function at the nodes. For instance, consider the case in which the values of the function at the nodes are 0,, and 0. Hence, multiplying the Laurent series (43) by the vector [0,, 0] T results in the following Laurent expansion for the interpolation weights: α(δ) = α k δ k = / δ + 5/4 3 + / (44) δ k= / 5/4 /6 The value of the RBF interpolant at position x is obtained by multiplying the weights (44) by the corresponding multiquadrics basis functions [], i.e., r(x; δ) = α(δ) T + δ(x ( )) + δ(x 0) + δ(x ). (45) Of particular interest is the limit of increasingly flat basis functions given by δ 0. It is well known [,3,5,7] that in this limit RBF interpolation converges to polynomial interpolation. We can use the Laurent series to find out the interpolating polynomial in this limit. Expanding (45) in powers of δ(x x i ), x i =, 0,, results in. 9

11 (x + ) (x + )4 + δ δ + 8 r(x; δ) = α(δ) T + δ x x4 δ 8 +. (46) (x ) (x )4 + δ δ + 8 Collecting terms of order unity leads to lim r(x; δ) = δ 0 αt 0 + α T / 0 + α T /8 0 + / /8 + x αt 0 + α T / 0 + x αt / / + α T 3/4 0 = / / 3/4 = x. (47) If the values of the function at the nodes are arbitrary, [ f, f 0, f ] T, the resulting interpolation polynomial in the limit δ 0is p(x) = f x(x ) x(x + ) + f 0 ( + x)( x) + f, which coincides with the Lagrange interpolation polynomial. The same procedure can be used to find the Laurent series of the inverse of the RBF interpolation matrix in D. For instance, using an equispaced five node stencil [(0, 0), (0, ), (, 0), (0, ), (, 0)] the RBF interpolation matrix is, δ + δ + δ + δ + δ + δ + 4δ + δ + A(δ) = δ + δ + δ + 4δ + (48) δ + 4δ + δ + δ + δ + δ + 4δ + δ + Using the same procedure described above results in the following first three terms of the Laurent series of the inverse A (δ) δ δ In this case, the RBF interpolating polynomial in the limit δ 0is p(x, y) = f 0,0 ( x y ) + f 0, y(y + ) (49) + f,0 x(x + ) + f 0, y(y ) x(x ) + f,0, where f 0,0, f 0,, f,0, f 0,, and f,0, are the values of the function at the corresponding nodes. If we add one extra node and consider the six nodes stencil [(0, 0), (0, ), (, 0), (0, ), (, 0), (, )], the resulting RBF interpolation polynomial in the limit δ 0 becomes p(x, y) = f 0,0 ( + xy x y ) + f 0, y(y + x) + f 0, y(y ) + f,0 x(x ) For instance, we can use the six node stencil to interpolate function + f,0 x(x + y) + + f, xy. (50) 0

12 Fig.. Left: function (5). Right: difference between RBF interpolation for δ = 0.05 using the six nodes stencil [(0, 0), (0, ), (, 0), (0, ), (, 0), (, )], and the RBF interpolation in the limit δ 0(Eq.(50)). (For interpretation of the colors in this figure, the reader is referred to the web version of this article.) f (x, y) = e x y sin (x) (5) which is shown in the left side of Fig.. The weights of the RBF interpolation in the case δ = 0.05 are α(δ) T = [ ]. We have also computed the interpolation using the interpolating polynomial in the limit δ 0 (50). The right side of Fig. shows the difference between RBF interpolation for δ = 0.05 and the RBF interpolation in the limit δ 0. Notice that this difference is O(δ). It should be mentioned that the method works equally well for equispaced and non-equispaced nodes. In [8], we use this algorithm to derive the Laurent series of the inverse of RBF-FD matrices in D, D and 3D using both equispaced and non-equispaced nodes for singularities of order up to seven. Other alternatives have been proposed in the past to compute the leading order approximation to RBF interpolants in the limit δ 0 [3 6]. 6.. Other applications The algorithm in Section 5 works also in the case of complex matrices. As a simple example we have considered the matrix [ ( + δ) A(δ) = e π(+δ) ] i ( + δ) e π(+δ) i. (5) This matrix arises in problems of wave scattering by N = point-like scatterers. Matrix (5) is the Foldy Lax matrix [7,4] when the two point-like scatterers are at a distance (d +δ), where d = π/k 0 is the resonant distance and k 0 is the wavenumber. The Taylor series of this matrix is A(δ) = [ ] [ 0 + π i + δ + π i 0 ] + δ 0 π + π i π (53) + π i 0 Applying the algorithm in Section 5 results in a singularity of order with a Laurent series for the inverse given by A (δ) = [ ] [ ] i i i i δ i i i i We now consider the case of non-symmetric matrix. The following example was analyzed in [] and was adapted from [7]. Consider the matrix A(δ) = δ (54)

13 In this example, both A 0 and A are singular although A (δ) exits for δ>0. Applying the algorithm described in Section 5 we find that the Laurent series of the inverse is A (δ) = 4/9 0/9 4/7 8/7 4/9 0/9 + 77/7 46/ (55) δ 7/9 5/9 34/7 4/7 The method can also be used in cases in which the perturbed matrix A(δ) is also singular for all δ. In those cases, the algorithm described in Section 5 computes the Laurent series of the Drazin inverse A D. Consider, for instance, the following perturbed matrix analyzed in [] A(δ) = δ (56) The index of this matrix is (range(a ) = range(a)). Applying the algorithm to matrix (56) results in A D (δ) = A # (δ) = δ (57) δ Terms of order δ or higher are zero in this case. It is straightforward to verify that matrix A D given in (57) satisfies the properties of the Drazin inverse, namely A A D = A, A D A A D = A D, and A A D = A D A. 7. Conclusions In this paper we have discussed the implementation of an algorithm to obtain the Laurent series of the inverse of singularly perturbed matrices semi-analytically. The method is semi-analytical because the Laurent series is derived from analytical formulas, but the implementation involves a significant amount of numerical computations. Among other advantages, this algorithm can be applied to solve ill-conditioned linear systems, as the ill-conditioning is completely eliminated through the semi-analytical computation of the Laurent series of the inverse matrix. The algorithm implemented here is efficient compared to the computation of the inverse of a matrix using a symbolic language such as Mathematica or Maple. We have shown that complexity of the proposed algorithm grows algebraically with the size N of the matrix as cn 3, where c is a constant, but exponentially with the order of the singularity s of the matrix. Acknowledgements This work has been supported by Spanish MICINN Grants FIS R and CSD Appendix A. Computation of the coefficients R k (λ) To compute the coefficients R k (λ) in (9), consider a point (λ, δ) for which (8) exists. Then, we can write ( ( )) R(λ, δ) = A 0 λ I (λ λ )I Ã(δ) ([ ( ) ] ) = I (λ λ )I Ã(δ) R(λ, 0) (A 0 λ I) [ ( ) = R(λ, 0) I (λ λ )I Ã(δ) R(λ, 0)], (A.) where à = A A 0 = n i= δi A i. If we now set λ = λ and denote R(λ) R(λ, 0) we obtain R(λ, δ) = R(λ)[I + Ã(δ)R(λ)] = R(λ) [ Ã(δ)R(λ)] k = k=0 R(λ) R(λ)(δ A + δ A +...)R(λ) + R(λ)(δ A + δ A +...)R(λ)(δ A + δ A +...)R(λ) +...= R(λ) δr(λ)a R(λ) δ R(λ)A R(λ)...+ δ R(λ)A R(λ)A R(λ) + δ 3 (R(λ)A R(λ)A R(λ) + R(λ)A R(λ)A R(λ)) +..., (A.) for sufficiently small δ. From (9) and (A.) and using the definition of Ã, we readily obtain the coefficients (0).

14 Appendix B. Computation of the coefficients A # n s To compute the coefficients A n # s in expansion (4), we substitute the coefficients (0) into (3). That is, A n # = R n (λ) λ dλ π i = ( ) p R(λ)A ν R(λ)A ν...a νp R(λ)dλ π i λ ν +ν + +ν p =k = ( ) p π i λ R(λ)A ν R(λ)A ν...a νp R(λ)dλ ν +ν + +ν p =k ( ) = ( ) p Res λ=0 λ R(λ)A ν R(λ)A ν...a νp R(λ) ν +ν + +ν p =k Now we substitute R(λ) by its Laurent series λ P 0 + n=0 λn (A # ) n+ and collect the terms with λ which have the form: S σ + A ν S σ +...A νp S σp+ + (B.) σ +σ + +σ p+ =0 (B.) then we change indexes to μ i = σ i + for i =,...,p + and we get: S μ A ν S μ...a νp S μp+ μ +μ + +μ p+ =p+ (B.3) From here we substitute in the last expression for Q n and take into account the tensor notation we get eq. (4). References [] K.E. Avrachenkov, J.B. Lasserre, The fundamental matrix of singularly perturbed Markov chains, Adv. Appl. Probab. 3 (999) [] K.E. Avrachenkov, M. Haviv, Perturbation of null spaces with application to the eigenvalue problem and generalized inverses, Linear Algebra Appl. 369 (003) 5. [3] K.E. Avrachenkov, M. Haviv, P.G. Howlett, Inversion of analytic matrix functions that are singular at the origin, SIAM J. Matrix Anal. Appl. (00) [4] K.E. Avrachenkov, J.B. Lasserre, Analytic perturbation of generalized inverses, Linear Algebra Appl. 438 (03) [5] A. Ben-Israel, T.N.E. Greville, Generalized Inverses: Theory and Applications, Springer, New York, NY, ISBN , 003. [6] H. Baumgartel, Analytic Perturbation Theory for Matrices and Operators, Birkhäuser, Basel, 985. [7] S.L. Campbell, C.D. Meyer, N.J. Rose, Applications of the Drazin inverse to linear systems of differential equations with singular constant coefficients, SIAM J. Appl. Math. 3 (976) [8] F. Delebecque, A reduction process for perturbed Markov chains, SIAM J. Appl. Math. 43 (983) [9] F.W. Dorr, The numerical solution of singular perturbations of boundary value problems, SIAM J. Numer. Anal. 7 (970) [0] F.W. Dorr, An example of ill-conditioning in the numerical solution of singular perturbation problems, Math. Comput. 5 (97) [] T.A. Driscoll, B. Fornberg, Interpolation in the limit of increasingly flat radial basis functions, Comput. Math. Appl. 43 (00) [] G.E. Fasshauer, Meshfree Approximation Methods with MATLAB, World Scientific Publishing Co., Singapore, 007. [3] B. Fornberg, G. Wright, Stable computation of multiquadric interpolation for all values of the shape parameters, Comput. Math. Appl. 48 (004) [4] B. Fornberg, C. Piret, Stable algorithm for flat radial basis functions on a sphere, SIAM J. Sci. Comput. 30 (008) [5] B. Fornberg, E. Larsson, N. Flyer, Stable computations with Gaussian radial basis functions, SIAM J. Sci. Comput. 33 (0) [6] B. Fornberg, E. Lehto, C. Powell, Stable calculation of Gaussian-based RBF-FD stencils, Comput. Math. Appl. 65 (03) [7] L.L. Foldy, The multiple scattering of waves, Phys. Rev. 67 (945) [8] P. González, V. Bayona, M. Moscoso, M. Kindelan, Laurent series based RBF-FD method to avoid ill-conditioning, Eng. Anal. Bound. Elem. 5 (05) 4 3. [9] R. Hassin, M. Haviv, Mean passage times and nearly uncoupled Markov chains, SIAM J. Discrete Math. 5 (99) [0] T. Kato, Perturbation Theory for Linear Operators, Springer, 995. [] B.F. Lamond, A generalized inverse method for asymptotic linear programming, Math. Program. 43 (989) [] B.F. Lamond, An efficient basis update for asymptotic linear programming, Linear Algebra Appl. 84 (993) [3] E. Larsson, B. 
Fornberg, Theoretical and computational aspects of multivariate interpolation with increasingly flat radial basis functions, Comput. Math. Appl. 49 (005) [4] M. Lax, Multiple scattering of waves, Rev. Mod. Phys. 3 (95) [5] R. Schaback, Multivariate interpolation by polynomials and radial basis functions, Constr. Approx. (005) [6] G.W. Stewart, On the perturbation of pseudo-inverses, projections and linear least squaresproblems, SIAM Rev. 5 (977) [7] G.B. Wright, B. Fornberg, Scattered node compact finite difference-type formulas generated from radial basis functions, J. Comput. Phys. (006)

MATH 590: Meshfree Methods

MATH 590: Meshfree Methods MATH 590: Meshfree Methods Chapter 9: Conditionally Positive Definite Radial Functions Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Fall 2010 fasshauer@iit.edu MATH

More information

ADDITIVE RESULTS FOR THE GENERALIZED DRAZIN INVERSE

ADDITIVE RESULTS FOR THE GENERALIZED DRAZIN INVERSE ADDITIVE RESULTS FOR THE GENERALIZED DRAZIN INVERSE Dragan S Djordjević and Yimin Wei Abstract Additive perturbation results for the generalized Drazin inverse of Banach space operators are presented Precisely

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

The DMP Inverse for Rectangular Matrices

The DMP Inverse for Rectangular Matrices Filomat 31:19 (2017, 6015 6019 https://doi.org/10.2298/fil1719015m Published by Faculty of Sciences Mathematics, University of Niš, Serbia Available at: http://.pmf.ni.ac.rs/filomat The DMP Inverse for

More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

ELA THE MINIMUM-NORM LEAST-SQUARES SOLUTION OF A LINEAR SYSTEM AND SYMMETRIC RANK-ONE UPDATES

ELA THE MINIMUM-NORM LEAST-SQUARES SOLUTION OF A LINEAR SYSTEM AND SYMMETRIC RANK-ONE UPDATES Volume 22, pp. 480-489, May 20 THE MINIMUM-NORM LEAST-SQUARES SOLUTION OF A LINEAR SYSTEM AND SYMMETRIC RANK-ONE UPDATES XUZHOU CHEN AND JUN JI Abstract. In this paper, we study the Moore-Penrose inverse

More information

Approximation by Conditionally Positive Definite Functions with Finitely Many Centers

Approximation by Conditionally Positive Definite Functions with Finitely Many Centers Approximation by Conditionally Positive Definite Functions with Finitely Many Centers Jungho Yoon Abstract. The theory of interpolation by using conditionally positive definite function provides optimal

More information

Dragan S. Djordjević. 1. Introduction

Dragan S. Djordjević. 1. Introduction UNIFIED APPROACH TO THE REVERSE ORDER RULE FOR GENERALIZED INVERSES Dragan S Djordjević Abstract In this paper we consider the reverse order rule of the form (AB) (2) KL = B(2) TS A(2) MN for outer generalized

More information

The Drazin inverses of products and differences of orthogonal projections

The Drazin inverses of products and differences of orthogonal projections J Math Anal Appl 335 7 64 71 wwwelseviercom/locate/jmaa The Drazin inverses of products and differences of orthogonal projections Chun Yuan Deng School of Mathematics Science, South China Normal University,

More information

Properties of Matrices and Operations on Matrices

Properties of Matrices and Operations on Matrices Properties of Matrices and Operations on Matrices A common data structure for statistical analysis is a rectangular array or matris. Rows represent individual observational units, or just observations,

More information

Math 307 Learning Goals. March 23, 2010

Math 307 Learning Goals. March 23, 2010 Math 307 Learning Goals March 23, 2010 Course Description The course presents core concepts of linear algebra by focusing on applications in Science and Engineering. Examples of applications from recent

More information

MATH 31 - ADDITIONAL PRACTICE PROBLEMS FOR FINAL

MATH 31 - ADDITIONAL PRACTICE PROBLEMS FOR FINAL MATH 3 - ADDITIONAL PRACTICE PROBLEMS FOR FINAL MAIN TOPICS FOR THE FINAL EXAM:. Vectors. Dot product. Cross product. Geometric applications. 2. Row reduction. Null space, column space, row space, left

More information

The QR Factorization

The QR Factorization The QR Factorization How to Make Matrices Nicer Radu Trîmbiţaş Babeş-Bolyai University March 11, 2009 Radu Trîmbiţaş ( Babeş-Bolyai University) The QR Factorization March 11, 2009 1 / 25 Projectors A projector

More information

Chapter 7. Canonical Forms. 7.1 Eigenvalues and Eigenvectors

Chapter 7. Canonical Forms. 7.1 Eigenvalues and Eigenvectors Chapter 7 Canonical Forms 7.1 Eigenvalues and Eigenvectors Definition 7.1.1. Let V be a vector space over the field F and let T be a linear operator on V. An eigenvalue of T is a scalar λ F such that there

More information

FINITE-DIMENSIONAL LINEAR ALGEBRA

FINITE-DIMENSIONAL LINEAR ALGEBRA DISCRETE MATHEMATICS AND ITS APPLICATIONS Series Editor KENNETH H ROSEN FINITE-DIMENSIONAL LINEAR ALGEBRA Mark S Gockenbach Michigan Technological University Houghton, USA CRC Press Taylor & Francis Croup

More information

MATH 5640: Functions of Diagonalizable Matrices

MATH 5640: Functions of Diagonalizable Matrices MATH 5640: Functions of Diagonalizable Matrices Hung Phan, UMass Lowell November 27, 208 Spectral theorem for diagonalizable matrices Definition Let V = X Y Every v V is uniquely decomposed as u = x +

More information

MATH 590: Meshfree Methods

MATH 590: Meshfree Methods MATH 590: Meshfree Methods Chapter 1 Part 3: Radial Basis Function Interpolation in MATLAB Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Fall 2014 fasshauer@iit.edu

More information

MATH 590: Meshfree Methods

MATH 590: Meshfree Methods MATH 590: Meshfree Methods Chapter 33: Adaptive Iteration Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Fall 2010 fasshauer@iit.edu MATH 590 Chapter 33 1 Outline 1 A

More information

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A =

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = 30 MATHEMATICS REVIEW G A.1.1 Matrices and Vectors Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = a 11 a 12... a 1N a 21 a 22... a 2N...... a M1 a M2... a MN A matrix can

More information

Applied Linear Algebra in Geoscience Using MATLAB

Applied Linear Algebra in Geoscience Using MATLAB Applied Linear Algebra in Geoscience Using MATLAB Contents Getting Started Creating Arrays Mathematical Operations with Arrays Using Script Files and Managing Data Two-Dimensional Plots Programming in

More information

STABLE CALCULATION OF GAUSSIAN-BASED RBF-FD STENCILS

STABLE CALCULATION OF GAUSSIAN-BASED RBF-FD STENCILS STABLE CALCULATION OF GAUSSIAN-BASED RBF-FD STENCILS BENGT FORNBERG, ERIK LEHTO, AND COLLIN POWELL Abstract. Traditional finite difference (FD) methods are designed to be exact for low degree polynomials.

More information

Linear Algebra: Matrix Eigenvalue Problems

Linear Algebra: Matrix Eigenvalue Problems CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given

More information

NONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction

NONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction NONCOMMUTATIVE POLYNOMIAL EQUATIONS Edward S Letzter Introduction My aim in these notes is twofold: First, to briefly review some linear algebra Second, to provide you with some new tools and techniques

More information

Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition

Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Prof. Tesler Math 283 Fall 2016 Also see the separate version of this with Matlab and R commands. Prof. Tesler Diagonalizing

More information

Representation of doubly infinite matrices as non-commutative Laurent series

Representation of doubly infinite matrices as non-commutative Laurent series Spec. Matrices 217; 5:25 257 Research Article Open Access María Ivonne Arenas-Herrera and Luis Verde-Star* Representation of doubly infinite matrices as non-commutative Laurent series https://doi.org/1.1515/spma-217-18

More information

MATH 590: Meshfree Methods

MATH 590: Meshfree Methods MATH 590: Meshfree Methods Chapter 33: Adaptive Iteration Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Fall 2010 fasshauer@iit.edu MATH 590 Chapter 33 1 Outline 1 A

More information

Stat 159/259: Linear Algebra Notes

Stat 159/259: Linear Algebra Notes Stat 159/259: Linear Algebra Notes Jarrod Millman November 16, 2015 Abstract These notes assume you ve taken a semester of undergraduate linear algebra. In particular, I assume you are familiar with the

More information

Review of Basic Concepts in Linear Algebra

Review of Basic Concepts in Linear Algebra Review of Basic Concepts in Linear Algebra Grady B Wright Department of Mathematics Boise State University September 7, 2017 Math 565 Linear Algebra Review September 7, 2017 1 / 40 Numerical Linear Algebra

More information

Radial basis function partition of unity methods for PDEs

Radial basis function partition of unity methods for PDEs Radial basis function partition of unity methods for PDEs Elisabeth Larsson, Scientific Computing, Uppsala University Credit goes to a number of collaborators Alfa Ali Alison Lina Victor Igor Heryudono

More information

Definition (T -invariant subspace) Example. Example

Definition (T -invariant subspace) Example. Example Eigenvalues, Eigenvectors, Similarity, and Diagonalization We now turn our attention to linear transformations of the form T : V V. To better understand the effect of T on the vector space V, we begin

More information

Multiplicative Perturbation Bounds of the Group Inverse and Oblique Projection

Multiplicative Perturbation Bounds of the Group Inverse and Oblique Projection Filomat 30: 06, 37 375 DOI 0.98/FIL67M Published by Faculty of Sciences Mathematics, University of Niš, Serbia Available at: http://www.pmf.ni.ac.rs/filomat Multiplicative Perturbation Bounds of the Group

More information

= A(λ, t)x. In this chapter we will focus on the case that the matrix A does not depend on time (so that the ODE is autonomous):

= A(λ, t)x. In this chapter we will focus on the case that the matrix A does not depend on time (so that the ODE is autonomous): Chapter 2 Linear autonomous ODEs 2 Linearity Linear ODEs form an important class of ODEs They are characterized by the fact that the vector field f : R m R p R R m is linear at constant value of the parameters

More information

Math 113 Final Exam: Solutions

Math 113 Final Exam: Solutions Math 113 Final Exam: Solutions Thursday, June 11, 2013, 3.30-6.30pm. 1. (25 points total) Let P 2 (R) denote the real vector space of polynomials of degree 2. Consider the following inner product on P

More information

Numerical Methods I Non-Square and Sparse Linear Systems

Numerical Methods I Non-Square and Sparse Linear Systems Numerical Methods I Non-Square and Sparse Linear Systems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 25th, 2014 A. Donev (Courant

More information

Bare-bones outline of eigenvalue theory and the Jordan canonical form

Bare-bones outline of eigenvalue theory and the Jordan canonical form Bare-bones outline of eigenvalue theory and the Jordan canonical form April 3, 2007 N.B.: You should also consult the text/class notes for worked examples. Let F be a field, let V be a finite-dimensional

More information

Moore Penrose inverses and commuting elements of C -algebras

Moore Penrose inverses and commuting elements of C -algebras Moore Penrose inverses and commuting elements of C -algebras Julio Benítez Abstract Let a be an element of a C -algebra A satisfying aa = a a, where a is the Moore Penrose inverse of a and let b A. We

More information

Kernel-based Approximation. Methods using MATLAB. Gregory Fasshauer. Interdisciplinary Mathematical Sciences. Michael McCourt.

Kernel-based Approximation. Methods using MATLAB. Gregory Fasshauer. Interdisciplinary Mathematical Sciences. Michael McCourt. SINGAPORE SHANGHAI Vol TAIPEI - Interdisciplinary Mathematical Sciences 19 Kernel-based Approximation Methods using MATLAB Gregory Fasshauer Illinois Institute of Technology, USA Michael McCourt University

More information

A NOTE ON THE JORDAN CANONICAL FORM

A NOTE ON THE JORDAN CANONICAL FORM A NOTE ON THE JORDAN CANONICAL FORM H. Azad Department of Mathematics and Statistics King Fahd University of Petroleum & Minerals Dhahran, Saudi Arabia hassanaz@kfupm.edu.sa Abstract A proof of the Jordan

More information

LINEAR ALGEBRA 1, 2012-I PARTIAL EXAM 3 SOLUTIONS TO PRACTICE PROBLEMS

LINEAR ALGEBRA 1, 2012-I PARTIAL EXAM 3 SOLUTIONS TO PRACTICE PROBLEMS LINEAR ALGEBRA, -I PARTIAL EXAM SOLUTIONS TO PRACTICE PROBLEMS Problem (a) For each of the two matrices below, (i) determine whether it is diagonalizable, (ii) determine whether it is orthogonally diagonalizable,

More information

Math 307 Learning Goals

Math 307 Learning Goals Math 307 Learning Goals May 14, 2018 Chapter 1 Linear Equations 1.1 Solving Linear Equations Write a system of linear equations using matrix notation. Use Gaussian elimination to bring a system of linear

More information

On interpolation by radial polynomials C. de Boor Happy 60th and beyond, Charlie!

On interpolation by radial polynomials C. de Boor Happy 60th and beyond, Charlie! On interpolation by radial polynomials C. de Boor Happy 60th and beyond, Charlie! Abstract A lemma of Micchelli s, concerning radial polynomials and weighted sums of point evaluations, is shown to hold

More information

Chapter 3 Transformations

Chapter 3 Transformations Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases

More information

Nonsingularity and group invertibility of linear combinations of two k-potent matrices
