NONNEGATIVE AND SKEW-SYMMETRIC PERTURBATIONS OF A MATRIX WITH POSITIVE INVERSE

MATHEMATICS OF COMPUTATION
Volume 54, Number 189, January 1990, Pages 189-194

GIUSEPPE BUFFONI

Abstract. Let A be a nonsingular matrix with positive inverse and B a nonnegative matrix. Let the inverse of A + νB be positive for 0 ≤ ν < ν* < +∞ and at least one of its entries be equal to zero for ν = ν*; an algorithm to compute ν* is described in this paper. Furthermore, it is shown that if A + A^T is positive definite, then the inverse of A + ν(B − B^T) is positive for 0 ≤ ν < ν*.

Received July 26, 1988; revised November 23, 1988 and January 18, 1989.
1980 Mathematics Subject Classification (1985 Revision). Primary 65F3.

1. Introduction

Let

(1)    A + νB

be an n × n real matrix, where A is a nonsingular matrix with positive inverse ([5, 2, 1]), B (B ≠ 0) a nonnegative matrix, and ν a nonnegative real parameter:

(2)    A^{-1} > 0,    B ≥ 0, B ≠ 0,    ν ≥ 0.

The parameter ν may be considered as a measure of the size of the nonnegative perturbation νB of the matrix A. Let

(3)    Z(ν) = (A + νB)^{-1} = [z_ij(ν)].

For ν = 0 we have Z(0) = A^{-1} > 0; thus, det(A + νB) ≠ 0 and Z(ν) > 0 in a sufficiently small neighborhood of ν = 0. This paper addresses the problem of finding the largest, possibly infinite, number ν* such that A + νB is nonsingular and Z(ν) > 0 in [0, ν*). We will describe an algorithm (the iterative process (6)) to compute ν* if ν* < +∞. In the case ν* = +∞, the successive approximations defined by (6) form a sequence diverging monotonically to +∞.

We shall also consider matrices of the type

(4)    C(ν) = A + ν(B − B^T);

here the matrix A is perturbed by a skew-symmetric matrix which may be written as B − B^T with B ≥ 0. It will be shown that if A + A^T is positive definite, then C^{-1}(ν) ≥ Z(ν) > 0 in [0, ν*), where Z is defined by (3).
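As a small illustration (not taken from the paper), let

    A = [ 2  -1 ;  -1  2 ],    B = [ 0  1 ;  0  0 ].

Then A^{-1} = (1/3)[ 2  1 ;  1  2 ] > 0 and

    Z(ν) = (A + νB)^{-1} = (1/(3 + ν)) [ 2  1 − ν ;  1  2 ],

so the (1,2) entry of Z(ν) decreases through zero at ν = 1 while all other entries remain positive; hence ν* = 1 for this pair.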

Numerical calculations have been performed by using the matrix involved in the discrete analog of the integro-differential equation

(5)    ∂u/∂t = ∂/∂x(p ∂u/∂x) + q[u_0 − u] + ν ∫_0^1 K(x, x')[u_0(x') − u(x')] dx',

with boundary conditions u(0) = u(1) = 0, where p(x) > 0, q(x) ≥ 0, u_0(x) ≥ 0, and K(x, x') ≥ 0. Equation (5) is a model for a spatially distributed community whose migration has both a random and a special deterministic component; more complicated models (n-species communities, nonlinear) can be obtained by including birth-death processes, competition, and predator-prey interactions [4]. A direct finite difference approach to (5) provides a discrete approximation u of the steady state solution satisfying an equation of the type (A + νB)u = f > 0, where A + νB is of type (2); the positivity of its inverse assures the positivity and the stability of u.

2. The inverse of A + νB

Lemma 1. Assume (2) and let det(A + νB) ≠ 0 and Z(ν) > 0, with Z(ν) as defined in (3). Then Z'(ν) < 0 and Z''(ν) > 0.

Proof. From the identity (A + νB)Z(ν) = I we obtain

    Z' = −ZBZ,    Z'' = −2ZBZ' = 2ZBZBZ,

where Z' = dZ/dν = [z'_ij] and Z'' = dZ'/dν = [z''_ij]. As B ≥ 0, B ≠ 0, and Z(ν) > 0, there follows Z'(ν) < 0 and Z''(ν) > 0. □

Lemma 2. Under the assumptions of Lemma 1, let ν_A be the largest number such that det(A + νB) ≠ 0 in the interval [0, ν_A). Then either ν_A = +∞, or an element of Z(ν) must change sign in [0, ν_A).

Proof. As ν → ν_A, at least one entry of Z(ν) must become infinite. On the other hand, in any interval [0, ν̄) where Z(ν) > 0 we have Z'(ν) < 0 (Lemma 1); therefore, Z(ν) is bounded in [0, ν̄), 0 < Z(ν) < Z(0) = A^{-1}. It follows that ν* = max ν̄ ≤ ν_A, with strict inequality if ν* < +∞, because 0 ≤ Z(ν*) ≤ A^{-1}. When ν* < +∞, the thesis follows from Z'(ν*) < 0 and Lemma 1 (note that the entries z_ij(ν) cannot vanish identically). □

Theorem 1. Let ν* be the largest, possibly infinite, number such that Z(ν) > 0 in [0, ν*). Then ν* is the limit of the sequence {ν_k} given by

(6)    ν_{k+1} = ν_k + min_{i,j} z_kij / w_kij,    k = 0, 1, 2, ...;  ν_0 = 0,

where Z_k = Z(ν_k) = [z_kij] and W_k = −Z'(ν_k) = Z_k B Z_k = [w_kij].
Proof. Let ν*_ij be the smallest value of ν for which z_ij(ν) = 0, if such a value exists, or +∞ otherwise. We have ν* = min_{i,j} ν*_ij. In [0, ν*), the matrix Z(ν) does not have singularities (Lemma 2) and its entries are strictly decreasing and convex functions of ν (Lemma 1). These regularity conditions on the entries z_ij(ν) allow us to obtain the sequence {ν_k}, given by (6), as follows: we compute the Newton steps for the elements of the equation Z(ν) = 0 and use the smallest of them to update ν. The first iteration, with starting value ν_0 = 0, produces the equations z_ij(0) + νz'_ij(0) = 0, where z_ij(0) > 0 and z'_ij(0) < 0. The smallest solution of these equations is the first approximation ν_1 in (6), and it is the largest value of ν for which Z(0) + νZ'(0) = A^{-1} − νA^{-1}BA^{-1} ≥ 0. As Z(ν) > Z(0) + νZ'(0) for 0 < ν < ν*, we have ν_1 < ν*_ij, i, j = 1, 2, ..., n; therefore, 0 < ν_1 < ν* and Z_1 > 0, W_1 > 0.

The successive approximations ν_k are defined as follows. Suppose we have computed the approximation ν_k, for some k > 0, for which 0 < ν_k < ν*, Z_k > 0, W_k > 0. We compute the Newton steps starting from the value ν_k, common to all the equations z_ij(ν) = 0; this produces the equations z_ij(ν_k) + (ν − ν_k)z'_ij(ν_k) = 0. The approximation ν_{k+1} (the smallest solution of these equations) is the largest value of ν for which Z(ν_k) + (ν − ν_k)Z'(ν_k) = Z_k − (ν − ν_k)W_k ≥ 0, and it is given by (6). As Z(ν) > Z(ν_k) + (ν − ν_k)Z'(ν_k) for ν_k < ν < ν*, we have ν_{k+1} < ν*_ij, i, j = 1, 2, ..., n; therefore ν_k < ν_{k+1} < ν* and Z_{k+1} > 0, W_{k+1} > 0. We conclude that the sequence {ν_k} is increasing, bounded from above by ν* if ν* < +∞, and convergent to ν* (note that {ν_k} cannot converge to a limit ν_1* < ν*, since this would imply ν_{k+1} − ν_k → min_{i,j} z_ij(ν_1*)/|z'_ij(ν_1*)| > 0). When ν* = +∞, all the entries z_ij(ν) are positive, strictly decreasing, and convex functions of ν in [0, +∞) (the only possible solution of each equation z_ij(ν) = 0 is ν*_ij = +∞). If the sequence {ν_k} were bounded, then it would be convergent, ν_k → ν_1* < +∞; as above, we would have ν_{k+1} − ν_k ≥ constant > 0. Thus, {ν_k} is not bounded and it diverges monotonically to +∞. □

Remarks. (a) It is possible to show that the sequence {ν'_k} given by

    ν'_{k+1} = ν'_k + min_{i,j} z_ij(ν'_k)/w_0ij,    k = 0, 1, 2, ...;  ν'_0 = 0,

in which the matrix W_0 = −Z'(0) = A^{-1}BA^{-1} is kept fixed, is convergent to ν* if ν* < +∞, or divergent to +∞ otherwise.

(b) Only for very small n (the first few integers) can we obtain the analytic expressions of the entries z_ij(ν), i, j = 1, 2, ..., n, and find their zeros to evaluate ν*. The application of the iterative process (6) involves the numerical computation of the inverses Z_k, and each iteration requires O(n^3) operations; however, the method has been applied successfully with n equal to 30, 40, and 50 (for example, by using matrices from one-dimensional boundary value problems).

(c) We can show the quadratic convergence [3, p. 260] of the process (6) when ν* < +∞ and −Z'(ν*) > 0. We introduce in (6) the value z_kij obtained from Taylor's formula

    z_ij(ν*) = z_kij − (ν* − ν_k)w_kij + (1/2)(ν* − ν_k)^2 z''_ij(ν_kij),    ν_k < ν_kij < ν*.

After some manipulations we have

    ν* − ν_{k+1} = max_{i,j: w_kij > 0} [(1/2)(ν* − ν_k)^2 z''_ij(ν_kij) − z_ij(ν*)] / w_kij;

thus, as z_ij(ν*) ≥ 0 and w_kij > 0, it follows that

    s_{k+1} ≤ (1/2) max_{i,j: w_kij > 0} z''_ij(ν_kij)/w_kij → (1/2) max_{i,j} z''_ij(ν*)/|z'_ij(ν*)|,

where

(7)    s_{k+1} = (ν* − ν_{k+1})/(ν* − ν_k)^2.
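The iterative process (6) translates directly into a few lines of code. The following is a minimal sketch (not part of the paper), assuming NumPy; the function name, the tolerance, and the stopping rule are illustrative choices.

    import numpy as np

    def critical_nu(A, B, tol=1e-12, max_iter=50):
        # Iterative process (6): nu_{k+1} = nu_k + min_{i,j} z_kij / w_kij,
        # with Z_k = (A + nu_k B)^{-1} and W_k = Z_k B Z_k = -Z'(nu_k).
        nu = 0.0
        history = [nu]
        for _ in range(max_iter):
            Z = np.linalg.inv(A + nu * B)
            W = Z @ B @ Z
            with np.errstate(divide="ignore", invalid="ignore"):
                steps = np.where(W > 0, Z / W, np.inf)   # Newton step for each entry equation
            step = float(steps.min())                    # keep the smallest step
            nu += step
            history.append(nu)
            if step <= tol * max(nu, 1.0):               # nu_k has (numerically) converged
                break
        return nu, history

For the 2 x 2 example of the introduction (A = [2 -1; -1 2], B = [0 1; 0 0]) this sketch produces ν_1 = 0.75, ν_2 ≈ 0.971, and then rapid convergence to ν* = 1; when ν* = +∞ the returned history simply keeps growing for as many iterations as are allowed.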

3. The inverse of C(ν) = A + ν(B − B^T)

Theorem 2. Let the symmetric matrix A + A^T be positive definite. Then, in [0, ν*) the spectral radius h of the nonnegative matrix

(8)    H(ν) = νZ(ν)B^T

is less than 1, and C^{-1}(ν) ≥ Z(ν) > 0.

Proof. The matrix C(ν) given by (4) is now written as

    C(ν) = (A + νB)[I − H(ν)],

where H(ν) is given by (8). In [0, ν*) we have Z(ν) > 0; it follows that C^{-1}(ν) ≥ Z(ν) > 0 if the spectral radius h(ν) = r(H) of the nonnegative matrix H(ν) is less than 1 [5, p. 83]. To the spectral radius h there corresponds an eigenvector u ≥ 0; from the eigenvalue equation νB^T u = h(A + νB)u we obtain

    h = νu^T B u / (u^T A u + νu^T B u).

We have u^T A u = (1/2) u^T (A + A^T) u > 0, because A + A^T is assumed positive definite. Thus, as ν ≥ 0, u ≥ 0, B ≥ 0, it follows that h < 1. □

Remarks. By means of simple examples it is possible to show that:

(a) The condition that A + A^T be positive definite is not necessary to have h(ν) < 1, 0 ≤ ν < ν*.

(b) The condition H(ν) ≥ 0, 0 ≤ ν < ν*, is not sufficient by itself to have h(ν) < 1.
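Theorem 2 is easy to probe numerically. The sketch below (again an illustration, not from the paper) computes the spectral radius of H(ν) in (8) and checks the entrywise bound C^{-1}(ν) ≥ Z(ν) for a single ν in [0, ν*); NumPy is assumed, and the small tolerance guards against rounding.

    import numpy as np

    def check_theorem2(A, B, nu, tol=1e-12):
        # H(nu) = nu * Z(nu) * B^T, equation (8); C(nu) = A + nu (B - B^T), equation (4).
        Z = np.linalg.inv(A + nu * B)
        H = nu * Z @ B.T
        rho = float(np.max(np.abs(np.linalg.eigvals(H))))   # spectral radius h(nu)
        C_inv = np.linalg.inv(A + nu * (B - B.T))
        return rho, bool(np.all(C_inv >= Z - tol))

For the 2 x 2 example above with ν = 0.5 (< ν* = 1), A + A^T is positive definite, and one finds h(0.5) = 1/14 together with C^{-1}(0.5) ≥ Z(0.5) entrywise, as Theorem 2 predicts.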

4. Numerical results

As a sample problem we use the matrix A + νB obtained from a finite difference approximation to (5) using central differences and the trapezium rule. Here we present the results obtained by assuming in (5) that p = 1, q = 0, and K(x, x') = exp(−(x − x')^2) (sample problem 1). In this case, A is a Stieltjes matrix [5, p. 85] and B is a positive matrix. The inverses Z_k are computed by means of the routine LINV2F of the IMSL Library. The results (double-precision computation) are shown in Table 1. The quantities s_k, given by (7), tend to a constant value, confirming quadratic convergence. Values of ν greater than ν*, for which some computed entries of Z(ν) are less than zero, are reported in the row *.

Table 1
Values of ν_k and of s_k for sample problem 1 for different values of the mesh spacing 1/m

        m = 30                  m = 40                  m = 50
 k      ν_k        s_k          ν_k        s_k          ν_k        s_k
 0      0                       0                       0
 1      5.145497                5.076475                5.035965
 2      8.172000   0.0491       8.018864   0.0496       7.929456   0.0499
 3      8.820264   0.0505       8.633288   0.0510       8.524586   0.0517
 4      8.842959   0.0508       8.653839   0.0514       8.543958   0.0517
 5      8.842985   0.0508       8.653861   0.0513       8.543978   0.0516
 6      8.842985                8.653861                8.543978
 *      8.85                    8.66                    8.55
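The construction of A and B for sample problem 1 is straightforward to reproduce. The sketch below is an illustration under assumed conventions (m − 1 interior nodes, mesh width h = 1/m, zero Dirichlet boundary values, trapezium weights h at the interior nodes); the paper does not spell these choices out in full, so agreement with Table 1 to all printed digits is not claimed.

    import numpy as np

    def sample_problem_1(m):
        # Central differences for -(p u')' with p = 1, q = 0 and u(0) = u(1) = 0,
        # and the trapezium rule for the kernel K(x, x') = exp(-(x - x')^2).
        h = 1.0 / m
        x = np.linspace(h, 1.0 - h, m - 1)              # interior grid points
        A = (2.0 * np.eye(m - 1)
             - np.eye(m - 1, k=1)
             - np.eye(m - 1, k=-1)) / h**2              # discrete -d^2/dx^2
        K = np.exp(-(x[:, None] - x[None, :])**2)
        B = h * K                                       # trapezium weights at interior nodes
        return A, B

With A, B = sample_problem_1(30) the hypotheses (2) hold (A is a Stieltjes matrix, so A^{-1} > 0, and B > 0), and feeding the pair to the critical_nu sketch of Section 2 should give a finite ν* in line with the values of Table 1, up to the discretization details assumed here.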

Now we consider the matrix C(ν) = A + ν(B − B^T) obtained by assuming in (5) that p = 1, q = 0, and K(x, x') = x − x' (sample problem 2). Here the matrix B is the nonnegative contribution due to K(x, x') for x > x'. The results are shown in Table 2. Values of ν greater than ν*, for which some computed entries of Z(ν) and of C^{-1}(ν) are less than zero, are reported in the rows * and **, respectively. We note that [0, ν*) is a sufficiently good approximation of the interval in which C^{-1}(ν) ≥ 0.

Table 2
Values of ν_k and of s_k for sample problem 2 for different values of the mesh spacing 1/m

        m = 30                  m = 40                  m = 50
 k      ν_k        s_k          ν_k        s_k          ν_k        s_k
 0      0                       0                       0
 1      0.411695                0.299168                0.234883
 2      0.471457   0.2874       0.341517   0.3880       0.267634   0.4885
 3      0.472521   0.2899       0.342236   0.3912       0.268175   0.4924
 4      0.472521   0.2881       0.342236   0.3884       0.268175   0.5178
 5      0.472521                0.342236                0.268175
 *      0.48                    0.35                    0.27
 **     0.49                    0.36                    0.28

Acknowledgments

I wish to thank Professor Walter Gautschi and a referee for their criticism and suggestions.

Bibliography

1. G. Buffoni and A. Galati, Matrici essenzialmente positive con inversa positiva, Boll. Un. Mat. Ital. (4) 10 (1974), 98-103.
2. K. Fan, Topological proofs for certain theorems on matrices with non-negative elements, Monatsh. Math. 62 (1958), 219-237.
3. J. R. Rice, Numerical methods, software and analysis, McGraw-Hill, 1983.
4. Y. M. Svirezhev and D. O. Logofet, Stability of biological communities, MIR, Moscow, 1983.
5. R. S. Varga, Matrix iterative analysis, Prentice-Hall, Englewood Cliffs, N.J., 1965.

ENEA CREA S. Teresa, C. P. 316, I-19100 La Spezia, Italy