TECHNICAL NOTES AND SHORT PAPERS

On Optimal Alternating Direction Parameters for Singular Matrices

By R. B. Kellogg and J. Spanier

1. Introduction. One of the most widely used iterative methods for approximating solutions of partial difference equations in two space dimensions is the method of alternating directions, considered originally by Peaceman and Rachford [6]. Application of this method involves the selection of certain auxiliary numbers, called acceleration parameters, which are chosen to enhance the convergence of the iterative process. Extremely favorable convergence rates may be obtained for this method, but only when the acceleration parameters are properly chosen.

The matrix problem to be solved may be written in the form

(1)   $(H + V)x = y_0$,

where $H$ and $V$ are symmetric, positive semi-definite matrices, so that $H + V$ is also symmetric and positive semi-definite. A necessary and sufficient condition for a solution of (1) to exist is that $y_0$ be orthogonal to the null space of $H + V$. Of course, if this null space is trivial, $H + V$ is nonsingular and a unique solution $x$ exists, but in some applications this is not the case. For example, solving Laplace's equation with Neumann boundary conditions by finite difference methods leads to an equation (1) in which $H + V$ is singular.

The alternating direction method applied to (1) may be defined by

(2)   $x_{k+1/2} = (H + a_{k+1}D)^{-1}[(a_{k+1}D - V)x_k + y_0]$,
      $x_{k+1} = (V + a_{k+1}D)^{-1}[(a_{k+1}D - H)x_{k+1/2} + y_0]$,

where $x_0$ is an arbitrary initial vector, $D$ is a symmetric positive definite normalizing matrix, usually defined as in [3], [8], or [12], and the $a_k$ are the acceleration parameters. It has been shown [4] that there exist parameter sequences for which the iterates $x_k$ converge to a solution of (1), even when $H + V$ is singular. However, all algorithms known to the authors for generating acceleration parameters which result in rapid convergence of the iterates $x_k$ degenerate, when applied to singular problems, to the absurd choice $a_k = 0$. This paper shows how useful acceleration parameters may be chosen for singular problems.

2. Preliminaries. We shall make use of the notation and some of the results of [4] to set the stage for our analysis. We begin with the following assumptions:

(a) The vector $y_0$ is orthogonal to $\eta(H + V)$, the null space of $H + V$.

(b) The sequence $\{a_k\}$ is monotone and cyclic; i.e., $a_1 \ge a_2 \ge \cdots \ge a_t > 0$, $a_k = a_{k+t}$. The integer $t$ will be referred to as the cycle length.

Received January 11, 1965.
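As a concrete illustration of (2), here is a minimal numerical sketch in Python. It takes $D = I$ and uses an assumed example, a one-dimensional Neumann difference operator split as $H + V$ with both halves singular, so that $H + V$ is singular as in the Laplace/Neumann discussion above. The matrices, right-hand side, and parameter cycle are illustrative choices, not taken from the paper.

```python
# A minimal sketch of the alternating direction iteration (2), with the
# normalizing matrix D taken to be the identity. H, V, y0, and the
# parameter cycle are illustrative assumptions, not from the paper.
import numpy as np

def adi_cycle(H, V, y0, x, params):
    """Run one cycle a_1 >= ... >= a_t > 0 of the iteration (2), with D = I."""
    I = np.eye(H.shape[0])
    for a in params:
        # Half-step: (H + aD) x_{k+1/2} = (aD - V) x_k + y0
        x_half = np.linalg.solve(H + a * I, (a * I - V) @ x + y0)
        # Full step: (V + aD) x_{k+1} = (aD - H) x_{k+1/2} + y0
        x = np.linalg.solve(V + a * I, (a * I - H) @ x_half + y0)
    return x

# Example: a 1-D Neumann difference operator, split so that H, V, and
# H + V are all singular, symmetric, and positive semi-definite.
n = 8
T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
T[0, 0] = T[-1, -1] = 1.0          # Neumann ends: null space = constant vectors
H = V = 0.5 * T
y0 = np.sin(np.arange(n))
y0 -= y0.mean()                     # assumption (a): y0 orthogonal to the null space
x = np.zeros(n)
for _ in range(20):                 # repeat complete cycles, per assumption (b)
    x = adi_cycle(H, V, y0, x, params=[1.0, 0.3, 0.1])
print(np.linalg.norm((H + V) @ x - y0))   # residual tends to zero
```

Because $a > 0$ and $H$, $V$ are positive semi-definite, each shifted matrix $H + aI$, $V + aI$ is nonsingular, so both half-steps are well defined even though $H + V$ itself is singular.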

Now define matrices $T_k$ and $Z$ by

$T_k = (V + a_k D)^{-1}(H - a_k D)(H + a_k D)^{-1}(V - a_k D)$,
$Z = T_t T_{t-1} \cdots T_1$,

so that $Z$ is the iteration matrix after one cycle of $t$ iterations. As in [4], let

$H_1 = D^{-1/2} H D^{-1/2}$, $\quad V_1 = D^{-1/2} V D^{-1/2}$,
$T_k' = D^{1/2} T_k D^{-1/2} = (V_1 + a_k I)^{-1}(H_1 - a_k I)(H_1 + a_k I)^{-1}(V_1 - a_k I)$,
$Z_1 = D^{1/2} Z D^{-1/2} = T_t' T_{t-1}' \cdots T_1'$,

and let $E$ denote the orthogonal projection on $\eta(H_1 + V_1)$.

The following results are established in [4] and will be useful in our analysis:

(T1) The matrix $Z_1$ coincides with $E$ on $\eta(H_1 + V_1)$.

(T2) Let $\alpha, \beta$ be any positive numbers with $\alpha \le \beta$. Then there exists a positive integer $t_0$ such that any parameter sequence $S$ of the form $S = \{a_1, \ldots, a_{t_0}\}$, $\beta \ge a_1 \ge \cdots \ge a_{t_0} \ge \alpha$, $a_k = a_{k+t_0}$, will cause the powers $Z_1^n$ to converge to the projection $E$ and the powers $Z^n$ to converge to $D^{-1/2} E D^{1/2}$, a projection on $\eta(H + V)$.

(T3) With the same assumptions as in (T2), the sequence $\{x_k\}$ converges to a solution of (1).

From this point on we assume that $t$ is fixed and that $k$ is a multiple of $t$, so that our discussion is aimed at methods in which cycles of $t$ iterations are always completed. If $x$ is any solution of (1), since $x$ is transformed into itself by (2), we have

(3)   $x - x_k = Z^{k/t}(x - x_0) = Z^n(x - x_0)$,   $n = k/t$ an integer.

It is easily seen from (3) and (T2) that, when the $x_k$ converge to a solution of (1), they converge to that solution $w$ whose projection on $\eta(H + V)$ satisfies

$D^{-1/2} E D^{1/2} w = D^{-1/2} E D^{1/2} x_0$.

Setting $x = w$ in (3), and noting that $D^{-1/2} E D^{1/2}(w - x_0) = 0$ by the relation just displayed, we have

(4)   $w - x_k = Z^n(w - x_0) = (Z^n - D^{-1/2} E D^{1/2})(w - x_0) = (Z - D^{-1/2} E D^{1/2})^n (w - x_0)$.

It is common practice (see, e.g., [9, p. 67]) in dealing with iterative methods such as (2) to define the rate of convergence in terms of the spectral radius of the iteration matrix. Thus, if the spectral radius is used as a criterion, equation (4) reveals that we should seek to minimize

$\rho(Z - D^{-1/2} E D^{1/2}) = \sup |\lambda(Z - D^{-1/2} E D^{1/2})|$,

the supremum being taken over all eigenvalues of $Z - D^{-1/2} E D^{1/2}$. But

$Z - D^{-1/2} E D^{1/2} = D^{-1/2}(Z_1 - E) D^{1/2}$,
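The statements (T1) and (T2) are easy to check numerically. The sketch below, under assumed test matrices with $D = I$ (so $H_1 = H$ and $V_1 = V$), builds $Z_1$ from the factors $T_k'$, forms $E$ from an eigendecomposition of $H_1 + V_1$, and verifies that $\rho(Z_1 - E) < 1$ and $Z_1^n \to E$; the matrices and parameters are illustrative assumptions, not from the paper.

```python
# A numerical check of (T1)-(T2): build Z_1 from the factors T_k', form E
# as the orthogonal projection on eta(H_1 + V_1), and watch Z_1^n approach E.
# The test pair below is an illustrative assumption (D = I).
import numpy as np

def Z1_matrix(H1, V1, params):
    """One-cycle iteration matrix Z_1 = T_t' ... T_1'."""
    I = np.eye(H1.shape[0])
    Z1 = I
    for a in params:                 # apply T_1' first, then T_2', ...
        Tk = np.linalg.solve(
            V1 + a * I,
            (H1 - a * I) @ np.linalg.solve(H1 + a * I, V1 - a * I))
        Z1 = Tk @ Z1
    return Z1

# Singular commuting pair: H_1 = V_1 = half the 1-D Neumann Laplacian,
# so eta(H_1 + V_1) is the line of constant vectors.
n = 6
T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
T[0, 0] = T[-1, -1] = 1.0
H1 = V1 = 0.5 * T

# E: orthogonal projection on the null space of H_1 + V_1.
vals, vecs = np.linalg.eigh(H1 + V1)
W = vecs[:, vals < 1e-12]
E = W @ W.T

Z1 = Z1_matrix(H1, V1, params=[1.0, 0.3, 0.1])
print(np.linalg.norm(np.linalg.matrix_power(Z1, 40) - E))  # nearly 0: Z_1^n -> E
print(max(abs(np.linalg.eigvals(Z1 - E))))                 # rho(Z_1 - E) < 1
```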

so that it suffices to minimize $\rho(Z_1 - E)$. This minimization will be performed in the next section for a restricted class of pairs of matrices $(H_1, V_1)$.

3. Commutative Analysis. In this section we will show how to select acceleration parameters which minimize $\rho(Z_1 - E)$, assuming that $H_1 V_1 = V_1 H_1$. We will also show how to apply these acceleration parameters in the noncommutative case. The assumption of commutativity, while necessary for our analysis, is rarely satisfied in practice. Nevertheless, parameters based on the commutative assumption have worked well in a variety of nonsingular problems [1]. Our own experience with singular problems is along the same lines. A variety of heat conduction problems have been solved using the program HOT [7] in which both matrices $H$ and $V$ were singular. In some of these problems, $H + V$ was also singular. Observed convergence rates, using the parameters suggested by our theorem, were substantially the same as those observed in nonsingular problems.

Let $\alpha$ and $\beta$ be given with $0 < \alpha < \beta$. Let $\lambda$ and $\mu$ be real variables and let $R$ be the point set in the $(\lambda, \mu)$ plane defined by

$R = \{(0, 0)\} \cup \{(\lambda, 0) : \alpha \le \lambda \le \beta\} \cup \{(0, \mu) : \alpha \le \mu \le \beta\} \cup \{(\lambda, \mu) : \alpha \le \lambda, \mu \le \beta\}$.

The set $R$ is shown in Figure 1.

[Figure 1: the point set $R$ in the $(\lambda, \mu)$ plane, consisting of the origin, the two segments $\alpha \le \lambda \le \beta$ and $\alpha \le \mu \le \beta$ on the coordinate axes, and the square $\alpha \le \lambda, \mu \le \beta$.]

Let $\mathcal{C}$ be the collection of all pairs $(H_1, V_1)$ of symmetric $N \times N$ matrices which commute, $H_1 V_1 = V_1 H_1$, and which satisfy the condition that if $\lambda$ and $\mu$ are eigenvalues of $H_1$ and $V_1$, respectively, having the same eigenvector, then $(\lambda, \mu) \in R$.

Consider a set of positive acceleration parameters $a_1, \ldots, a_t$. As has been seen, the spectral radius $\rho(Z_1 - E)$ is a measure of the rate of convergence of the iterations (2) using these acceleration parameters. Therefore, the function

$r(a_1, \ldots, a_t) = \sup \{\rho(Z_1 - E) : (H_1, V_1) \in \mathcal{C}\}$

is a measure of the least favorable rate of convergence that would be encountered in using the parameters $a_1, \ldots, a_t$ on pairs of matrices drawn from the class $\mathcal{C}$. The following theorem gives a set of parameters that makes this least favorable convergence rate as good as possible. To state the theorem, we introduce a function $f(a_1, \ldots, a_t)$ defined by

(5)   $f(a_1, \ldots, a_t) = \max_{\alpha \le s \le \beta} \prod_{i=1}^{t} \left| \frac{a_i - s}{a_i + s} \right|$.

Let $\bar{a}_1, \ldots, \bar{a}_t$ be the unique (in descending order) set of parameters which minimize $f(a_1, \ldots, a_t)$. The existence and uniqueness of this minimizing set of parameters is proved in [10].

Theorem. If $a_1, \ldots, a_t$ is a nonnegative set of parameters, then

$r(\bar{a}_1, \ldots, \bar{a}_t) \le r(a_1, \ldots, a_t) = f(a_1, \ldots, a_t)$.

Proof. Let $(H_1, V_1) \in \mathcal{C}$ and let $a_1, \ldots, a_t$ be a nonnegative set of parameters. Since $H_1$ and $V_1$ are symmetric and commute, there is an orthogonal basis of common eigenvectors of $H_1$ and $V_1$. If $w$ is an element of the basis, and if $H_1 w = \lambda w$, $V_1 w = \mu w$, then $w$ is an eigenvector of $Z_1$ with eigenvalue

$\rho = \prod_{i=1}^{t} \frac{(a_i - \lambda)(a_i - \mu)}{(a_i + \lambda)(a_i + \mu)}$.

$E$ is the orthogonal projection on the subspace generated by the basis vectors $w$ such that $\lambda + \mu = 0$. If $\lambda + \mu = 0$, then $\lambda = \mu = 0$ and $\rho = 1$. Hence, $w$ is an eigenvector of $Z_1 - E$ with eigenvalue $0$ if $\lambda = \mu = 0$ and with eigenvalue $\rho$ otherwise. Since $(\lambda, \mu) \in R$, it is easily seen that $|\rho| \le f(a_1, \ldots, a_t)$ when $\lambda + \mu > 0$. Hence, $\rho(Z_1 - E) \le f(a_1, \ldots, a_t)$, and, therefore,

(6)   $r(a_1, \ldots, a_t) \le f(a_1, \ldots, a_t)$.

By compactness, the maximum in (5) must be attained for some $s \in [\alpha, \beta]$. Let $H_1 = sI$, $V_1 = 0$. Then $(H_1, V_1) \in \mathcal{C}$ and $\rho(Z_1 - E) = f(a_1, \ldots, a_t)$. Hence, using (6), $r(a_1, \ldots, a_t) = f(a_1, \ldots, a_t)$. The rest of the theorem follows from the definition of $\bar{a}_1, \ldots, \bar{a}_t$.

The parameters $\bar{a}_1, \ldots, \bar{a}_t$ may be considered optimal parameters for matrix pairs in the class $\mathcal{C}$. These parameters are determined by the numbers $\alpha$, $\beta$, and $t$. There is in the literature a variety of methods for computing them [10], [11] or approximating them [2], [12]. If this theorem is compared with equation (7.42) of [9], it is seen that, using the parameters $\bar{a}_1, \ldots, \bar{a}_t$, a nonsingular problem may be expected to converge twice as fast as a singular problem with the same parameter bounds $\alpha$ and $\beta$.

In the noncommutative case, $H_1 V_1 \ne V_1 H_1$, nothing is known about the nature of an optimal set of parameters. Nevertheless one can, with proper interpretation, apply the optimal commutative parameters $\bar{a}_1, \ldots, \bar{a}_t$ in the noncommutative case. It suffices to observe that $\alpha$ and $\beta$ are, respectively, lower and upper bounds for the least positive and the largest eigenvalues of the matrices $H_1$ and $V_1$, $(H_1, V_1) \in \mathcal{C}$. If $H_1 V_1 \ne V_1 H_1$ we may still construct such lower and upper bounds $\alpha$ and $\beta$ and from these obtain parameters $\bar{a}_1, \ldots, \bar{a}_t$ for use in (2). The upper bound
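To make the proof's extremal construction concrete, the sketch below evaluates $f(a_1, \ldots, a_t)$ of (5) on a grid in $[\alpha, \beta]$ and checks that the pair $H_1 = sI$, $V_1 = 0$, at the maximizing $s$, attains $\rho(Z_1 - E) = f(a_1, \ldots, a_t)$. The bounds and parameter values are illustrative assumptions; in particular, the parameters are a rough geometric spread, not the optimal set $\bar{a}_1, \ldots, \bar{a}_t$ of [10].

```python
# A sketch evaluating the minimax function f of (5) on a grid, and
# checking the equality rho(Z_1 - E) = f for the extremal pair
# H_1 = s I, V_1 = 0 used in the proof. Bounds, grid resolution, and
# parameter values are illustrative assumptions.
import numpy as np

def f(params, alpha, beta, ngrid=100001):
    """Maximize prod_i |(a_i - s)/(a_i + s)| over a grid of s in [alpha, beta]."""
    s = np.linspace(alpha, beta, ngrid)
    prod = np.ones_like(s)
    for a in params:
        prod *= np.abs((a - s) / (a + s))
    i = np.argmax(prod)
    return s[i], prod[i]

alpha, beta = 0.1, 4.0
params = [2.0, 0.63, 0.2]       # rough geometric spread, NOT the optimal set
s_star, fval = f(params, alpha, beta)

# For H_1 = s* I, V_1 = 0: each factor T_k' reduces to the scalar
# (a_k - s*)/(a_k + s*) times I, and E = 0 since H_1 + V_1 is nonsingular,
# so rho(Z_1 - E) is just the cycle product in absolute value.
rho = abs(np.prod([(a - s_star) / (a + s_star) for a in params]))
print(fval, rho)                # the two values agree (up to grid resolution)
```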

$\beta$ may be obtained from Gershgorin's theorem. A method of obtaining lower bounds for the least positive eigenvalue of a certain type of matrix is discussed in [5].

Bettis Atomic Power Laboratory
Westinghouse Electric Corporation
West Mifflin, Pennsylvania

1. G. Birkhoff, R. S. Varga & D. Young, "Alternating direction implicit methods," Advances in Computers, Vol. 3, Academic Press, New York, 1962.
2. C. de Boor & J. R. Rice, "Chebyshev approximation by $a \prod (r - s_i)/(r + s_i)$ and application to ADI iteration," J. Soc. Indust. Appl. Math., v. 11, 1963, pp. 159-169. MR 28 #466.
3. J. Douglas, "Alternating direction methods for three space variables," Numer. Math., v. 4, 1962, pp. 41-63. MR 24 #B2122.
4. J. Douglas & C. Pearcy, "On convergence of alternating direction procedures in the presence of singular operators," Numer. Math., v. 5, 1963, pp. 175-184. MR 27 #4384.
5. W. H. Guilinger & R. B. Kellogg, "Eigenvalue lower bounds," submitted to the Bettis Technical Review.
6. D. W. Peaceman & H. H. Rachford, Jr., "The numerical solution of parabolic and elliptic differential equations," J. Soc. Indust. Appl. Math., v. 3, 1955, pp. 28-41. MR 17, 196.
7. R. B. Smith & J. Spanier, HOT-1: A Two Dimensional Steady-State Heat Conduction Program for the Philco-2000, WAPD-TM-465, Bettis Atomic Power Laboratory, Pittsburgh, Pa., 1964.
8. J. Spanier & W. H. Guilinger, "Matrix conditioning for alternating direction methods," Bettis Technical Review, WAPD-BT-29, 1963, pp. 69-79.
9. R. S. Varga, Matrix Iterative Analysis, Prentice-Hall, Englewood Cliffs, N. J., 1962. MR 28 #1725.
10. E. L. Wachspress, "Optimum alternating-direction-implicit iteration parameters for a model problem," J. Soc. Indust. Appl. Math., v. 10, 1962, pp. 339-350. MR 27 #921.
11. E. L. Wachspress, "Extended application of alternating direction implicit iteration model problem theory," J. Soc. Indust. Appl. Math., v. 11, 1963, pp. 994-1016.
12. E. L. Wachspress & G. J. Habetler, "An alternating direction implicit iteration technique," J. Soc. Indust. Appl. Math., v. 8, 1960, pp. 403-424. MR 22 #5132.

An Iterative Method for Computing the Generalized Inverse of an Arbitrary Matrix

By Adi Ben-Israel

Abstract. The iterative process $X_{n+1} = X_n(2I - AX_n)$, for computing $A^{-1}$, is generalized to obtain the generalized inverse.

An iterative method for inverting a matrix, due to Schulz [1], is based on the convergence of the sequence of matrices defined recursively by

(1)   $X_{n+1} = X_n(2I - AX_n)$   $(n = 0, 1, \ldots)$

to the inverse $A^{-1}$ of $A$, whenever $X_0$ approximates $A^{-1}$. In this note the process (1) is generalized to yield a sequence of matrices converging to $A^+$, the generalized inverse of $A$ [2].

Let $A$ denote an $m \times n$ complex matrix, $A^*$ its conjugate transpose, $P_{R(A)}$ the perpendicular projection of $E^m$ on the range of $A$, $P_{R(A^*)}$ the perpendicular projection of $E^n$ on the range of $A^*$, and $A^+$ the generalized inverse of $A$.

Theorem. The sequence of matrices defined by

(2)   $X_{n+1} = X_n(2P_{R(A)} - AX_n)$   $(n = 0, 1, \ldots)$,

Received November 9, 1964.
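The iteration (2) is easy to try numerically. The sketch below builds $P_{R(A)}$ from a singular value decomposition and runs (2) on a rank-deficient real matrix; the starting value $X_0 = cA^*$ with $c = 1/\sigma_{\max}^2$ is a standard Schulz-type choice assumed for the example, not prescribed by the text shown above.

```python
# A minimal sketch of the iteration (2) for the generalized inverse A+.
# The test matrix and the starting value X_0 = c A* (with c = 1/sigma_max^2)
# are illustrative assumptions; for a real A, the conjugate transpose A*
# is just the transpose. P_{R(A)} is built here from the SVD.
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])                  # rank 1, so A has no ordinary inverse

U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-12))                  # numerical rank
P = U[:, :r] @ U[:, :r].T                   # P_{R(A)}: projection on the range of A

X = A.T / (s[0] ** 2)                       # X_0 = c A*, c = 1/sigma_max^2
for _ in range(60):
    X = X @ (2.0 * P - A @ X)               # X_{n+1} = X_n (2 P_{R(A)} - A X_n)

print(np.allclose(X, np.linalg.pinv(A)))    # True: X has converged to A+
```

In the SVD basis the iteration acts on each positive singular value $\sigma$ as the scalar Schulz map $x \mapsto x(2 - \sigma x)$, which converges quadratically to $1/\sigma$ from this starting value, while the components outside the range of $A^*$ stay zero; this is why the limit is $A^+$ rather than some other generalized inverse.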