
Linear Algebra and its Applications 431 (2009) 197–210

Contents lists available at ScienceDirect: Linear Algebra and its Applications. Journal homepage: www.elsevier.com/locate/laa

Nonstationary Extrapolated Modulus Algorithms for the solution of the Linear Complementarity Problem

A. Hadjidimos (a,*), M. Tzoumas (b)
a Department of Computer and Communication Engineering, University of Thessaly, Iasonos Street, GR-383 33 Volos, Greece
b Department of Mathematics, University of Ioannina, GR-451 10 Ioannina, Greece

ARTICLE INFO
Article history: Received 5 August 2008; accepted February 2009; available online 7 March 2009. Submitted by V. Mehrmann.
AMS classification: Primary 65F10
Keywords: LCP; P-matrices; Real symmetric positive definite matrices; Iterative schemes; Extrapolation; (Block) Modulus Algorithm

ABSTRACT
The Linear Complementarity Problem (LCP) has many applications as, e.g., in the solution of Linear and Convex Quadratic Programming, in Free Boundary Value problems of Fluid Mechanics, etc. In the present work we assume that the matrix coefficient M in R^{n,n} of the LCP is symmetric positive definite and we introduce the (optimal) nonstationary extrapolation to improve the convergence rates of the well-known Modulus Algorithm and Block Modulus Algorithm for its solution. Two illustrative numerical examples show that the (Optimal) Nonstationary Extrapolated Block Modulus Algorithm is far better than all the previous similar Algorithms. (c) 2009 Elsevier Inc. All rights reserved.

* Corresponding author. E-mail addresses: hadjidim@inf.uth.gr (A. Hadjidimos), mtzoumas@cc.uoi.gr (M. Tzoumas).

1. Introduction and preliminaries

The Linear Complementarity Problem (LCP) is met in many practical applications, for example, in linear and convex quadratic programming, in a problem of the theory of games [14,6], in problems in fluid mechanics [8], in problems in economics [19,23], etc. For more applications see, e.g., [6,7,15,17].

To state the LCP we need some notation. For a matrix A in R^{m,n} we write A >= 0 (A > 0) if each element of A is nonnegative (positive). The inequality A <= 0 (A < 0) is defined in an obvious way. Also, A >= B (A > B) means A - B >= 0 (A - B > 0). Finally, |A| denotes the matrix whose elements are the moduli of the corresponding ones of A.

The LCP is defined as follows (see, e.g., [6,7,15] or [17]):

Problem: Determine x in R^n, if it exists, satisfying the following conditions
r := Mx + q >= 0,  x >= 0,  r^T x = 0,  with M in R^{n,n}, q in R^n (q not >= 0). (1.1)

Note: In (1.1) we set q not >= 0 since otherwise we have the trivial solution x = 0, r = q >= 0.

A sufficient and necessary condition for the LCP (1.1) to possess a unique solution, for all q in R^n, is that M is a P-matrix, that is, all its principal minors are positive. The corresponding proof seems to go back to Samelson et al. [20]. Subclasses of P-matrices are the real positive definite matrices, the M-matrices, the real H-matrices with positive diagonals, etc. In this work we focus on real symmetric positive definite matrices.

To solve (1.1) we consider iterative methods, the first of which is attributed to Cryer [8]. Since then many researchers have proposed other iterative methods, e.g., Mangasarian [15], Ahn [1] and Pang [18]. Recently, a growing interest has been shown in them (see, e.g., [4,2,3,13,26,9], etc.).

In the present work we are mainly concerned with the well-known Modulus Algorithm introduced by van Bokhoven [23] and extended by Kappel and Watson [12] to the Block Modulus Algorithm. In these Algorithms the LCP is transformed into a fixed-point problem, where a new unknown z is introduced so that
x = |z| + z  and  r = |z| - z, (1.2)
see, e.g., [17]. Then, using (1.2) and replacing x and r in (1.1), it is readily obtained that
z = f(z) := D|z| + b, (1.3)
where
z in R^n,  D = (I + M)^{-1}(I - M),  b = -(I + M)^{-1} q. (1.4)
Note that the iteration matrix D is nothing but the Cayley Transform of M [10] or [11].

2. Extrapolating the LCP

For the iterative solution of (1.3) the simplest iterative scheme is the following
z^{(m+1)} = D|z^{(m)}| + b,  m = 0, 1, 2, ...,  with any z^{(0)} in R^n. (2.1)
For the convergence of (2.1) to the (unique) solution of (1.3) there must hold ||D|| < 1, where ||.|| denotes the absolute matrix norm induced by an absolute vector norm ||.|| as follows: for a given A in R^{n,n}, ||A|| := sup_{y in R^n\{0}} ||Ay||/||y||. An absolute vector norm, in addition to the three well-known conditions for a vector norm, satisfies the following two:
(i) || |x| || = ||x|| for all x in R^n, and (ii) |x| <= |y| implies ||x|| <= ||y|| for all x, y in R^n. (2.2)
For the proof see [23] or [12] or Theorem 9.4 of [17]. Note that all vector norms defined by
||y||_p = (sum_{i=1}^n |y_i|^p)^{1/p},  p >= 1, (2.3)
also satisfy (2.2), with the most common ones being those for p = 1, 2, infinity.

Restricting to symmetric positive definite matrices M, the matrix D in (1.4) is (real) symmetric. Let lambda_i (> 0), i = 1(1)n, be the eigenvalues of M; then those of D are (1 - lambda_i)/(1 + lambda_i), i = 1(1)n. Consequently, the absolute spectral norm of D is
||D||_2 = rho(D) = max_{lambda_i in sigma(M)} |1 - lambda_i|/(1 + lambda_i) < 1,
and so scheme (2.1) always converges. Therefore z^{(m)} tends to the solution z of (1.3) as m tends to infinity, from which x and r are recovered using (1.2).

To accelerate the convergence of (2.1) we apply extrapolation to (1.1). So, we multiply through by omega (> 0), the extrapolation parameter, in which case (1.1) becomes
(omega r) := (omega M)x + (omega q) >= 0,  x >= 0,  (omega r)^T x = 0. (2.4)
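To make the modulus transformation (1.2)-(1.4) and the basic scheme (2.1) concrete, the following minimal NumPy sketch builds D and b, iterates z <- D|z| + b, and recovers x and r. It is not from the paper: the stopping rule, the tolerance and the small SPD test matrix are illustrative assumptions.

```python
import numpy as np

def modulus_iteration(M, q, max_iter=10_000, tol=1e-12):
    """Solve the LCP r = M x + q >= 0, x >= 0, r^T x = 0 for SPD M via the
    fixed-point scheme (2.1):
        z^(m+1) = D |z^(m)| + b,  D = (I+M)^{-1}(I-M),  b = -(I+M)^{-1} q,
    and recover x = |z| + z, r = |z| - z as in (1.2)."""
    n = M.shape[0]
    I = np.eye(n)
    D = np.linalg.solve(I + M, I - M)      # (I+M)^{-1}(I-M), the Cayley transform of M
    b = -np.linalg.solve(I + M, q)         # -(I+M)^{-1} q
    z = np.zeros(n)
    for _ in range(max_iter):
        z_new = D @ np.abs(z) + b
        if np.linalg.norm(z_new - z) <= tol * max(np.linalg.norm(z_new), 1.0):
            z = z_new
            break
        z = z_new
    x = np.abs(z) + z
    r = np.abs(z) - z
    return x, r

if __name__ == "__main__":
    # Illustrative SPD example (an assumption, not taken from the paper).
    M = np.array([[4.0, -1.0, 0.0],
                  [-1.0, 4.0, -1.0],
                  [0.0, -1.0, 4.0]])
    q = np.array([-1.0, 2.0, -3.0])
    x, r = modulus_iteration(M, q)
    print("x =", x, "r =", r, "x.r =", x @ r)   # expect x, r >= 0 and x.r ~ 0
```

Since rho(D) < 1 for any SPD M, the loop converges for every starting vector; the printed complementarity product x.r should be zero up to rounding.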

Due to the positivity of omega, relations (1.1) imply (2.4) and vice versa; also, the matrix properties of M are inherited by omega M, and omega q in R^n\{0} (omega q not >= 0). The extrapolated iterative scheme based on (2.1) is constructed from (2.4) in the same way as (2.1) is constructed from (1.3). Hence
z^{(m+1)} = D_omega |z^{(m)}| + b_omega,  with any z^{(0)} in R^n, (2.5)
where
D_omega = (I + omega M)^{-1}(I - omega M),  b_omega = -(I + omega M)^{-1} omega q, (2.6)
with D_omega being the Extrapolated Cayley Transform of M (see [11]). Obviously, iterative scheme (2.5) converges for any omega in (0, +infinity) because
||D_omega||_2 = rho(D_omega) = max_{lambda_i in sigma(M)} |1 - omega lambda_i|/(1 + omega lambda_i) < 1,  omega > 0. (2.7)
The problem of the minimization of rho(D_omega) in (2.7) was solved in a more general form in [11], from which we borrow the following:

Theorem 2.1 (Formulas (4.3) of [11]). Let lambda_min and lambda_max be the smallest and the largest eigenvalues of the real symmetric positive definite matrix M. Then, the optimal extrapolation parameter omega in (2.5) and the corresponding spectral radius of D_omega in (2.6) are given by
omega = 1/sqrt(lambda_min lambda_max),  rho(D_omega) = (sqrt(lambda_max) - sqrt(lambda_min))/(sqrt(lambda_max) + sqrt(lambda_min)). (2.8)

Corollary 2.1. Under the assumptions of Theorem 2.1, rho(D_omega) is a strictly increasing function of the spectral condition number kappa_2 := kappa_2(M) = lambda_max/lambda_min.

Proof. By dividing both terms of the fraction giving rho(D_omega) in (2.8) by sqrt(lambda_min) and differentiating with respect to (wrt) the ratio lambda_max/lambda_min, the conclusion immediately follows.

3. Nonstationary Extrapolated Block Modulus Algorithm (NSEBMA)

We begin this section with the discussion of the two Modulus Algorithms.

van Bokhoven's Modulus Algorithm (MA): The following lemma is taken from [12].

Lemma 3.1. Under the notation and the assumptions made so far, if we apply van Bokhoven's MA to scheme (2.1), with z^{(0)} = 0 in R^n, then after N iterations, where
N = ceil( ( ln((1 - rho(D))/(1 + rho(D))) - ln(1 + n) ) / ln(rho(D)) ), (3.1)
one component of z^{(N)} will become positive (negative), say that corresponding to the index l,
|z_l^{(N)}| = max_{i=1(1)n} |z_i^{(N)}|, (3.2)
and will remain positive (negative) thereafter.

Proof. For the proof see Theorem 3 of [12] and the note(s) immediately after it.

By Lemma 3.1 and (1.2), if z_l^{(N)} < 0, then x_l^{(N)} = 0. If z_l^{(N)} > 0, then x_l^{(N)} > 0, forcing r_l^{(N)} = 0. In the former case we delete the l-th equation of r = Mx + q and the l-th column of M. In the latter we do the

same after pivoting about m_{ll}. So, the new LCP is reduced in size by one. If we assign the subscript 1 to the original M, r, D, b, and 2 to the corresponding ones of the new LCP, we will find N_2 <= N_1, since for M_2, rho(D_2) < rho(D_1) in general (see Theorems 3.1, 4.2 and 4.4). Hence, the total number of iterations to solve our LCP will be N_1 + N_2 + ... + N_{n-1}, where
N_1 >= N_2 >= N_3 >= ... >= N_{n-2} >= N_{n-1}. (3.3)

Theorem 3.1. Under the assumptions of Lemma 3.1, N is an increasing function of rho(D).

Proof. Let N~ be the quantity in the ceiling function in (3.1), namely
N~ := ( ln((1 - rho)/(1 + rho)) - ln(1 + n) ) / ln(rho), (3.4)
with rho = rho(D) (< 1). Differentiating N~ wrt rho we obtain
dN~/drho = -(1/ln(rho)) ( 2/(1 - rho^2) + N~/rho ) > 0. (3.5)
Therefore N~ strictly increases and hence N is an increasing function of rho.

As is obvious, we can apply to van Bokhoven's MA a nonstationary extrapolation with omega being recalculated at the beginning of each cycle. If lambda_min lambda_max = 1, whence omega = 1 by (2.8), then rho(D_omega) = rho(D); otherwise rho(D_omega) < rho(D). Therefore, it is to be expected that the total number of iterations and the CPU time to solve the LCP at hand will be drastically reduced despite the recalculation of the omega_i's, i = 1(1)(n-1).

To realize how the Nonstationary Extrapolated Modulus Algorithm (NSEMA) is related to (1.1) we will express the process in matrix form. To simplify matters, assume that the index l of z^{(sum_{i=1}^p N_i)}, p = 1(1)(n-1), in (3.2) is found in the natural order (1, 2, 3, ..., n-1) and that none of the z_l^{(sum_{i=1}^p N_i)} is zero. (Note: If z_l^{(sum_{i=1}^p N_i)} = 0, p < n-1, then all the remaining components of x and r are zero.) Hence the NSEMA terminates after n-1 cycles.

Beginning the first cycle, (1.1) is multiplied through by omega_1 to obtain (2.4). In (2.4), r, M, q are multiplied by omega_1 while x remains unchanged. Note that the properties of omega_1 r, omega_1 M, omega_1 q do not differ from those of r, M, q. After the first cycle, if x_1^{(N_1)} = 0 then omega_1 r_1^{(N_1)} > 0. So, the first equation and the first column of omega_1 M are deleted. If x_1^{(N_1)} > 0, then omega_1 r_1^{(N_1)} = 0. Then, the pivoting follows, with pivot omega_1 m_{11}^{(1)}, where the upper index denotes cycle. (Notes: (i) All the multipliers in the pivoting process are those they would have been if no extrapolation had been applied. (ii) By Theorem 2.1 and Corollary 2.1, the ratios of the extreme eigenvalues of M and of omega_1 M, as well as those of the corresponding principal submatrices, remain unchanged.) Then, a deletion, such as before, follows. To return to the original LCP in (1.1) we can follow one of three alternatives:

(i) Divide all n equations, including the first one, by omega_1 to recover (1.1). Then, the first cycle of the NSEMA is completed and the second cycle follows. At the end of the n-1 cycles the actual values for x and r are obtained.

(ii) Begin the second cycle by multiplying the n-1 equations from the second to the last by omega_2/omega_1, and so on. In this alternative, setting
Omega = diag(omega_1, omega_2, omega_3, ..., omega_{n-1}, omega_{n-1}), (3.6)
the Algorithm we use solves the following Nonstationary Extrapolated LCP
(Omega r) = (Omega M)x + (Omega q) >= 0,  (Omega r)^T x = 0. (3.7)
Since x has remained unchanged, only r has to be premultiplied by Omega^{-1} to recover r.

(iii) Multiply the last n-1 equations by omega_2, noting by (2.8) that the present omega_2 differs from the previous one by the factor 1/omega_1, and go on with the second cycle. Setting
Omega = diag(omega_1, omega_1 omega_2, omega_1 omega_2 omega_3, ..., prod_{i=1}^{n-1} omega_i, prod_{i=1}^{n-1} omega_i), (3.8)

the Algorithm used solves the following Nonstationary Extrapolated LCP
(Omega r) = (Omega M)x + (Omega q) >= 0,  (Omega r)^T x = 0. (3.9)
Obviously, x remains unchanged and a premultiplication of Omega r by Omega^{-1} recovers r.

Two points have to be clarified. (i) From the second cycle onwards the matrices Omega M in (3.7) and (3.9) are not symmetric. This is true, but we should recall that the submatrix used in each cycle is a positive multiple of the original one. Therefore all the properties of the latter are inherited by the one used. (ii) In a real situation the ordering of the indices l in all three alternatives would not be the natural one and so the components of x appear in a permuted order. Let P be the corresponding permutation matrix. Then, the problem we solve, say in alternative (iii), is
(Omega Pr) = (Omega PMP^T)(Px) + (Omega Pq) >= 0,  (Omega Pr)^T (Px) = 0. (3.10)
Obviously, we have to keep track of the ordering of the indices l, as in the Gauss elimination. Then, x and r are recovered in an obvious way.

Kappel and Watson's Block Modulus Algorithm (BMA): Lemma 3.2 below is from [12].

Lemma 3.2. Under the notation and the assumptions made so far, if we apply Kappel and Watson's Block Modulus Algorithm (BMA) to iterative scheme (2.1), with z^{(0)} = 0 in R^n, then after N iterations, where N is given by (3.1), not only the absolutely largest component of z^{(N)} will preserve its sign thereafter, but also all other components of it satisfying
|z_l^{(N)}| >= T := (1/(2 sqrt(n))) ( 1/(1 + rho(D)) - rho^{N~}(D)/(1 - rho(D)) ) ||b||. (3.11)

Proof. For the proof see Theorem 4 of [12] and the notes following it.

In general, there may be more than one component of z^{(N)} that will allow us to determine the corresponding x_l^{(N)} and r_l^{(N)}. In such a case, more than one equation (and the corresponding columns of M) will be deleted and the next LCP will be drastically reduced in size. It is then expected that Kappel and Watson's Algorithm will produce the solution sought in fewer iterations in each cycle, and maybe in fewer cycles, than that of van Bokhoven's. In what follows we state and prove a theorem which seems to be a negative result.

Theorem 3.2. Under the assumptions of Lemma 3.2, T strictly decreases with rho(D) increasing.

Proof. Since 1/(2 sqrt(n)) and ||b|| are positive constants, it is obvious that dT/drho and dT~/drho, with
T~ := 1/(1 + rho) - rho^{N~}/(1 - rho) (3.12)
and rho = rho(D), are of the same sign. Differentiating we have
dT~/drho = -1/(1 + rho)^2 - ( (d rho^{N~}/drho)(1 - rho) + rho^{N~} ) / (1 - rho)^2. (3.13)

To find d rho^{N~}/drho, we put y = rho^{N~}, take logarithms, and differentiate wrt rho to obtain
(1/y) dy/drho = (dN~/drho) ln(rho) + N~/rho. (3.14)
Substituting dN~/drho and N~, from (3.5) and (3.4), respectively, as well as y = rho^{N~} into (3.14), we obtain after some simple manipulations that d rho^{N~}/drho = -2 rho^{N~}/(1 - rho^2). Substituting the last expression into (3.13) and using (3.12) we finally obtain that
dT~/drho = rho^{N~}/((1 - rho)(1 + rho)) - 1/(1 + rho)^2 = -T~/(1 + rho) < 0. (3.15)
Consequently, T~ and T are strictly decreasing functions of rho.

Remark 3.1. The above surprising result states that rho should increase rather than decrease in order to get a smaller T and so increase the possibility of having more than one component of z^{(N)} satisfying (3.11). However, we should bear in mind that the new feature of the BMA is the exploitation of the fact that |z_l^{(N)}| >= T may be satisfied by more than one index l. In corroboration of the above remark it should be mentioned that of the plethora of examples we have run, in none of them has the simple MA beaten the BMA. Also, a partial answer as to what actually happens is given theoretically by the following statement.

Theorem 3.3. As rho = rho(D) decreases in the interval (0, 1), the number N~ in (3.4) decreases faster than T~ in (3.12) increases. More specifically,
d(N~ T~)/drho > 0. (3.16)

Proof. Considering the derivative in (3.16) and using (3.5) and (3.15) we successively obtain
d(N~ T~)/drho = (dN~/drho) T~ + N~ (dT~/drho)
= -(1/ln(rho)) ( 2/(1 - rho^2) + N~/rho ) T~ - N~ T~/(1 + rho)
= ( T~/((1 + rho)(-ln(rho))) ) ( 2/(1 - rho) + N~ (1 + rho + rho ln(rho))/rho ). (3.17)
For the coefficient of N~ in the second term in the brackets above, it is found that
d( (1 + rho + rho ln(rho))/rho )/drho = (rho - 1)/rho^2 < 0,
and so
inf_{rho in (0,1)} (1 + rho + rho ln(rho))/rho = ( (1 + rho + rho ln(rho))/rho )|_{rho=1} = 2,
meaning that the coefficient in question is always positive. Hence the right side of the equalities in (3.17) is positive, proving our claim in (3.16).

It is realized that the nonstationary extrapolation, with the three alternatives for the MA, can also be applied to the BMA. Then, one should expect to obtain the solution in fewer iterations than those required for the simple BMA. So, the Nonstationary Extrapolated Block Modulus Algorithm (NSEBMA) is expected to give optimal results in terms of iterations and CPU time for a specific LCP. It is understood that one has to deal with blocks instead of with points. For example, let p (<= n) be the total number of cycles required by the NSEBMA, let n_i, with sum_{i=1}^p n_i = n, be the numbers of components of the blocks, and let omega_i, i = 1(1)p, be the optimal extrapolation parameters. Then, the analogue of the extrapolation matrix in (3.8) and that of the Nonstationary Extrapolated LCP in (3.10), which is solved, are
Omega^{(b)} = diag(omega_1 I_{n_1}, omega_1 omega_2 I_{n_2}, omega_1 omega_2 omega_3 I_{n_3}, ..., prod_{i=1}^{p-1} omega_i I_{n_{p-1}}, prod_{i=1}^{p} omega_i I_{n_p}),
(Omega^{(b)} Pr) = (Omega^{(b)} PMP^T)(Px) + (Omega^{(b)} Pq) >= 0,  (Omega^{(b)} Pr)^T (Px) = 0. (3.18)
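The sketch below is an illustrative reading of Section 3, not the authors' implementation. It shows the ingredients of one cycle: the optimal omega of (2.8) is recomputed for the current submatrix (here simply from eigvalsh, an assumption for clarity), the extrapolated scheme (2.5)-(2.6) is run for the N iterations of (3.1), the index l of (3.2) is located, and the LCP is reduced either by deletion (x_l = 0) or by pivoting about m_ll (r_l = 0). The rescaling bookkeeping of alternatives (ii)-(iii) is omitted, so the sketch effectively follows alternative (i).

```python
import numpy as np

def one_nsema_cycle(M, q):
    """One cycle of the Nonstationary Extrapolated Modulus Algorithm
    (illustrative sketch of Section 3, not the authors' code).
    Returns the resolved index l, the kind of reduction, and the reduced (M, q)."""
    n = M.shape[0]
    lam = np.linalg.eigvalsh(M)                        # spectrum of the current SPD matrix
    lo, hi = lam[0], lam[-1]
    omega = 1.0 / np.sqrt(lo * hi)                     # optimal parameter, (2.8)
    rho = (np.sqrt(hi) - np.sqrt(lo)) / (np.sqrt(hi) + np.sqrt(lo))   # rho(D_omega), (2.8)
    I = np.eye(n)
    D = np.linalg.solve(I + omega * M, I - omega * M)  # D_omega, (2.6)
    b = -np.linalg.solve(I + omega * M, omega * q)     # b_omega, (2.6)
    # iterations after which the sign of the largest component of z settles, cf. (3.1);
    # guard the degenerate case rho -> 0
    if rho > 0.0:
        N = int(np.ceil((np.log((1 - rho) / (1 + rho)) - np.log(1 + n)) / np.log(rho)))
        N = max(N, 1)
    else:
        N = 1
    z = np.zeros(n)
    for _ in range(N):
        z = D @ np.abs(z) + b                          # extrapolated scheme (2.5)
    l = int(np.argmax(np.abs(z)))                      # index of (3.2)
    keep = [i for i in range(n) if i != l]
    y = M[keep, l]
    if z[l] < 0:                                       # x_l = 0: delete equation and column l
        return l, "x_l = 0", M[np.ix_(keep, keep)], q[keep]
    # r_l = 0: pivot about m_ll (Schur complement, cf. (4.6)-(4.7)), then delete
    M_red = M[np.ix_(keep, keep)] - np.outer(y, y) / M[l, l]
    q_red = q[keep] - (q[l] / M[l, l]) * y
    return l, "r_l = 0", M_red, q_red
```

Repeating the cycle n-1 times on the successively reduced problems fixes, one at a time, which components of x and of r vanish; the remaining components are then obtained from the final linear system, as described in the text.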

4. Further theoretical background

In this section we prove a number of statements that apply to either of the Nonstationary Extrapolated Modulus Algorithms. Bearing in mind the two Notes in the discussion preceding the point where the three alternatives for the NSEMA were presented, our analysis can put aside the extrapolation parameters omega_i.

First we investigate the case of the NSEMA and then the results obtained are generalized to cover the NSEBMA. Note that going from one cycle of iterations, say the very first one, to the next of the MA we do fewer operations per iteration due to the reduced size of the new LCP. Besides, the extrapolation applied to the new LCP will be faster than that applied to the old problem. To prove this, in view of Theorem 2.1 and Corollary 2.1, we have to compare the ratios of the largest to the smallest eigenvalue of the coefficient matrices in the two LCPs. To make such a comparison we distinguish two cases depending on the sign of z_l^{(N)} in (3.2). If x_l^{(N)} is to be zero, then the l-th equation of the LCP and the l-th column of M are deleted. If r_l^{(N)} is to be zero, a Gauss elimination takes place with pivot m_{ll} before the LCP is reduced in size by one as before. The following statements describe what happens in each case.

Theorem 4.1. Let M in R^{n,n} be symmetric and positive definite. The submatrix M_1 obtained by deleting the l-th row and column of M is also symmetric and positive definite.

Proof. It is well known that any principal submatrix of a real symmetric positive definite matrix is also symmetric and positive definite (see, e.g., [24,25] or [5]).

Theorem 4.2. Let M in R^{n,n} be symmetric and positive definite with lambda_min, lambda_max being its smallest and largest (positive) eigenvalues. Let lambda'_min and lambda'_max be the corresponding eigenvalues of the submatrix M_1 of Theorem 4.1. Then,
lambda_min <= lambda'_min <= lambda'_max <= lambda_max. (4.1)

Proof. As is known (see, e.g., [5]), for any w in R^n\{0} there holds
lambda_min <= (w^T M w)/(w^T w) <= lambda_max, (4.2)
where equality holds at the left (resp. right) end with w being the eigenvector associated with lambda_min (resp. lambda_max). For simplicity, let l = 1 and let M be partitioned as follows
M = [ m_11  y^T ; y  M_1 ]  with  y = [m_21 m_31 ... m_n1]^T. (4.3)
Defining the vector w,
w = [0  w_{n-1}^T]^T in R^n,  w_{n-1} in R^{n-1}\{0},  so that w^T w = w_{n-1}^T w_{n-1}, (4.4)
we will have
w^T M w = [0  w_{n-1}^T] [ m_11  y^T ; y  M_1 ] [0  w_{n-1}^T]^T = w_{n-1}^T M_1 w_{n-1}. (4.5)
Taking w_{n-1} to be the eigenvector of M_1 associated with lambda'_min we have
lambda'_min = (w_{n-1}^T M_1 w_{n-1})/(w_{n-1}^T w_{n-1}) = (w^T M w)/(w^T w) >= lambda_min.
Hence, by virtue of (4.2), the left inequality in (4.1) is proved. Similarly, taking w_{n-1} to be the eigenvector of M_1 associated with lambda'_max, the right inequality in (4.1) is also proved.

Theorem 4.3. Under the assumptions and notation of Theorems 4.1 and 4.2, and in view of Corollary 2.1, the extrapolation applied to the reduced LCP will make it converge at least as fast as the extrapolation applied to the original one.

Proof. In view of (4.1) and Corollary 2.1 the proof is immediate.
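As a quick numerical illustration of Theorems 4.1-4.3 (a sketch, not part of the paper; the random SPD matrix, its size and the tolerances are assumptions): deleting any row and the corresponding column of an SPD matrix keeps it SPD, the extreme eigenvalues can only move inwards, and hence the optimal rho(D_omega) of (2.8) for the reduced matrix is no larger.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
M = A @ A.T + 8 * np.eye(8)          # a random symmetric positive definite matrix

def extremes(S):
    lam = np.linalg.eigvalsh(S)
    return lam[0], lam[-1]

def rho_opt(S):
    lo, hi = extremes(S)
    return (np.sqrt(hi) - np.sqrt(lo)) / (np.sqrt(hi) + np.sqrt(lo))   # (2.8)

lo, hi = extremes(M)
for l in range(M.shape[0]):
    keep = [i for i in range(M.shape[0]) if i != l]
    M1 = M[np.ix_(keep, keep)]        # delete the l-th row and column (Theorem 4.1)
    lo1, hi1 = extremes(M1)
    assert lo - 1e-12 <= lo1 <= hi1 <= hi + 1e-12      # Theorem 4.2: eigenvalue inclusion
    assert rho_opt(M1) <= rho_opt(M) + 1e-12           # Theorem 4.3 via Corollary 2.1
print("eigenvalue inclusion and rho(D_omega) monotonicity confirmed for every l")
```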

Remark 4.1. Note that we have identical rates of convergence in the old and the new LCPs, namely lambda'_min = lambda_min and lambda'_max = lambda_max simultaneously hold, if and only if (iff) the eigenvectors w_{n-1,m} and w_{n-1,M} associated with lambda'_min and lambda'_max of M_1 are orthogonal to the vector y in (4.3) and, also, w_m = [0  w_{n-1,m}^T]^T and w_M = [0  w_{n-1,M}^T]^T are the eigenvectors of M associated with lambda_min and lambda_max, respectively.

Now, we come to the case where a pivoting process takes place.

Theorem 4.4. Let M in R^{n,n} be symmetric and positive definite. Applying Gauss elimination to it with pivot any diagonal element m_{ll}, l = 1(1)n, the submatrix M1~ obtained by deleting the l-th row and column of the resulting matrix is also symmetric and positive definite.

Proof. For simplicity we assume that m_11 is taken as pivot in the Gauss elimination. If we also assume that M is partitioned as in (4.3), then the Gauss elimination results in
[ 1  0_{n-1}^T ; -(1/m_11) y  I_{n-1} ] [ m_11  y^T ; y  M_1 ] = [ m_11  y^T ; 0_{n-1}  M_1 - (1/m_11) y y^T ]. (4.6)
Since M_1 is symmetric, so is the matrix
M1~ = M_1 - (1/m_11) y y^T. (4.7)
To prove that M1~ is also positive definite we consider any vector w = [w_1  w_{n-1}^T]^T in R^n with
w_1 = -(1/m_11)(w_{n-1}^T y) in R,  w_{n-1} in R^{n-1}\{0}. (4.8)
Then, we successively have
0 < w^T M w = [ -(1/m_11)(w_{n-1}^T y)  w_{n-1}^T ] [ m_11  y^T ; y  M_1 ] [ -(1/m_11)(w_{n-1}^T y)  w_{n-1}^T ]^T = w_{n-1}^T (M_1 - (1/m_11) y y^T) w_{n-1} = w_{n-1}^T M1~ w_{n-1}, (4.9)
proving our assertion.

Theorem 4.5. Let M be the matrix of Theorem 4.4 and lambda_min, lambda_max be its smallest and largest eigenvalues. Let the smallest and largest eigenvalues of M1~ in (4.7) of Theorem 4.4 be lambda~_min, lambda~_max, respectively. Then, there will hold
lambda_min <= lambda~_min <= lambda~_max <= lambda_max. (4.10)

Proof. Let w be the vector
w = [w_1  w_{n-1}^T]^T in R^n  with  w_1 in R, w_{n-1} in R^{n-1}\{0}; (4.11)
then we have that
(w^T M w)/(w^T w) = ( m_11 ( w_1 + (1/m_11)(w_{n-1}^T y) )^2 + w_{n-1}^T M1~ w_{n-1} ) / ( w_1^2 + w_{n-1}^T w_{n-1} ). (4.12)
Taking as w_{n-1} the eigenvector of M1~ associated with its smallest eigenvalue lambda~_min and w_1 = -(1/m_11)(w_{n-1}^T y), the vector w has the form in (4.8) and we successively obtain
lambda_min <= (w^T M w)/(w^T w) = w_{n-1}^T M1~ w_{n-1} / ( (1/m_11^2)(w_{n-1}^T y)^2 + w_{n-1}^T w_{n-1} ) <= w_{n-1}^T M1~ w_{n-1} / (w_{n-1}^T w_{n-1}) = lambda~_min, (4.13)

proving the left inequality in (4.10). Taking w_{n-1} to be the eigenvector of M1~ associated with lambda~_max and w_1 = 0, so that the vector w has the form of (4.4), we have
lambda_max >= (w^T M w)/(w^T w) = ( (1/m_11)(w_{n-1}^T y)^2 + w_{n-1}^T M1~ w_{n-1} ) / (w_{n-1}^T w_{n-1}) >= w_{n-1}^T M1~ w_{n-1} / (w_{n-1}^T w_{n-1}) = lambda~_max, (4.14)
proving the right inequality in (4.10).

Remark 4.2. It is similar to Remark 4.1. Namely, the equalities lambda~_min = lambda_min and lambda~_max = lambda_max simultaneously hold iff the eigenvectors w_{n-1,m} and w_{n-1,M} of M1~ associated with lambda~_min and lambda~_max are orthogonal to y, and [0  w_{n-1,m}^T]^T and [0  w_{n-1,M}^T]^T are the eigenvectors of M associated with lambda_min and lambda_max, respectively.

Theorem 4.6. Under the assumptions of Theorems 4.4 and 4.5, if any of the four inequalities in (4.13) and (4.14) does not hold as an equality, then one of the two extreme inequalities in (4.10) of Theorem 4.5 will be a strict one. Furthermore, the optimal spectral radius in (2.8) corresponding to the matrix D(M1~) will be strictly less than that corresponding to D(M), with a matrix of the form D(.) being defined in (1.4) in terms of M1~ and M, respectively.

Proof. The first part comes directly from the implied strict inclusion [lambda~_min, lambda~_max] in [lambda_min, lambda_max], as a consequence of which we have lambda~_max/lambda~_min < lambda_max/lambda_min. The second part comes from the previous strict inequality and Corollary 2.1.

Coming now to the case of the NSEBMA it is clear that, in general, we have to deal with a repeated application of Theorems 4.2 and 4.5, since more than one component of z^{(N)} may satisfy (3.11). Of course, one can use blocks to prove the analogous propositions to Theorems 4.1-4.6. To see what the difference is, we outline below a block analogue of a combination of Theorems 4.4 and 4.5 and Remark 4.2.

Theorem 4.7. Let M in R^{n,n} be symmetric and positive definite and assume that the first p components r_i^{(N)} are to become zeros (1 <= p < n). Let M be of the block form
M = [ M_1  Y^T ; Y  M_2 ]  with  M_1 in R^{p,p}, M_2 in R^{n-p,n-p}, Y in R^{n-p,p}. (4.15)
Then:
(i) Applying a "block" Gauss elimination, where all p columns below the diagonal of M_1 are eliminated, and deleting the first block row and column of the resulting matrix, the submatrix M2~ obtained is symmetric and positive definite.
(ii) Let lambda_min and lambda_max be the smallest and the largest eigenvalues of M and lambda~_min and lambda~_max be the corresponding ones of M2~. Then, there will hold
lambda_min <= lambda~_min <= lambda~_max <= lambda_max. (4.16)
(iii) Equalities in (4.16) hold at both ends iff the pair of eigenvectors w_{n-p} = w_{n-p,m} and w_{n-p} = w_{n-p,M} associated with lambda~_min and lambda~_max of M2~ are orthogonal to the columns of Y, and [-w_{n-p,m}^T Y M_1^{-1}  w_{n-p,m}^T]^T and [0  w_{n-p,M}^T]^T are the eigenvectors of M associated with lambda_min and lambda_max, respectively.

Proof. (i) Recall that the matrices M and M_1 are symmetric positive definite. Hence M_1 admits a decomposition which can be written as L_1 U_1, where L_1 is lower triangular with diag(L_1) = I_p and U_1 is upper triangular and can be written as diag(U_1) L_1^T with diag(U_1) a positive diagonal. So, the "block" pivoting process will be as follows:
[ L_1^{-1}  0_{p,n-p} ; -Y M_1^{-1}  I_{n-p} ] [ M_1  Y^T ; Y  M_2 ] = [ U_1  L_1^{-1} Y^T ; 0_{n-p,p}  M2~ ],  M2~ = M_2 - Y M_1^{-1} Y^T. (4.17)

Hence M2~ is symmetric, because M_1 and therefore M_1^{-1} in R^{p,p} possess both these properties. Letting
w = [w_p^T  w_{n-p}^T]^T in R^n\{0}  with  w_p = -M_1^{-1} Y^T w_{n-p} in R^p,  w_{n-p} in R^{n-p}\{0}, (4.18)
it is obtained that
0 < w^T M w = [ -w_{n-p}^T Y M_1^{-1}  w_{n-p}^T ] [ M_1  Y^T ; Y  M_2 ] [ -w_{n-p}^T Y M_1^{-1}  w_{n-p}^T ]^T = w_{n-p}^T (M_2 - Y M_1^{-1} Y^T) w_{n-p} = w_{n-p}^T M2~ w_{n-p}, (4.19)
which proves that M2~ is also positive definite.
(ii) Let w be the vector
w = [w_p^T  w_{n-p}^T]^T in R^n\{0},  w_p in R^p,  w_{n-p} in R^{n-p}\{0}. (4.20)
Forming (w^T M w)/(w^T w), replacing w from (4.20), using for M the above block partitioned form and for M_2 the expression from (4.17) in terms of M2~, after some manipulation we obtain that
(w^T M w)/(w^T w) = ( ||M_1^{1/2}(w_p + M_1^{-1} Y^T w_{n-p})||_2^2 + w_{n-p}^T M2~ w_{n-p} ) / ( ||w_p||_2^2 + w_{n-p}^T w_{n-p} ), (4.21)
where M_1^{1/2} is the unique real symmetric positive definite square root of M_1 (see, e.g., [5]). Now we work in a similar way as before in Theorem 4.5. Namely, taking as w_{n-p} the eigenvector of M2~ associated with its smallest eigenvalue lambda~_min and w_p = -M_1^{-1} Y^T w_{n-p}, we can obtain
lambda_min <= (w^T M w)/(w^T w) = w_{n-p}^T M2~ w_{n-p} / ( ||M_1^{-1} Y^T w_{n-p}||_2^2 + w_{n-p}^T w_{n-p} ) <= w_{n-p}^T M2~ w_{n-p} / (w_{n-p}^T w_{n-p}) = lambda~_min, (4.22)
proving the left inequality in (4.16). Taking w_{n-p} to be the eigenvector of M2~ associated with the largest eigenvalue lambda~_max and w_p = 0, we have
lambda_max >= (w^T M w)/(w^T w) = ( ||M_1^{-1/2} Y^T w_{n-p}||_2^2 + w_{n-p}^T M2~ w_{n-p} ) / (w_{n-p}^T w_{n-p}) >= w_{n-p}^T M2~ w_{n-p} / (w_{n-p}^T w_{n-p}) = lambda~_max, (4.23)
where M_1^{-1/2} is the inverse of M_1^{1/2}, proving the right inequality in (4.16).
(iii) For the first part of our assertion to hold, the norms in (4.22) and (4.23) must be zero. Due to the invertibility of M_1 and M_1^{1/2}, this holds iff the eigenvectors associated with lambda~_min and lambda~_max are orthogonal to the columns of the submatrix Y. The second part of our assertion readily follows.

5. Numerical examples

Before we present our specific examples we make a number of points.

(i) We have run numerous examples of various sizes from n = 3 to n = 50 using all six methods, namely, the iterative methods (2.1) and (2.5) of Section 2, van Bokhoven's MA, its nonstationary extrapolated counterpart (NSEMA), and, similarly, Kappel and Watson's BMA and the nonstationary extrapolated one (NSEBMA). For the NSEMA and the NSEBMA, of the three alternatives of Section 3 the one in (iii) was adopted.

(ii) For each n and for all six methods the vector q in R^n was the same and was selected by using the Matlab command 10*(rand(n,1)-0.5), so that each component q_i, i = 1(1)n, was chosen randomly in the interval (-5, 5). It was observed that for the same matrix M but for different random vectors q the results were pretty much the same.

Table 1. Spectral condition numbers of the matrix coefficient M = tridiag(-1, 2, -1) in R^{n,n}.
n:                         10       20       30       40       50
kappa_2(tridiag(-1,2,-1)): 48.374   178.064  388.812  680.617  1053.48

Table 2. Number of iterations (iter) and CPU times in seconds for the methods applied to Example 1 (n = 10, 20, 30, 40, 50), with columns MA, BMA, (n-1)N (a), NSEMA, NSEBMA, (n-1)N_omega (b).
(a) (n-1)N is the possible maximum number of iterations for MA and BMA.
(b) (n-1)N_omega is the possible maximum number of iterations for NSEMA and NSEBMA.

(iii) If z^{(0)} = 0 in (2.1) and (2.5), then the first N iterates z^{(k)}, k = 1(1)N, with N of (3.1), are identical for all three unextrapolated methods. The same holds for the three extrapolated methods.

(iv) Recall that all four (Block) Modulus Algorithms are exact, that is, if exact arithmetic were used the exact result would be obtained after at most (n-1)N iterations followed by the solution of a linear system. In contrast with the (Block) Modulus Algorithms, the methods (2.1) and (2.5) are iterative. Hence it is not easy to have a fair stopping criterion. What we did was the following. After the solution was found by any of the four (Block) Modulus Algorithms, exhausting all K = sum_{i=1}^p N_i iterations, provided K <= 10^6, we determined the "worst" relative absolute error e for the last two iterations of NSEMA and NSEBMA, that is
e = ||x^{(K)} - x^{(K-1)}|| / ||x^{(K)}||.
This was subsequently used as a stopping criterion for the two iterative methods; specifically,
||x^{(k+1)} - x^{(k)}|| / ||x^{(k+1)}|| = ||(|z^{(k+1)}| + z^{(k+1)}) - (|z^{(k)}| + z^{(k)})|| / || |z^{(k+1)}| + z^{(k+1)} || <= e,  k = 1, 2, 3, ...,
and a check was made after each iteration.

(v) It was observed that in almost all the cases of the four (Block) Modulus Algorithms the number of iterations required for the solution of an LCP was much less than the theoretically computed one ((n-1)N).

(vi) In more than 98% of the examples we ran, e = 0 to the Matlab accuracy, something which could not happen with the iterative methods (2.1) and (2.5). So, what we would suggest is that if the obtained relative absolute error e is not very satisfactory, then use the last z of the NSEMA or the NSEBMA and run a small number of iterations, say 5 to 10, using (2.1) as a smoother until an e of satisfactory accuracy is obtained.

Table 3. Spectral condition numbers for the Hilbert matrix M = H in R^{n,n}.
n:           3         4         5          6          7          8          9
kappa_2(H):  5.24e2    1.55e4    4.766e5    1.495e7    4.754e8    1.526e10   4.93e11

Table 4. Number of iterations (iter) and CPU times in seconds for the methods applied to Example 2 (n = 3(1)9), with columns MA, BMA, (n-1)N, NSEMA, NSEBMA, (n-1)N_omega. A dash (-) means that no convergence has been achieved.

(vii) In all experiments the theory of Sections 2-4 was confirmed. Namely: (a) Regarding execution (CPU) times, all three extrapolated schemes are better than the unextrapolated ones. (b) Both Block Modulus Algorithms are better than the corresponding simple Modulus Algorithms. (c) Going from one experiment to another of the same size, the CPU time required for each method becomes larger as the condition number kappa_2(D) or kappa_2(D_omega) increases.

(viii) In case the condition number is moderately large (see Example 1), all four (Block) Modulus Algorithms work exceptionally well. For extremely large condition numbers (see Example 2), all methods work only for very small values of n, and this is due to the tremendous number of iterations required. For those n for which the NSEMA and the NSEBMA work, the results are very satisfactory.

Example 1. M is the classical tridiagonal matrix M = tridiag(-1, 2, -1) in R^{n,n}, with n = 10(10)50. The corresponding spectral condition numbers for M are given in Table 1. In all five cases of the present example the results are very good despite the relatively large condition numbers. This, in our opinion, is mainly due to the sparsity of the matrix and also to its irreducible diagonal dominance property. As is seen, the NSEBMA is the best method. There are two extra columns

under (n-1)N and (n-1)N_omega which indicate the possible maximum number of iterations for MA, BMA and NSEMA, NSEBMA, respectively.

Example 2. M is the Hilbert matrix H in R^{n,n} := {h_{i,j} = 1/(i + j - 1), i, j = 1(1)n}, with n = 3(1)9. The spectral condition numbers for M are illustrated in Table 3. In the cases of this example, a "nightmare" case when solving (or pivoting on) a linear system, the large condition numbers are disastrous even for rather small values of n. In our opinion, despite the irreducible diagonal dominance property of the coefficient matrix, the "poor" results may be due to its dense character. It is noted that this is the only example out of those run in which the NSEMA beats the NSEBMA. Table 4 is similar to Table 2.

6. Concluding remarks

Before we conclude our work we would like to make a number of points:
(i) The theory developed in the present work is fully confirmed by the numerical experiments.
(ii) The principle of extrapolation as introduced in Sections 2-4 increases the convergence rates for all three known methods, namely the iterative method (2.1), the MA and the BMA.
(iii) Kappel and Watson [12] introduced a kind of nonstationary extrapolation, but it is very difficult, if not impossible, in practice to find the appropriate positive diagonal matrix Gamma defined there. Our work gives a partial answer for symmetric positive definite matrices.
(iv) An extension of the theory of the present paper seems to work also in cases where the matrix M is an M-matrix or a (real) H-matrix with positive diagonal elements. It is well known that these two classes of matrices are P-matrices and that the LCP then has a unique solution which can also be found by other iterative methods (see, e.g., [5,1,18,7,16,17,21,22]). In this direction we have been working, with encouraging preliminary results.

References

[1] B.H. Ahn, Solution of nonsymmetric linear complementarity problems by iterative methods, J. Optim. Theory Appl. 33 (1981) 175–185.
[2] Z.-Z. Bai, On the monotone convergence of the matrix multisplitting relaxation methods for the linear complementarity problem, IMA J. Numer. Anal. 18 (1998) 509–518.
[3] Z.-Z. Bai, On the convergence of the multisplitting methods for the linear complementarity problem, SIAM J. Matrix Anal. Appl. 21 (1999) 67–78.
[4] Z.-Z. Bai, D.J. Evans, Matrix multisplitting relaxation methods for linear complementarity problems, Int. J. Comput. Math. 63 (1997) 309–326.
[5] A. Berman, R.J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, Classics in Applied Mathematics, SIAM, Philadelphia, 1994.
[6] R.W. Cottle, G.B. Dantzig, Complementary pivot theory of mathematical programming, Linear Algebra Appl. 1 (1968) 103–125.
[7] R.W. Cottle, J.-S. Pang, R.E. Stone, The Linear Complementarity Problem, Academic Press, New York, 1992.
[8] C.W. Cryer, The solution of a quadratic programming problem using systematic overrelaxation, SIAM J. Control 9 (1971) 385–392.
[9] L. Cvetković, S. Rapajić, How to improve MAOR method convergence area for linear complementarity problems, Appl. Math. Comput. 162 (2005) 577–584.
[10] S.M. Fallat, M.J. Tsatsomeros, On the Cayley transform of positivity classes of matrices, Electron. J. Linear Algebra 9 (2002) 190–196.
[11] A. Hadjidimos, M. Tzoumas, On the principle of extrapolation and the Cayley transform, Linear Algebra Appl. 428 (2008) 2761–2777.
[12] N.W. Kappel, L.T. Watson, Iterative algorithms for the linear complementarity problems, Int. J. Comput. Math. 19 (1986) 273–297.
[13] M.D. Koulisianis, T.S. Papatheodorou, Improving projected successive overrelaxation method for linear complementarity problems, Appl. Numer. Math. 45 (2003) 29–40.
[14] C.E. Lemke, Bimatrix equilibrium points and mathematical programming, Management Sci. 11 (1965) 681–689.
[15] O.L. Mangasarian, Solution of symmetric linear complementarity problems by iterative methods, J. Optim. Theory Appl. 22 (1977) 465–485.
[16] O.L. Mangasarian, Nonlinear Programming, McGraw-Hill, New York, 1969 (Reprint: SIAM Classics in Applied Mathematics 10, Philadelphia, 1994).
[17] K.G. Murty, Linear Complementarity, Linear and Nonlinear Programming, Internet ed., 1997.
[18] J.S. Pang, Necessary and sufficient conditions for the convergence of iterative methods for the linear complementarity problem, J. Optim. Theory Appl. 42 (1984) 1–17.
[19] K. Pantazopoulos, Numerical Methods and Software for the Pricing of American Financial Derivatives, Ph.D. Thesis, Department of Computer Sciences, Purdue University, West Lafayette, IN, 1998.
[20] H. Samelson, R.M. Thrall, O. Wesler, A partitioning theorem for Euclidean n-space, Proc. Amer. Math. Soc. 9 (1958) 805–807.
[21] U. Schäfer, A linear complementarity problem with a P-matrix, SIAM Rev. 46 (2004) 189–201.
[22] U. Schäfer, On the modulus algorithm for the linear complementarity problem, Oper. Res. Lett. 32 (2004) 350–354.
[23] W.M.G. van Bokhoven, Piecewise-linear Modelling and Analysis, Proefschrift, Eindhoven, 1981.
[24] R.S. Varga, Matrix Iterative Analysis, Prentice-Hall, Englewood Cliffs, NJ, 1962 (also: second ed., revised and expanded, Springer, Berlin, 2000).
[25] D.M. Young, Iterative Solution of Large Linear Systems, Academic Press, New York, 1971.
[26] D. Yuan, Y. Song, Modified AOR methods for linear complementarity problem, Appl. Math. Comput. 140 (2003) 53–67.