Mathematical Programming manuscript No. (will be inserted by the editor)

Xiaojun Chen · Shuhuang Xiang

Computation of Error Bounds for P-matrix Linear Complementarity Problems

Received: date / Accepted: date

This work is partly supported by a Grant-in-Aid from the Japan Society for the Promotion of Science.

Xiaojun Chen: Department of Mathematical System Science, Hirosaki University, Hirosaki 036-8561, Japan. Tel.: +81-172-393639, Fax: +81-172-393541, E-mail: chen@cc.hirosaki-u.ac.jp

Shuhuang Xiang: Department of Applied Mathematics and Software, Central South University, Changsha, Hunan 410083, China. Tel.: +86-731-8830759, Fax: +86-731-8830759, E-mail: xiangsh@mail.csu.edu.cn

Abstract  We give new error bounds for the linear complementarity problem where the involved matrix is a P-matrix. Computation of rigorous error bounds can be turned into a P-matrix linear interval system. Moreover, for the involved matrix being an H-matrix with positive diagonals, an error bound can be found by solving a linear system of equations, which is sharper than the Mathias-Pang error bound. Preliminary numerical results show that the proposed error bound is efficient for verifying the accuracy of approximate solutions.

Keywords  accuracy · error bounds · linear complementarity problems

Mathematics Subject Classification (2000)  90C33 · 65G20 · 65G50

1 Introduction

The linear complementarity problem is to find a vector x ∈ R^n such that

  Mx + q ≥ 0,  x ≥ 0,  x^T(Mx + q) = 0,

or to show that no such vector exists, where M ∈ R^{n×n} and q ∈ R^n. We denote this problem by LCP(M, q) and its solution by x*.

Recall the following definitions for a matrix. M is called a P-matrix if

  max_{1≤i≤n} x_i(Mx)_i > 0  for all x ≠ 0.

M is called an M-matrix if M^{-1} ≥ 0 and M_ij ≤ 0 (i ≠ j) for i, j = 1, 2, ..., n. M is called an H-matrix if its comparison matrix is an M-matrix. It is known that an H-matrix with positive diagonals is a P-matrix. Moreover, M is a P-matrix if and only if the LCP(M, q) has a unique solution x* for any q ∈ R^n. See [4].

It is easy to verify that x* solves the LCP(M, q) if and only if x* solves

  r(x) := min(x, Mx + q) = 0,

where the min operator denotes the componentwise minimum of two vectors. The function r is called the natural residual of the LCP(M, q), and is often used in error analysis.

Error bounds for the LCP(M, q) have been studied extensively, see [3-6, 8, 9, 13]. For M being a P-matrix, Mathias and Pang [6] present the following error bound for any x ∈ R^n,

  ‖x − x*‖_∞ ≤ ((1 + ‖M‖_∞)/c(M)) ‖r(x)‖_∞,   (1.1)

where

  c(M) = min_{‖x‖_∞ = 1} { max_{1≤i≤n} x_i(Mx)_i }.

This error bound is well known and widely cited. However, the quantity c(M) in (1.1) is not easy to find. For M being an H-matrix with positive diagonals, Mathias and Pang [6] gave a computable lower bound for c(M),

  c(M) ≥ (min_i b_i)(min_i (M̃^{-1}b)_i) / (max_j (M̃^{-1}b)_j)^2 =: c(b),   (1.2)

for any vector b > 0, where M̃ is the comparison matrix of M, that is, M̃_ii = M_ii and M̃_ij = −|M_ij| for i ≠ j. However, finding a large value of c(b) is not easy. For some b, c(b) can be very small, and thus the error coefficient

  μ(b) := (1 + ‖M‖_∞)/c(b)   (1.3)

can be very large. See the examples in Section 3.

Interval methods for validation of solutions of the LCP(M, q) have been studied in [1, 12].

When a numerical validation condition for the existence of a solution holds, a numerical error bound is provided. However, the numerical validation condition is not ensured to hold at every point x.

In this paper, for M being a P-matrix, we present a new error bound in the ℓ_p norm (p ≥ 1 or p = ∞),

  ‖x − x*‖_p ≤ max_{d∈[0,1]^n} ‖(I − D + DM)^{-1}‖_p ‖r(x)‖_p,   (1.4)

where D = diag(d_1, d_2, ..., d_n). Moreover, for M being an H-matrix with positive diagonals, we show

  max_{d∈[0,1]^n} ‖(I − D + DM)^{-1}‖_p ≤ ‖M̃^{-1} max(Λ, I)‖_p,   (1.5)

where Λ is the diagonal part of M, and the max operator denotes the componentwise maximum of two matrices. This implies

  max(Λ, I) = diag(max(M_11, 1), max(M_22, 1), ..., max(M_nn, 1)).

In comparison with the Mathias-Pang error coefficients (1.1) and (1.3), we give the following inequalities.

If M is a P-matrix,

  max_{d∈[0,1]^n} ‖(I − D + DM)^{-1}‖_∞ ≤ max(1, ‖M‖_∞)/c(M) = (1 + ‖M‖_∞ − min(1, ‖M‖_∞))/c(M).   (1.6)

If M is an H-matrix with positive diagonals,

  ‖M̃^{-1} max(Λ, I)‖_∞ ≤ μ(b) − ‖M̃^{-1} min(Λ, I)‖_∞,  for any b > 0.   (1.7)

If M is an M-matrix,

  ‖M^{-1} max(Λ, I)‖_∞ ≤ (1 + ‖M‖_∞)/c(M) − ‖M^{-1} min(Λ, I)‖_∞.   (1.8)

In addition, for M being an M-matrix, the optimal value

  max_{d∈[0,1]^n} ‖(I − D + DM)^{-1}‖_1

can be found by solving a simple convex programming problem.

In Section 3, we use some numerical examples to illustrate these error bounds. In particular, for some cases, (1.4), (1.5), (1.6), (1.7), (1.8) hold with equalities, which indicates that they are tight estimates. Preliminary numerical results show that the new error bounds are much sharper than existing error bounds.

Notations: Let N = {1, 2, ..., n}. Let e denote the vector whose elements are all 1. The absolute matrix of a matrix B is denoted by |B|. Let ‖·‖_p denote the p-norm for p ≥ 1 or p = ∞.
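As a quick illustration of the quantities introduced above, the following short Python sketch evaluates the natural residual r(x) = min(x, Mx + q), the comparison matrix M̃, and the Mathias-Pang coefficient μ(b) from (1.2)-(1.3). It assumes NumPy; the 2×2 matrix and the vectors q, x below are hypothetical data chosen only for illustration, not taken from the paper's experiments.

    import numpy as np

    def natural_residual(M, q, x):
        # r(x) = min(x, Mx + q), the componentwise natural residual of LCP(M, q)
        return np.minimum(x, M @ x + q)

    def comparison_matrix(M):
        # same diagonal as M, off-diagonal entries replaced by -|M_ij|
        C = -np.abs(M)
        np.fill_diagonal(C, np.diag(M))
        return C

    def mathias_pang_mu(M, b):
        # c(b) from (1.2) and mu(b) = (1 + ||M||_inf)/c(b) from (1.3), for a vector b > 0
        y = np.linalg.solve(comparison_matrix(M), b)   # y = (comparison matrix)^{-1} b
        c_b = b.min() * y.min() / y.max() ** 2
        return (1.0 + np.linalg.norm(M, np.inf)) / c_b

    # hypothetical 2x2 H-matrix with positive diagonals
    M = np.array([[1.0, 2.0], [0.0, 1.0]])
    q = np.array([1.0, -1.0])
    x_star = np.array([0.0, 1.0])            # solves LCP(M, q): the residual vanishes
    print(natural_residual(M, q, x_star))    # [0. 0.]
    print(mathias_pang_mu(M, np.ones(2)))    # 36.0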

2 New error bounds

It is not difficult to find that for every x, y ∈ R^n,

  min(x_i, y_i) − min(x*_i, y*_i) = (1 − d_i)(x_i − x*_i) + d_i(y_i − y*_i),  i ∈ N,   (2.1)

where

  d_i = 0,   if y_i ≥ x_i and y*_i ≥ x*_i,
  d_i = 1,   if y_i ≤ x_i and y*_i ≤ x*_i,
  d_i = (min(x_i, y_i) − min(x*_i, y*_i) + x*_i − x_i) / (y_i − y*_i + x*_i − x_i),   otherwise.

Moreover, we have d_i ∈ [0, 1]. Hence, putting y = Mx + q and y* = Mx* + q in (2.1), we obtain

  r(x) = (I − D + DM)(x − x*),   (2.2)

where D is a diagonal matrix whose diagonal elements are d = (d_1, d_2, ..., d_n) ∈ [0, 1]^n.

It is known that M is a P-matrix if and only if I − D + DM is nonsingular for any diagonal matrix D = diag(d) with 0 ≤ d_i ≤ 1 [7]. This together with (2.2) yields upper and lower error bounds,

  ‖r(x)‖ / max_{d∈[0,1]^n} ‖I − D + DM‖ ≤ ‖x − x*‖ ≤ max_{d∈[0,1]^n} ‖(I − D + DM)^{-1}‖ ‖r(x)‖.   (2.3)

Moreover, it is not difficult to verify that if M is a P-matrix and D = diag(d) with d ∈ [0, 1]^n, we have

  max_{1≤i≤n} x_i((I − D + DM)x)_i > 0,  for all x ≠ 0,

that is, (I − D + DM) is a P-matrix. Therefore, computation of rigorous error bounds can be turned into optimization problems over a P-matrix interval set, which is related to linear P-matrix interval systems. The linear interval system has been studied intensively and some highly efficient numerical methods have been developed, see [10, 12] for references.

In the rest of this section, we give some simple upper bounds for max_{d∈[0,1]^n} ‖(I − D + DM)^{-1}‖.

Lemma 2.1  If M is an M-matrix, then I − D + DM is an M-matrix for d ∈ [0, 1]^n.

Proof  From condition (I_27) of Theorem 2.3, Chap. 6 in [2], there is u > 0 such that Mu > 0. It is easy to verify that (I − D + DM)u > 0. Applying the theorem again, we find that I − D + DM is an M-matrix.

Theorem 2.1  Suppose that M is an H-matrix with positive diagonals. Then we have

  max_{d∈[0,1]^n} ‖(I − D + DM)^{-1}‖ ≤ ‖M̃^{-1} max(Λ, I)‖.   (2.4)

Proof  Let M = Λ − B. We can write

  (I − D + DM)^{-1} = (I − (I − D + DΛ)^{-1}DB)^{-1}(I − D + DΛ)^{-1}.   (2.5)

We first prove (2.4) for M being an M-matrix. Note that B ≥ 0 with zero diagonal entries, and for any d ∈ [0, 1]^n, Lemma 2.1 ensures that I − D + DM and (I − (I − D + DΛ)^{-1}DB) are M-matrices. For the ith diagonal element of the diagonal matrix (I − D + DΛ)^{-1}D, we consider the function

  φ(t) = t / (1 − t + tM_ii),  for t ∈ [0, 1].

It is easy to verify that φ(t) ≥ 0 and is monotonically increasing for t ∈ [0, 1]. Hence, we have

  Λ^{-1} ≥ (I − D + DΛ)^{-1}D ≥ 0,  for d ∈ [0, 1]^n.

Since B is nonnegative, we get

  Λ^{-1}B ≥ (I − D + DΛ)^{-1}DB ≥ 0,  for d ∈ [0, 1]^n.

By Theorem 5.2, Chap. 7 and Corollary 1.5, Chap. 2 in [2], the spectral radius satisfies

  1 > ρ(Λ^{-1}B) ≥ ρ((I − D + DΛ)^{-1}DB),  for d ∈ [0, 1]^n.

Therefore, we find that

  (I − (I − D + DΛ)^{-1}DB)^{-1} = I + (I − D + DΛ)^{-1}DB + ... + ((I − D + DΛ)^{-1}DB)^k + ...
    ≤ I + Λ^{-1}B + ... + (Λ^{-1}B)^k + ... = (I − Λ^{-1}B)^{-1} = (Λ − B)^{-1}Λ = M^{-1}Λ.

Now for the ith diagonal element of the diagonal matrix (I − D + DΛ)^{-1}, we consider the function

  ψ(t) = 1 / (1 − t + tM_ii).

For t ∈ [0, 1], ψ(t) > 0, and ψ'(t) ≥ 0 if M_ii < 1, otherwise ψ'(t) ≤ 0. Hence, we obtain

  max_{t∈[0,1]} ψ(t) = 1/M_ii if M_ii < 1, and 1 otherwise.

This implies

  (I − D + DΛ)^{-1} ≤ max(Λ^{-1}, I),  for d ∈ [0, 1]^n.   (2.6)

Therefore, the upper bound (2.4) for M being an M-matrix can be derived from (2.6), (2.5) and the fact that, for all d ∈ [0, 1]^n, (I − (I − D + DΛ)^{-1}DB)^{-1} and (I − D + DΛ)^{-1} are nonnegative and

  (I − (I − D + DΛ)^{-1}DB)^{-1}(I − D + DΛ)^{-1} ≤ M^{-1}Λ max(Λ^{-1}, I) = M^{-1} max(Λ, I).

Now we show (2.4) for M being an H-matrix with positive diagonals. Since ρ(A) ≤ ρ(|A|) for any matrix A, we have that for all d ∈ [0, 1]^n,

  ρ((I − D + DΛ)^{-1}DB) ≤ ρ((I − D + DΛ)^{-1}D|B|) ≤ ρ(Λ^{-1}|B|) < 1.

Therefore, we have

  |(I − (I − D + DΛ)^{-1}DB)^{-1}| = |I + (I − D + DΛ)^{-1}DB + ... + ((I − D + DΛ)^{-1}DB)^k + ...|
    ≤ I + (I − D + DΛ)^{-1}D|B| + ... + ((I − D + DΛ)^{-1}D|B|)^k + ...
    ≤ I + Λ^{-1}|B| + ... + (Λ^{-1}|B|)^k + ... = (I − Λ^{-1}|B|)^{-1} = (Λ − |B|)^{-1}Λ = M̃^{-1}Λ.

This together with (2.5) and (2.6) gives

  |(I − D + DM)^{-1}| ≤ M̃^{-1}Λ max(Λ^{-1}, I) = M̃^{-1} max(Λ, I).

Remark 1.  Since M̃^{-1} max(Λ, I) ≥ 0, we have

  ‖M̃^{-1} max(Λ, I)‖_∞ = ‖M̃^{-1} max(Λ, I)e‖_∞

and

  ‖M̃^{-1} max(Λ, I)‖_1 = ‖(e^T M̃^{-1} max(Λ, I))^T‖_∞.

The upper error bound in (2.4) with ‖·‖_∞ or ‖·‖_1 can therefore be computed by solving a linear system of equations,

  min(Λ^{-1}, I) M̃ x = e  or  M̃^T min(Λ^{-1}, I) x = e.

Theorem 2.2  Suppose that M is an M-matrix. Let V = {v | M^T v ≤ e, v ≥ 0} and f(v) = max_{1≤i≤n} (e + v − M^T v)_i. Then we have

  max_{d∈[0,1]^n} ‖(I − D + DM)^{-1}‖_1 = max_{v∈V} f(v).   (2.7)

Proof  From Lemma 2.1, we have that

  (I − D + DM)^{-1} ≥ 0,  for all d ∈ [0, 1]^n.

This implies that

  ‖(I − D + DM)^{-1}‖_1 = ‖(e^T(I − D + DM)^{-1})^T‖_∞ = ‖(I − D + M^T D)^{-1}e‖_∞.

Therefore,

  max_{d∈[0,1]^n} ‖(I − D + DM)^{-1}‖_1 = max { max_{1≤i≤n} u_i : u − Du + M^T Du = e, 0 ≤ d ≤ e }.   (2.8)

Let v = Du. Then from u ≥ 0 and d ∈ [0, 1]^n, we have

  0 ≤ v ≤ u = v − M^T v + e.

This implies v ∈ V. Hence we obtain

  max_{d∈[0,1]^n} ‖(I − D + DM)^{-1}‖_1 ≤ max_{v∈V} f(v).

Conversely, suppose that v* is a maximum solution of f(v) in V. We set u = v* − M^T v* + e and

  d_i = v*_i / u_i  if u_i ≠ 0,  and  d_i = 0  otherwise,  for all i ∈ N.

Then d ∈ [0, 1]^n and u − Du + M^T Du = e. This implies that u is a feasible point of the maximization problem (2.8). Thus,

  max_{d∈[0,1]^n} ‖(I − D + DM)^{-1}‖_1 ≥ max_{v∈V} f(v).

Furthermore, the feasible set V is convex and bounded, and the objective function f is convex. Thus, max_{v∈V} f(v) always has an optimal value. The proof is completed.

Now we show that the error bounds given in this paper are sharper than the Mathias-Pang error bounds.

Theorem 2.3  If M is a P-matrix, then for any x ∈ R^n, the following inequalities hold.

  (1/(1 + ‖M‖_∞)) ‖r(x)‖_∞   (Mathias-Pang [6])
    ≤ (1/max(1, ‖M‖_∞)) ‖r(x)‖_∞   (Cottle-Pang-Stone [4])
    = (1/max_{d∈[0,1]^n} ‖I − D + DM‖_∞) ‖r(x)‖_∞
    ≤ ‖x − x*‖_∞
    ≤ max_{d∈[0,1]^n} ‖(I − D + DM)^{-1}‖_∞ ‖r(x)‖_∞
    ≤ (max(1, ‖M‖_∞)/c(M)) ‖r(x)‖_∞
    = ((1 + ‖M‖_∞ − min(1, ‖M‖_∞))/c(M)) ‖r(x)‖_∞
    ≤ ((1 + ‖M‖_∞)/c(M)) ‖r(x)‖_∞   (Mathias-Pang [6]).

Proof  The first inequality is obvious. For the next equality, we set D* = diag(d*_1, ..., d*_n) to be an optimal point such that

  ‖I − D* + D*M‖_∞ = max_{d∈[0,1]^n} ‖I − D + DM‖_∞.

From M_ii > 0, we have

  ‖I − D* + D*M‖_∞ = max_{1≤i≤n} (1 − d*_i + d*_i M_ii + d*_i Σ_{j=1, j≠i}^n |M_ij|)
    = max_{1≤i≤n} (1 − d*_i + d*_i Σ_{j=1}^n |M_ij|)
    =: 1 − d*_{i_0} + d*_{i_0} Σ_{j=1}^n |M_{i_0 j}|.

Hence the value d*_{i_0} must be a boundary point of [0, 1]. Moreover, it is easy to find

  ‖I − D* + D*M‖_∞ = ‖M‖_∞ if ‖M‖_∞ > 1, and 1 otherwise,

which implies

  max(1, ‖M‖_∞) = max_{d∈[0,1]^n} ‖I − D + DM‖_∞.   (2.9)

The second and third inequalities follow from (2.3). For the fourth inequality, we first prove that for any nonsingular diagonal matrix D = diag(d) with d ∈ (0, 1]^n,

  ‖(I − D + DM)^{-1}‖_∞ ≤ max(1, ‖M‖_∞)/c(M).   (2.10)

Let H = (I − D + DM)^{-1} and let i_0 be the index such that Σ_{j=1}^n |H_{i_0 j}| = ‖(I − D + DM)^{-1}‖_∞. Define y = (I − D + DM)^{-1}p, where p = (sgn(H_{i_0 1}), ..., sgn(H_{i_0 n}))^T. Then p = (I − D + DM)y, My = D^{-1}p + y − D^{-1}y, and ‖(I − D + DM)^{-1}‖_∞ = ‖y‖_∞. Furthermore, by the definition of c(M), we have

  0 < c(M)‖y‖_∞^2 ≤ max_i y_i(My)_i = max_i y_i(p_i/d_i + y_i − y_i/d_i).

Let j be the index such that

  y_j(p_j/d_j + y_j − y_j/d_j) = max_i y_i(p_i/d_i + y_i − y_i/d_i).

(i) If |y_j| ≤ 1, then we have

  c(M)‖y‖_∞^2 ≤ |y_j| |(My)_j| ≤ ‖My‖_∞ ≤ ‖M‖_∞ ‖y‖_∞.

This implies

  ‖(I − D + DM)^{-1}‖_∞ = ‖y‖_∞ ≤ ‖M‖_∞/c(M).

(ii) If y_j > 1, then p_j + d_j y_j − y_j > 0 and p_j > y_j − d_j y_j ≥ 0. Thus p_j = 1 and d_j > 1 − y_j^{-1}. Hence, we obtain

  0 < (p_j + d_j y_j − y_j)/d_j ≤ 1.

This implies 0 < (My)_j ≤ 1. Thus c(M)‖y‖_∞^2 ≤ y_j ≤ ‖y‖_∞ and ‖(I − D + DM)^{-1}‖_∞ ≤ 1/c(M).

(iii) If y_j < −1, then p_j + d_j y_j − y_j < 0 and p_j < y_j − d_j y_j ≤ 0. Thus p_j = −1 and d_j > 1 + y_j^{-1}. Similarly, we obtain

  0 > (p_j + d_j y_j − y_j)/d_j ≥ −1.

This implies −1 ≤ (My)_j < 0. Thus c(M)‖y‖_∞^2 ≤ |y_j| ≤ ‖y‖_∞ and ‖(I − D + DM)^{-1}‖_∞ ≤ 1/c(M).

Combining the three cases, we claim that (2.10) holds for any nonsingular matrix D = diag(d) with d ∈ (0, 1]^n. Now we consider d ∈ [0, 1]^n. Let d_ε = min(d + εe, e), where ε ∈ (0, 1]. Then, we have

  ‖(I − D + DM)^{-1}‖_∞ = lim_{ε→0} ‖(I − D_ε + D_ε M)^{-1}‖_∞ ≤ max(1, ‖M‖_∞)/c(M).

Since D is arbitrarily chosen, we obtain the fourth inequality. The next equality and inequality are trivial.

Theorem 2.4  If M is an H-matrix with positive diagonals, then for any x, b ∈ R^n, b > 0, the following inequalities hold.

  ‖x − x*‖_∞ ≤ max_{d∈[0,1]^n} ‖(I − D + DM)^{-1}‖_∞ ‖r(x)‖_∞
    ≤ ‖M̃^{-1} max(Λ, I)‖_∞ ‖r(x)‖_∞
    ≤ (μ(b) − ‖M̃^{-1} min(Λ, I)‖_∞) ‖r(x)‖_∞
    ≤ μ(b) ‖r(x)‖_∞   (Mathias-Pang [6]).

In addition, if M is an M-matrix, then for any x ∈ R^n, the following inequalities hold.

  ‖x − x*‖_∞ ≤ ‖M^{-1} max(Λ, I)‖_∞ ‖r(x)‖_∞
    ≤ ((1 + ‖M‖_∞)/c(M) − ‖M^{-1} min(Λ, I)‖_∞) ‖r(x)‖_∞
    ≤ ((1 + ‖M‖_∞)/c(M)) ‖r(x)‖_∞   (Mathias-Pang [6]).

Proof  We first consider that M is an H-matrix with positive diagonals. The first and second inequalities follow from (2.3) and Theorem 2.1. Now we show the third inequality. For any b ∈ R^n, b > 0, let b_0 = min_i b_i. Then b ≥ b_0 e, and

  M̃^{-1}b ≥ M̃^{-1}(b_0 e) = b_0 M̃^{-1}e.

Moreover, for every j ∈ N,

  ((M̃^{-1}b)_j)^2 ≥ (M̃^{-1}b)_j b_0 (M̃^{-1}e)_j ≥ (min_{1≤i≤n} (M̃^{-1}b)_i)(min_{1≤i≤n} b_i)(M̃^{-1}e)_j.

Hence from ‖M̃^{-1}e‖_∞ = ‖M̃^{-1}‖_∞, we obtain

  (max_j (M̃^{-1}b)_j)^2 ≥ (min_{1≤i≤n} (M̃^{-1}b)_i)(min_{1≤i≤n} b_i) ‖M̃^{-1}‖_∞.

Therefore, from the inequalities

  1 + ‖M‖_∞ ≥ ‖I + |M|‖_∞ ≥ ‖I + Λ‖_∞ ≥ ‖max(Λ, I)‖_∞ + ‖min(Λ, I)‖_∞,

we find

  μ(b) = (1 + ‖M‖_∞)/c(b) ≥ ‖M̃^{-1}‖_∞ (1 + ‖M‖_∞)
    ≥ ‖M̃^{-1}‖_∞ (‖max(Λ, I)‖_∞ + ‖min(Λ, I)‖_∞)
    ≥ ‖M̃^{-1} max(Λ, I)‖_∞ + ‖M̃^{-1} min(Λ, I)‖_∞.

Now, we consider that M is an M-matrix. Let w with ‖w‖_∞ = 1 be such that

  ‖M^{-1}w‖_∞ = max_{‖y‖_∞ = 1} ‖M^{-1}y‖_∞ = ‖M^{-1}‖_∞.

From the definition of c(M), we have

  c(M) ‖M^{-1}‖_∞^2 ≤ max_{1≤i≤n} (M^{-1}w)_i (MM^{-1}w)_i ≤ ‖M^{-1}‖_∞.

By the similar argument as above, we find

  (1 + ‖M‖_∞)/c(M) ≥ ‖M^{-1}‖_∞ (1 + ‖M‖_∞) ≥ ‖M^{-1} max(Λ, I)‖_∞ + ‖M^{-1} min(Λ, I)‖_∞.

Applying Theorem 2.1, we obtain the following relative error bounds.

Corollary 2.1  Suppose M is an H-matrix with positive diagonals. For any x ∈ R^n, we have

  ‖r(x)‖ / ((1 + ‖M‖) ‖M̃^{-1} max(Λ, I)‖ ‖(−q)_+‖) ≤ ‖x − x*‖/‖x*‖ ≤ ‖M‖ ‖M̃^{-1} max(Λ, I)‖ ‖r(x)‖ / ‖(−q)_+‖.

Proof  Set x = 0 in (2.3). From Theorem 2.1, we get

  ‖x*‖ ≤ ‖M̃^{-1} max(Λ, I)‖ ‖r(0)‖ = ‖M̃^{-1} max(Λ, I)‖ ‖(−q)_+‖.

Moreover, from Mx* + q ≥ 0, we deduce (−q)_+ ≤ (Mx*)_+ ≤ |Mx*|. This implies ‖(−q)_+‖ ≤ ‖Mx*‖ and ‖(−q)_+‖ ≤ ‖M‖ ‖x*‖. Combining (2.3) with these bounds for ‖x − x*‖ and ‖x*‖, we obtain the desired error bounds.

Remark 2.  Note that for M being an H-matrix with positive diagonals, x* solves LCP(Λ^{-1}M, Λ^{-1}q) if and only if x* solves LCP(M, q). Let r̂(x) = min(Λ^{-1}Mx + Λ^{-1}q, x). From (2.3) and

  ‖I − D + DΛ^{-1}M‖ = ‖I − D(I − Λ^{-1}M)‖ ≤ ‖Λ^{-1}M‖,

we have

  ‖r̂(x)‖ / ‖Λ^{-1}M‖ ≤ ‖x − x*‖ ≤ ‖M̃^{-1}Λ‖ ‖r̂(x)‖,  for every x ∈ R^n.

Moreover, from ‖Λ^{-1}M‖_p = ‖ |Λ^{-1}M| ‖_p with p = 1 or p = ∞, and writing cond_p(Λ^{-1}M) := ‖Λ^{-1}M‖_p ‖M̃^{-1}Λ‖_p, we obtain

  ‖r̂(x)‖_p / (cond_p(Λ^{-1}M) ‖(−Λ^{-1}q)_+‖_p) ≤ ‖x − x*‖_p / ‖x*‖_p ≤ cond_p(Λ^{-1}M) ‖r̂(x)‖_p / ‖(−Λ^{-1}q)_+‖_p,   (2.11)

for p = 1 or p = ∞.
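To make the recipe of Remark 1 and Theorem 2.1 concrete, here is a small Python sketch (NumPy assumed; the function names are ours, not from the paper) that computes the coefficient ‖M̃^{-1} max(Λ, I)‖_∞ by solving the linear system min(Λ^{-1}, I) M̃ x = e, and then evaluates the resulting bound ‖x − x*‖_∞ ≤ ‖M̃^{-1} max(Λ, I)‖_∞ ‖r(x)‖_∞ for a given approximate solution x.

    import numpy as np

    def new_coefficient_inf(M):
        # ||comparison(M)^{-1} max(Lambda, I)||_inf, computed as in Remark 1:
        # solve min(Lambda^{-1}, I) * comparison(M) * x = e and return ||x||_inf
        n = M.shape[0]
        lam = np.diag(M)                    # positive diagonal of M
        C = -np.abs(M)
        np.fill_diagonal(C, lam)            # comparison matrix of M
        scale = np.minimum(1.0 / lam, 1.0)  # diagonal of min(Lambda^{-1}, I)
        x = np.linalg.solve(scale[:, None] * C, np.ones(n))
        return np.linalg.norm(x, np.inf)

    def error_bound_inf(M, q, x):
        # upper bound on ||x - x*||_inf from (2.3) and Theorem 2.1
        r = np.minimum(x, M @ x + q)        # natural residual r(x)
        return new_coefficient_inf(M) * np.linalg.norm(r, np.inf)

    # hypothetical 2x2 instance for which the bound is attained exactly
    M = np.array([[1.0, -1.0], [0.0, 1.0]])
    q = np.array([1.0, -1.0])
    x = np.array([4.0, 3.0])
    print(error_bound_inf(M, q, x))         # 4.0, and ||x - x*||_inf = 4 for x* = (0, 1)

For the 1-norm one would instead solve M̃^T min(Λ^{-1}, I) x = e, as noted in Remark 1.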

3 Numerical examples

In this section, we first use examples to illustrate the error bounds derived in the last section. Next we report numerical results obtained by using Matlab 6.1 on an IBM PC.

Example 3.1  In [12], Schäfer considered an application of P-matrix linear complementarity problems, which arises from computing an interval enclosure of the solution set of an interval linear system [10]. The following P-matrix is from [12]:

  M = [  1   4
        −5   7 ].

This matrix is not an H-matrix. It is not difficult to find that

  max_{d∈[0,1]^2} ‖(I − D + DM)^{-1}‖_∞ = max_{d∈[0,1]^2} (1 + 6d_2 + 4d_1)/(1 + 6d_2 + 20 d_1 d_2) = 5,

and

  (1 + ‖M‖_∞)/c(M) ≥ 13/min_i(M_ii) = 13.

Example 3.2  Consider the following H-matrix with positive diagonals (Example 5.10.4 in [4]):

  M = [ 1   t
        0   1 ],

where |t| ≥ 1. It is easy to show that c(M) ≤ 1/t^2. Hence the error bound (1.1) has

  (1 + ‖M‖_∞)/c(M) ≥ t^2(2 + |t|) = O(|t|^3).

For b = e, we have

  c(b) = (min_i b_i)(min_i (M̃^{-1}b)_i)/(max_j (M̃^{-1}b)_j)^2 = 1/(1 + |t|)^2

and

  μ(b) = (1 + ‖M‖_∞)/c(b) = (1 + |t|)^2(2 + |t|) = O(|t|^3).

The error coefficients given in the last section satisfy, for p = 1, ∞,

  max_{d∈[0,1]^2} ‖(I − D(I − M))^{-1}‖_p = max_{d_1∈[0,1]} (1 + d_1|t|) = ‖M̃^{-1} max(I, Λ)‖_p = 1 + |t|.

Hence, the new error bounds are much smaller than the Mathias-Pang error bounds, especially when |t| is large. Moreover, we can show that the new error bounds are tight. Let t = −1 and q = (1, −1)^T. Then the LCP(M, q) has a unique solution x* = (0, 1)^T. For x = (4, 3)^T,

  ‖x − x*‖_∞ = 4,  ‖M̃^{-1} max(I, Λ)‖_∞ = 2,  ‖r(x)‖_∞ = 2.

Hence (1.4) and (1.5) hold with equality. For M = I, (1.6), (1.7) with b = e, and (1.8) hold with equality.

Now we report some numerical results to compare these error coefficients.

Example 3.3  Let M be the tridiagonal matrix

  M = tridiag(a, b + α sin(i/n), c),  i = 1, ..., n,

that is, the diagonal entries are b + α sin(1/n), b + α sin(2/n), ..., b + α sin(1), the sub-diagonal entries are a, and the super-diagonal entries are c. For b = 2, a = c = −1, α = 0, the LCP(M, q) with various q in an interval vector arises from the finite difference method for free boundary problems [11].
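The test matrix of Example 3.3 and the coefficient ‖M̃^{-1} max(Λ, I)‖_1 reported in Table 1 below can be generated with a few lines of Python (a sketch assuming NumPy; it uses the 1-norm recipe of Remark 1 and only illustrates one parameter row).

    import numpy as np

    def example_matrix(n, a, b, c, alpha):
        # tridiagonal matrix of Example 3.3: diagonal b + alpha*sin(i/n),
        # sub-diagonal a, super-diagonal c
        i = np.arange(1, n + 1)
        M = np.diag(b + alpha * np.sin(i / n))
        M += np.diag(a * np.ones(n - 1), -1)
        M += np.diag(c * np.ones(n - 1), 1)
        return M

    def new_coefficient_1(M):
        # ||comparison(M)^{-1} max(Lambda, I)||_1 via Remark 1:
        # solve comparison(M)^T min(Lambda^{-1}, I) x = e and return ||x||_inf
        n = M.shape[0]
        lam = np.diag(M)
        C = -np.abs(M)
        np.fill_diagonal(C, lam)
        scale = np.minimum(1.0 / lam, 1.0)
        x = np.linalg.solve(C.T * scale[None, :], np.ones(n))
        return np.linalg.norm(x, np.inf)

    M = example_matrix(400, a=-1.0, b=2.0, c=-1.0, alpha=0.0)  # first parameter row
    print(new_coefficient_1(M))                                # about 4.02E4, cf. Table 1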

Table 1  Example 3.3, n = 400, κ_1 = max_{d∈[0,1]^n} ‖(I − D + DM)^{-1}‖_1

  α    a      b     c      κ_1         ‖M̃^{-1} max(Λ, I)‖_1    μ(e)
  0    -1     2     -1     2.0100E4    4.0200E4                 2.0201E7
  2    -1.5   2     -0.5   3.9920E2    7.8832E2                 1.5536E6
  2    -1.5   2.2   -0.5   6.3910E0    1.0999E1                 3.6557E2
  1    -1.5   3.0   -1.5   2.4399E1    7.3936E1                 1.8060E4

Acknowledgements  The authors are grateful to the associate editor and two anonymous referees for their helpful comments.

References

1. Alefeld, G.E., Chen, X., Potra, F.A.: Numerical validation of solutions of linear complementarity problems, Numer. Math. 83, 1-23 (1999)
2. Berman, A., Plemmons, R.J.: Nonnegative Matrices in the Mathematical Sciences, SIAM Publisher, Philadelphia (1994)
3. Chen, B.: Error bounds for R_0-type and monotone nonlinear complementarity problems, J. Optim. Theory Appl. 108, 297-316 (2001)
4. Cottle, R.W., Pang, J.-S., Stone, R.E.: The Linear Complementarity Problem, Academic Press, Boston, MA (1992)
5. Ferris, M.C., Mangasarian, O.L.: Error bounds and strong upper semicontinuity for monotone affine variational inequalities, Ann. Oper. Res. 47, 293-305 (1993)
6. Mathias, R., Pang, J.-S.: Error bounds for the linear complementarity problem with a P-matrix, Linear Algebra Appl. 132, 123-136 (1990)
7. Gabriel, S.A., Moré, J.J.: Smoothing of mixed complementarity problems. In: Ferris, M.C., Pang, J.-S. (eds.) Complementarity and Variational Problems: State of the Art, 105-116, SIAM Publications, Philadelphia, PA (1997)
8. Mangasarian, O.L., Ren, J.: New improved error bounds for the linear complementarity problem, Math. Programming 66, 241-257 (1994)
9. Pang, J.-S.: Error bounds in mathematical programming, Math. Programming 79, 299-332 (1997)
10. Rohn, J.: Systems of linear interval equations, Linear Algebra Appl. 126, 39-78 (1989)
11. Schäfer, U.: An enclosure method for free boundary problems based on a linear complementarity problem with interval data, Numer. Funct. Anal. Optim. 22, 991-1011 (2001)
12. Schäfer, U.: A linear complementarity problem with a P-matrix, SIAM Review 46, 189-201 (2004)
13. Xiu, N., Zhang, J.: A characteristic quantity of P-matrices, Appl. Math. Lett. 15, 41-46 (2002)