Cheap and Tight Bounds: The Recent Result by E. Hansen Can Be Made More Efficient


Interval Computations No 4, 1993

Jiří Rohn

Improving the recent result by Eldon Hansen, we give cheap and tight bounds on the solution of a linear interval system as well as on the inverse interval matrix.

© Jiří Rohn, 1993

1 Introduction

It follows from the general theory [2] that up to $2^n$ systems of linear equations need to be solved to compute the exact bounds on the solution of a system of linear interval equations in $n$ unknowns. Eldon Hansen has recently published in [1] a remarkable result showing that this number can be reduced to $2n$ for linear interval systems whose midpoint matrix is the unit matrix. In this paper we prove that Hansen's result can be reformulated in such a way that only one matrix inversion is needed, and we apply this result to bounding solutions of linear interval systems and to inverting interval matrices. The bounds on the inverse interval matrix are shown to be better in general than the classical ones derived via Neumann series.

2 Hansen's result improved

Hansen considered in [1] a linear interval system of the form

    [I - \Delta, I + \Delta]x = [b_c - \delta, b_c + \delta]    (1)

where $I$ is the $n \times n$ unit matrix, $\Delta$ is a nonnegative $n \times n$ matrix, $b_c, \delta \in R^n$, $\delta \ge 0$, and

    [I - \Delta, I + \Delta] = \{A;\ |A - I| \le \Delta\},
    [b_c - \delta, b_c + \delta] = \{b;\ |b - b_c| \le \delta\}.

He used diagonal dominance as a sufficient regularity condition; however, it follows from assertion (C3) of Theorem 5.1 in [2] that $[I - \Delta, I + \Delta]$ is regular (i.e., all the matrices contained therein are nonsingular) if and only if

    \varrho(\Delta) < 1    (2)

holds ($\varrho$ is the spectral radius). Let us note that this condition implies the existence and nonnegativity of the matrix

    M = (I - \Delta)^{-1} = (m_{ij}).

As is well known, the exact bounds on the solution of (1) are defined by

    \underline{x}_i = \min_{x \in X} x_i, \qquad \overline{x}_i = \max_{x \in X} x_i

($i = 1, \ldots, n$), where $X$ is the so-called solution set:

    X = \{x;\ Ax = b \text{ for some } A \in [I - \Delta, I + \Delta],\ b \in [b_c - \delta, b_c + \delta]\}.

Hansen showed in [1] that these quantities can be computed by solving only $2n$ systems of linear equations sharing the same coefficient matrix. We shall prove here a reformulation of his result in which only one matrix inversion is needed:

Theorem 1. Let (2) hold. Then for each $i \in \{1, \ldots, n\}$ we have

    \underline{x}_i = \min\{\check{x}_i, \nu_i \check{x}_i\},
    \overline{x}_i = \max\{\tilde{x}_i, \nu_i \tilde{x}_i\}

where

    \check{x}_i = -x^*_i + m_{ii}(b_c + |b_c|)_i,
    \tilde{x}_i = x^*_i + m_{ii}(b_c - |b_c|)_i,
    x^*_i = (M(|b_c| + \delta))_i

and

    \nu_i = \frac{1}{2m_{ii} - 1} \in (0, 1].

Proof. Let $i \in \{1, \ldots, n\}$ be fixed. We shall prove: 1) $x_i \le \max\{\tilde{x}_i, \nu_i \tilde{x}_i\}$ for each $x \in X$; 2) $x_i = \tilde{x}_i$, $x_i = \nu_i \tilde{x}_i$ for some $x \in X$; this will prove that $\overline{x}_i = \max\{\tilde{x}_i, \nu_i \tilde{x}_i\}$; 3) the formula for $\underline{x}_i$, using the result for $\overline{x}_i$.

1) First notice that for $M = (I - \Delta)^{-1}$ we have $M = \Delta M + I \ge I$, hence $m_{ii} \ge 1$, implying $2m_{ii} - 1 \ge 1$ and $\nu_i \in (0, 1]$. Define a diagonal matrix $D$ by

    D_{jj} = \begin{cases} \ \ 1 & \text{if } j \ne i \text{ and } (b_c)_j \ge 0, \\ -1 & \text{if } j \ne i \text{ and } (b_c)_j < 0, \\ \ \ 1 & \text{if } j = i \end{cases}

($j = 1, \ldots, n$), and let

    \tilde{b} = Db_c + \delta.

Then we have

    \tilde{x}_i = x^*_i + m_{ii}(b_c - |b_c|)_i = (M\tilde{b})_i.

Now, let $x \in X$, so that $Ax = b$ for some $A \in [I - \Delta, I + \Delta]$ and $b \in [b_c - \delta, b_c + \delta]$. Put

    x' = (|x_1|, \ldots, |x_{i-1}|, x_i, |x_{i+1}|, \ldots, |x_n|)^T.

We shall prove that $x'$ satisfies the inequality

    M(x' - |x|) + |x| \le M\tilde{b}.    (3)

In fact,

    x'_i = x_i = b_i + ((I - A)x)_i \le (b_c + \delta)_i + (\Delta|x|)_i = (\tilde{b} + \Delta|x|)_i,

and if $j \ne i$, then

    x'_j = |x_j| \le |b_j| + |((I - A)x)_j| \le |(b_c)_j| + \delta_j + (\Delta|x|)_j = (\tilde{b} + \Delta|x|)_j,

which together gives $x' \le \tilde{b} + \Delta|x|$, and premultiplying this inequality by the nonnegative matrix $M$ yields

    Mx' \le M\tilde{b} + M\Delta|x| = M\tilde{b} + (M - I)|x|,

which implies (3). Now, if $x_i \ge 0$, then $x' = |x|$ and from (3) we have

    x_i = |x_i| \le (M\tilde{b})_i = \tilde{x}_i,

and if $x_i < 0$, then the $i$-th component of (3) (where $x' - |x| = 2x_i e_i$) gives

    (2m_{ii} - 1)x_i \le (M\tilde{b})_i = \tilde{x}_i,

which implies $x_i \le \nu_i \tilde{x}_i$; hence in both cases we have

    x_i \le \max\{\tilde{x}_i, \nu_i \tilde{x}_i\}.

2) Put

    x = DM\tilde{b}, \qquad \hat{x} = DM(\tilde{b} - 2\nu_i \tilde{x}_i \Delta e_i),

where $e_i$ is the $i$-th column of $I$. We shall prove that $x$ and $\hat{x}$ belong to $X$. Since

    (I - D\Delta D)x = DM\tilde{b} - D(M - I)\tilde{b} = D\tilde{b} = b_c + D\delta,

we see that $x$ satisfies $(I - D\Delta D)x = b_c + D\delta$, where $I - D\Delta D \in [I - \Delta, I + \Delta]$ and $b_c + D\delta \in [b_c - \delta, b_c + \delta]$, which proves that $x \in X$. Furthermore, define a diagonal matrix $D'$ by $D'_{ii} = -1$ and

$D'_{jj} = D_{jj}$ otherwise. Then

    (I - D\Delta D')DM = DM - D\Delta(I - 2e_i e_i^T)M = DM - D(M - I) + 2D\Delta e_i e_i^T M = D + 2D\Delta e_i e_i^T M,

hence

    (I - D\Delta D')\hat{x} = (D + 2D\Delta e_i e_i^T M)(\tilde{b} - 2\nu_i \tilde{x}_i \Delta e_i) = D\tilde{b} + 2\tilde{x}_i D\Delta e_i(-\nu_i + 1 - 2\nu_i(m_{ii} - 1)) = D\tilde{b} = b_c + D\delta

(since $\nu_i(2m_{ii} - 1) = 1$), which again gives that $\hat{x} \in X$. Now, since $e_i^T D = e_i^T$, we have

    x_i = e_i^T DM\tilde{b} = \tilde{x}_i, \qquad \hat{x}_i = \tilde{x}_i - 2\nu_i \tilde{x}_i(m_{ii} - 1) = \nu_i \tilde{x}_i,

which in conjunction with 1) proves that $\overline{x}_i = \max\{\tilde{x}_i, \nu_i \tilde{x}_i\}$.

3) To prove the formula for $\underline{x}_i$, consider the linear interval system

    [I - \Delta, I + \Delta]x = [-b_c - \delta, -b_c + \delta]

with the solution set $X_0 = -X$. Then, applying the formula for the upper bound to it, we have

    \underline{x}_i = \min_{x \in X} x_i = -\max_{x \in X_0} x_i = -\max\{\tilde{x}'_i, \nu_i \tilde{x}'_i\},

where (since $|-b_c| = |b_c|$, the quantity $x^*$ is unchanged)

    \tilde{x}'_i = x^*_i + m_{ii}(-b_c - |b_c|)_i = -(-x^*_i + m_{ii}(b_c + |b_c|)_i) = -\check{x}_i,

which finally gives

    \underline{x}_i = -\max\{-\check{x}_i, -\nu_i \check{x}_i\} = \min\{\check{x}_i, \nu_i \check{x}_i\}.

3 Solving linear interval systems

Consider a linear interval system

    [A_c - \Delta, A_c + \Delta]x = [b_c - \delta, b_c + \delta]    (4)

and its solution set

    X_0 = \{x;\ Ax = b \text{ for some } A \in [A_c - \Delta, A_c + \Delta],\ b \in [b_c - \delta, b_c + \delta]\}.

Let $R$ be an arbitrary $n \times n$ matrix. If $Ax = b$ for some $A \in [A_c - \Delta, A_c + \Delta]$ and $b \in [b_c - \delta, b_c + \delta]$, then we have $RAx = Rb$ and

    |RA - I| = |RA_c - I + R(A - A_c)| \le G_R

and $|Rb - Rb_c| \le |R|\delta$, where we have denoted

    G_R = |RA_c - I| + |R|\Delta.

Thus we can see that the solution set $X_0$ of (4) is contained in the solution set $X$ of the system

    [I - G_R, I + G_R]x = [Rb_c - |R|\delta, Rb_c + |R|\delta]    (5)

which is of the form (1). Now, if the condition

    \varrho(G_R) < 1    (6)

is satisfied, then we can apply Theorem 1 to the system (5) to obtain the exact bounds $\underline{x}_i$, $\overline{x}_i$ ($i = 1, \ldots, n$) on $X$. Since $X_0 \subseteq X$, this implies that

    \underline{x}_i \le x_i \le \overline{x}_i \qquad (i = 1, \ldots, n)

holds for each $x \in X_0$. In this way we have obtained an interval enclosure of the solution set $X_0$ of (4). This enclosure is generally not sharp, but can be expected to be very tight if the radii $\Delta$ and $\delta$ are narrow; cf. Neumaier [3] for a detailed discussion.

The procedure described is performable if we can find a matrix $R$ satisfying (6). It follows from Theorem 4.1.2 in [3] that such a matrix exists if and only if $[A_c - \Delta, A_c + \Delta]$ is strongly regular (i.e., if $A_c$ is nonsingular and $\varrho(|A_c^{-1}|\Delta) < 1$); if this is the case, then $R := A_c^{-1}$ has the required property. Therefore, for practical purposes it is recommendable to set $R$ equal to the computed value of $A_c^{-1}$.

4 Inverting interval matrices

For an interval matrix $[I - \Delta, I + \Delta]$, consider its interval inverse $[\underline{B}, \overline{B}]$ defined by

    \underline{B}_{ij} = \min\{(A^{-1})_{ij};\ A \in [I - \Delta, I + \Delta]\},
    \overline{B}_{ij} = \max\{(A^{-1})_{ij};\ A \in [I - \Delta, I + \Delta]\}

($i, j = 1, \ldots, n$). Applying Theorem 1 to the systems $[I - \Delta, I + \Delta]x = [e_j, e_j]$, where $e_j$ is the $j$-th column of $I$ ($j = 1, \ldots, n$), we obtain this explicit form of the inverse (where, as before, $M = (I - \Delta)^{-1}$):

Theorem 2. Let (2) hold. Then we have

    \overline{B}_{ij} = m_{ij},
    \underline{B}_{ij} = \begin{cases} -m_{ij} & \text{if } i \ne j, \\ \dfrac{m_{ii}}{2m_{ii} - 1} & \text{if } i = j \end{cases}

($i, j = 1, \ldots, n$).

It may seem surprising that $\underline{B}_{ii} > \frac{1}{2}$ (since $m_{ii} \ge 1$). This, however, is a consequence of a more general result ([2], Thm. 5.1, (C5)). For a general square interval matrix $[A_c - \Delta, A_c + \Delta]$, employing Theorem 2 for the preconditioned matrix as in Section 3, we obtain the following result (where we employ matrices $R$ and $K$ to avoid the use of exact inverses $A_c^{-1}$ and $(I - G_R)^{-1}$):

Theorem 3. For a given interval matrix $[A_c - \Delta, A_c + \Delta]$, let $K = (k_{ij}) \ge 0$ and $R$ be any matrices satisfying

    KG_R + I \le K    (7)

where

    G_R = |RA_c - I| + |R|\Delta.

Then for each $A \in [A_c - \Delta, A_c + \Delta]$ we have

    |A^{-1} - TR| \le (K - T)|R|    (8)

where $T$ is the diagonal matrix with diagonal entries

    T_{ii} = \frac{k_{ii}^2}{2k_{ii} - 1} \qquad (i = 1, \ldots, n).

Proof. From (7) we have $K \ge KG_R + I \ge I$, and postmultiplying $I \le K$ by the nonnegative matrix $G_R$ we obtain $G_R + I \le KG_R + I \le K$; by induction

    \sum_{j=0}^{l} G_R^j \le K

for each $l \ge 0$, hence $\varrho(G_R) < 1$ and $(I - G_R)^{-1} = \sum_{j=0}^{\infty} G_R^j \le K$. Now, for each $A \in [A_c - \Delta, A_c + \Delta]$ we have $|RA - I| = |RA_c - I + R(A - A_c)| \le G_R$, hence $RA \in [I - G_R, I + G_R]$, which implies that $A$ is nonsingular and

    \underline{B} \le A^{-1}R^{-1} \le \overline{B}

where $\underline{B}$ and $\overline{B}$ are as in Theorem 2, with $M := (I - G_R)^{-1}$. Define matrices $\underline{\tilde{B}}$ and $\overline{\tilde{B}}$ by $\overline{\tilde{B}} = K$ and

    \underline{\tilde{B}}_{ij} = \begin{cases} -k_{ij} & \text{if } i \ne j, \\ \dfrac{k_{ii}}{2k_{ii} - 1} & \text{if } i = j \end{cases}

($i, j = 1, \ldots, n$); then from $M \le K$ we obtain $\overline{B} \le \overline{\tilde{B}}$ and $\underline{\tilde{B}} \le \underline{B}$ (the latter since $t/(2t - 1)$ is nonincreasing for $t \ge 1$), hence

    \underline{\tilde{B}} \le A^{-1}R^{-1} \le \overline{\tilde{B}},

which implies that

    |A^{-1}R^{-1} - T| = \left|A^{-1}R^{-1} - \tfrac{1}{2}(\overline{\tilde{B}} + \underline{\tilde{B}})\right| \le \tfrac{1}{2}(\overline{\tilde{B}} - \underline{\tilde{B}}) = K - T

and consequently

    |A^{-1} - TR| = |(A^{-1}R^{-1} - T)R| \le |A^{-1}R^{-1} - T|\,|R| \le (K - T)|R|.

To explain what is new in this result, let us notice that the classical argument using Neumann series (see e.g. [2], proof of Thm. 4.4) yields the estimate

    |A^{-1} - R| \le (K - I)|R|.    (9)

However, since

    T_{ii} - 1 = \frac{(k_{ii} - 1)^2}{2k_{ii} - 1} \ge 0

for each $i$, we have $T \ge I$ and hence $(K - T)|R| \le (K - I)|R|$. This shows that (8) is at least as good as (9); moreover, for each $i, j$ with $k_{ii} > 1$ and $R_{ij} \ne 0$ the estimate (8) gives a result which is better than (9) by the amount of

    \frac{(k_{ii} - 1)^2}{2k_{ii} - 1}\,|R_{ij}|.

Thus for the particular choice $R := A_c^{-1}$ and $K := (I - |A_c^{-1}|\Delta)^{-1}$, the estimate (8) gives a better result than (9) for each $i, j$ such that $(|A_c^{-1}|\Delta)_{ii} > 0$ and $(A_c^{-1})_{ij} \ne 0$. Let us note that Herzberger and Bethke proved in [4] that two well-known methods for bounding the inverse interval matrix cannot improve on the bound (9).
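To make the formulas concrete, the computations of Theorems 1–3 can be sketched in NumPy as follows. This is a numerical illustration, not part of the paper: the function names and the small 2×2 test data are mine, and ordinary floating point is used throughout (a rigorous implementation would need directed rounding to make the enclosures verified):

```python
# Sketch of Theorems 1-3 in plain floating point (no directed rounding,
# so the bounds are approximate rather than rigorously verified).
import itertools
import numpy as np

def hbr_bounds(Delta, bc, delta):
    """Theorem 1: exact bounds for [I - Delta, I + Delta] x = [bc - delta, bc + delta]."""
    n = len(bc)
    M = np.linalg.inv(np.eye(n) - Delta)      # M = (I - Delta)^{-1} = (m_ij)
    assert np.all(M >= -1e-12), "rho(Delta) < 1 violated (M must be nonnegative)"
    d = np.diag(M)                            # m_ii >= 1
    nu = 1.0 / (2.0 * d - 1.0)                # nu_i in (0, 1]
    xstar = M @ (np.abs(bc) + delta)          # x*_i = (M(|bc| + delta))_i
    xtil = xstar + d * (bc - np.abs(bc))      # tilde x_i
    xchk = -xstar + d * (bc + np.abs(bc))     # check x_i
    return np.minimum(xchk, nu * xchk), np.maximum(xtil, nu * xtil)

def enclose_solution(Ac, Delta, bc, delta):
    """Section 3: precondition with R = Ac^{-1}, then apply Theorem 1."""
    R = np.linalg.inv(Ac)
    G = np.abs(R @ Ac - np.eye(len(bc))) + np.abs(R) @ Delta   # G_R
    assert max(abs(np.linalg.eigvals(G))) < 1, "rho(G_R) < 1 required"
    return hbr_bounds(G, R @ bc, np.abs(R) @ delta)

def inverse_enclosure(Ac, Delta):
    """Theorem 3 with R = Ac^{-1}, K = (I - |R| Delta)^{-1}: |A^{-1} - T R| <= (K - T)|R|."""
    n = len(Ac)
    R = np.linalg.inv(Ac)
    K = np.linalg.inv(np.eye(n) - np.abs(R) @ Delta)
    k = np.diag(K)
    T = np.diag(k * k / (2.0 * k - 1.0))      # T_ii = k_ii^2 / (2 k_ii - 1)
    return T @ R, (K - T) @ np.abs(R)         # center, componentwise radius

# Small example: [Ac - Delta, Ac + Delta] x = [bc - delta, bc + delta].
Ac = np.array([[4.0, 1.0], [1.0, 3.0]])
Delta = np.full((2, 2), 0.1)
bc = np.array([1.0, -2.0])
delta = np.array([0.2, 0.2])

lo, hi = enclose_solution(Ac, Delta, bc, delta)
center, radius = inverse_enclosure(Ac, Delta)

# Brute-force check over vertex systems: every vertex solution lies in
# [lo, hi] and every vertex inverse lies in the enclosure (8).
for sA in itertools.product((-1.0, 1.0), repeat=4):
    A = Ac + Delta * np.reshape(sA, (2, 2))
    assert np.all(np.abs(np.linalg.inv(A) - center) <= radius + 1e-10)
    for sb in itertools.product((-1.0, 1.0), repeat=2):
        x = np.linalg.solve(A, bc + delta * np.array(sb))
        assert np.all(lo - 1e-10 <= x) and np.all(x <= hi + 1e-10)
```

The brute-force loop over vertex systems merely illustrates the enclosures; it is not how the bounds are meant to be used, since Theorem 1 exists precisely to avoid that kind of exponential enumeration.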

Acknowledgements

I wish to thank Prof. Arnold Neumaier for stimulating e-mail discussions of Hansen's result, and two anonymous referees for their helpful comments.

References

[1] Hansen, E. R. Bounding the solution of interval linear equations. SIAM J. Numer. Anal. 29 (1992), pp. 1493–1503.

[2] Rohn, J. Systems of linear interval equations. Lin. Alg. Appls. 126 (1989), pp. 39–78.

[3] Neumaier, A. Interval methods for systems of equations. Cambridge University Press, Cambridge, 1990.

[4] Herzberger, J. and Bethke, D. On two algorithms for bounding the inverse of an interval matrix. Interval Computations 1 (1991), pp. 44–53.

Faculty of Mathematics and Physics
Charles University
Malostranské nám. 25
11800 Prague
Czech Republic