Componentwise Determination of the Interval Hull Solution for Linear Interval Parameter Systems


L. V. Kolev
Dept. of Theoretical Electrotechnics, Faculty of Automatics, Technical University of Sofia, 1000 Sofia, Bulgaria
lkolev@tu-sofia.bg

Submitted: January 22, 2013; Final revision: February 26, 2014; Accepted: March 4, 2014.

Abstract. In this paper, the problem of determining the interval hull (IH) solution x* of a linear interval parameter system A(p)x = b(p), p ∈ p, is revisited. A new iterative method for computing x* is suggested, which is based on individually finding each interval component x*_k of x*. Each component x*_k = [x̲*_k, x̄*_k] is in turn found by separately determining the lower end-point x̲*_k and the upper end-point x̄*_k of x*_k, respectively. The lower end-point x̲*_k is located by an iterative method which, at each iteration, makes use of a respective outer solution x and an upper bound x^u_k on x̲*_k. The upper end-point x̄*_k is located in a similar manner using relevant outer solutions x and lower bounds x^l_k on x̄*_k. In both cases, appropriate modified monotonicity conditions are checked and used. Such an approach results in better performance compared to similar methods employing standard monotonicity conditions. The method is capable of determining the solution x* if the modified monotonicity conditions are satisfied for all components of p; otherwise, it only provides a two-sided enclosure [x̲_k, x^u_k] ([x^l_k, x̄_k]) of x̲*_k (x̄*_k). The method is extended to a more general setting where the problem is to determine the IH y* of an output variable vector y which depends on x and p, p ∈ p. A numerical example illustrating the new method is also given.

Keywords: linear interval parameter systems, interval hull solution, modified monotonicity conditions
AMS subject classifications: 65L15, 65G40

1 Introduction

Let p = (p_1, ..., p_m) be a real m-dimensional vector belonging to a given interval vector p = (p_1, ..., p_m). Also, let A(p) and b(p) be, respectively, a real rectangular (n_1 × n_2) matrix and an n_1-dimensional vector whose elements depend on p.

As is well known, a (real) linear interval parameter (LIP) system is defined as the family of linear algebraic systems

A(p)x = b(p), (1.1a)
a_ij(p) = a_ij(p_1, ..., p_m), b_i(p) = b_i(p_1, ..., p_m), (1.1b)

where a_ij(p) and b_i(p) are given functions from R^m to R and

p_µ ∈ p_µ, µ = 1, ..., m. (1.1c)

Systems of this type are encountered in many practical applications (e.g., [3, 5, 11, 13, 16, 18]). The united solution set of (1.1) is the collection of all point solutions of (1.1a), (1.1b) over p, i.e. the set

Σ(A(p), b(p), p) = {x : A(p)x = b(p), p ∈ p}.

This set has a rather complex form [1] even in the simplest case of affine (linear) functions

a_ij(p) = α_ij + Σ_{µ=1}^{m} α_ijµ p_µ, (1.2a)
b_i(p) = β_i + Σ_{µ=1}^{m} β_iµ p_µ, (1.2b)

(systems characterised by the linear parametric dependence (1.2) will be referred to as LPD systems) and n_1 = n_2. Therefore (under the assumption that Σ(A(p), b(p), p) is a bounded set), the following interval solutions to (1.1) will be considered in this paper:

(i) interval hull (IH) solution x*: the smallest interval vector containing Σ(A(p), b(p), p);

(ii) outer interval (OI) solution x: any interval vector enclosing x*, i.e. x ⊇ x*;

(iii) inner estimation of the hull (IEH) solution ζ: an interval vector such that ζ ⊆ x*.

Most of the known results are obtained for the case of determining OI solutions of square LIP systems (n_1 = n_2). Various iterative [2, 8, 10, 12, 14] ([2] being a special case of [14]) and direct [4, 24] methods for determining OI solutions associated with LPD systems have been suggested. Two different methods for treating nonlinear parametric dependency problems have been proposed in [7] and [14], respectively. The general case of n_1 ≠ n_2 has also been considered for the case of interval and parametric matrices (e.g., [15]). Methods for obtaining IEH solutions have been proposed in [5, 10, 25]. The latter solutions are also computed as a byproduct by the method of [14].

Determining the IH solution x* of an LIP system is an NP-hard problem even in the case of LPD systems. Thus, the solution x* can be obtained with a reasonable amount of computation only if certain restrictive requirements are additionally imposed. Such an approach for determining x* has been adopted by many authors (e.g., [5, 7, 17, 18, 21, 26]) where certain monotonicity conditions are to be fulfilled. In the earlier publications on the topic, the monotonicity conditions are defined over the whole initial domain p of the interval parameters. The idea to check monotonicity conditions valid for certain nested subdomains of p has been suggested, seemingly for the first time, in [5] (see also [6] and [7]). Independently of [5], the same idea was proposed slightly later in [17] but was implemented in a better way (accounting to a fuller extent for the interdependencies between all uncertain parameters involved, verified evaluation of the partial derivatives used).

The methods in [5, 7, 17] are iterative, which permits the monotonicity requirements to be reduced successively at each iteration. A global optimization method has been used in [27, 28] for approximating or computing x*, respectively.

In the present paper, a new iterative method for determining x* is suggested, which is an improvement over the methods of [5] and [17]. The idea behind the new method is to determine x* in a componentwise manner, computing separately the lower end-point x̲*_k and the upper end-point x̄*_k of each component x*_k. In computing x̲*_k, use is made of both the outer interval approximation and the k-th component ζ_k of the inner approximation ζ of x* at each iteration of the iterative process. Thus, appropriate modified monotonicity conditions are introduced which are less restrictive compared to the standard monotonicity conditions previously used. Therefore, the fulfilment of the modified monotonicity conditions speeds up the location of x̲*_k. The same approach is used for computing x̄*_k. The method is capable of determining the solution x* if the modified monotonicity conditions are satisfied for all components of p; otherwise, it only provides a two-sided enclosure of x̲*_k (x̄*_k).

It is also shown that the new method can be extended to the more general problem of determining the IH y* of an output variable vector y depending on both x and p, p ∈ p. In a less general form (treating a specific problem arising in mechanical engineering), a method for computing y*, based on the standard monotonicity approach, has already been considered in [13]; in the framework of an interval finite element formulation, methods for bounding y* can be found in [19] and [20].

The paper is organized as follows. The formulation of the problems considered and the basic approach to solving them are given in Section 2. The main results obtained are reported in the next section. In Section 4, several algorithms implementing the results from the previous section are presented. The new method is illustrated by way of a numerical example in Section 5. The paper ends with several concluding remarks.

2 Problem Formulation and Basic Approach

We shall distinguish between two forms of the IH solution problem: a standard form and a generalized form. The standard IH solution (SIHS) problem is formulated as follows: given the pair {A(p), b(p)} and the interval vector p, find the IH solution x* of (1.1).

The LIP system (1.1a) to (1.1c) defines implicitly a nonlinear mapping N : p ⊂ R^m → R^(n_1). Thus, the united solution set Σ(A(p), b(p), p) can be viewed as the image N(p) of p under the mapping N. Let □(Σ) denote the interval hull of Σ(A(p), b(p), p). Thus, the IH solution x* can be defined as follows:

x* = □(N(p)). (2.1)

To introduce the generalized IH solution (GIHS) problem, we need to define an additional mapping

y = f(x, p), (2.2)

where f : N(p) × p ⊂ R^(n_2+m) → R^n, 1 ≤ n ≤ n_2. It specifies the transformation of the state variable vector x and the input parameter vector p into an output variable vector y. Let Σ(A(p), b(p), f, p) denote the united solution set of (1.1) and (2.2). Thus

y* = □(Σ(A(p), b(p), f, p)). (2.3)

The generalized IH solution (GIHS) problem is formulated as follows: given the triple {A(p), b(p), f} and the interval vector p, find the corresponding IH solution y*.

On account of (1.1) and (2.2), the GIHS problem can be viewed as that of determining the interval hull of the image of p under the composition f∘N. A specific GIHS problem related to a given pair {A(p), b(p)} is actually defined by specifying the function f(x, p) in (2.2). In its most general form, the GIHS setting encompasses a large class of various problems depending on the type of f used. If f(x, p) is a nonlinear function, determining each end-point y̲*_k or ȳ*_k of the k-th component y*_k = [y̲*_k, ȳ*_k] of y* is a perturbed global optimization problem with linear equality constraints. If f(x, p) is linear in x, then the corresponding problem is a parametric linear programming problem. In many practical applications (e.g., [3, problems 3.9, 3.10]), f is a scalar nonlinear function f(x) (n = 1) independent of p. An illustrative example is the case of determining the magnitude of a single complex variable V_k involved in an (n × n) complex-valued LIP system [3]

GV = J. (2.4)

The latter system can be rewritten equivalently as a (2n × 2n) real-valued LIP system by introducing a 2n-dimensional real state vector x. In x, the first n components correspond to the respective real parts of V_j while the next n components correspond to the respective imaginary parts of V_j. Thus, for this example

y = x_k^2 + x_{k+n}^2. (2.5)

If f is independent of p and

f = E (2.6)

(E is the identity matrix), then the GIHS problem reduces to the SIHS problem. If we are interested in finding the range of a single component x_k of x, then (2.2) becomes

y_k = e_k^T x (2.7)

(e_k^T is the transposed k-th column of E).

In this section, the basis of a method for determining the IH solution x* in the simpler case of the SIHS problem defined by (1.1), (2.6) will be presented. It is based on a componentwise approach: compute x*_k n_2 times for k = 1, ..., n_2. Also, for a fixed k:

(i) first, the lower end-point x̲*_k of the k-th component x*_k = [x̲*_k, x̄*_k] of x* is located;

(ii) next, the upper end-point x̄*_k of the k-th component x*_k of x* is found.

Determination of x̲*_k. On account of (2.7), the value of x̲*_k will be determined as the solution of the following global optimization problem

x̲*_k = min e_k^T x (2.8a)

subject to the constraint

A(p)x = b(p), p ∈ p. (2.8b)

An iterative method for solving (2.8) will be suggested in the next section. It is based on the use of an interval enclosure d_ℓ^(k) for the derivative ∂x_k/∂p_ℓ of x_k with respect to p_ℓ, ℓ = 1, ..., m, for a given p (related to the current iteration).

To obtain d_ℓ^(k), we first differentiate (2.8b) in p_ℓ to get

Σ_{j=1}^{n_2} a_ij(p) ∂x_j/∂p_ℓ = ∂b_i(p)/∂p_ℓ − Σ_{j=1}^{n_2} (∂a_ij(p)/∂p_ℓ) x_j, i = 1, ..., n_2, (2.9a)
p ∈ p. (2.9b)

System (2.9) is rewritten as

A(p) d_ℓ = γ_ℓ(p) − η_ℓ(p) x(p), p ∈ p, (2.10)

where γ_ℓ(p) is a column vector and η_ℓ(p) is a matrix. Let d_ℓ denote an outer solution to (2.10). Obviously, the enclosure d_ℓ^(k) sought is the k-th component of d_ℓ. If

0 ∉ d_ℓ^(k), (2.11)

we can reduce the interval p_ℓ to a point p*_ℓ. Indeed, on account of (2.9) and (2.10),

∂x_k/∂p_ℓ (p) ∈ d_ℓ^(k), p ∈ p. (2.12)

Hence, (2.11) guarantees that x_k is monotone in p with respect to p_ℓ. Therefore,

p*_ℓ = p̄_ℓ, if d_ℓ^(k) ≤ 0;  p*_ℓ = p̲_ℓ, if d_ℓ^(k) ≥ 0. (2.13)

Such an approach has already been used in [5], [17] and [26] for square matrices.

The interval d_ℓ^(k) could be computed by applying the approach of [5] and [17]. It consists of ignoring the dependence of x on p in (2.10) and treating x as an independent variable q belonging to x. Thus, (2.10) becomes

A(p) d_ℓ = γ_ℓ(p) − η_ℓ(p) q, p ∈ p, q ∈ x, (2.14)

where x is an OI solution to (2.8b). It should be stressed that this approach is based uniquely on p and the outer solution x to (2.8b). A better approach to assessing d_ℓ^(k), applicable both in the context of the SIHS and the GIHS problem, will be suggested in the next section. It resorts to also employing the k-th component ζ_k of the inner estimation ζ related to (2.8b).

Remark 2.1 Seemingly the best approach (not considered so far) to exploiting the dependence of both d_ℓ and x on p in (2.10) is to find an outer solution to the following LIP system of size (2n_1 × 2n_2):

A(p)x = b(p), (2.15a)
A(p)d_ℓ = γ_ℓ(p) − η_ℓ(p)x, (2.15b)
p ∈ p. (2.15c)

This opportunity will not be considered in this paper.

In the sequel, we shall compare various interval vectors with respect to their widths. The width w(p) of an interval vector p is a real vector whose components w_i(p) are defined as the width of the i-th component p_i of p, i.e. w_i(p) = p̄_i − p̲_i. An interval vector p will be called narrower than another interval vector p′ if w_k(p) < w_k(p′) for at least one index k. The vector p will be referred to as strictly narrower than the vector p′ if all components w_i(p) are smaller than the respective components w_i(p′).
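The end-point selection rule (2.11)-(2.13) can be stated compactly in code. The following sketch is purely illustrative and is not part of the paper's method: intervals are modelled as (lo, hi) pairs, and the function name and sample data are hypothetical.

def reduce_parameter(d_lk, p_l, minimizing=True):
    # d_lk: enclosure (lo, hi) of dx_k/dp_l over the current box; p_l: interval (lo, hi).
    # Returns the end-point of p_l at which the optimum is attained, or None if
    # 0 lies inside d_lk (condition (2.11) fails, no monotonicity detected).
    d_lo, d_hi = d_lk
    p_lo, p_hi = p_l
    if d_lo >= 0:                      # x_k nondecreasing in p_l
        return p_lo if minimizing else p_hi
    if d_hi <= 0:                      # x_k nonincreasing in p_l
        return p_hi if minimizing else p_lo
    return None

print(reduce_parameter((0.35, 2.07), (0.45, 0.55)))    # 0.45: minimum at the lower end-point
print(reduce_parameter((-5.30, -1.68), (0.45, 0.55)))  # 0.55: minimum at the upper end-point

The same rule with the two branches exchanged yields formula (3.16) used later for the maximization case.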

3 Main Results

3.1 The SIHS Problem: Lower End-point of the Range

The new approach is based on the simultaneous use of both the outer solution x to (2.8b) and an upper bound x^u_k on the lower end-point x̲*_k. The bound x^u_k is given by the lower end-point ζ̲_k of the k-th component ζ_k = [ζ̲_k, ζ̄_k] of the IEH solution ζ to (2.8b) or by a specialized method (e.g., [5]). Since x^u_k is an upper bound on x̲*_k and x̲_k is a lower bound on x̲*_k,

x̲_k ≤ x̲*_k ≤ x^u_k. (3.1)

Thus,

x̲*_k ∈ x̃_k, (3.2a)

where

x̃_k = [x̲_k, x^u_k]. (3.2b)

Now we introduce a modified outer solution vector x̃ with components

x̃_i = x_i, if i ≠ k;  x̃_i = x̃_k, if i = k. (3.2c)

The new approach consists of replacing (2.14) with the following system:

A(p) d_ℓ = γ_ℓ(p) − η_ℓ(p) q, p ∈ p, (3.3a)
q ∈ x̃. (3.3b)

Since

x̃_k ⊆ x_k, (3.4a)
x̃ ⊆ x, (3.4b)

the use of x̃ instead of x in (3.3b) imposes a constraint on (3.3a). Exploiting this restriction via some constraint satisfaction technique (e.g., the second stage of the forward and backward sweep method) will lead to a narrower x̃ and, hence, to a narrower parameter interval vector p̃. Accordingly, we have to consider the following modified LIP system

A(p) d_ℓ = γ_ℓ(p) − η_ℓ(p) q, (3.5a)
p ∈ p̃, q ∈ x̃. (3.5b)

Now we can formulate the following result.

Lemma 3.1 If

p̃ ⊆ p, x̃ ⊆ x, (3.6a)

then

d_ℓ^(k)(p̃, x̃) ⊆ d_ℓ^(k)(p, x). (3.6b)

Otherwise, if

p̃ ⊂ p, x̃ ⊂ x, (3.7a)

then

d_ℓ^(k)(p̃, x̃) ⊂ d_ℓ^(k)(p, x). (3.7b)

Proof. The assertions of the lemma follow directly from the premises (3.6a), (3.7a) and the inclusion isotonicity property of the interval operations needed to compute d_ℓ^(k)(p, x) and d_ℓ^(k)(p̃, x̃), respectively.

It is seen from Lemma 3.1 that the outer interval solution d_ℓ^(k)(p̃, x̃) of (3.5) is never wider and may be narrower than the outer interval solution d_ℓ^(k)(p, x) of (2.14). Moreover, the former, d_ℓ^(k)(p̃, x̃), is proved to be strictly narrower than the latter, d_ℓ^(k)(p, x), if (3.7a) holds.

As is well known, condition (2.11) guarantees that the function x_k(p) is monotone within p with respect to the ℓ-th component p_ℓ of p. It has been used in [5] and [17] at each iteration ν of the respective iterative method for different interval vectors p = p^(ν) associated with the ν-th current iteration. To express the dependence of d_ℓ^(k) on p^(ν), (2.11) will be written in the form

0 ∉ d_ℓ^(k)(p^(ν), x(p^(ν))). (3.8)

In [17], the requirements (3.8) have been called a global monotonicity condition for ν = 1 (first iteration) and a local monotonicity condition for ν > 1 (since p^(1) is the whole initial parameter domain while each subsequent p^(2), p^(3), etc. is a smaller and smaller subdomain of p^(1)). In a similar way, the satisfaction of the requirement

0 ∉ d_ℓ^(k)(p̃^(ν), x̃(p̃^(ν))) (3.9)

is a sufficient condition for x_k to be monotone within p̃^(ν) with respect to p_ℓ. On account of (3.6a) and (3.7a), the latter type of monotonicity condition (3.9) will be referred to as a modified monotonicity condition (as compared to (3.8)).

As is well known, the united solution set Σ(A(p), b(p), p) has a very complex form so, in the general case, it can touch its interval hull x* at arbitrary points located on the 2n_2 faces of x*. Therefore, each x̲*_k (and, in a similar manner, each x̄*_k) is the image of a corresponding point p^(s) lying on a certain face of p. In a special case, however, p^(s) may be a vertex p^(ν) of p (a vertex of p is a specific combination of end-points of the p_j, j = 1, ..., m). This property will be referred to as the vertex property with respect to x̲*_k (or x̄*_k). If x̲*_k does not possess the vertex property, the corresponding vector p^(s) (providing x̲*_k) will have at least one component p_i^(s) such that

p_i^(s) ∈ int(p_i) (3.10)

(int stands for interior). One of the advantages of the present approach is that it is capable of establishing that the solution x̲*_k sought does not possess the vertex property.

Lemma 3.2 Let p̃^(ν) be the interval vector resulting from the application of a constraint technique at the ν-th iteration. If, for at least one iteration ν and one index i,

p̃_i^(ν) ⊂ int(p_i), (3.11)

then the solution x̲*_k is not reached at a vertex.

Proof. It follows from the fact that (3.11) entails (3.10).

Remark 3.1 In the present paper, it is assumed that condition (3.11) does not occur for any index i or any iteration ν. Thus, the present approach will be developed only for the case where the vertex property is valid. The general case of non-vertex solutions will be considered in a separate publication.
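To make the role of the modified vector x̃ of (3.2) and of the modified condition (3.9) concrete, here is a minimal illustrative sketch. It is not the paper's code: intervals are modelled as (lo, hi) pairs and all numerical data are hypothetical.

def modified_outer_vector(x_outer, k, xu_k):
    # Replace the k-th component of the outer solution by [lo(x_k), x^u_k], cf. (3.2b)-(3.2c).
    x_mod = list(x_outer)
    lo_k = x_outer[k][0]
    x_mod[k] = (lo_k, xu_k)
    return x_mod

def modified_monotonicity_holds(d_lk_mod):
    # Condition (3.9): zero must not belong to the enclosure computed from x~ (and p~).
    lo, hi = d_lk_mod
    return lo > 0 or hi < 0

x_outer = [(0.1, 0.5), (-0.1, 0.2), (-1.80, -1.30)]        # hypothetical outer solution
x_mod = modified_outer_vector(x_outer, k=2, xu_k=-1.7785)
print(x_mod[2])                                            # (-1.8, -1.7785): a much tighter box
print(modified_monotonicity_holds((-2.1, -0.3)))           # True: the enclosure has a definite sign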

At this point, we need the following procedure, designed to detect (if possible) a component p_ℓ such that x_k is monotone along that component within p (in the case of locating the lower end-point x̲*_k). To simplify the presentation, the arguments of d_ℓ and d_ℓ^(k) will henceforth be omitted.

Procedure P. Given k, the functions a_ij(p), b_i(p) and the initial parameter vector p^(0), set ℓ = 1, p = p^(0) and carry out the following sequence of steps.

Step 1. Compute an outer solution x to (2.8b) using an appropriate method (accounting for the specific structure of the functions a_ij(p) and b_i(p)).

Step 2. Compute an upper bound x^u_k using an appropriate method (accounting for the specific structure of the functions a_ij(p) and b_i(p)) and form by (3.2) the reduced-width interval vector x̃.

Step 3. Apply a constraint satisfaction technique (e.g., the second stage of the forward and backward sweep method) to (3.5a) to get the modified interval vectors x̃ and p̃.

Step 4. Form the LIP system (3.5) and find an outer solution d_ℓ whose k-th component is d_ℓ^(k).

Step 5. Check the monotonicity condition (3.9). If (3.9) is fulfilled, go to Outcome O2. Otherwise, set ℓ = ℓ + 1 and go back to Step 4 if ℓ ≤ m; else proceed to the next line.

Outcome O1: the procedure has not detected the modified monotonicity property for x_k along any p_ℓ.

Outcome O2: a variable p_ℓ has been detected along which x_k is guaranteed to be locally monotone.

Remark 3.2 To facilitate the readability of Procedure P, it has been tacitly assumed that the method used in Step 1 and Step 4 is capable of determining the outer solution x or d_ℓ, respectively. If this is not the case, the procedure will terminate with Outcome O1.

We now present the basic algorithm for locating the lower end-point x̲*_k. It is iterative and consists of repeatedly calling Procedure P at most m times.

Algorithm A. Let ν denote the number of the current iteration of the algorithm; let m_0 be the length of the initial vector p^(0). Set ν = 0, p = p^(0) and m = m_0.

Step 1. Let ν = ν + 1. Call Procedure P. If the outcome is O1, go to Termination T1. If the outcome is O2, a derivative interval that does not contain zero has been found. Thus, the corresponding parameter p_ℓ can be reduced to a real number p*_ℓ using the formula

p*_ℓ = p̄_ℓ, if d_ℓ^(k) ≤ 0;  p*_ℓ = p̲_ℓ, if d_ℓ^(k) ≥ 0. (3.12)

Step 2. Form a new interval vector p̃ with components

p̃_i = p_i, if i ≠ ℓ;  p̃_i = p*_ℓ, if i = ℓ. (3.13a)

Rewrite p̃ in the partitioned form

p̃ = (p*_ℓ, p′), (3.13b)

where p′, of length m′ = m − 1, regroups the non-degenerate components of p̃. If m′ = 0, go to Termination T2; otherwise, let m = m′ and proceed to the next step.

Step 3. Substitute (3.13) into (1.1b), (1.1c) to get the modified functions a′_ij(p′) and b′_i(p′). Rename a′_ij(p′) and b′_i(p′) as a_ij(p) and b_i(p) and go back to Step 1.

Termination T1: only a two-sided bound [x̲_k, x^u_k] on the lower end-point x̲*_k has been found.

Termination T2: the algorithm has succeeded in determining a real vector p* whose components are given by (3.12) such that its image provides the lower end-point x̲*_k.

Remark 3.3 As is easily seen, ν ≤ m if the algorithm terminates in T1, while ν = m if it terminates in T2.

On account of Algorithm A, we have the following result.

Theorem 3.1 For given {A(p), b(p)}, p^(0) and k, assume that Algorithm A terminates in T2 with p* whose components are defined by (3.12). Then the following assertions are valid:

(i) the global solution x̲*_k of problem (2.8) is reached at a vertex p^(ν) after m iterations of A;

(ii) the vertex p^(ν) sought is defined by p* in (3.12) and is unique;

(iii) the numerical complexity of Algorithm A is polynomial in n_1, n_2 and m.

Proof. (i) As Procedure P terminates in Outcome O2 at each iteration ν of Algorithm A, the assertions are proved by induction. Thus, we first set ν = 1. Let x^s denote the real vector associated with the parameter solution vector p^s of the minimization problem (2.8). On account of (3.1), (3.2) and the fact that x is an outer solution of (2.8b), it follows that x^s_k ∈ x̃_k and x^s_i ∈ x_i, so x^s ∈ x̃; also p^s ∈ p. Since any constraint satisfaction technique deletes only such parts of the initial x̃ and p that do not contain x^s and p^s, we have x^s ∈ x̃, p^s ∈ p̃. Hence, (3.9) is a sufficient condition for x_k to be monotone within p̃ with respect to p_ℓ. Therefore, the ℓ-th component p^s_ℓ of the solution p^s is given by (3.12). The same argument is valid for all ν > 1. Thus, it has been shown that p^s_ℓ is given by (3.12) for each ℓ. Hence, the global solution of (2.8) is actually attained at a particular combination of end-points p̲_i and p̄_i, i.e. at the vertex p^(ν), which proves the validity of assertion (i).

(ii) The validity of assertion (ii) is a corollary of assertion (i) since each p^s_ℓ is determined in a unique manner by (3.12).

(iii) Since Algorithm A terminates in exit T2, Procedure P is called m times. For each ν, the outer solution x, the upper bound x^u_k, the reduced-width vectors x̃, p̃ and the outer solution d_ℓ can be computed by a polynomial-time algorithm. The respective numbers of computations are estimated by P_1(n_1, n_2, m), P_2(n_1, n_2, m), P_3(n_1, n_2, m) and P_4(n_1, n_2, m), where each P_i(n_1, n_2, m) is a polynomial expression in n_1, n_2 and m (whose actual form depends on the specific method used). Thus, the numerical complexity of the present algorithm is P(n_1, n_2, m) = m Σ_{i=1}^{4} P_i(n_1, n_2, m), which completes the proof of the theorem.
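The overall control flow of Procedure P and Algorithm A above can be summarized by the following structural sketch. It is only an outline under simplifying assumptions: the helpers outer_solution, upper_bound, procedure_P and substitute_point are hypothetical placeholders for the methods discussed in the text, and no interval arithmetic details are shown.

def algorithm_A(p_box, k, outer_solution, upper_bound, procedure_P, substitute_point):
    # Structural outline of Algorithm A (lower end-point case); helpers are placeholders.
    fixed = {}                                  # parameters already reduced to points, cf. (3.12)
    while p_box:                                # at most m iterations (Remark 3.3)
        result = procedure_P(p_box, k, outer_solution, upper_bound)
        if result is None:                      # Outcome O1 of Procedure P
            return 'T1', fixed, p_box           # only a two-sided bound [lo(x_k), x^u_k] remains
        l, p_l_point = result                   # Outcome O2: p_l fixed at an end-point
        fixed[l] = p_l_point
        p_box = substitute_point(p_box, l, p_l_point)   # cf. (3.13) and Step 3
    return 'T2', fixed, None                    # all parameters fixed: the optimal vertex is found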

3.2 The SIHS Problem: Upper End-point of the Range

The upper end-point x̄*_k is determined in essentially the same manner as the lower end-point x̲*_k. We list here the main distinctions. The global minimization problem (2.8a) is now replaced with the global maximization problem

x̄*_k = max e_k^T x (3.14a)

subject to the constraint

A(p)x = b(p), p ∈ p. (3.14b)

The solution x̄*_k is found using an iterative algorithm referred to as Algorithm Au. Its structure is similar to that of A. At each iteration, however, use is now made of an outer solution x and a lower bound x^l_k ≤ x̄*_k. The lower bound x^l_k is given by a local optimization technique or by the upper end ζ̄_k of the k-th component ζ_k = [ζ̲_k, ζ̄_k] of the IEH solution ζ to (3.14b). Thus, the interval

x̃_k = [x^l_k, x̄_k] (3.15)

is used to obtain the modified interval vector x̃ with components given in (3.2c). Algorithm Au employs a procedure Pu whose main difference is that formula (3.12) now becomes

p*_ℓ = p̲_ℓ, if d_ℓ^(k) ≤ 0;  p*_ℓ = p̄_ℓ, if d_ℓ^(k) ≥ 0. (3.16)

Obviously, Theorem 3.1 remains valid for the case of problem (3.14) if it is reformulated substituting Algorithm Au for Algorithm A.

3.3 The GIHS Problem

The extension of the present approach to the GIHS problem (1.1), (2.2) is straightforward. This will be shown for the case of the lower end-point y̲*_k of the k-th range y*_k. Indeed, the value of y̲*_k is found as the global solution of the following minimization problem

y̲*_k = min e_k^T y (3.17a)

subject to the constraints

A(p)x = b(p), p ∈ p, (3.17b)
y = f(x, p). (3.17c)

The only difference is that now we need the derivatives of y_k with respect to p_ℓ. From (3.17c),

∂y_k/∂p_ℓ (p) = ∂f_k/∂p_ℓ (x, p) + Σ_{j=1}^{n_2} ∂f_k/∂x_j (x, p) ∂x_j/∂p_ℓ (p). (3.18)

Let

D_ℓ = γ_kℓ + Σ_{j=1}^{n_2} η_kj d_ℓ^(j), (3.19)

where the d_ℓ^(j) are computed as in Step 4 of Procedure P of the SIHS problem while γ_kℓ and η_kj are the interval extensions of the corresponding terms in (3.18). For instance,

η_kj = ∂f_k/∂x_j (x, p), (3.20)

where x is an outer solution to (3.17b).

A better but more expensive way to compute η_kj is to determine the hull solution x* of (3.17b) and use it in (3.20) instead of x.

We shall illustrate this by way of example (2.5). In that case,

∂y_k/∂p_ℓ (p) = ∂f_k/∂p_ℓ (x, p) + Σ_{j=1}^{n_2} ∂f_k/∂x_j (x, p) ∂x_j/∂p_ℓ (p), (3.21a)

so

D_ℓ = 2 x_k d_ℓ^(k) + 2 x_{k+n} d_ℓ^(k+n) (3.21b)

or

D_ℓ = 2 x*_k d_ℓ^(k) + 2 x*_{k+n} d_ℓ^(k+n). (3.21c)

Obviously,

∂y_k/∂p_ℓ (p) ∈ D_ℓ, p ∈ p, (3.22)

so the global or local monotonicity condition for y_k to be monotone with respect to p_ℓ is the requirement

0 ∉ D_ℓ = [D̲_ℓ, D̄_ℓ]. (3.23)

As in the SIHS case, it is expedient to introduce and use the corresponding modified monotonicity condition. Therefore, we need an upper bound d^u_k on ∂y_k/∂p_ℓ (p), p ∈ p, which can be found by applying some local minimization method (e.g., the simple algorithm in [5]) to the right-hand side of (3.21a). Thus, the modified monotonicity condition for y_k with respect to p_ℓ is

0 ∉ D̃_ℓ = [D̲_ℓ, d^u_k]. (3.24)

For simplicity (as in the case of the SIHS problem), the short notations D_ℓ and D̃_ℓ (where the respective arguments p^(ν), x(p^(ν)) or p̃^(ν), x̃(p̃^(ν)) have been omitted) are used. If (3.24) is satisfied for some ℓ, the respective interval parameter p_ℓ can be reduced to a point p*_ℓ using the formula

p*_ℓ = p̄_ℓ, if D̃_ℓ ≤ 0;  p*_ℓ = p̲_ℓ, if D̃_ℓ ≥ 0. (3.25)

The solution y̲*_k sought can be found using the same computational scheme as in the SIHS case. Thus, as soon as an index ℓ is detected such that the interval component p_ℓ has been reduced to a point p*_ℓ, a new iteration is initiated with a reduced-width vector p̃ = (p*_ℓ, p′). There are, however, several minor modifications to be introduced in Procedure P and Algorithm A. For instance, the constraint satisfaction technique is now applied to the equality

D_ℓ = γ_kℓ + Σ_{j=1}^{n_2} η_kj d_ℓ^(j). (3.26)

Let these modified versions be denoted by Procedure P.g and Algorithm A.g. We have the following result.

Theorem 3.2 For given {A(p), b(p), f}, p^(0) and k, assume that Algorithm A.g terminates in T2 with p* whose components are defined by (3.25). Then the following assertions are valid:

(i) the global solution y̲*_k of problem (3.17) is attained at a vertex p^(ν) after m iterations of A.g;

(ii) the vertex p^(ν) sought is defined by p* and is unique;

(iii) the numerical complexity of Algorithm A.g is polynomial in n_1, n_2 and m.

The proof of this theorem is similar to that of Theorem 3.1.

4 Numerical Aspects

4.1 Efficiency of an Interval Method

Let P(p) denote an interval analysis problem defined for a given interval vector p. Also, let M(P(p)) denote an interval method capable of solving P(p). Such a method will be referred to as a method applicable to problem P for p. To assess the degree of applicability of M(P(p)) for intervals p of various widths, we first introduce a one-parameter family of intervals p(ρ), ρ ∈ R, as follows: any two ρ_1, ρ_2 and the corresponding p(ρ_1), p(ρ_2) satisfy the relationship

ρ_1 < ρ_2 ⟹ p(ρ_1) ⊂ p(ρ_2), (4.1)

where the inclusion is proper. The simplest (but not unique) way to construct p(ρ) is to use a centre p^0 enclosed by a symmetric box of variable width, i.e.

p(ρ) = p^0 + ρ[−r^0, r^0], (4.2a)
p(1) = p^0 + [−r^0, r^0] = p^s, (4.2b)

where p^s is a given (start) interval vector. (An alternative for constructing p(ρ) is given in [9, formula (5.4)].) Now the following measure of the applicability of a given method M to a certain problem P(p(ρ)), the so-called applicability radius r_a(M), is defined as follows:

r_a(M) = sup { ρ : M is applicable to P(p(ρ)) for p(ρ) = p^0 + ρ[−r^0, r^0] }. (4.3)

The concept of applicability radius has been suggested earlier in [9] in the context of a method for determining the regularity radius of an interval matrix. The radius r_a(M) is a measure of the capacity of a given method to solve a class of problems. It also permits us to compare the relative efficiency of two methods M_1 and M_2 for solving the same problem P. Indeed, if r_a(M_1) < r_a(M_2), then M_2 is numerically more efficient since M_1 fails to solve the problem at hand earlier (for an interval vector p(r_a(M_1)) of smaller width) compared to M_2.

If the width of the interval vector p^(0) of a given problem P(p^(0)) is such that both methods M_1 and M_2 are applicable to P(p^(0)), then it is natural to assess their numerical efficiency by the total numbers of arithmetic operations N_t(M_1) and N_t(M_2) needed by the respective methods. Obviously, there exists a relationship between N_t(M) and r_a(M) for a given method. As a general rule, a method M_2 that is more expensive than a method M_1 (that is, N_t(M_2) > N_t(M_1)) is expected to have a larger radius of applicability r_a(M_2) than r_a(M_1). These observations will be confirmed by the theoretical considerations given below in 4.3, 4.4 and the numerical evidence related to the example considered in Section 5.
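A straightforward way to estimate r_a(M) numerically, as is done later in Section 5.3, is to step ρ upward until the method stops being applicable. The following is only a minimal sketch of that idea, with a hypothetical is_applicable predicate standing in for the method M.

def applicability_radius(is_applicable, rho_step=0.001, rho_max=1.0):
    # Increase rho in fixed steps until the method is no longer applicable, cf. (4.3).
    n = 0
    while (n + 1) * rho_step <= rho_max and is_applicable((n + 1) * rho_step):
        n += 1
    return n * rho_step

# Toy predicate: pretend the method works up to rho = 0.1655.
print(applicability_radius(lambda rho: rho < 0.1655))      # approximately 0.165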

4.2 Modifications of the Basic Algorithms

The basic algorithms A and Au developed in the previous section will here be referred to as Algorithms A.V1 and Au.V1, respectively. In this subsection, two simplifications of the basic algorithms (denoted by A.V2 and A.V3 or Au.V2 and Au.V3) will be presented. The first modification of Algorithm A.V1 (Au.V1) consists of simplifying Procedure P (Procedure Pu). Since Algorithms A.V2 and Au.V2 are similar, only the versions Algorithm A.V2 and Procedure P.V2 will be considered below.

Procedure P.V2. The modification of P.V1 into Procedure P.V2 consists of omitting Step 3 (where constraint propagation is used in order to modify the initial interval vectors x̃ and p to vectors x̃ and p̃ of reduced widths). Algorithm A.V2 is otherwise the same as Algorithm A.V1.

The second modification, denoted Algorithm A.V3, is obtained by simplifying Algorithm A.V2. In this case, both Algorithm A.V2 and Procedure P.V2 are changed to get the modified versions A.V3 and P.V3.

Procedure P.V3. In the previous version P.V2, the iterations are terminated as soon as a modified monotonicity condition has been detected along a parameter p_ℓ. In the present version, we continue the iterations until ℓ = m, hoping to reach new parameters satisfying the respective modified monotonicity condition, modifying only the right-hand side of the LIP system (3.3a). More specifically, Procedure P.V3 includes the following steps.

Step 1. For ℓ = 1 to ℓ = m, do:
a) form the LIP system (3.3a), updating its right-hand side alone, and find the k-th component d_ℓ^(k) of its outer solution;
b) if the modified monotonicity condition (3.9) is fulfilled, reduce the ℓ-th interval component to a point p*_ℓ using (3.12).

Step 2. Let n_m denote the number of times the monotonicity condition (3.9) has been satisfied. If n_m = 0, go to Outcome O1; otherwise, if n_m = m, go to Outcome O2; else go to Outcome O3.

Outcome O1: no interval component p_ℓ has been reduced to a point.

Outcome O2: all components p_ℓ, ℓ = 1, ..., m, have been reduced to points.

Outcome O3: n_m components p_ℓ, 0 < n_m < m, have been reduced to points, so a reduced-width interval vector p̃ = (p*, p′) has been obtained, where the length of p′ is m − n_m.

This procedure is used in the version A.V3.

Algorithm A.V3. Let ν, p^(0) and m_0 have the same meaning as in Algorithm A.V1. The new version comprises the following steps.

Step 1. Compute an outer solution x to (2.8b) using an appropriate method.
Step 2. Compute an upper bound x^u_k using an appropriate method.
Step 3. Form by (3.2) the reduced-width interval x̃_k.
Step 4. Call Procedure P.V3.
Step 5. If its outcome is O1, go to Termination T1. In case the outcome is O2, go to Termination T2. If the outcome is O3, then let m = m − n_m, p = p′ and go back to Step 1.

Termination T1: the simplified Algorithm A.V3 is not capable of solving the IH problem considered: only a two-sided enclosure [x̲_k, x^u_k] of x̲*_k has been obtained.

Termination T2: the simplified Algorithm A.V3 has succeeded in solving the IH problem considered.

Remark 4.1 A fourth modification V4 is possible when Step 3 from version V1 is restored in Algorithm A.V3. This version V4 will not be considered here.

Remark 4.2 If Algorithm A.V2 or Algorithm A.V3 ends in termination T2, it can be proved that the respective algorithm has the properties enumerated in Theorem 3.1, with the exception that for Algorithm A.V3 the number of iterations m in assertion (i) should be replaced by m̃, where m̃ < m.

The version A.V3 was motivated by the following considerations. It differs, essentially, from version A.V1 in that the same vectors x̃ and p are used within the current ν-th iteration, i.e. for ν fixed and ℓ variable. This is an attempt to reach several new local monotonicity conditions for the fixed ν, skipping the more costly operations of finding a new outer interval solution x and a new upper bound x^u_k.

4.3 Numerical Characteristics of the Present Method

We first consider the numerical costs related to the various algorithms of the present method. It is easily seen that Algorithm A.V1 is more expensive than Algorithm A.V2; in a similar manner, Algorithm A.V2 is more expensive than Algorithm A.V3. Indeed, the total number of arithmetic operations N_t(A.V1) needed by A.V1 is given by the expression

N_t(A.V1) = Σ_{ν=1}^{m} ( Σ_{i=1}^{4} P_i^(ν)(n_1, n_2, m + 1 − ν) ), (4.4)

where P_i^(ν)(n_1, n_2, m + 1 − ν) is the number of operations required to determine the respective term P_i at the ν-th iteration (as mentioned in the proof of part (iii) of Theorem 3.1, the value of the index i corresponds to the operations associated with computing an outer solution x, an upper bound x^u_k, the reduced-width vectors x̃ and p̃ in Step 3, and an outer solution d_ℓ). Obviously,

P_i^(ν)(n_1, n_2, m + 1 − ν) > P_i^(ν+1)(n_1, n_2, m − ν), (4.5)

since P_i^(ν)(n_1, n_2, m + 1 − ν) refers to computations involving m + 1 − ν interval parameters while P_i^(ν+1)(n_1, n_2, m − ν) is associated with the same kind of computations involving, however, m − ν interval parameters. If P_i^(ν), ν > 1, in (4.4) is replaced with P_i^(1), we obtain the upper bound for N_t(A.V1) given in Theorem 3.1. On account of (4.4) and (4.5),

N_t(A.V1) > N_t(A.V2), (4.6)

since the term P_3^(ν) associated with Step 3 of Procedure P (constraint propagation) is missing in Algorithm A.V2. Also

N_t(A.V2) ≥ N_t(A.V3), (4.7)

since now

N_t(A.V3) = Σ_{ν=1}^{m̃} ( Σ_{i=1}^{3} P_i^(ν)(n_1, n_2, m + 1 − ν) ), (4.8)

where typically m̃ < m.

We now compare the various algorithms with regard to their radii of applicability. In view of (4.6) and (4.7), it is expected that

r_a(A.V1) ≥ r_a(A.V2) ≥ r_a(A.V3). (4.9)

We shall show only the validity of the first relation (the argument for the second is similar). Suppose that the width of the interval vector p(ρ) is such that ρ is slightly larger than r_a(A.V2), so Algorithm A.V2 is not applicable. In that case, we can try the more expensive Algorithm A.V1 to solve the problem considered. While Algorithm A.V2 relies only on the modified monotonicity conditions related to the current parameter domain p and the associated initial domain x of the phase variables, Algorithm A.V1 additionally uses the constraint propagation capable of contracting x and p to narrower vectors x̃ and p̃. Hence, the updated modified monotonicity conditions associated now with x̃ and p̃ will be easier to satisfy, leading to a larger radius of applicability for Algorithm A.V1.

4.4 Comparison with Other Methods

We now compare the new method (referred to as method MK) with two other methods of polynomial complexity, namely [17] (referred to as method MP) and [5] (referred to as method M0), in a qualitative manner.

First, we consider the methods MK and MP. As mentioned in Section 2, the method MP is also an iterative method capable of determining the lower and upper ends of the IH solution component x*_k. To be able to carry out such a comparison, we assume that both methods compute the necessary outer interval solutions in the same manner. To be specific, only the case of computing x̲*_k will be considered here; so we consider a corresponding algorithm of method MP, which will be referred to as Algorithm A.P.

We start by comparing Algorithm A.P with the simplest version of method MK, namely Algorithm A.V3. The two algorithms differ in that the upper bound x^u_k is not computed and used in Algorithm A.P. Hence, the cruder monotonicity condition (3.8) is exploited in the latter algorithm to reduce the width of the initial parameter vector p^(0). In contrast, the present Algorithm A.V3 is based on the use of the more effective modified monotonicity criterion (3.9). Therefore, in view of Lemma 3.1, the new version A.V3 is never worse and is expected to be actually more efficient than the known version A.P with regard to the radius of applicability of the two algorithms. Indeed, it is natural to suppose that, at each iteration, the standard monotonicity condition (3.8) is more difficult to satisfy than the respective modified monotonicity condition (3.9). This is due to the fact that the outer solution x is involved in computing d_ℓ^(k)(p, x) while the modified vector x̃ is used to evaluate d_ℓ^(k)(p̃, x̃). Since x̃ ⊆ x by (3.4b) and d_ℓ^(k)(p̃, x̃) ⊆ d_ℓ^(k)(p, x) by (3.7b), the overestimation of d_ℓ^(k)(p, x) (with regard to the true range of the respective derivative ∂x_k/∂p_ℓ over the current p) will, in general, be larger than the overestimation of d_ℓ^(k)(p̃, x̃). This, in turn, results in easier violation of condition (3.8) than of condition (3.9). Hence, it is expected that most often

r_a(A.P) < r_a(A.V3). (4.10)

It should also be stressed that if the general-purpose method of [14] is used to find x and ζ, the upper bound x^u_k = ζ̲_k needed in version A.V3 is obtained with no additional computational cost. In that case, the total numbers of arithmetic operations N_t(MK) and N_t(MP) needed by the respective methods will be the same.

If, instead, a local optimization technique is used to locate x^u_k, then obviously N_t(MK) > N_t(MP). On the other hand, that bigger cost may be compensated by a relatively larger radius of applicability r_a(MK), since always ζ̲_k > x̲*_k while often (especially for relatively narrow p) x^u_k = x̲*_k.

In cases where the problem at hand has a specific structure, a specialized method such as [10] or [24] should be employed in either method MP or method MK. The specialized methods, however, do not provide any inner estimation vector ζ, so a local optimization technique is to be used to locate x^u_k. Obviously, N_t(MK) > N_t(MP) in that case. Again, it might pay, in the long run, to accept that bigger computational cost since, on average, method MK will provide a relatively larger radius of applicability.

Next, we compare the new approach with the method M0 of polynomial complexity [5]. As mentioned in Section 2, the method M0 is also a componentwise method, separately determining (for a fixed k) the lower and upper ends of the IH solution component x*_k. The corresponding algorithm for computing x̲*_k will here be referred to as Algorithm A.V0.

Algorithm A.V0. In fact, this algorithm is a simplified version of Algorithm A.V3. From this point of view, it consists of skipping Steps 2 and 3 of Algorithm A.V3, that is, we do not compute and use the upper (inner) bound x^u_k. Thus, only the outer interval solution x from Step 1 of Algorithm A.V3 is employed in the subsequent computations. Hence, the cruder global monotonicity condition (3.8) is exploited to reduce the width of the initial parameter vector p^(0).

It is obvious that, for reasons similar to those considered in the comparison of MK and MP, Algorithm A.V3 should outperform A.V0. In particular, it is expected that

r_a(A.V0) < r_a(A.V3). (4.11)

The latter inequality is confirmed numerically in Section 5. Furthermore, better results should be obtained if the more expensive Algorithm A.V2 or A.V1 is used rather than Algorithms A.P or A.V0. It should also be stressed that, unlike all known methods, the best version A.V1 of the present method is capable of detecting those coordinates of the initial domain p along which the vertex property of the solution x̲*_k sought is violated (Lemma 3.2).

5 Numerical Example

We illustrate the suggested method by way of an example. All LIP systems considered below are square and defined by the affine functions (1.2). They are given equivalently in the form

A(p) = A^(0) + Σ_{µ=1}^{m} A^(µ) p_µ, (5.1a)
b(p) = b^(0) + Bp, (5.1b)

where A^(0) and A^(µ), µ = 1, ..., m, are n × n real matrices while B is an n × m real matrix.

The outer interval solutions x and d_ℓ were determined using the direct method of [4]. The upper bound x^u_k or lower bound x^l_k was computed using the simple iterative method of [5]. Algorithms A.V3 and Au.V3 were employed to determine the lower or upper end-point of the k-th component x*_k of x*. The algorithms were programmed in the MATLAB environment using the INTLAB toolbox [23] to carry out the interval calculations. The programs were run on a 1.7 GHz PC.

The linear parametric system is given by the matrix [25]

A(p) = [ p_1       p_2 + 1    p_3
         p_2 + 1   3          p_1
         2 - p_3   4p_2 + 1   1   ]    (5.2a)

and the vector

b(p) = (2p_1, p_3 - 1, 1)^T. (5.2b)

Thus, n = 3 and m = 3. It is seen that

A^(0) = [0 1 0; 1 3 0; 2 1 1],  A^(1) = [1 0 0; 0 0 1; 0 0 0],
A^(2) = [0 1 0; 1 0 0; 0 4 0],  A^(3) = [0 0 1; 0 0 0; -1 0 0]    (5.2c)

and

b^(0) = (0, -1, 1)^T,  B = [2 0 0; 0 0 1; 0 0 0]. (5.2d)

The initial interval vector p^s = (p_1, ..., p_3) is given by (4.2) with

p^0 = (0.5, 0.5, 0.5)^T,  r^0 = (0.5, 0.5, 0.5)^T. (5.3a)

In the following three subsections, we use the data (5.2), (5.3) to illustrate the application of the present method to solving the SIHS problem. The solution of a GIHS problem is treated in the fourth subsection.

5.1 Fixing an Index

In this instance, the parameter vector p = (p_1, ..., p_3) belongs to p = (p_1, ..., p_3) defined by (4.2a) and (5.3a) for ρ = 0.1, i.e.

p = p^0 + 0.1[-r^0, r^0]. (5.3b)

The problem considered here is to determine the range x*_3, so the fixed index is k = 3. We first determine the lower end-point x̲*_3. Application of Algorithm A.V3 yields

x̲*_3 = -1.7786. (5.4)

The algorithm takes two iterations to reach x̲*_3. At the first iteration, Procedure P.V3 succeeds in reducing two interval variables p_1 and p_3 to points p*_1 and p*_3, respectively. The interval parameter p_2, however, remains unchanged. Thus, we form the reduced-width vector

p̃ = (p*_1, p_2, p*_3)^T. (5.5)

Now p̃ is substituted into (5.2) to get a modified system of the form

Ã(p_2) = A^(0) + A^(1) p*_1 + A^(3) p*_3 + A^(2) p_2 = Ã^(0) + A^(2) p_2, (5.6a)
b̃(p_2) = b^(0) + B^(1) p*_1 + B^(3) p*_3 + B^(2) p_2 = b̃^(0) + B^(2) p_2 (5.6b)

(B^(j) denotes the corresponding j-th column of the matrix B).
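For readers who wish to experiment with the affine representation (5.1) and with point systems such as (5.8) below, the following sketch assembles A(p) and b(p) at a point p and solves the resulting linear system. The 2 × 2 data are hypothetical placeholders only (they are not the system (5.2), some signs of which may not have survived transcription), and NumPy is used merely for convenience.

import numpy as np

def assemble(A0, A_mu, b0, B, p):
    # A(p) = A^(0) + sum_mu A^(mu) p_mu,  b(p) = b^(0) + B p, cf. (5.1).
    A = A0 + sum(p_val * A_m for p_val, A_m in zip(p, A_mu))
    return A, b0 + B @ p

# Hypothetical 2 x 2 data with two parameters (placeholders only):
A0 = np.array([[2.0, 1.0], [1.0, 3.0]])
A_mu = [np.array([[1.0, 0.0], [0.0, 0.0]]),
        np.array([[0.0, 0.0], [0.0, 1.0]])]
b0 = np.array([1.0, 2.0])
B = np.array([[0.5, 0.0], [0.0, 1.0]])

p_vertex = np.array([0.55, 0.45])        # a vertex of the parameter box
A, b = assemble(A0, A_mu, b0, B, p_vertex)
x_check = np.linalg.solve(A, b)          # point solution at the vertex, as in (5.8)
print(x_check)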

At this point, the second iteration of Algorithm A.V3 is initiated. This time, Procedure P.V3 is applied to (5.6) after p_2 has been renamed p. Now the last interval parameter p = p_2 is successfully reduced to the end-point p̄_2. Thus, it is seen that for the problem considered Algorithm A.V3 terminates in outcome T2 with

p* = (0.55, 0.55, 0.45)^T. (5.7)

Finally, the global solution (5.4) of (5.2) is obtained after solving

A(p*)x = b(p*) (5.8)

for x̌ and letting x̲*_3 = x̌_3.

In Table 1, data concerning the algorithm used and the results obtained are given. The current iteration number of Algorithm A.V3 is denoted by ν; n_in denotes the number of iterations needed by the local optimization (LO) method used to compute the upper (inner) bound x^u_k. The lower end x̲_3 of the outer enclosure [x̲_3, x̄_3] at the corresponding iteration ν is given in the second column of the table. In the next two columns, the values of x^u_3 and n_in are listed for each ν. The parameter vectors of reduced width are presented in the fifth column. The last column includes the approximate value of x̲*_3 (for ν = 2). The optimal parameter vector p* is given in the second entry of the fifth column.

Table 1: Data on Algorithm A.V3 for k = 3 and ρ = 0.1

ν   x̲_3       x^u_3     n_in   p̃                       x̲*_3
1   -1.7982   -1.7785   2      (0.55, p_2, 0.45)^T
2   -1.7800   -1.7785   2      (0.55, 0.55, 0.45)^T    -1.7786

Remark 5.1 The complexity of Algorithm A.V3 can be assessed by the number N_s of (n × n) linear systems solved. We shall distinguish between N_ip and N_pp: the number of interval parameter (IP) systems and the number of point (noninterval) parameter systems, respectively, since the former systems are harder to solve than the latter. To simplify the analysis, we assume that solving (2.8b) once and (3.3a) m_ν times (m_ν being the number of interval parameters at the ν-th iteration) can be equated to solving one single IP system, since A(p) is the same for all m_ν + 1 systems. This is a reasonable assumption if m ≤ n, which is often the case in practice [5]. Then, as is easily seen, N_ip varies between N_ip = 1 (in the best case) and N_ip = m (in the worst case). The number N_pp is given by the sum of n_in^(ν) over ν, where n_in^(ν) is the number of iterations needed by the LO method chosen to find the bound x^u_k on x̲*_k at the ν-th iteration. For the LO method of [5] used in the present example, n_in^(ν) varies between n_in^(ν) = 2 (minimum value) and n_in^(ν) = m_ν + 1 (maximum admissible value). Thus N_pp = 2 in the best case; in the worst case, N_pp = (m + 1) + m + ... + 2 + 1 = (m + 1)(m + 2)/2. Therefore, N_s = N_ip + N_pp varies between N_ip + N_pp = 3 and N_ip + N_pp = m + (m + 1)(m + 2)/2. It is interesting to note that the number N_ip related to versions V2 and V1 of Algorithm A is the same and equal to the N_ip associated with version V3.

Similar results are obtained in determining the upper end-point x̄*_3 using Algorithm Au.V3. These are reported in Table 2 (n_in denoting the number of iterations needed by the LO method used to compute the lower (inner) bound x^l_3).

Table 2: Data on Algorithm Au.V3 for k = 3 and ρ = 0.1

ν   x̄_3       x^l_3     n_in   p̃                       x̄*_3
1   -1.3447   -1.3824   2      (0.45, p_2, 0.55)^T
2   -1.3823   -1.3824   2      (0.45, 0.45, 0.55)^T    -1.3823
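As a small worked instance of Remark 5.1: for m = 3, as in the present example, the bounds given there yield 1 ≤ N_ip ≤ 3 and 2 ≤ N_pp ≤ (3 + 1)(3 + 2)/2 = 10, so the total number of linear systems solved satisfies 3 ≤ N_s ≤ 13.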

Remark 5.2 The approximate four-digit values for x̲_3, x^u_3 and x̲*_3 reported in Table 1 and for x̄_3, x^l_3 and x̄*_3 in Table 2 are obtained after appropriate directed roundings. Thus, downward rounding has been used to represent x̲_3, x^l_3 and x̲*_3, while upward rounding is needed for x̄_3, x̄*_3 and x^u_3. It should be borne in mind that the appropriate roundings of x̲_k, x^u_k and x^l_k, x̄_k are mandatory when rigorously implementing the present method in order to provide reliable monotonicity properties and final results for x̲*_k and x̄*_k.

5.2 Computing All of x*

The problem in this subsection is to determine the whole IH vector x* associated with problem (5.2), (5.3). According to the present paper's approach, x* is found in a componentwise manner, i.e. by separately computing each end-point of each range x*_k. The data for k = 3 have been given in Tables 1 and 2 of the previous example. Thus, Algorithms A.V3 and Au.V3 remain to be applied for k = 1 and k = 2. Tables 3 and 4 summarize the relevant results.

Table 3: Data on A.V3 and Au.V3 for k = 1 and ρ = 0.1

A.V3    ν = 1   n_in = 2   p* = (0.45, 0.55, 0.55)^T   x̲*_1 = 0.1826
Au.V3   ν = 1   n_in = 2   p* = (0.55, 0.45, 0.45)^T   x̄*_1 = 0.4052

Table 4: Data on A.V3 and Au.V3 for k = 2 and ρ = 0.1

A.V3    ν = 2   n_in = 2   p* = (0.55, 0.45, 0.55)^T   x̲*_2 = 0.0277
Au.V3   ν = 3   n_in = 2   p* = (0.45, 0.45, 0.45)^T   x̄*_2 = 0.0654

5.3 Determining the Applicability Radius

We now consider the problem of determining the applicability radius r_a of Algorithm A.V3 of the present method for the case of k = 2.

With this in mind, we introduce the family (4.2)

p(ρ) = p^0 + ρ[-r^0, r^0], (5.9)

where p^0 and r^0 are given by (5.3a). In accordance with the definition (4.3), we estimate r_a(A) of the respective algorithm approximately by letting ρ increase with an increment Δρ until inapplicability is reached. The data concerning Algorithm A.V3 are given in the second row of Table 5. Around the critical value of r_a, the increment of ρ was chosen to be Δρ = 0.001. Thus, r_a(A.V3) = 0.165 means that Algorithm A.V3 becomes inapplicable for ρ = r_a(A.V3) + Δρ = 0.166.

Table 5: Data on the applicability radii of Algorithms A.V3 and A.V0 for k = 2

algorithm   ν   r_a     p*                             x̲*_2
A.V3        3   0.165   (0.5825, 0.4175, 0.5825)^T     0.0137
A.V0        3   0.104   (0.5520, 0.4480, 0.5520)^T     0.0269

We now compare r_a(A.V3) with the applicability radius of the method M0 from [5] (in fact, version M3). To make the comparison invariant with respect to the way the enclosures d_ℓ^(k) are computed, d_ℓ^(k) was evaluated in the same manner in both algorithms. This modified version of M0 is denoted by Algorithm A.V0. The numerical evidence shows that, approximately, A.V0 fails to be applicable for ρ = 0.105 (third row of Table 5). It is seen that, in accordance with the theoretical considerations from Section 4.1,

r_a(A.V3) > r_a(A.V0). (5.10)

Thus, it has been shown that Algorithm A.V3 of the present method is capable of solving LIP problems of larger uncertainties than Algorithm A.V0 of the previous method M0 [5].

5.4 Solving a GIHS Problem

In this final subsection, we illustrate the application of the present method to solving a GIHS problem. The parametric system is again (5.2), (5.3), while the output variable vector y is the scalar

y(p) = Σ_{k=1}^{3} x_k^2(p). (5.11)

If we interpret the components x_k of x as the projections of the vector x onto the axes in Euclidean space, then (5.11) has a clear geometrical meaning: y is the square of the length of the vector x. Thus, the GIHS problem defined by (5.2), (5.3) and (5.11) consists of determining the range y* of y over p.

On account of (5.11),

∂y/∂p_ℓ (p) = 2 [ Σ_{k=1}^{3} x_k(p) ∂x_k/∂p_ℓ (p) ]. (5.12)

Now we bound ∂y/∂p_ℓ using different enclosures x_k and d_ℓ^(k) for x_k(p) and ∂x_k/∂p_ℓ(p) over p, respectively. First, we use data related to the standard (global) monotonicity conditions.

Therefore, we use the outer bounds x_k and d_ℓ^(k) obtained at the first iteration of algorithms A.V0 and Au.V0. We have (after cancelling the factor of 2)

D_ℓ = Σ_{k=1}^{3} x_k d_ℓ^(k), (5.13)

so

D_1 = [0.3533, 2.0732] > 0,
D_2 = [0.1419, 0.6657] > 0,
D_3 = [-5.3009, -1.6776] < 0. (5.13a)

Hence, on account of (5.11) to (5.13a),

y̲* = Σ_{k=1}^{3} x_k^2(p^(l)) = 1.9108, (5.14a)

where

p^(l) = (p̲_1, p̲_2, p̄_3)^T = (0.45, 0.45, 0.55)^T. (5.14b)

The corresponding vector x(p^(l)) is the solution of A(p^(l))x = b(p^(l)). In a similar manner,

ȳ* = Σ_{k=1}^{3} x_k^2(p^(u)) = 3.1631, (5.15a)

where

p^(u) = (p̄_1, p̄_2, p̲_3)^T = (0.55, 0.55, 0.45)^T (5.15b)

and x(p^(u)) is the solution of A(p^(u))x = b(p^(u)). Thus

y* = [1.9108, 3.1631]. (5.16)

A narrower interval D′_ℓ bounding ∂y/∂p_ℓ(p), p ∈ p, is obtained if the x_k in (5.13) are replaced with x*_k (since x*_k ⊆ x_k):

D′_ℓ = Σ_{k=1}^{3} x*_k d_ℓ^(k). (5.17)

The data for D_ℓ and D′_ℓ are reported in the second and third rows, respectively, of Table 6.

Next, we bound ∂y/∂p_ℓ using the modified monotonicity approach. We start with the case of determining modified monotonicity conditions related to the lower end-point y̲* of the range y*. In that case, x_k will be replaced with x̃_k = [x̲_k, x^u_k]; the corresponding modified derivatives d̃_ℓ^(k) (computed at the first iteration of A.V3) will replace the previous d_ℓ^(k). The values of the resulting intervals D̃_ℓ^(l) for that case are given in the fourth row of Table 6.

In a similar manner, we determine modified monotonicity conditions D̃_ℓ^(u) related to the upper end-point ȳ* of y*. Now x_k and d_ℓ^(k) will be replaced with x̃_k = [x^l_k, x̄_k] and d̃_ℓ^(k), respectively. The values for D̃_ℓ^(u) are given in the fifth row of Table 6.

As expected, the intervals D̃_ℓ^(l) and D̃_ℓ^(u) provide more effective (easier to satisfy) monotonicity conditions as compared to D_ℓ or even D′_ℓ. Indeed, the lower end-points of the intervals D̃_ℓ^(l), ℓ = 1 and ℓ = 2, are much higher than the lower