Introduction to max-linear programming


P. Butkovič* and Abdulhadi Aminu†

September 23, 2008

Abstract. Let $a \oplus b = \max(a, b)$ and $a \otimes b = a + b$ for $a, b \in \mathbb{R}$. Extend this pair of operations to matrices and vectors in the same way as in linear algebra. Motivated by the scheduling of multiprocessor interactive systems, we introduce max-linear programs of the form $f^T \otimes x \to \min$ (or $\max$) subject to $A \otimes x \oplus c = B \otimes x \oplus d$ and develop solution methods for both of them. We prove that these methods are pseudopolynomial if all entries are integer. This result is based on an existing pseudopolynomial algorithm for solving systems of the form $A \otimes x = B \otimes y$.

AMS Classification [2000]: Primary 90C26, 90C27.

Keywords: Max-linear programming; Optimal solution; Pseudopolynomial algorithm.

* Corresponding author. School of Mathematics, University of Birmingham, Edgbaston, Birmingham B15 2TT, United Kingdom, p.butkovic@bham.ac.uk. Supported by EPSRC grant RRAH12809.
† Department of Mathematical Sciences, Kano University of Science and Technology, Wudil, P.M.B 3244, Kano, Nigeria.

1 Problem formulation

Consider the following multiprocessor interactive system (MPIS): products $P_1, \dots, P_m$ are prepared using $n$ processors, every processor contributing to the completion of each product by producing a partial product. It is assumed that every processor can work on all products simultaneously and that all these actions on a processor start as soon as the processor starts to work. Let $a_{ij}$ be the duration of the work of the $j$-th processor needed to complete the partial product for $P_i$ $(i = 1, \dots, m;\ j = 1, \dots, n)$. Let us denote by $x_j$ the starting time of the $j$-th processor $(j = 1, \dots, n)$. Then all partial products for $P_i$ $(i = 1, \dots, m)$ will be ready at time $\max(x_1 + a_{i1}, \dots, x_n + a_{in})$. Now suppose that, independently, $k$ other processors prepare partial products for products $Q_1, \dots, Q_m$, with durations and starting times $b_{ij}$ and $y_j$, respectively. Then the synchronisation problem is to find starting times of all $n + k$ processors so that each pair $(P_i, Q_i)$ $(i = 1, \dots, m)$ is completed at the same time. This task is equivalent to solving the system of equations

$\max(x_1 + a_{i1}, \dots, x_n + a_{in}) = \max(y_1 + b_{i1}, \dots, y_k + b_{ik}) \quad (i = 1, \dots, m).$

It may also be required that $P_i$ is not completed before a particular time $c_i$, and similarly $Q_i$ not before time $d_i$. Then the equations are

$\max(x_1 + a_{i1}, \dots, x_n + a_{in}, c_i) = \max(y_1 + b_{i1}, \dots, y_k + b_{ik}, d_i) \quad (i = 1, \dots, m). \quad (1)$

If we denote $a \oplus b = \max(a, b)$ and $a \otimes b = a + b$ for $a, b \in \mathbb{R}$, then this system takes the form

$\bigoplus_{j=1,\dots,n} a_{ij} \otimes x_j \oplus c_i = \bigoplus_{j=1,\dots,k} b_{ij} \otimes y_j \oplus d_i \quad (i = 1, \dots, m). \quad (2)$

Therefore (1) (and also (2)) is called a two-sided system of max-linear equations (or briefly a two-sided max-linear system, or just a max-linear system).

Lemma 1.1 (Cancellation Law) Let $v, w, a, b \in \mathbb{R}$, $a > b$. Then for any real $x$ we have

$v \oplus a \otimes x = w \oplus b \otimes x \quad (3)$

if and only if

$v \oplus a \otimes x = w. \quad (4)$

Proof. If $x$ satisfies (3) then LHS $\geq a \otimes x > b \otimes x$. Hence RHS $= w$ and (4) follows. If (4) holds then $w \geq a \otimes x > b \otimes x$ and thus $w = w \oplus b \otimes x$.

Lemma 1.1 shows that in a two-sided max-linear system, variables missing on one side of an equation may be artificially introduced using suitably taken small coefficients. We may therefore assume without loss of generality that (2) has the same variables on both sides, that is, in matrix-vector notation it has the form

$A \otimes x \oplus c = B \otimes x \oplus d$

where the pair of operations $(\oplus, \otimes)$ is extended to matrices and vectors in the same way as in linear algebra. In applications it may be required that the starting times are optimised with respect to a given criterion. In this paper we consider the case when the objective function is also max-linear, that is, $f(x) = f^T \otimes x = \max(f_1 + x_1, \dots, f_n + x_n)$, and it has to be either minimised or maximised. For instance it may be required that all processors in an MPIS are in motion as soon/as late as possible, that is, the latest starting time of a processor is as small/big as possible. In this case we would set $f(x) = \max(x_1, \dots, x_n)$, that is, all $f_j = 0$. Thus the problems we will study are

$f^T \otimes x \to \min$ or $\max$
s.t. $A \otimes x \oplus c = B \otimes x \oplus d.$

Optimisation problems of this type will be called max-linear programming problems or, briefly, max-linear programs (MLP). Systems of max-linear equations were investigated already in the first publications dealing with the algebraic structure called max-algebra (sometimes also extremal algebra, path algebra or tropical algebra). In these publications, systems of equations with all variables on one side were considered [7], [14], [16], [3]. Other systems with a special structure were studied in the context of solving eigenvalue problems in the corresponding algebraic structures, or of synchronisation in discrete-event systems [2]. Using the $(\oplus, \otimes)$-notation, the studied systems had one of the following forms: $A \otimes x = b$, $A \otimes x = \lambda \otimes x$ or $A \otimes x = \lambda \otimes x \oplus b$, where $A$ is a given matrix and $b$ is a given vector. Infinite-dimensional generalisations can be found e.g. in [1].

General two-sided max-linear systems have also been studied [4], [9], [10], [15]. A general solution method was presented in [15]; however, no complexity bound was given. In [9] a pseudopolynomial algorithm, called the Alternating Method, has been developed. In [4] it was shown that the solution set is generated by a finite number of vectors, and an elimination method was suggested. A general iterative approach suggested in [10] assumes that finite upper and lower bounds for all variables are given. We make substantial use of the Alternating Method for solving two-sided max-linear systems in the present paper and derive a bisection method for the MLP that repeatedly checks solvability of systems of the form $A \otimes x = B \otimes x$. To our knowledge this problem has not been studied before. We prove that the number of calls of a subroutine for checking feasibility is polynomial when applied to max-linear programs with integer entries, yielding a pseudopolynomial computational complexity overall. Note that the problem of minimising the function $2^{x_1} + 2^{x_2} + \dots + 2^{x_n}$ subject to one-sided max-linear constraints is NP-complete. This result is motivated by a similar result presented in [5], and details are presented at the end of the paper.

2 Max-algebraic prerequisites

Let $a \oplus b = \max(a, b)$ and $a \otimes b = a + b$ for $a, b \in \mathbb{R}$. If $a \in \mathbb{R}$ then the symbol $a^{-1}$ stands in this paper for $-a$. By max-algebra we understand the analogue of linear algebra developed for the pair of operations $(\oplus, \otimes)$, extended to matrices and vectors in the same way as in linear algebra. That is, if $A = (a_{ij})$, $B = (b_{ij})$ and $C = (c_{ij})$ are matrices of compatible sizes with entries from $\mathbb{R}$, we write $C = A \oplus B$ if $c_{ij} = a_{ij} \oplus b_{ij}$ for all $i, j$, and $C = A \otimes B$ if $c_{ij} = \bigoplus_k a_{ik} \otimes b_{kj} = \max_k (a_{ik} + b_{kj})$ for all $i, j$. If $\alpha \in \mathbb{R}$ then $\alpha \otimes A = A \otimes \alpha = (\alpha \otimes a_{ij})$. The main advantage of using max-algebra is the possibility of dealing with a class of non-linear problems in a linear-like way. This is due to the fact that the basic rules (commutative, associative and distributive laws) hold in max-algebra to the same extent as in linear algebra.
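These matrix operations translate directly into a few lines of code. The following NumPy sketch (illustrative only, not part of the paper) implements $\oplus$, $\otimes$ and the objective $f(x) = f^T \otimes x$:

```python
import numpy as np

def oplus(A, B):
    """Entrywise maximum: (A (+) B)_ij = max(a_ij, b_ij)."""
    return np.maximum(A, B)

def otimes(A, B):
    """Max-plus matrix product: (A (x) B)_ij = max_k (a_ik + b_kj)."""
    return (A[:, :, None] + B[None, :, :]).max(axis=1)

def mv(A, x):
    """Max-plus matrix-vector product: (A (x) x)_i = max_j (a_ij + x_j)."""
    return (A + x[None, :]).max(axis=1)

def obj(f, x):
    """f(x) = f^T (x) x = max_j (f_j + x_j)."""
    return (f + x).max()
```

Scalar multiplication $\alpha \otimes A$ is simply `alpha + A`, so the distributive and associative laws quoted below are the familiar rules of addition and maximum.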

Max-algebra has been studied by many authors and the reader is referred to [7], [8], [11], [2] or [3] for more information; see also [6], [14], [16]. A chapter in [12] provides an excellent state-of-the-art overview of the field. We will now summarise some standard properties that will be used later on. The following hold for $a, b, c \in \mathbb{R}$:

$a \oplus b \geq a$
$a \geq b \implies a \oplus c \geq b \oplus c$
$a \geq b \iff a \otimes c \geq b \otimes c$

For matrices (including vectors) $A, B, C$ of compatible sizes over $\mathbb{R}$ and $\alpha \in \mathbb{R}$ we have:

$A \oplus B \geq A$
$A \geq B \implies A \oplus C \geq B \oplus C$
$A \geq B \implies A \otimes C \geq B \otimes C$
$A \geq B \implies C \otimes A \geq C \otimes B$
$A \geq B \implies \alpha \otimes A \geq \alpha \otimes B$
$(\alpha \otimes A) \otimes B = A \otimes (\alpha \otimes B)$

The next statement readily follows from the above-mentioned relations.

Lemma 2.1 Suppose $f \in \mathbb{R}^n$ and let $f(x) = f^T \otimes x$ be defined on $\mathbb{R}^n$. Then
(a) $f(x)$ is max-linear, that is, $f(\alpha \otimes x \oplus \beta \otimes y) = \alpha \otimes f(x) \oplus \beta \otimes f(y)$ for every $x, y \in \mathbb{R}^n$ and $\alpha, \beta \in \mathbb{R}$;
(b) $f(x)$ is isotone, that is, $f(x) \leq f(y)$ for every $x, y \in \mathbb{R}^n$ with $x \leq y$.

3 Max-linear programming problem and its basic properties

The aim of this paper is to develop methods for finding an $x \in \mathbb{R}^n$ that minimises [maximises] the function $f(x) = f^T \otimes x$ subject to

$A \otimes x \oplus c = B \otimes x \oplus d \quad (5)$

where $f = (f_1, \dots, f_n)^T \in \mathbb{R}^n$, $c = (c_1, \dots, c_m)^T, d = (d_1, \dots, d_m)^T \in \mathbb{R}^m$ and $A = (a_{ij}), B = (b_{ij}) \in \mathbb{R}^{m \times n}$ are given vectors and matrices. These problems will be denoted by MLP$^{\min}$ [MLP$^{\max}$], and we also denote everywhere $M = \{1, \dots, m\}$, $N = \{1, \dots, n\}$. Note that it is not possible to convert MLP$^{\min}$ to MLP$^{\max}$ or vice versa. Any system of the form (5) is called a non-homogeneous max-linear system, and the set of solutions of this system will be denoted by $S$. The set of optimal solutions for MLP$^{\min}$ [MLP$^{\max}$] will be denoted by $S^{\min}$ [$S^{\max}$]. Any system of the form

$E \otimes z = F \otimes z \quad (6)$

is called a homogeneous max-linear system, and the solution set of this system will be denoted by $S_h$. In the next proposition we show that any non-homogeneous max-linear system can easily be converted to a homogeneous one. Here and elsewhere the symbol $0$ will be used to denote both the real number zero and the zero vector of an appropriate dimension.

Proposition 3.1 Let $E = (A\,|\,c)$ and $F = (B\,|\,d)$ be the matrices arising from $A$ and $B$ by appending $c$ and $d$, respectively, as last columns. If $x \in S$ then $(x^T|0)^T \in S_h$ and, conversely, if $z = (z_1, \dots, z_{n+1})^T \in S_h$ then $z_{n+1}^{-1} \otimes (z_1, \dots, z_n)^T \in S$.

Proof. The statement follows straightforwardly from the definitions.

Given MLP$^{\min}$ [MLP$^{\max}$] we denote

$K = \max\{|a_{ij}|, |b_{ij}|, |c_i|, |d_i|, |f_j|;\ i \in M,\ j \in N\}. \quad (7)$

Theorem 3.1 [9] Let $E = (e_{ij}), F = (f_{ij}) \in \mathbb{Z}^{m \times n}$ and let $K'$ be the greatest of the values $|e_{ij}|, |f_{ij}|$, $i \in M$, $j \in N$. There is an algorithm of computational complexity $O(mn(m+n)K')$ that finds an $x$ satisfying (6) or decides that no such $x$ exists.

Proposition 3.1 and Theorem 3.1 show that the feasibility question for MLP$^{\max}$ and MLP$^{\min}$ can be solved in pseudopolynomial time. We will use this result to develop bisection methods for solving MLP$^{\min}$ and MLP$^{\max}$. We will prove that these methods need a polynomial number of feasibility checks if all entries are integer, and hence are also of pseudopolynomial complexity. The algorithm in [9] is an iterative procedure that starts with an arbitrary vector and then only uses the operations $+$, $-$, $\max$ and $\min$ applied to the starting vector and the entries of $E, F$. Hence, using Proposition 3.1, we deduce:

Theorem 3.2 If all entries in a homogeneous max-linear system are integer and the system has a solution, then this system has an integer solution. The same is true for non-homogeneous max-linear systems.

As a corollary to Lemma 1.1 we have:

Lemma 3.1 Let $\lambda, \lambda' \in \mathbb{R}$, $\lambda' < \lambda$, and let $f(x) = f^T \otimes x$, $f'(x) = f'^T \otimes x$ where $f'_j < f_j$ for every $j \in N$. Then the following holds for every $x \in \mathbb{R}^n$: $f(x) = \lambda$ if and only if $f(x) \oplus \lambda' = f'(x) \oplus \lambda$.

The following proposition shows that the problem of attainment of a value by a max-linear program can be converted to a feasibility question.

Proposition 3.2 $f(x) = \lambda$ for some $x \in S$ if and only if the following non-homogeneous max-linear system has a solution:

$A \otimes x \oplus c = B \otimes x \oplus d$
$f(x) \oplus \lambda' = f'(x) \oplus \lambda$

where $\lambda' < \lambda$ and $f'(x) = f'^T \otimes x$ with $f'_j < f_j$ for every $j \in N$.

Proof. The statement follows from Lemma 1.1 and Lemma 3.1.

Corollary 3.1 If all entries in MLP$^{\max}$ or MLP$^{\min}$ are integer, then an integer objective function value is attained by a real feasible solution if and only if it is attained by an integer feasible solution.

Proof. It follows immediately from Theorem 3.2 and Proposition 3.2.

Corollary 3.2 If all entries in MLP$^{\max}$ or MLP$^{\min}$ and $\lambda$ are integer, then the decision problem whether $f(x) = \lambda$ for some $x \in S \cap \mathbb{Z}^n$ can be solved using $O(mn(m+n)K')$ operations, where $K' = \max(K+1, |\lambda|)$.

Proof. For $\lambda'$ and $f'_j$ in Proposition 3.2 we can take $\lambda - 1$ and $f_j - 1$, respectively. Using Theorem 3.1 and Proposition 3.2, the computational complexity then is $O((m+1)(n+1)(m+n+2)K') = O(mn(m+n)K')$.

A set $C \subseteq \mathbb{R}^n$ is said to be max-convex if $\alpha \otimes x \oplus \beta \otimes y \in C$ for every $x, y \in C$ and $\alpha, \beta \in \mathbb{R}$ with $\alpha \oplus \beta = 0$.

Proposition 3.3 $S$ and $S_h$ are max-convex.

Proof. For $x, y \in S$ and $\alpha \oplus \beta = 0$:

$A \otimes (\alpha \otimes x \oplus \beta \otimes y) \oplus c$
$= A \otimes (\alpha \otimes x \oplus \beta \otimes y) \oplus \alpha \otimes c \oplus \beta \otimes c$
$= \alpha \otimes (A \otimes x \oplus c) \oplus \beta \otimes (A \otimes y \oplus c)$
$= \alpha \otimes (B \otimes x \oplus d) \oplus \beta \otimes (B \otimes y \oplus d)$
$= B \otimes (\alpha \otimes x \oplus \beta \otimes y) \oplus \alpha \otimes d \oplus \beta \otimes d$
$= B \otimes (\alpha \otimes x \oplus \beta \otimes y) \oplus d.$

Hence $S$ is max-convex, and $S_h$ is max-convex for similar reasons.

Proposition 3.4 If $x, y \in S$ and $f(x) = \alpha < \beta = f(y)$, then for every $\gamma \in (\alpha, \beta)$ there is a $z \in S$ satisfying $f(z) = \gamma$.

Proof. Let $\mu = 0$, $\nu = \gamma \otimes \beta^{-1}$ and $z = \mu \otimes x \oplus \nu \otimes y$. Then $\mu \oplus \nu = 0$, $z \in S$ by Proposition 3.3, and by Lemma 2.1 we have $f(z) = \mu \otimes f(x) \oplus \nu \otimes f(y) = \alpha \oplus \gamma \otimes \beta^{-1} \otimes \beta = \gamma$.

Before we develop solution methods for the optimisation problems MLP$^{\min}$ and MLP$^{\max}$ we need to find and prove criteria for the existence of optimal solutions. For simplicity we denote $\inf_{x \in S} f(x)$ by $f^{\min}$ and, similarly, $\sup_{x \in S} f(x)$ by $f^{\max}$.
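Propositions 3.1 and 3.2 together reduce the attainment question "is $f(x) = \lambda$ for some $x \in S$?" to the feasibility of a single homogeneous two-sided system. A sketch of the construction, with the choices $\lambda' = \lambda - 1$ and $f' = f - 1$ from Corollary 3.2 (the feasibility oracle itself, e.g. the Alternating Method of [9], is assumed to be available separately):

```python
import numpy as np

def attainment_system(A, B, c, d, f, lam):
    """Build E, F such that E (x) w = F (x) w is solvable
    iff f(x) = lam for some x in S.

    Rows 1..m encode A (x) x (+) c = B (x) x (+) d; the last row encodes
    f(x) (+) (lam - 1) = f'(x) (+) lam with f' = f - 1 (Lemma 3.1, Cor. 3.2);
    the extra column homogenises the system (Proposition 3.1).
    """
    E = np.block([[A, c[:, None]],
                  [f[None, :], np.array([[lam - 1.0]])]])
    F = np.block([[B, d[:, None]],
                  [f[None, :] - 1, np.array([[lam * 1.0]])]])
    return E, F

# If an oracle returns w solving E (x) w = F (x) w, Proposition 3.1 recovers
# x = w[:n] - w[n]   (multiplication by w_{n+1}^{-1} in max-algebra).
```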

We start with the lower bound. We may assume without loss of generality that in (5) we have $c \geq d$. Let $M^> = \{i \in M;\ c_i > d_i\}$. For $r \in M^>$ we denote

$L_r = \min_{k \in N} f_k \otimes c_r \otimes b_{rk}^{-1}$

and

$L = \max_{r \in M^>} L_r.$

As usual, $\max \emptyset = -\infty$ by definition.

Lemma 3.2 If $c \geq d$ then $f(x) \geq L$ for every $x \in S$.

Proof. If $M^> = \emptyset$ then the statement follows trivially since $L = -\infty$. Let $x \in S$ and $r \in M^>$. Then $(B \otimes x)_r \geq c_r$ and so $x_k \geq c_r \otimes b_{rk}^{-1}$ for some $k \in N$. Hence $f(x) \geq f_k \otimes x_k \geq f_k \otimes c_r \otimes b_{rk}^{-1} \geq L_r$ and the statement now follows.

Theorem 3.3 $f^{\min} = -\infty$ if and only if $c = d$.

Proof. If $c = d$ then $\alpha \otimes x \in S$ for any $x \in \mathbb{R}^n$ and every $\alpha \in \mathbb{R}$ small enough. Hence by letting $\alpha \to -\infty$ we have $f(\alpha \otimes x) = \alpha \otimes f(x) \to -\infty$. If $c \neq d$ then without loss of generality $c \geq d$, and the statement now follows from Lemma 3.2 since $L > -\infty$.

Now we discuss the upper bound.

Lemma 3.3 Let $c \geq d$. If $x \in S$ and $(A \otimes x)_i > c_i$ for all $i \in M$, then $x' = \alpha \otimes x \in S$ and $(A \otimes x')_i = c_i$ for some $i \in M$, where

$\alpha = \max_{i \in M} c_i \otimes ((A \otimes x)_i)^{-1}. \quad (8)$

Proof. Let $x \in S$. If $(A \otimes x)_i > c_i$ for every $i \in M$ then $A \otimes x = B \otimes x$. For every $\alpha \in \mathbb{R}$ we also have $A \otimes (\alpha \otimes x) = B \otimes (\alpha \otimes x)$. It follows from the choice of $\alpha$ that $(A \otimes (\alpha \otimes x))_i = \alpha \otimes (A \otimes x)_i \leq c_i$ for every $i \in M$, with equality for at least one $i \in M$. Hence $x' \in S$ and the lemma follows.

Let us denote

$U = \max_{r \in M} \max_{j \in N} f_j \otimes a_{rj}^{-1} \otimes c_r.$
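The bounds $L$ and $U$ are direct row- and columnwise computations. A sketch (assuming, as above, $c \geq d$; with the data of the example in Section 6 the first function returns $-5$):

```python
import numpy as np

def lower_bound_L(B, c, d, f):
    """L = max over rows r with c_r > d_r of min_k (f_k + c_r - b_rk)."""
    rows = np.where(c > d)[0]            # the set M^>
    if rows.size == 0:
        return -np.inf                   # maximum of the empty set
    return max((f + c[r] - B[r]).min() for r in rows)

def upper_bound_U(A, c, f):
    """U = max_r max_j (f_j - a_rj + c_r)."""
    return (f[None, :] - A + c[:, None]).max()
```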

Lemma 3.4 Let $c \geq d$. Then the following hold:
(a) If $x \in S$ and $(A \otimes x)_r \leq c_r$ for some $r \in M$, then $f(x) \leq U$.
(b) If $A \otimes x = B \otimes x$ has no solution, then $f(x) \leq U$ for every $x \in S$.

Proof. (a) Since $a_{rj} \otimes x_j \leq c_r$ for all $j \in N$, we have $x_j \leq a_{rj}^{-1} \otimes c_r$ and hence

$f(x) \leq \max_{j \in N} f_j \otimes a_{rj}^{-1} \otimes c_r \leq U.$

(b) If $S = \emptyset$ then the statement holds trivially. Let $x \in S$. Then $(A \otimes x)_r \leq c_r$ for some $r \in M$, since otherwise $A \otimes x = B \otimes x$; the statement now follows from (a).

Theorem 3.4 $f^{\max} = +\infty$ if and only if $A \otimes x = B \otimes x$ has a solution.

Proof. We may assume without loss of generality that $c \geq d$. If $A \otimes x = B \otimes x$ has no solution then the statement follows from Lemma 3.4. If it has a solution, say $z$, then for all sufficiently big $\alpha \in \mathbb{R}$ we have $A \otimes (\alpha \otimes z) = B \otimes (\alpha \otimes z) \geq c \oplus d$, and hence $\alpha \otimes z \in S$. The statement now follows by letting $\alpha \to +\infty$.

We also need to show that the maximal [minimal] value is attained if $S \neq \emptyset$ and $f^{\max} < +\infty$ [$f^{\min} > -\infty$]. Due to the continuity of $f$, this will be proved by showing that both for minimisation and for maximisation the set $S$ can be reduced to a compact subset. To achieve this we denote, for $j \in N$:

$h_j = \min\left(\min_{r \in M} a_{rj}^{-1} \otimes c_r,\ \min_{r \in M} b_{rj}^{-1} \otimes d_r,\ f_j^{-1} \otimes L\right) \quad (9)$

$h'_j = \min\left(\min_{r \in M} a_{rj}^{-1} \otimes c_r,\ \min_{r \in M} b_{rj}^{-1} \otimes d_r\right) \quad (10)$

and $h = (h_1, \dots, h_n)^T$, $h' = (h'_1, \dots, h'_n)^T$. Note that $h$ is finite if and only if $f^{\min} > -\infty$.

Proposition 3.5 For any $x \in S$ there is an $x' \in S$ such that $x' \geq h$ and $f(x) = f(x')$.

Proof. Let $x \in S$. It is sufficient to set $x' = x \oplus h$ since, if $x_j < h_j$ for $j \in N$, then $x_j$ is not active on any side of any equation or in the objective function, and therefore changing $x_j$ to $h_j$ will not affect any of the equations or the objective function value.
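The vectors $h$ and $h'$ of (9)-(10) are likewise columnwise minima; a small sketch:

```python
import numpy as np

def h_prime(A, B, c, d):
    """h'_j = min( min_r (c_r - a_rj), min_r (d_r - b_rj) ), formula (10)."""
    return np.minimum((c[:, None] - A).min(axis=0),
                      (d[:, None] - B).min(axis=0))

def h_vector(A, B, c, d, f, L):
    """h_j = min( h'_j, L - f_j ), formula (9); finite iff f_min > -inf."""
    return np.minimum(h_prime(A, B, c, d), L - f)
```

Proposition 3.5 is then the observation that replacing a feasible $x$ by $x \oplus h$ (that is, `np.maximum(x, h)`) changes neither feasibility nor the objective value.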

Corollary 3.3 If $f^{\min} > -\infty$ and $S \neq \emptyset$ then there is a compact set $\bar{S}$ such that

$f^{\min} = \min_{x \in \bar{S}} f(x).$

Proof. Note that $h$ is finite since $f^{\min} > -\infty$. Let $\tilde{x} \in S$, $\tilde{x} \geq h$; then

$\bar{S} = S \cap \{x \in \mathbb{R}^n;\ h_j \leq x_j \leq f_j^{-1} \otimes f(\tilde{x}),\ j \in N\}$

is a compact subset of $S$ and $\tilde{x} \in \bar{S}$. If there was a $y \in S$ with $f(y) < \min_{x \in \bar{S}} f(x) \leq f(\tilde{x})$, then by Proposition 3.5 there would be a $y' \geq h$, $y' \in S$, $f(y') = f(y)$. Hence $f_j \otimes y'_j \leq f(y') = f(y) \leq f(\tilde{x})$ for every $j \in N$, and thus $y' \in \bar{S}$, $f(y') < \min_{x \in \bar{S}} f(x)$, a contradiction.

Proposition 3.6 For any $x \in S$ there is an $x' \in S$ such that $x' \geq h'$ and $f(x) \leq f(x')$.

Proof. Let $x \in S$ and $j \in N$. It is sufficient to set $x' = x \oplus h'$ since, if $x_j < h'_j$, then $x_j$ is not active on any side of any equation, and therefore changing $x_j$ to $h'_j$ does not violate any of the equations. The rest follows from the isotonicity of $f(x)$.

Let

$S' = S \cap \{x \in \mathbb{R}^n;\ h'_j \leq x_j \leq f_j^{-1} \otimes U,\ j \in N\}.$

Corollary 3.4 If $f^{\max} < +\infty$ then $f^{\max} = \max_{x \in S'} f(x)$.

Proof. The statement follows immediately from Lemma 3.4, Theorem 3.4 and Proposition 3.6.

Corollary 3.5 If $S \neq \emptyset$ and $f^{\min} > -\infty$ [$f^{\max} < +\infty$] then $S^{\min} \neq \emptyset$ [$S^{\max} \neq \emptyset$].

It follows from Lemma 3.2 that $f^{\max} \geq L$. However, this information is not useful if $c = d$, since then $L = -\infty$. Since we will need a lower bound for $f^{\max}$ even when $c = d$, we define $L' = f(h')$ and formulate the following.

Corollary 3.6 If $x \in S$ then $x' = x \oplus h'$ satisfies $f(x') \geq L'$, and thus $f^{\max} \geq L'$.

4 The Algorithms

It follows from Proposition 3.1 and Theorem 3.1 that in pseudopolynomial time either a feasible solution to (5) can be found or it can be decided that no such solution exists. Due to Theorems 3.3 and 3.4 we can also recognise the cases when the objective function is unbounded. We may therefore assume that a feasible solution exists, the objective function is bounded (from below or above, depending on whether we wish to minimise or maximise) and hence an optimal solution exists (Corollary 3.5). If $x^0 \in S$ is found, then using the scaling (if necessary) proposed in Lemma 3.3 or Corollary 3.6 we find (another) $x^0$ satisfying $L \leq f(x^0) \leq U$ or $L' \leq f(x^0) \leq U$ (see Lemmas 3.2 and 3.4). The use of the bisection method applied to either $(L, f(x^0))$ or $(f(x^0), U)$ for finding a minimiser or maximiser of $f(x)$ is then justified by Proposition 3.4. The algorithms are based on the fact (see Proposition 3.2) that checking the existence of an $x \in S$ satisfying $f(x) = \lambda$ for a given $\lambda \in \mathbb{R}$ can be converted to a feasibility problem. They stop when the interval of uncertainty is shorter than a given precision $\varepsilon > 0$.

Algorithm 4.1 MAXLINMIN (Max-Linear Minimisation)
Input: $f = (f_1, \dots, f_n)^T \in \mathbb{R}^n$; $c = (c_1, \dots, c_m)^T, d = (d_1, \dots, d_m)^T \in \mathbb{R}^m$, $c \geq d$, $c \neq d$; $A = (a_{ij}), B = (b_{ij}) \in \mathbb{R}^{m \times n}$; $\varepsilon > 0$.
Output: $x \in S$ such that $f(x) - f^{\min} \leq \varepsilon$.
1. If $L = f(x)$ for some $x \in S$ then stop ($f^{\min} = L$).
2. Find an $x^0 \in S$. If $(A \otimes x^0)_i > c_i$ for all $i \in M$ then scale $x^0$ by $\alpha$ defined in (8).
3. $L(0) := L$; $U(0) := f(x^0)$; $r := 0$
4. $\lambda := \frac{1}{2}(L(r) + U(r))$
5. Check whether $f(x) = \lambda$ is satisfied by some $x \in S$ and in the positive case find one. If yes, then $U(r+1) := \lambda$, $L(r+1) := L(r)$. If not, then $U(r+1) := U(r)$, $L(r+1) := \lambda$.
6. $r := r + 1$
7. If $U(r) - L(r) \leq \varepsilon$ then stop, else go to 4.

Theorem 4.1 Algorithm MAXLINMIN is correct and the number of iterations before termination is

$O\left(\log_2 \frac{U - L}{\varepsilon}\right).$

Proof. Correctness follows from Proposition 3.4 and Lemma 3.2. Since $c \neq d$, we have at the end of step 2: $f(x^0) \geq L > -\infty$ (Lemma 3.2) and $U(0) := f(x^0) \leq U$ by Lemma 3.4. Thus the number of iterations is $O(\log_2 \frac{U-L}{\varepsilon})$, since after every iteration the interval of uncertainty is halved.
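The bisection loop of Algorithm 4.1 is easy to state in code. In the sketch below, `attained` is a hypothetical oracle implementing step 5 via Proposition 3.2 (for instance through the Alternating Method of [9]); it returns a feasible $x$ with $f(x) = \lambda$, or `None`:

```python
def maxlinmin(L, x0, fx0, attained, eps):
    """Bisection of Algorithm 4.1 on the interval (L, f(x0)).

    Assumes c >= d, c != d (so L > -inf) and x0 already scaled as in
    Lemma 3.3, so that L <= f(x0) <= U.  Steps 1 and 2 are omitted.
    """
    lo, hi, best = L, fx0, x0
    while hi - lo > eps:
        lam = (lo + hi) / 2
        x = attained(lam)
        if x is not None:
            hi, best = lam, x      # lam is attainable: new upper end
        else:
            lo = lam               # no feasible x reaches lam: new lower end
    return best                    # f(best) - f_min <= eps
```

Each pass halves the interval of uncertainty, which is the content of Theorem 4.1.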

Algorithm 4.2 MAXLINMAX (Max-Linear Maximisation)
Input: $f = (f_1, \dots, f_n)^T \in \mathbb{R}^n$; $c = (c_1, \dots, c_m)^T, d = (d_1, \dots, d_m)^T \in \mathbb{R}^m$; $A = (a_{ij}), B = (b_{ij}) \in \mathbb{R}^{m \times n}$; $\varepsilon > 0$.
Output: $x \in S$ such that $f^{\max} - f(x) \leq \varepsilon$, or an indication that $f^{\max} = +\infty$.
1. If $U = f(x)$ for some $x \in S$ then stop ($f^{\max} = U$).
2. Check whether $A \otimes x = B \otimes x$ has a solution. If yes, stop ($f^{\max} = +\infty$).
3. Find an $x^0 \in S$ and set $x^0 := x^0 \oplus h'$, where $h'$ is as defined in (10).
4. $L(0) := f(x^0)$; $U(0) := U$; $r := 0$
5. $\lambda := \frac{1}{2}(L(r) + U(r))$
6. Check whether $f(x) = \lambda$ is satisfied by some $x \in S$ and in the positive case find one. If yes, then $U(r+1) := U(r)$, $L(r+1) := \lambda$. If not, then $U(r+1) := \lambda$, $L(r+1) := L(r)$.
7. $r := r + 1$
8. If $U(r) - L(r) \leq \varepsilon$ then stop, else go to 5.

Theorem 4.2 Algorithm MAXLINMAX is correct and the number of iterations before termination is

$O\left(\log_2 \frac{U - L'}{\varepsilon}\right).$

Proof. Correctness follows from Proposition 3.4 and Lemma 3.4. By Lemma 3.4 and Corollary 3.6, $U \geq f(x^0) \geq L'$, and thus the number of iterations is $O(\log_2 \frac{U-L'}{\varepsilon})$, since after every iteration the interval of uncertainty is halved.

5 The integer case

The algorithms of the previous section may immediately be applied to MLP$^{\min}$ or MLP$^{\max}$ when all input data are integer. However, we show that in such a case $f^{\min}$ and $f^{\max}$ are integers, and therefore the algorithms find an exact solution once the interval of uncertainty is of length one, since then either $L(r)$ or $U(r)$ is the optimal value. Note that $L$ and $U$ are now integers, and we will show how the integrality of $L(r)$ and $U(r)$ can be maintained during the run of the algorithms. This implies that the algorithms will find exact optimal solutions in a finite number of steps, and we will prove that their computational complexity is pseudopolynomial.

Theorem 5.1 If $A, B, c, d, f$ are integer, $S \neq \emptyset$ and $f^{\min} > -\infty$, then $f^{\min} \in \mathbb{Z}$ (and therefore $S^{\min} \cap \mathbb{Z}^n \neq \emptyset$).

Proof. Suppose $f^{\min} \notin \mathbb{Z}$ and let $z = (z_1, \dots, z_n)^T \in S^{\min}$. We assume again without loss of generality that $c \geq d$. For any $x \in \mathbb{R}^n$ denote

$F(x) = \{j \in N;\ f_j \otimes x_j = f(x)\}.$

Hence we have

$z_j \notin \mathbb{Z}$ for every $j \in F(z)$. $\quad (11)$

We will now show that all $z_j$, $j \in F(z)$, can be reduced while maintaining feasibility, which will be a contradiction with the optimality of $z$. To prove this we develop a special procedure called the Reduction Algorithm. Let us first denote, for $x \in \mathbb{R}^n$:

$Q(x) = \{i \in M;\ (A \otimes x)_i > c_i\}$

and, for $i \in M$ and $x \in \mathbb{R}^n$,

$T_i(x) = \{j \in N;\ a_{ij} \otimes x_j = (A \otimes x)_i\},$
$R_i(x) = \{j \in N;\ b_{ij} \otimes x_j = (B \otimes x)_i\}.$

Since all entries are integer, $a_{ij} \otimes z_j = c_i$ cannot hold for any $i \in M$ and $j \in F(z)$; and if $a_{ij} \otimes z_j < c_i$ for every $i \in M$ and $j \in F(z)$, then all $z_j$, $j \in F(z)$, could be reduced without violating any equation, which contradicts the optimality of $z$. Hence $Q(z) \neq \emptyset$.

Reduction Algorithm
1. $P(z) := F(z)$
2. $E_1 := \{i \in Q(z);\ T_i(z) \subseteq P(z)\ \&\ R_i(z) \not\subseteq P(z)\}$
   $E_2 := \{i \in Q(z);\ T_i(z) \not\subseteq P(z)\ \&\ R_i(z) \subseteq P(z)\}$
3. If $E_1 \cup E_2 = \emptyset$ then $P(z)$ is the set of indices of variables to be reduced; STOP
4. $P(z) := P(z) \cup \bigcup_{i \in E_1} (R_i(z) \setminus P(z)) \cup \bigcup_{i \in E_2} (T_i(z) \setminus P(z))$
5. Go to 2.

Claim: The Reduction Algorithm terminates after a finite number of steps, and at termination

$z_j \notin \mathbb{Z}$ for $j \in P(z)$. $\quad (12)$

Proof of Claim: Finiteness follows from the fact that the set $P(z)$ strictly increases in size at every iteration and $P(z) \subseteq N$. For the remaining part of the claim it is sufficient to prove the following for any iteration of this algorithm: if (12) holds at Step 2 then it is also true at Step 5. The statement then follows from the fact that (12) is true when Step 2 is reached for the first time, due to Step 1 and assumption (11). Consider therefore a fixed iteration at the beginning of which (12) holds. Suppose without loss of generality that $E_1 \cup E_2 \neq \emptyset$ and take any $i \in E_1$. Hence $z_j \notin \mathbb{Z}$ for $j \in T_i(z)$, and thus $(A \otimes z)_i \notin \mathbb{Z}$.

But $i \in Q(z)$, implying $(B \otimes z)_i = (A \otimes z)_i$, and so $(B \otimes z)_i \notin \mathbb{Z}$ too. Since the $b_{ij}$ are also integer, this yields that $z_j \notin \mathbb{Z}$ for $j \in R_i(z)$. Therefore $z_j \notin \mathbb{Z}$ for $j \in \bigcup_{i \in E_1} (R_i(z) \setminus P(z))$. Similarly, $z_j \notin \mathbb{Z}$ for $j \in \bigcup_{i \in E_2} (T_i(z) \setminus P(z))$, and the claim follows.

If $i \in M \setminus Q(z)$ then, by the integrality of the entries, both $a_{ij} \otimes z_j < c_i$ and $b_{ij} \otimes z_j < c_i$ for $j \in P(z)$. We conclude that all $z_j$ for $j \in P(z)$ can be reduced without violating any of the equations, a contradiction with the optimality of $z$. Hence $f^{\min} \in \mathbb{Z}$. The existence of an integer optimal solution now follows from Corollary 3.1.

Theorem 5.2 If $A, B, c, d, f$ are integer, $S \neq \emptyset$ and $f^{\max} < +\infty$, then $f^{\max} \in \mathbb{Z}$ (and therefore $S^{\max} \cap \mathbb{Z}^n \neq \emptyset$).

Proof. (Sketch) The proof follows the ideas of the proof of Theorem 5.1. We suppose $c \geq d$, $f^{\max} \notin \mathbb{Z}$, and let $z = (z_1, \dots, z_n)^T \in S^{\max}$. We take one fixed $j \in F(z)$ (hence $z_j \notin \mathbb{Z}$) and show that it is possible to increase $z_j$ without violating equality in any of the equations. Similarly as in the proof of Theorem 5.1 it is shown that the increase of $z_j$ only forces the non-integer components of $z$ to increase. Due to the integrality of all entries it is not possible that the equality in an equation is achieved by both integer and non-integer components of $z$. At the same time an equality of the form $(A \otimes z)_i = c_i$ (if any) cannot be attained by non-integer components; thus $a_{ij} \otimes z_j < c_i$ and $b_{ij} \otimes z_j < c_i$ whenever $z_j \notin \mathbb{Z}$, and hence there is always scope for an increase of $z_j \notin \mathbb{Z}$. The rest of the argument is the same as in the proof of Theorem 5.1.

Integer modifications of the algorithms are now straightforward, since $L$, $L'$ and $U$ are also integer: we only need to ensure that the algorithms start from an integer vector (see Theorem 3.2) and that the integrality of both ends of the intervals of uncertainty is maintained, for instance by taking one of the integer parts of the middle of the interval. We start with the minimisation. Note that

$L, L', U \in [-3K, 3K] \quad (13)$

where $K$ is defined by (7).

Algorithm 5.1 INTEGER MAXLINMIN (Integer Max-Linear Minimisation)
Input: $f = (f_1, \dots, f_n)^T \in \mathbb{Z}^n$; $c = (c_1, \dots, c_m)^T, d = (d_1, \dots, d_m)^T \in \mathbb{Z}^m$, $c \geq d$, $c \neq d$; $A = (a_{ij}), B = (b_{ij}) \in \mathbb{Z}^{m \times n}$.
Output: $x \in S^{\min} \cap \mathbb{Z}^n$.
1. If $L = f(x)$ for some $x \in S \cap \mathbb{Z}^n$ then stop ($f^{\min} = L$).
2. Find $x^0 \in S \cap \mathbb{Z}^n$. If $(A \otimes x^0)_i > c_i$ for all $i \in M$ then scale $x^0$ by $\alpha$ defined in (8).
3. $L(0) := L$; $U(0) := f(x^0)$; $r := 0$
4. $\lambda :=$ an integer part of $\frac{1}{2}(L(r) + U(r))$
5. Check whether $f(x) = \lambda$ is satisfied by some $x \in S \cap \mathbb{Z}^n$ and in the positive case find one. If $x$ exists, then $U(r+1) := \lambda$, $L(r+1) := L(r)$. If it does not, then $U(r+1) := U(r)$, $L(r+1) := \lambda$.
6. $r := r + 1$
7. If $U(r) - L(r) = 1$ then stop ($U(r) = f^{\min}$), else go to 4.

Theorem 5.3 Algorithm INTEGER MAXLINMIN is correct and terminates after using $O(mn(m+n)K \log K)$ operations.

Proof. Correctness follows from the correctness of MAXLINMIN and from Theorem 5.1. For the computational complexity, first note that the number of iterations is $O(\log(U-L)) \leq O(\log 6K) = O(\log K)$. The computationally prevailing part of the algorithm is the checking of whether $f(x) = \lambda$ for some $x \in S \cap \mathbb{Z}^n$ when $\lambda$ is given. By Corollary 3.2 this can be done using $O(mn(m+n)K')$ operations, where $K' = \max(K+1, |\lambda|)$. Since $\lambda \in [L, U]$, using (13) we have $K' = O(K)$. Hence the computational complexity of checking whether $f(x) = \lambda$ for some $x \in S \cap \mathbb{Z}^n$ is $O(mn(m+n)K)$ and the statement follows.
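Only two lines change relative to the sketch after Theorem 4.1: the midpoint is replaced by one of its integer parts, and the loop stops exactly when the gap reaches $1$, at which point $U(r) = f^{\min}$ by Theorem 5.1. A sketch under the same hypothetical oracle assumption as before:

```python
def integer_maxlinmin(L, x0, fx0, attained):
    """Integer bisection of Algorithm 5.1 (all data integer, c >= d, c != d)."""
    x = attained(L)                # step 1: is the lower bound attained?
    if x is not None:
        return L, x                # f_min = L
    lo, hi, best = L, fx0, x0
    while hi - lo > 1:
        lam = (lo + hi) // 2       # an integer part of the midpoint
        x = attained(lam)
        if x is not None:
            hi, best = lam, x
        else:
            lo = lam
    return hi, best                # U(r) = f_min once U(r) - L(r) = 1
```

Floor division is one valid choice of "integer part"; the run recorded in Section 6 takes the other rounding at one step but reaches the same optimum.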

Algorithm 5.2 INTEGER MAXLINMAX (Integer Max-Linear Maximisation)
Input: $f = (f_1, \dots, f_n)^T \in \mathbb{Z}^n$; $c = (c_1, \dots, c_m)^T, d = (d_1, \dots, d_m)^T \in \mathbb{Z}^m$; $A = (a_{ij}), B = (b_{ij}) \in \mathbb{Z}^{m \times n}$.
Output: $x \in S^{\max} \cap \mathbb{Z}^n$, or an indication that $f^{\max} = +\infty$.
1. If $U = f(x)$ for some $x \in S \cap \mathbb{Z}^n$ then stop ($f^{\max} = U$).
2. Check whether $A \otimes x = B \otimes x$ has a solution. If yes, stop ($f^{\max} = +\infty$).
3. Find an $x^0 \in S \cap \mathbb{Z}^n$ and set $x^0 := x^0 \oplus h'$, where $h'$ is as defined in (10).
4. $L(0) := f(x^0)$; $U(0) := U$; $r := 0$
5. $\lambda :=$ an integer part of $\frac{1}{2}(L(r) + U(r))$
6. Check whether $f(x) = \lambda$ is satisfied by some $x \in S \cap \mathbb{Z}^n$ and in the positive case find one. If $x$ exists, then $U(r+1) := U(r)$, $L(r+1) := \lambda$. If not, then $U(r+1) := \lambda$, $L(r+1) := L(r)$.
7. $r := r + 1$
8. If $U(r) - L(r) = 1$ then stop ($L(r) = f^{\max}$), else go to 5.

Theorem 5.4 Algorithm INTEGER MAXLINMAX is correct and terminates after using $O(mn(m+n)K \log K)$ operations.

Proof. Correctness follows from the correctness of MAXLINMAX and from Theorem 5.2. The computational complexity part follows the lines of the proof of Theorem 5.3 after replacing $L$ by $L'$.

6 An example

Let us consider the max-linear program (minimisation) in which

$f = (3, 1, 4, -2, 0)^T,$

$A = \begin{pmatrix} 17 & 12 & 9 & 4 & 9 \\ 9 & 0 & 7 & 9 & 10 \\ 19 & 4 & 3 & 7 & 11 \end{pmatrix}, \quad B = \begin{pmatrix} 2 & 11 & 8 & 10 & 9 \\ 11 & 0 & 12 & 20 & 3 \\ 2 & 13 & 5 & 16 & 4 \end{pmatrix},$

$c = \begin{pmatrix} 12 \\ 15 \\ 13 \end{pmatrix}, \quad d = \begin{pmatrix} 12 \\ 12 \\ 3 \end{pmatrix}$

and the starting vector is

$x^0 = (-6, 0, 3, -5, 2)^T.$

Clearly, $f(x^0) = 7$, $M^> = \{2, 3\}$, and the lower bound is

$L = \max_{r \in M^>} \min_{k \in N} f_k \otimes c_r \otimes b_{rk}^{-1} = \max(\min(7, 16, 7, -7, 12), \min(14, 1, 12, -5, 9)) = \max(-7, -5) = -5.$

We now make a record of the run of INTEGER MAXLINMIN for this problem.

Iteration 1: Check whether $L = -5$ is attained by $f(x)$ for some $x \in S$ by solving the system

$\begin{pmatrix} 17 & 12 & 9 & 4 & 9 & 12 \\ 9 & 0 & 7 & 9 & 10 & 15 \\ 19 & 4 & 3 & 7 & 11 & 13 \\ 3 & 1 & 4 & -2 & 0 & -6 \end{pmatrix} \otimes w = \begin{pmatrix} 2 & 11 & 8 & 10 & 9 & 12 \\ 11 & 0 & 12 & 20 & 3 & 12 \\ 2 & 13 & 5 & 16 & 4 & 3 \\ 2 & 0 & 3 & -3 & -1 & -5 \end{pmatrix} \otimes w.$

There is no solution, hence $L(0) := -5$, $U(0) := 7$, $r := 0$, $\lambda := 1$. Check whether $f(x) = 1$ is satisfied by some $x \in S$ by solving

$\begin{pmatrix} 17 & 12 & 9 & 4 & 9 & 12 \\ 9 & 0 & 7 & 9 & 10 & 15 \\ 19 & 4 & 3 & 7 & 11 & 13 \\ 3 & 1 & 4 & -2 & 0 & 0 \end{pmatrix} \otimes w = \begin{pmatrix} 2 & 11 & 8 & 10 & 9 & 12 \\ 11 & 0 & 12 & 20 & 3 & 12 \\ 2 & 13 & 5 & 16 & 4 & 3 \\ 2 & 0 & 3 & -3 & -1 & 1 \end{pmatrix} \otimes w.$

There is a solution $x = (-6, 0, -3, -5, 1)^T$. Hence $U(1) := 1$, $L(1) := -5$, $r := 1$, $U(1) - L(1) > 1$.

Iteration 2: Check whether $f(x) = -2$ is satisfied by some $x \in S$ by solving

$\begin{pmatrix} 17 & 12 & 9 & 4 & 9 & 12 \\ 9 & 0 & 7 & 9 & 10 & 15 \\ 19 & 4 & 3 & 7 & 11 & 13 \\ 3 & 1 & 4 & -2 & 0 & -3 \end{pmatrix} \otimes w = \begin{pmatrix} 2 & 11 & 8 & 10 & 9 & 12 \\ 11 & 0 & 12 & 20 & 3 & 12 \\ 2 & 13 & 5 & 16 & 4 & 3 \\ 2 & 0 & 3 & -3 & -1 & -2 \end{pmatrix} \otimes w.$

There is no solution. Hence $U(2) := 1$, $L(2) := -2$, $r := 2$, $U(2) - L(2) > 1$.

Iteration 3: Check whether $f(x) = 0$ is satisfied by some $x \in S$ by solving

$\begin{pmatrix} 17 & 12 & 9 & 4 & 9 & 12 \\ 9 & 0 & 7 & 9 & 10 & 15 \\ 19 & 4 & 3 & 7 & 11 & 13 \\ 3 & 1 & 4 & -2 & 0 & -1 \end{pmatrix} \otimes w = \begin{pmatrix} 2 & 11 & 8 & 10 & 9 & 12 \\ 11 & 0 & 12 & 20 & 3 & 12 \\ 2 & 13 & 5 & 16 & 4 & 3 \\ 2 & 0 & 3 & -3 & -1 & 0 \end{pmatrix} \otimes w.$

There is no solution. Hence $U(3) := 1$, $L(3) := 0$, and $U(3) - L(3) = 1$; stop. We conclude that $f^{\min} = 1$ and an optimal solution is $x = (-6, 0, -3, -5, 1)^T$.
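The arithmetic of this run is easy to re-check numerically; a short verification sketch (illustrative):

```python
import numpy as np

A = np.array([[17, 12,  9,  4,  9],
              [ 9,  0,  7,  9, 10],
              [19,  4,  3,  7, 11]])
B = np.array([[ 2, 11,  8, 10,  9],
              [11,  0, 12, 20,  3],
              [ 2, 13,  5, 16,  4]])
c = np.array([12, 15, 13])
d = np.array([12, 12,  3])
f = np.array([ 3,  1,  4, -2,  0])

def mv(M, x):                                  # (M (x) x)_i = max_j (m_ij + x_j)
    return (M + x[None, :]).max(axis=1)

for x in (np.array([-6, 0,  3, -5, 2]),        # x0: objective 7
          np.array([-6, 0, -3, -5, 1])):       # optimal: objective 1
    assert (np.maximum(mv(A, x), c) == np.maximum(mv(B, x), d)).all()  # x in S
    print((f + x).max())                       # f(x): prints 7, then 1
```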

7 An easily solvable special case

One-sided systems of max-linear equations have been studied for many years and are very well understood [7], [8], [16], [3]. Note that a one-sided system is a special case of a two-sided system (5) in which $a_{ij} > b_{ij}$ and $c_i < d_i$ for every $i$ and $j$. Not surprisingly, max-linear programs with one-sided constraints have also been known for some time [16]. Here we present this special case for the sake of completeness. Let us consider one-sided systems of the form

$A \otimes x = b \quad (14)$

where $A = (a_{ij}) \in \mathbb{R}^{m \times n}$ and $b = (b_1, \dots, b_m)^T \in \mathbb{R}^m$. These systems can be solved more easily than their linear-algebraic counterparts. One of the methods follows from the next theorem, in which $S = \{x \in \mathbb{R}^n;\ A \otimes x = b\}$.

Theorem 7.1 Let $\bar{x} = (\bar{x}_1, \dots, \bar{x}_n)^T$ where $\bar{x}_j = \min_{i \in M} b_i \otimes a_{ij}^{-1}$ for $j \in N$. Then
(a) $\bar{x} \geq x$ for every $x \in S$, and
(b) $x \in S$ if and only if $x \leq \bar{x}$ and $\bigcup_{j:\, x_j = \bar{x}_j} M_j = M$, where, for $j \in N$,

$M_j = \{i \in M;\ \bar{x}_j = b_i \otimes a_{ij}^{-1}\}.$

Proof. Can be found in standard texts on max-algebra [7], [11], [16].

Suppose now that $f = (f_1, \dots, f_n)^T \in \mathbb{R}^n$ is given. The task of minimising [maximising] $f(x) = f^T \otimes x$ subject to (14) will be denoted by MLP$_1^{\min}$ [MLP$_1^{\max}$]. The sets of optimal solutions will be denoted by $S_1^{\min}$ and $S_1^{\max}$, respectively. It follows from Theorem 7.1 and from the isotonicity of $f(x)$ that $\bar{x} \in S_1^{\max}$. We now present a simple algorithm which solves MLP$_1^{\min}$.

Algorithm 7.1 ONEMAXLINMIN (One-sided Max-Linear Minimisation)
Input: $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$ and $f \in \mathbb{R}^n$.
Output: $x \in S_1^{\min}$.
1. Find $\bar{x}$ and $M_j$, $j \in N$.
2. Sort $(f_j \otimes \bar{x}_j;\ j \in N)$; without loss of generality let $f_1 \otimes \bar{x}_1 \leq f_2 \otimes \bar{x}_2 \leq \dots \leq f_n \otimes \bar{x}_n$.
3. $J := \{1\}$; $r := 1$
4. If $\bigcup_{j \in J} M_j = M$ then stop ($x_j = \bar{x}_j$ for $j \in J$ and $x_j$ small enough for $j \notin J$).
5. $r := r + 1$; $J := J \cup \{r\}$
6. Go to 4.

Theorem 7.2 Algorithm ONEMAXLINMIN is correct and its computational complexity is $O(mn^2)$.

Proof. Correctness is obvious, and the computational complexity follows from the fact that the loop 4.-6. is repeated at most $n$ times and each run is $O(mn)$. Step 1 is $O(mn)$ and Step 2 is $O(n \log n)$.
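Theorem 7.1 and Algorithm 7.1 combine into a few lines of code; a sketch (assuming the system $A \otimes x = b$ is solvable, i.e. $\bar{x} \in S$ by Theorem 7.1(b); the concrete completion "small enough" off $J$ is one choice, not prescribed by the paper):

```python
import numpy as np

def one_sided_min(A, b, f):
    """Minimise f(x) = max_j (f_j + x_j) subject to A (x) x = b (Algorithm 7.1)."""
    m, n = A.shape
    xbar = (b[:, None] - A).min(axis=0)           # principal solution, Thm 7.1
    Mj = [set(np.where(b - A[:, j] == xbar[j])[0]) for j in range(n)]
    order = np.argsort(f + xbar)                  # step 2: sort f_j (x) xbar_j
    covered, J = set(), []
    for j in order:                               # steps 3-6: grow J until M covered
        J.append(j)
        covered |= Mj[j]
        if len(covered) == m:
            break
    val = max(f[j] + xbar[j] for j in J)          # the optimal objective value
    # x_j = xbar_j for j in J; off J, x_j = val - f_j <= xbar_j is small enough
    return np.minimum(xbar, val - f)
```

For MLP$_1^{\max}$ no search is needed: $\bar{x}$ itself is optimal, as noted above.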

Note that the problem of minimising the function $2^{x_1} + 2^{x_2} + \dots + 2^{x_n}$ subject to one-sided max-linear constraints is NP-complete, since the classical minimum set covering problem can be formulated as a special case of this problem with a matrix $A$ over $\{0, -1\}$ and $b = 0$. Indeed, given a finite set $M = \{v_1, \dots, v_m\}$ and a collection $M_1, \dots, M_n$ of its subsets, consider the minimum set covering problem (MSCP) for this system, that is, the task of finding the smallest $k$ such that $M_{i_1} \cup \dots \cup M_{i_k} = M$ for some $i_1, \dots, i_k \in \{1, \dots, n\}$. MSCP is known to be NP-complete [13]. Let P be the minimisation problem

$f(x) = 2^{x_1} + \dots + 2^{x_n} \to \min$

subject to

$A \otimes x = b$

where $A = (a_{ij}) \in \mathbb{R}^{m \times n}$, $b = 0 \in \mathbb{R}^m$ and

$a_{ij} = \begin{cases} 0 & \text{if } v_i \in M_j \\ -1 & \text{otherwise.} \end{cases}$

It follows from Theorem 7.1 that at every local minimum $x = (x_1, \dots, x_n)^T$ every $x_j$ is either $0$ or $-1$ and $\bigcup_{x_j = 0} M_j = M$. Thus every local minimum $x$ corresponds to a covering of $M$, and the value $f(x)$ determines the number of subsets used in this covering. Therefore P is polynomially equivalent to MSCP.

Note also that some results of this paper may be extended to the case when the objective function is isotone, that is, $f(x) \leq f(y)$ whenever $x \leq y$. This generalisation is beyond the scope of the present paper.

References

[1] M. Akian, S. Gaubert and V. Kolokoltsov, Set coverings and invertibility of functional Galois connections, in: G.L. Litvinov and V.P. Maslov (eds), Idempotent Mathematics and Mathematical Physics, Contemporary Mathematics, vol. 377, AMS, 2005, pp. 19-51.
[2] F.L. Baccelli, G. Cohen, G.-J. Olsder and J.-P. Quadrat, Synchronization and Linearity, John Wiley, Chichester, New York, 1992.
[3] P. Butkovič, Max-algebra: the linear algebra of combinatorics?, Linear Algebra and Its Applications 367 (2003) 313-335.
[4] P. Butkovič and G. Hegedüs, An elimination method for finding all solutions of the system of linear equations over an extremal algebra, Ekonomicko-matematický obzor 20 (1984) 203-215.
[5] K. Cechlárová, Computational complexity of some problems connected to fuzzy relational equations, talk at the conference Fuzzy Sets Theory and Applications, Liptovský Ján, 2004.
[6] R.A. Cuninghame-Green, Describing industrial processes with interference and approximating their steady-state behaviour, Oper. Res. Quart. 13 (1962) 95-100.
[7] R.A. Cuninghame-Green, Minimax Algebra, Lecture Notes in Economics and Mathematical Systems 166, Springer, Berlin, 1979.
[8] R.A. Cuninghame-Green, Minimax algebra and applications, in: Advances in Imaging and Electron Physics, vol. 90, Academic Press, New York, 1995, pp. 1-121.
[9] R.A. Cuninghame-Green and P. Butkovič, The equation A⊗x = B⊗y over (max, +), Theoretical Computer Science 293 (2003) 3-12.
[10] R.A. Cuninghame-Green and K. Zimmermann, Equation with residuated functions, Comment. Math. Univ. Carolinae 42 (2001) 729-740.
[11] B. Heidergott, G.J. Olsder and J. van der Woude, Max Plus at Work: Modeling and Analysis of Synchronized Systems, A Course on Max-Plus Algebra and Its Applications, Princeton University Press, 2005.
[12] L. Hogben, R. Brualdi, A. Greenbaum and R. Mathias (eds), Handbook of Linear Algebra, Discrete Mathematics and Its Applications, vol. 39, Chapman and Hall, 2006.
[13] K.H. Rosen et al. (eds), Handbook of Discrete and Combinatorial Mathematics, CRC Press, 2000.
[14] N.N. Vorobyov, Extremal algebra of positive matrices, Elektronische Informationsverarbeitung und Kybernetik 3 (1967) 39-71 (in Russian).
[15] E.A. Walkup and G. Borriello, A general linear max-plus solution technique, in: J. Gunawardena (ed.), Idempotency, Cambridge University Press, 1998, pp. 406-415.
[16] K. Zimmermann, Extremální algebra, Výzkumná publikace Ekonomicko-matematické laboratoře při Ekonomickém ústavě ČSAV, 46, Praha, 1976 (in Czech).

[5] K.Cechlárová, Computational complexity of some problems connected to fuzzy relational equations, Talk at the conference Fuzzy Sets Theory and Aplications, Liptovský Ján (2004). [6] R.A.Cuninghame-Green, Describing industrial processes with interference and approximating their steady-state behaviour, Oper.Res.Quart. 13(1962) 95-100. [7] R.A.Cuninghame-Green, Minimax Algebra. Lecture Notes in Economics and Mathematical Systems 166, Berlin, Springer, 1979. [8] R.A.Cuninghame-Green, Minimax Algebra and Applications in: Advances in Imaging and Electron Physics, vol. 90, pp. 1 121, Academic Press, New York, 1995. [9] R.A.Cuninghame-Green and P.Butkoviµc, The Equation Ax = By over (max,+), Theoretical Computer Science 293 (1991) 3-12. [10] R.A.Cuninghame-Green and K.Zimmermann, Equation with Residual Functions, Comment. Math. Univ. Carolinae 42 (2001) 729-740. [11] B.Heidergott, G.J.Olsder and J. van der Woude, Max Plus at Work: Modeling and Analysis of Synchronized Systems, A Course on Max-Plus Algebra, PUP (2005). [12] L. Hogben, R. Brualdi, A. Greenbaum, and R. Mathias (eds), Handbook of Linear Algebra, Discrete Mathematics and Its Applications, Volume 39, Chapman and Hall, 2006. [13] K.H.Rosen et al, Handbook of discrete and combinatorial mathematics, CRC Press 2000. [14] N.N.Vorobyov, Extremal Algebra of Positive Matrices, Elektronische Datenverarbeitung und Kybernetik 3 (1967) 39-71 (in Russian). [15] E.A. Walkup and G.Boriello, A General Linear Max-plus Solution Technique, in: J. Gunawardena, ed., Idempotency (Cambridge, 1988) 406-415. [16] K.Zimmermann, Extremální algebra, Výzkumná publikace Ekonomicko - matematické laboratoµre pµri Ekonomickém ústavµe µ CSAV, 46, Praha, (1976) (in Czech). 19