Constructing efficient solutions structure of multiobjective linear programming


J. Math. Anal. Appl. 307 (2005) 504-523. www.elsevier.com/locate/jmaa

Constructing efficient solutions structure of multiobjective linear programming

Hong Yan (a,*), Quanling Wei (b), Jun Wang (b)

(a) Department of Logistics, The Hong Kong Polytechnic University, Hong Kong
(b) Institute of Operations Research and Mathematical Economics, Renmin University of China, PR China

Received 29 April 2003; available online 29 April 2005. Submitted by J.P. Dauer.

Abstract. It is not a difficult task to find a weak Pareto or Pareto solution of a multiobjective linear programming (MOLP) problem. The difficulty lies in finding all such solutions and representing their structure. This paper develops an algorithm for solving this problem. We investigate the solutions and their relationships in the objective space. The algorithm determines a finite number of weights, each of which corresponds to a weighted sum problem. By solving these weighted sum problems, we obtain all weak Pareto and Pareto solutions of the MOLP and their structure in the constraint space. The algorithm avoids the degeneracy problem, which was a major hurdle in previous works, and presents an easy and clear solution structure. (c) 2004 Published by Elsevier Inc.

Keywords: Multiobjective linear programming; Efficient solution set; Solution structure

The research is partially supported by The Hong Kong UGC Grant B-Q343; the other authors are partially supported by the National Natural Science Foundation of China, NNSF 70371058.
* Corresponding author. E-mail address: lgthyan@polyu.edu.hk (H. Yan).
doi:10.1016/j.jmaa.2004.08.069

1. Introduction

A multiobjective linear programming (MOLP) problem is to minimize several linear objective functions subject to a set of linear constraints. As most real business decision-making problems involve more than one objective, the MOLP model has been widely applied in many fields and has become a useful tool for decision making (for applications see, for example, Leschine et al. [18], Gravel et al. [15], Prabuddha et al. [20]).

A Pareto solution of a MOLP problem is a solution at which no objective can be improved without sacrificing others. A weak Pareto solution of a MOLP problem is a solution at which all objectives cannot be improved simultaneously. Weak Pareto solutions and Pareto solutions are both called efficient solutions of a MOLP. Since a MOLP problem usually has many efficient solutions, decision makers may have difficulty choosing one by simply applying the rule "the smaller, the better" to all objectives (see Steuer [22] and Dyer et al. [11]). Furthermore, they may not even know how many solutions are available and what the relationships among these solutions are. In the theory of multiobjective programming, the efficient solution set attracts special attention because a rational decision maker needs to select a solution from this set according to his or her own preference. In order to convey all the available information to decision makers, it is important and fundamental to find all efficient solutions and to represent them in a proper format.

The main difficulties lie in the following aspects. First, an algorithm for finding the efficient solution set is usually extremely time consuming. Second, it is hard to cope with some special cases, such as degenerate solutions. Third, since an efficient solution set is usually not a convex set but a union of efficient faces of the given polyhedron (Yu and Zeleny [26]), it is difficult to represent such a set.

Early work was mainly devoted to identifying efficient solutions, either by solving multiple parametric programs or by using the multicriteria simplex method to examine adjacent extreme points (Dantzig [7], Geoffrion [14], Yu and Zeleny [26], Philip [19]). Yu and Zeleny [26] used a global view method and a top-down search strategy, while Philip [19] used a local view method to obtain the efficient face incident to a given efficient extreme point. Isermann [16] and Ecker et al. [12] combined these two methods. Later, Armand and Malivert [1] and Armand [2] applied a bottom-up search strategy to develop an algorithm. However, all these previous algorithms regarded degeneracy as their main difficulty, and the representation of the efficient solution set was not clearly given. Benson [3] and Sayin [21] proposed to obviate the degeneracy problem by employing the facial decomposition approach and the top-down search strategy first used by Yu and Zeleny [26]. Discrete representations of the efficient solution set were given in those papers.

In some other works, it was found that the objective space (or outcome space) is a powerful tool for studying the efficient solution set. Dauer [8] investigated the characteristics of the objective space of a MOLP. Dauer and Saleh [9] proposed an algorithm for obtaining the efficient structure of the objective polyhedron. These works provided a new view on studying the efficient solution set of a MOLP.
Gallagher and Saleh [13] developed a creative approach to represent the efficient objective set of a MOLP. Dauer and Gallagher [10]

studied both the constraint space and the objective space to determine the efficient faces, an approach they called the combined constraint-space, objective-space approach. Recently, Benson [4] and Benson and Sun [5] combined the decomposition technique with the combined constraint-space, objective-space approach to determine the efficient point set in the objective space. Wei et al. [23] used the DEA (data envelopment analysis) method to find all Pareto solutions and build their structure. Kim and Luc [17] added cones to a polyhedral convex set for determining the efficient faces of a multiple objective linear program. This work solved some special cases of the problem but left the general situation open.

In this paper, we use an approach similar to the combined constraint-space, objective-space approach, together with a method for transferring a polyhedron from intersection form to sum form, to build the efficient solution structure of a MOLP. It is well known that a polyhedron can be represented by a set of linear constraints, called the intersection form, or by a convex combination of finitely many extreme points plus a non-negative combination of finitely many extreme rays, called the sum form. A polyhedron can be transferred from intersection form to sum form (for the algorithms, see Charnes et al. [6] or Wei and Yan [24,25]).

This paper starts by studying the MOLP problem in the objective space. We show that by choosing weights properly and solving the weighted sum problems of the MOLP associated with these weights, we can obtain all weak Pareto and Pareto solutions of the MOLP problem in the objective space. We then consider the constraint space. Since the mapping from the constraint space to the objective space is an onto mapping, we can thus find all weak Pareto and Pareto solutions of the original MOLP. Using a transferring algorithm, we can represent the efficient solution set conveniently.

The first major contribution of our method is that it avoids the degeneracy difficulties of the former algorithms. Second, it provides an easy and clear representation of the efficient solution set, which is extremely valuable for decision makers.

This paper is organized as follows. Section 2 introduces the MOLP problem and some basic results for later use. Section 3 considers the MOLP problem in the objective space: when some weak Pareto solutions have been determined, we propose a method to obtain new weak Pareto solutions in the objective space. Section 4 proposes an algorithm for finding all weak Pareto solutions of the original MOLP problem; we represent the weak Pareto solution set in a discrete form to make its structure explicit. Section 5 discusses the Pareto solution set and its structure. Section 6 gives a numerical example illustrating our procedure. Section 7 concludes the research.

2. Preliminary of MOLP

Consider the following multiobjective linear programming (MOLP) problem:

$$(\mathrm{VP})\qquad \text{V-min } Cx \quad \text{s.t. } x \in R = \{x \mid Ax = b,\ x \ge 0\},$$

where $C = (c_1^{T}, c_2^{T}, \dots, c_p^{T})^{T}$ is a $p \times n$ matrix, $c_i \in E^{n}$, $i = 1, 2, \dots, p$;

$x = (x_1, x_2, \dots, x_n)^{T} \in E^{n}$, and $E^{n}$ is called the constraint space; $A$ is an $m \times n$ matrix with $n \ge m$ and $\operatorname{rank}(A) = m$; $b = (b_1, b_2, \dots, b_m)^{T} \in E^{m}$.

The weak Pareto solution and the Pareto solution are defined as follows.

Definition 1. Let $\bar x \in R$. Then $\bar x$ is called a Pareto solution of (VP) if there does not exist $x \in R$ such that $Cx \le C\bar x$, $Cx \ne C\bar x$. The set of Pareto solutions of (VP) is denoted by $R_{pa}$.

Definition 2. Let $\bar x \in R$. Then $\bar x$ is called a weak Pareto solution of (VP) if there does not exist $x \in R$ such that $Cx < C\bar x$. The set of weak Pareto solutions of (VP) is denoted by $R_{wp}$.

From the definitions above it is obvious that $R_{pa} \subseteq R_{wp}$, and that $\bar x$ cannot be an interior point of $R$ if $\bar x \in R_{wp}$.

Denote a mapping $F : R \to F(R)$, where $F(R) = \{Cx \mid x \in R\}$. $F(R)$ is the set of objective values of $R$ associated with the $p$ objectives $(c_1 x, c_2 x, \dots, c_p x)^{T}$, and $F$ is an onto mapping. We call $F(R)$ the image set of $R$ under the mapping $F$, and $E^{p}$ the objective space. From the decomposition theorem for polyhedra (see Dantzig [7]), we know that $F(R)$ is also a polyhedron in the objective space $E^{p}$.

Denote $F = (f_1, f_2, \dots, f_p)^{T} \in E^{p}$ and consider the following MOLP problem (VP') in the objective space $E^{p}$:

$$(\mathrm{VP}')\qquad \text{V-min } (f_1, f_2, \dots, f_p) \quad \text{s.t. } F \in F(R).$$

Because (VP') is a special MOLP problem whose $i$th ($1 \le i \le p$) objective is the $i$th component of the variable $F$, it has some nice properties which make it easier to study. The weak Pareto solution and the Pareto solution of (VP') are defined analogously; denote the set of Pareto solutions of (VP') by $E_{pa}$ and the set of weak Pareto solutions of (VP') by $E_{wp}$.

The following linear programming problem $(P(v))$ is called the weighted sum problem of (VP) associated with a weight $v = (v_1, v_2, \dots, v_p)^{T} \in E^{p}$ ($v \ge 0$, $v \ne 0$):

$$(P(v))\qquad \min\ v^{T} C x \quad \text{s.t. } x \in R.$$

The corresponding weighted sum problem of (VP') associated with the weight $v$ in the objective space is

$$(P'(v))\qquad \min\ v^{T} F \quad \text{s.t. } F \in F(R).$$
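Each weighted sum problem $(P(v))$ is an ordinary linear program. As a minimal illustration (not part of the original paper), the Python sketch below solves $(P(v))$ with scipy.optimize.linprog on a small made-up instance; the data happen to mirror Example 1 of Section 6, with the signs as reconstructed there. By Property 1(ii) below, the returned $x$ is a weak Pareto solution of (VP).

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data for illustration only: p = 2 objectives, m = 2 equality
# constraints, n = 4 variables (loosely based on Example 1 in Section 6).
C = np.array([[1.0,  1.0, 0.0, 0.0],   # rows are the objective vectors c_1, ..., c_p
              [1.0, -2.0, 0.0, 0.0]])
A = np.array([[-1.0, 1.0, 1.0, 0.0],
              [-1.0, 2.0, 0.0, 1.0]])
b = np.array([1.0, 3.0])

def solve_weighted_sum(v):
    """Solve (P(v)): min v^T C x  s.t.  A x = b, x >= 0."""
    res = linprog(c=v @ C, A_eq=A, b_eq=b, bounds=[(0, None)] * A.shape[1])
    assert res.status == 0, "weighted sum LP not solved"
    return res.x, C @ res.x              # optimal x and its image Cx in the objective space

x_opt, F_opt = solve_weighted_sum(np.array([1.0, 1.0]))
print(x_opt, F_opt)                      # a weak Pareto solution of (VP) and its image
```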

Since the mapping $F : R \to F(R)$ is an onto mapping, it is easy to obtain the following properties, which exhibit the relationship between the constraint space and the objective space. We list them here for later use (for these properties see, for example, Zeleny [27]).

Property 1. Let $\bar x \in R$ and $\bar F \in F(R)$; then

(i) $\bar x$ is a Pareto solution of (VP) if and only if there exists $\bar v > 0$ such that $\bar x$ is an optimal solution of the weighted sum problem $(P(\bar v))$;
(ii) $\bar x$ is a weak Pareto solution of (VP) if and only if there exists $\bar v \ge 0$, $\bar v \ne 0$, such that $\bar x$ is an optimal solution of the weighted sum problem $(P(\bar v))$;
(iii) $\bar F$ is a Pareto solution of (VP') if and only if there exists $\bar v > 0$ such that $\bar F$ is an optimal solution of the weighted sum problem $(P'(\bar v))$;
(iv) $\bar F$ is a weak Pareto solution of (VP') if and only if there exists $\bar v \ge 0$, $\bar v \ne 0$, such that $\bar F$ is an optimal solution of the weighted sum problem $(P'(\bar v))$.

Property 2. Let $\bar x \in R$ and $\bar F = C\bar x$; then

(i) $\bar x \in R_{wp}$ if and only if $\bar F \in E_{wp}$;
(ii) $\bar x \in R_{pa}$ if and only if $\bar F \in E_{pa}$.

Property 3. Let $\bar x \in R$ and $\bar F = C\bar x$; then $\bar x$ is an optimal solution of $(P(v))$ if and only if $\bar F$ is an optimal solution of $(P'(v))$.

In particular, when $v = (0, \dots, 0, 1, 0, \dots, 0)^{T} = e_i$, the weighted sum problem $(P(v))$ becomes the linear programming problem

$$(P(e_i))\qquad \min\ c_i x \quad \text{s.t. } x \in R.$$

From Property 1(ii), an optimal solution of $(P(e_i))$ is a weak Pareto solution of (VP).

3. New weak Pareto solutions in objective space

Consider the MOLP problem (VP') in the objective space $E^{p}$:

$$(\mathrm{VP}')\qquad \text{V-min } (f_1, f_2, \dots, f_p) \quad \text{s.t. } F \in F(R),$$

where $F = (f_1, f_2, \dots, f_p)^{T} \in E^{p}$.

Provided that we have obtained $k$ weak Pareto solutions $F^{1}, F^{2}, \dots, F^{k}$ of (VP'), we hope to find new weak Pareto solutions of (VP') by solving its weighted sum problems. In order to do this, we must choose suitable weights. Let

$$Q = \Bigl\{ F \ \Big|\ F = \sum_{i=1}^{k} \lambda_i F^{i},\ \sum_{i=1}^{k} \lambda_i = 1,\ \lambda_i \ge 0,\ i = 1, 2, \dots, k \Bigr\} + E^{p}_{+},$$

where $E^{p}_{+} = \{x \mid x \in E^{p},\ x \ge 0\}$, and

$$S = \Bigl\{ \begin{pmatrix} G \\ \alpha \end{pmatrix} \ \Big|\ (F^{i})^{T} G \ge \alpha,\ G \ge 0,\ G \ne 0,\ i = 1, 2, \dots, k \Bigr\}.$$

It is easy to see that $Q$ is a closed convex set in $E^{p}$ and $S$ is a polyhedral cone in $E^{p+1}$. $S$ can be represented by non-negative combinations of its extreme rays (for the algorithms, see Charnes et al. [6] or Wei and Yan [24,25]). Denote the extreme rays of $S$ by $\binom{v^{l}}{\alpha_{l}}$, $l = 1, 2, \dots, r$. Then

$$S = \Bigl\{ \sum_{l=1}^{r} \beta_l \binom{v^{l}}{\alpha_{l}} \ \Big|\ \beta_l \ge 0,\ l = 1, 2, \dots, r \Bigr\}.$$

Denote

$$P = \bigl\{ F \mid (v^{l})^{T} F \ge \alpha_{l},\ l = 1, 2, \dots, r \bigr\}.$$

We have the following theorem.

Theorem 1. Let $F^{i} \in E^{p}$, $i = 1, 2, \dots, k$, be $k$ vectors in the space $E^{p}$, and denote

$$Q = \Bigl\{ F \ \Big|\ F = \sum_{i=1}^{k} \lambda_i F^{i},\ \sum_{i=1}^{k} \lambda_i = 1,\ \lambda_i \ge 0,\ i = 1, 2, \dots, k \Bigr\} + E^{p}_{+},$$

$$S = \Bigl\{ \begin{pmatrix} G \\ \alpha \end{pmatrix} \ \Big|\ (F^{i})^{T} G \ge \alpha,\ G \ge 0,\ G \ne 0,\ i = 1, 2, \dots, k \Bigr\}$$

and

$$P = \bigl\{ F \mid (v^{l})^{T} F \ge \alpha_{l},\ l = 1, 2, \dots, r \bigr\},$$

where $\binom{v^{l}}{\alpha_{l}}$, $l = 1, 2, \dots, r$, are the extreme rays of $S$, so that

$$S = \Bigl\{ \sum_{l=1}^{r} \beta_l \binom{v^{l}}{\alpha_{l}} \ \Big|\ \beta_l \ge 0,\ l = 1, 2, \dots, r \Bigr\}.$$

Then $P = Q$.

Proof. First we show that $Q \subseteq P$. Let $F^{0} \in Q$; then there exists $\lambda^{0} = (\lambda^{0}_1, \lambda^{0}_2, \dots, \lambda^{0}_k)^{T} \in E^{k}$, $e^{T} \lambda^{0} = 1$, $\lambda^{0} \ge 0$, such that

$$F^{0} \ge \sum_{i=1}^{k} \lambda^{0}_i F^{i}.$$

Since $\binom{v^{l}}{\alpha_{l}}$, $l = 1, 2, \dots, r$, are the extreme rays of $S$, we have

$$(F^{i})^{T} v^{l} \ge \alpha_{l}, \quad i = 1, 2, \dots, k;\ l = 1, 2, \dots, r.$$

Noting that $v^{l} \ge 0$, $l = 1, 2, \dots, r$, we have

$$(F^{0})^{T} v^{l} \ge \Bigl( \sum_{i=1}^{k} \lambda^{0}_i F^{i} \Bigr)^{T} v^{l} = \sum_{i=1}^{k} \lambda^{0}_i (F^{i})^{T} v^{l} \ge \alpha_{l},$$

that is, $F^{0} \in P$.

On the other hand, assume that $P \subseteq Q$ is not true, i.e., there exists $F^{0} \in P$ with $F^{0} \notin Q$. Since $Q$ is a closed convex set, by the separation theorem for convex sets there exist $d \in E^{p}$, $d \ne 0$, and $\alpha \in E^{1}$ such that for any $F \in Q$,

$$d^{T} F \ge \alpha > d^{T} F^{0}.$$

Note that when all components of $F$ are very large we still have $F \in Q$; thus $d \ge 0$. Since $F^{i} \in Q$, we have

$$d^{T} F^{i} \ge \alpha, \quad i = 1, 2, \dots, k.$$

Therefore $\binom{d}{\alpha} \in S$. That is, there exist $\beta_l \ge 0$, $l = 1, 2, \dots, r$, such that

$$\binom{d}{\alpha} = \sum_{l=1}^{r} \beta_l \binom{v^{l}}{\alpha_{l}} = \begin{pmatrix} \sum_{l=1}^{r} \beta_l v^{l} \\ \sum_{l=1}^{r} \beta_l \alpha_{l} \end{pmatrix},$$

that is,

$$d = \sum_{l=1}^{r} \beta_l v^{l}, \qquad \alpha = \sum_{l=1}^{r} \beta_l \alpha_{l}.$$

Thus

$$\alpha > d^{T} F^{0} = \Bigl( \sum_{l=1}^{r} \beta_l v^{l} \Bigr)^{T} F^{0} = \sum_{l=1}^{r} \beta_l (v^{l})^{T} F^{0}.$$

Since $F^{0} \in P$, we have $(v^{l})^{T} F^{0} \ge \alpha_{l}$, $l = 1, 2, \dots, r$, so

$$\alpha > \sum_{l=1}^{r} \beta_l (v^{l})^{T} F^{0} \ge \sum_{l=1}^{r} \beta_l \alpha_{l} = \alpha,$$

a contradiction. So $P \subseteq Q$ is true. The proof is complete.
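To make Theorem 1 concrete, here is a small worked instance of our own, using data that reappear in Example 1 of Section 6. Take $p = 2$, $k = 2$, $F^{1} = (0, 0)^{T}$ and $F^{2} = (3, -3)^{T}$. The cone $S$ is cut out by $0 \ge \alpha$, $3g_1 - 3g_2 \ge \alpha$, $g_1 \ge 0$, $g_2 \ge 0$, and its extreme rays with $G \ne 0$ are $(1, 0, 0)^{T}$, $(0, 1, -3)^{T}$ and $(1, 1, 0)^{T}$, so that

$$P = \bigl\{ (f_1, f_2)^{T} \mid f_1 \ge 0,\ f_2 \ge -3,\ f_1 + f_2 \ge 0 \bigr\}.$$

One checks directly that this set coincides with $Q = \operatorname{conv}\{F^{1}, F^{2}\} + E^{2}_{+}$: any $F \ge (3t, -3t)^{T}$ with $t \in [0, 1]$ satisfies the three inequalities, and conversely, given $F \in P$, the choice $t = \min(1, f_1/3)$ yields $F \ge t F^{2} + (1 - t) F^{1}$.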

After all extreme rays of $S$ have been obtained, we have the following weighted sum problem of (VP') associated with the weight $v^{l}$:

$$(P'(v^{l}))\qquad \min\ (v^{l})^{T} F \quad \text{s.t. } F \in F(R),$$

where $l = 1, 2, \dots, r$. Let $\bar F^{l}$ be an optimal solution of $(P'(v^{l}))$. From Property 1(iv), $\bar F^{l}$ is a weak Pareto solution of (VP'). For $\bar F^{1}, \bar F^{2}, \dots, \bar F^{r}$ there are two cases:

Case 1. For every $l$, $1 \le l \le r$, we have $\bar F^{l} \in P$. The algorithm stops. From Theorems 2 and 3 below, the weak Pareto solution set can then be determined.

Case 2. There exists $l_0$ ($1 \le l_0 \le r$) such that $\bar F^{l_0} \notin P$. Denote the index set

$$I_0 = \{ l \mid \bar F^{l} \notin P,\ 1 \le l \le r \} = \{ l_1, l_2, \dots, l_{k'} \}.$$

Then $\bar F^{l_1}, \bar F^{l_2}, \dots, \bar F^{l_{k'}}$ are $k'$ new weak Pareto solutions of the MOLP problem (VP') in the objective space.

4. Determining weak Pareto solution set of MOLP

In this section, we determine the weak Pareto solution set of the MOLP problem (VP) and give a representation of it. To describe the efficient solution structure of (VP), we first give an algorithm for finding all weak Pareto solutions of (VP') in the objective space $E^{p}$ and present the structure of the weak Pareto solutions of (VP'). The algorithm is given below.

Step 1. For $i = 1, 2, \dots, p$, solve the following linear programming problem:

$$(P(e_i))\qquad \min\ c_i x \quad \text{s.t. } x \in R.$$

Let $x^{i}$, $i = 1, 2, \dots, p$, be optimal solutions of $(P(e_i))$. Denote $R^{1} = \{x^{1}, x^{2}, \dots, x^{p}\}$ and, for convenience, $k_1 = p$.

Step 2. Let

$$S^{1} = \Bigl\{ \begin{pmatrix} G \\ \alpha \end{pmatrix} \ \Big|\ (Cx^{i})^{T} G \ge \alpha,\ G \ge 0,\ G \ne 0,\ i = 1, 2, \dots, k_1 \Bigr\}.$$

Obtain the extreme rays of $S^{1}$ and denote them by $\binom{v^{l}}{\alpha_{l}}$, $l = 1, 2, \dots, r_1$; namely,

$$S^{1} = \Bigl\{ \sum_{l=1}^{r_1} \beta_l \binom{v^{l}}{\alpha_{l}} \ \Big|\ \beta_l \ge 0,\ l = 1, 2, \dots, r_1 \Bigr\}.$$

Now we have $r_1$ weights $v^{1}, v^{2}, \dots, v^{r_1}$. Denote

$$P^{1} = \bigl\{ F \mid (v^{l})^{T} F \ge \alpha_{l},\ l = 1, 2, \dots, r_1 \bigr\}.$$

Step 3. For $l = 1, 2, \dots, r_1$, solve the weighted sum problem $(P(v^{l}))$ associated with the weight $v^{l}$:

$$(P(v^{l}))\qquad \min\ (v^{l})^{T} C x \quad \text{s.t. } x \in R.$$

Let $\bar x^{l}$ be an optimal solution of $(P(v^{l}))$. Denote the index set

$$I_1 = \{ l \mid C\bar x^{l} \notin P^{1},\ 1 \le l \le r_1 \}.$$

Step 4. If $I_1 = \emptyset$, then set $k = k_1$, $r = r_1$ and stop; otherwise go to Step 5.

Step 5. Denote $R^{2} = R^{1} \cup \{\bar x^{l} \mid l \in I_1\}$. Without loss of generality, write

$$R^{2} = \{x^{1}, x^{2}, \dots, x^{p}, x^{p+1}, \dots, x^{k_2}\}, \quad k_2 > p,$$

where $x^{i} \in R$, $i = 1, 2, \dots, k_2$, are extreme points of $R$. Repeat Steps 2 to 5 starting from $R^{2}$ and $k_2$.

Since $R$ has only finitely many extreme points, the algorithm terminates after finitely many iterations. (A schematic sketch of the whole loop is given below.)
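The following Python sketch is ours, not the authors' code: it strings Steps 1 to 5 together for tiny instances so that the control flow is concrete. For the cone-transfer step (finding the extreme rays of $S$) it substitutes a brute-force enumeration in place of the algorithms of Charnes et al. [6] or Wei and Yan [24,25], and it assumes scipy is available for the weighted sum LPs.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def extreme_rays(M, tol=1e-9):
    """Brute-force extreme rays of the pointed cone {y : M y >= 0}.
    Only for tiny instances: y != 0 is an extreme ray iff the rows of M
    active at y have rank d - 1, where d is the number of columns."""
    m, d = M.shape
    rays = []
    for idx in itertools.combinations(range(m), d - 1):
        B = M[list(idx)]
        if np.linalg.matrix_rank(B, tol=tol) != d - 1:
            continue
        y = np.linalg.svd(B)[2][-1]          # spans the null space of the chosen rows
        for cand in (y, -y):
            if np.all(M @ cand >= -tol):     # feasible direction -> extreme ray
                u = cand / np.linalg.norm(cand)
                if not any(np.allclose(u, r, atol=1e-7) for r in rays):
                    rays.append(u)
                break
    return rays

def weak_pareto_weights(C, A, b, max_iter=50, tol=1e-7):
    """Sketch of the Section 4 iteration: returns weights v^l (rows of V, up to
    positive scaling) and levels alpha_l; by Theorem 5, the weak Pareto set is
    the union over l of {x in R : (v^l)^T C x = alpha_l}."""
    p, n = C.shape
    bounds = [(0, None)] * n

    def solve(w):                            # weighted sum problem (P(w)): min w^T C x over R
        res = linprog(w @ C, A_eq=A, b_eq=b, bounds=bounds)
        assert res.status == 0, "weighted sum LP not solved"
        return res.x

    # Step 1: minimize each objective separately (problems (P(e_i)))
    images = [C @ solve(np.eye(p)[i]) for i in range(p)]

    for _ in range(max_iter):
        # Step 2: cone S = {(G, a) : (Cx^i)^T G - a >= 0, G >= 0}; rows [F^i, -1] and [e_j, 0]
        M = np.vstack([[list(F) + [-1.0] for F in images],
                       np.hstack([np.eye(p), np.zeros((p, 1))])])
        rays = [r for r in extreme_rays(M) if np.linalg.norm(r[:p]) > tol]   # keep G != 0
        V = np.array([r[:p] for r in rays])
        alpha = np.array([r[p] for r in rays])
        # Step 3: solve the weighted sum problems and collect images falling outside P
        new_images = [C @ solve(v) for v in V]
        outside = [F for F in new_images if np.any(V @ F < alpha - tol)]
        if not outside:                      # Step 4: every image already lies in P, stop
            return V, alpha
        images += outside                    # Step 5: enlarge the generating set and repeat
    raise RuntimeError("no convergence within max_iter iterations")
```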

When the algorithm stops, we have obtained $k$ weak Pareto solutions of (VP). Denote them by $x^{1}, x^{2}, \dots, x^{k}$; their images in the objective space are $F^{1} = Cx^{1}, F^{2} = Cx^{2}, \dots, F^{k} = Cx^{k}$. From Property 2 we know that $Cx^{1}, Cx^{2}, \dots, Cx^{k}$ are weak Pareto solutions of (VP'). Now

$$Q = \Bigl\{ F \ \Big|\ F = \sum_{i=1}^{k} \lambda_i (Cx^{i}),\ \sum_{i=1}^{k} \lambda_i = 1,\ \lambda_i \ge 0,\ i = 1, 2, \dots, k \Bigr\} + E^{p}_{+}$$

and

$$S = \Bigl\{ \begin{pmatrix} G \\ \alpha \end{pmatrix} \ \Big|\ (Cx^{i})^{T} G \ge \alpha,\ G \ge 0,\ G \ne 0,\ i = 1, 2, \dots, k \Bigr\}.$$

Denote all extreme rays of $S$ by $\binom{v^{l}}{\alpha_{l}}$, $l = 1, 2, \dots, r$, so that

$$S = \Bigl\{ \sum_{l=1}^{r} \beta_l \binom{v^{l}}{\alpha_{l}} \ \Big|\ \beta_l \ge 0,\ l = 1, 2, \dots, r \Bigr\},$$

and let

$$P = \bigl\{ F \mid (v^{l})^{T} F \ge \alpha_{l},\ l = 1, 2, \dots, r \bigr\}.$$

From Theorem 1, $P = Q$. Because the algorithm has stopped, the optimal solution $\bar x^{l}$ of the linear programming problem

$$(P(v^{l}))\qquad \min\ (v^{l})^{T} C x \quad \text{s.t. } x \in R$$

must satisfy $C\bar x^{l} \in P$, $l = 1, 2, \dots, r$. We have the following lemma.

Lemma 1. In the objective space $E^{p}$, $F(R) \subseteq P$.

Proof. Let $\bar F \in F(R)$. Then there exists $\bar x \in R$ such that $\bar F = C\bar x$. Because the algorithm has stopped, we have $C\bar x^{l} \in P = Q$, $l = 1, 2, \dots, r$. That is, there exists $\lambda^{l} = (\lambda^{l}_1, \lambda^{l}_2, \dots, \lambda^{l}_k)^{T} \in E^{k}$, $e^{T} \lambda^{l} = 1$, $\lambda^{l}_i \ge 0$, $i = 1, 2, \dots, k$, such that

$$C\bar x^{l} \ge \sum_{i=1}^{k} \lambda^{l}_i (Cx^{i}).$$

Since $\bar x^{l}$ is an optimal solution of $(P(v^{l}))$ and $v^{l} \ge 0$, we have

$$(v^{l})^{T} C\bar x \ge (v^{l})^{T} C\bar x^{l} \ge (v^{l})^{T} \sum_{i=1}^{k} \lambda^{l}_i (Cx^{i}) = \sum_{i=1}^{k} \lambda^{l}_i (v^{l})^{T} C x^{i}.$$

Because $Cx^{i} \in P$, $i = 1, 2, \dots, k$, we have

$$(v^{l})^{T} C\bar x \ge \sum_{i=1}^{k} \lambda^{l}_i (v^{l})^{T} C x^{i} \ge \sum_{i=1}^{k} \lambda^{l}_i \alpha_{l} = \alpha_{l}.$$

That is, $\bar F = C\bar x \in P$. Thus $F(R) \subseteq P$ holds.

Furthermore, we have the following theorem.

Theorem 2. In the objective space $E^{p}$, $P = F(R) + E^{p}_{+}$, where $E^{p}_{+} = \{x \mid x \in E^{p},\ x \ge 0\}$.

Proof. From Theorem 1 and Lemma 1, $P = Q$ and $F(R) \subseteq P$ hold. Note that

$$P = Q = \Bigl\{ F \ \Big|\ F = \sum_{i=1}^{k} \lambda_i (Cx^{i}),\ \sum_{i=1}^{k} \lambda_i = 1,\ \lambda_i \ge 0,\ i = 1, 2, \dots, k \Bigr\} + E^{p}_{+},$$

so $Q + E^{p}_{+} = Q$ and hence $F(R) + E^{p}_{+} \subseteq Q$. On the other hand, since $Cx^{i} \in F(R)$, $i = 1, 2, \dots, k$, and $F(R)$ is convex, we have

$$\sum_{i=1}^{k} \lambda_i (Cx^{i}) \in F(R) \quad \text{whenever } \sum_{i=1}^{k} \lambda_i = 1,\ \lambda_i \ge 0,\ i = 1, 2, \dots, k.$$

Thus $P = Q \subseteq F(R) + E^{p}_{+}$.

The following theorem reveals that when the algorithm stops at Step 4, all weak Pareto solutions of (VP') in the objective space have been found.

Theorem 3. In the objective space $E^{p}$, the weak Pareto solution set of the MOLP problem (VP') can be represented as

$$E_{wp} = \bigcup_{l=1}^{r} \bigl\{ F \mid F \in F(R),\ (v^{l})^{T} F = \alpha_{l} \bigr\}.$$

Proof. First we show that

$$\bigcup_{l=1}^{r} \bigl\{ F \mid F \in F(R),\ (v^{l})^{T} F = \alpha_{l} \bigr\} \subseteq E_{wp}.$$

Let $\bar F \in \bigcup_{l=1}^{r} \{ F \mid F \in F(R),\ (v^{l})^{T} F = \alpha_{l} \}$. Then there exists $l_0$ ($1 \le l_0 \le r$) such that

$$(v^{l_0})^{T} \bar F = \alpha_{l_0}, \tag{1}$$

where $v^{l_0} \ge 0$, $v^{l_0} \ne 0$. From Theorem 2, because

$$F(R) \subseteq P = \bigl\{ F \mid (v^{l})^{T} F \ge \alpha_{l},\ l = 1, 2, \dots, r \bigr\},$$

for any $F \in F(R)$ we have

$$(v^{l_0})^{T} F \ge \alpha_{l_0}. \tag{2}$$

From (1) and (2), $\bar F$ is an optimal solution of the following linear programming problem in the objective space $E^{p}$:

$$\min\ (v^{l_0})^{T} F \quad \text{s.t. } F \in F(R).$$

From Property 1, we have $\bar F \in E_{wp}$.

On the other hand, let $\bar F \in E_{wp}$. Note that $\bar F$ is a weak Pareto solution of (VP') and $P = F(R) + E^{p}_{+}$; thus $\bar F$ is also a weak Pareto solution of the following MOLP problem (P'):

$$(\mathrm{P}')\qquad \text{V-min } F \quad \text{s.t. } F \in P.$$

Assume that

$$\bar F \notin \bigcup_{l=1}^{r} \bigl\{ F \mid (v^{l})^{T} F = \alpha_{l} \bigr\},$$

that is,

$$(v^{l})^{T} \bar F \ne \alpha_{l}, \quad l = 1, 2, \dots, r. \tag{3}$$

Since $\bar F \in F(R) \subseteq P$, we have

$$(v^{l})^{T} \bar F \ge \alpha_{l}, \quad l = 1, 2, \dots, r. \tag{4}$$

From (3) and (4), we have

$$(v^{l})^{T} \bar F > \alpha_{l}, \quad l = 1, 2, \dots, r,$$

which shows that $\bar F$ is an interior point of $P$. This contradicts the fact that $\bar F$ is a weak Pareto solution of (P'). Therefore

$$E_{wp} \subseteq \bigcup_{l=1}^{r} \bigl\{ F \mid (v^{l})^{T} F = \alpha_{l},\ F \in F(R) \bigr\}$$

holds. The proof is complete.

Then we have the following corollary.

Corollary 1. In the objective space $E^{p}$, denote

$$\bigl\{ F^{i} \mid (v^{l})^{T} F^{i} = \alpha_{l},\ 1 \le i \le k \bigr\} = \bigl\{ F^{l_j} \mid j = 1, 2, \dots, t_l \bigr\}, \quad l = 1, 2, \dots, r.$$

Then

$$E_{wp} \supseteq \bigcup_{l=1}^{r} \Bigl\{ \sum_{j=1}^{t_l} \lambda_j F^{l_j} \ \Big|\ \sum_{j=1}^{t_l} \lambda_j = 1,\ \lambda_j \ge 0,\ j = 1, 2, \dots, t_l \Bigr\}.$$

In particular, if $F(R)$ is a bounded polyhedron, then

$$E_{wp} = \bigcup_{l=1}^{r} \Bigl\{ \sum_{j=1}^{t_l} \lambda_j F^{l_j} \ \Big|\ \sum_{j=1}^{t_l} \lambda_j = 1,\ \lambda_j \ge 0,\ j = 1, 2, \dots, t_l \Bigr\}.$$

Now we consider describing the structure of the weak Pareto solutions of (VP). Since $F : R \to F(R)$ is an onto mapping, from Property 3, if the efficient solution structure of (VP') is known, then the structure of the efficient solutions of (VP) can be presented accordingly. From Properties 2 and 3 and Theorem 3, we know that the following two theorems hold.

Theorem 4. The weak Pareto solution set of the MOLP problem (VP) can be obtained from a finite number of weighted sum problems $(P(v^{l}))$, $l = 1, 2, \dots, r$.

Theorem 5. The set of weak Pareto solutions of the MOLP problem (VP) can be represented as

$$R_{wp} = \bigcup_{l=1}^{r} \bigl\{ x \mid Ax = b,\ (v^{l})^{T} C x = \alpha_{l},\ x \ge 0 \bigr\}.$$

The theorem above reveals the weak Pareto solution structure of (VP).
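Theorem 5 also yields a simple membership test. The following Python snippet (ours, not from the paper) checks whether a candidate point $x$ belongs to $R_{wp}$, assuming the weights $V$ and levels $\alpha$ come from one run of the weak_pareto_weights sketch given after the algorithm above.

```python
import numpy as np

def in_weak_pareto_set(x, C, A, b, V, alpha, tol=1e-7):
    """Membership test suggested by Theorem 5: x is a weak Pareto solution of (VP)
    iff x is feasible and (v^l)^T C x = alpha_l holds for at least one l."""
    feasible = bool(np.all(x >= -tol)) and np.allclose(A @ x, b, atol=tol)
    on_some_face = bool(np.any(np.abs(V @ (C @ x) - alpha) <= tol))
    return feasible and on_some_face
```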

Corollary 2. Denote

$$\bigl\{ F^{i} \mid (v^{l})^{T} F^{i} = \alpha_{l},\ 1 \le i \le k \bigr\} = \bigl\{ F^{l_j} \mid j = 1, 2, \dots, t_l \bigr\}, \quad l = 1, 2, \dots, r.$$

Then

$$R_{wp} \supseteq \bigcup_{l=1}^{r} \Bigl\{ \sum_{j=1}^{t_l} \lambda_j x^{l_j} \ \Big|\ \sum_{j=1}^{t_l} \lambda_j = 1,\ \lambda_j \ge 0,\ j = 1, 2, \dots, t_l \Bigr\}.$$

In particular, if $R$ is a bounded polyhedron and $F : R \to F(R)$ is a one-to-one mapping, then

$$R_{wp} = \bigcup_{l=1}^{r} \Bigl\{ \sum_{j=1}^{t_l} \lambda_j x^{l_j} \ \Big|\ \sum_{j=1}^{t_l} \lambda_j = 1,\ \lambda_j \ge 0,\ j = 1, 2, \dots, t_l \Bigr\},$$

where $F^{l_j} = C x^{l_j}$, $j = 1, 2, \dots, t_l$; $l = 1, 2, \dots, r$.

When $R$ is an unbounded polyhedron, since all its extreme points and extreme rays can be found (for instance, using the algorithms in Wei and Yan [24,25]), we can also present the structure of $R_{wp}$ to the decision makers.

5. Structure of Pareto solution set of (VP)

In order to obtain the Pareto solutions of the MOLP problem (VP), we first consider the MOLP problem (VP') in the objective space. The following theorem gives a rule for testing whether $\bar F \in E_{wp}$ is a Pareto solution of (VP') in the objective space.

Theorem 6. When the algorithm stops at Step 4, let $\bar F = (\bar F_1, \bar F_2, \dots, \bar F_p)^{T} \in E_{wp}$ and denote the index set $J$ by

$$J = \bigl\{ l \mid (v^{l})^{T} \bar F = \alpha_{l},\ 1 \le l \le r \bigr\}.$$

Then $\bar F$ is a Pareto solution of (VP') if and only if $\sum_{l \in J} v^{l} > 0$.

Proof. For $l \in J$ we have $(v^{l})^{T} \bar F = \alpha_{l}$, and for $l \in \{1, 2, \dots, r\} \setminus J$ we have $(v^{l})^{T} \bar F > \alpha_{l}$.

Assume that $\bar F$ is a Pareto solution of (VP') but that $\sum_{l \in J} v^{l} > 0$ is not true. Since $v^{l} \ge 0$ for $l = 1, 2, \dots, r$, there exists $j_0$ ($1 \le j_0 \le p$) such that the $j_0$th component of $\sum_{l \in J} v^{l}$ is zero; that is, for every $l \in J$ we have $v^{l}_{j_0} = 0$, where $v^{l}_{j_0}$ is the $j_0$th component of $v^{l}$. If $J = \{1, 2, \dots, r\}$, let $\varepsilon$ be a positive number (for example, $\varepsilon = 1$); otherwise let

$$\varepsilon = \min \Bigl\{ \frac{(v^{l})^{T} \bar F - \alpha_{l}}{v^{l}_{j_0}} \ \Big|\ l \in \{1, 2, \dots, r\} \setminus J,\ v^{l}_{j_0} > 0 \Bigr\}.$$

Denote

$$\tilde F = (\bar F_1, \bar F_2, \dots, \bar F_{j_0 - 1},\ \bar F_{j_0} - \varepsilon,\ \bar F_{j_0 + 1}, \dots, \bar F_p)^{T}.$$

Then

$$(v^{l})^{T} \tilde F \ge \alpha_{l},\ l = 1, 2, \dots, r, \qquad \text{and} \qquad \tilde F \le \bar F,\ \tilde F \ne \bar F, \tag{5}$$

that is, $\tilde F \in P$. From Theorem 1, $P = Q$, so $\tilde F \in Q$. That is, there exists $\bar\lambda = (\bar\lambda_1, \bar\lambda_2, \dots, \bar\lambda_k)^{T} \ge 0$ with $\sum_{i=1}^{k} \bar\lambda_i = 1$ such that

$$\sum_{i=1}^{k} \bar\lambda_i (Cx^{i}) \in F(R) \qquad \text{and} \qquad \tilde F \ge \sum_{i=1}^{k} \bar\lambda_i (Cx^{i}). \tag{6}$$

From (5) and (6), we have

$$\bar F \ge \tilde F \ge \sum_{i=1}^{k} \bar\lambda_i (Cx^{i});$$

hence

$$\sum_{i=1}^{k} \bar\lambda_i (Cx^{i}) \le \bar F, \qquad \sum_{i=1}^{k} \bar\lambda_i (Cx^{i}) \ne \bar F.$$

This contradicts the assumption that $\bar F$ is a Pareto solution of (VP').

On the other hand, if $\sum_{l \in J} v^{l} > 0$, then for $l \in J$ we have $(v^{l})^{T} \bar F = \alpha_{l}$, while for every $F \in F(R)$, $(v^{l})^{T} F \ge \alpha_{l}$. Therefore, for every $F \in F(R)$,

$$\Bigl( \sum_{l \in J} v^{l} \Bigr)^{T} F \ge \sum_{l \in J} \alpha_{l} = \Bigl( \sum_{l \in J} v^{l} \Bigr)^{T} \bar F.$$

This indicates that $\bar F$ is an optimal solution of the linear programming problem

$$\min\ \Bigl( \sum_{l \in J} v^{l} \Bigr)^{T} F \quad \text{s.t. } F \in F(R).$$

Noting that $\sum_{l \in J} v^{l} > 0$, from Property 1(i), $\bar F$ is a Pareto solution of (VP'). The theorem is proved.
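The test in Theorem 6 is easy to mechanize. The short Python function below (ours, not the authors' code) applies it to a point $F$ already known to lie in $E_{wp}$, again assuming $V$ and $\alpha$ come from the earlier weak_pareto_weights sketch.

```python
import numpy as np

def is_pareto_image(F, V, alpha, tol=1e-7):
    """Theorem 6 test for F in E_wp: F is a Pareto solution of (VP') iff the
    weights v^l active at F (those with (v^l)^T F = alpha_l) sum to a
    strictly positive vector."""
    J = np.abs(V @ F - alpha) <= tol        # index set J of active weights
    return bool(J.any()) and bool(np.all(V[J].sum(axis=0) > tol))
```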

Theorem 6 gives a necessary and sufficient condition for testing whether $\bar F \in E_{wp}$ is a Pareto solution of (VP') in the objective space. The structure of the Pareto solutions of (VP') may be more complicated than that of the weak Pareto solutions, but according to the test rule in Theorem 6 we can select all Pareto solutions of (VP') from $E_{wp}$ step by step. First we test whether there exists an $l$, $1 \le l \le r$, such that $v^{l} > 0$; then we test whether there exist $l_1$ and $l_2$, $1 \le l_1 < l_2 \le r$, such that $v^{l_1} + v^{l_2} > 0$; and so forth. In each step, if an index set $B \subseteq \{1, 2, \dots, r\}$ satisfies $\sum_{l \in B} v^{l} > 0$, then the sets containing $B$ as a subset need not be tested again. After all these index sets have been tested, the Pareto solution set of (VP') is found.

As in Theorem 3, Theorem 5 and Corollaries 1 and 2, we can obtain $E_{pa}$ and $R_{pa}$ in the same way as $E_{wp}$ and $R_{wp}$, and thus obtain a representation of $R_{pa}$. In particular, from Theorem 6, when $p = 2$, if $(P(e_1))$ and $(P(e_2))$ do not share an optimal solution, we have

$$R_{pa} = \bigcup_{v^{l} > 0} \bigl\{ x \mid x \in R,\ (v^{l})^{T} C x = \alpha_{l} \bigr\}.$$

6. A numerical example

In this section, we give an example to demonstrate our method and to indicate that it is a well-defined procedure for identifying all weak Pareto solutions and building the structure of the weak Pareto and Pareto solutions.

Example 1. Consider the following MOLP problem (VP):

$$(\mathrm{VP})\qquad \text{V-min } (x_1 + x_2,\ x_1 - 2x_2)$$
$$\text{s.t. } -x_1 + x_2 + x_3 = 1,\quad -x_1 + 2x_2 + x_4 = 3,\quad x_i \ge 0,\ i = 1, 2, 3, 4.$$

Step 1. Solve the two linear programming problems below:

$$(\mathrm{LP}_1)\qquad \min\ (x_1 + x_2) \quad \text{s.t. } x \in R$$

and

$$(\mathrm{LP}_2)\qquad \min\ (x_1 - 2x_2) \quad \text{s.t. } x \in R,$$

where

$$R = \bigl\{ x = (x_1, x_2, x_3, x_4)^{T} \mid -x_1 + x_2 + x_3 = 1,\ -x_1 + 2x_2 + x_4 = 3,\ x_i \ge 0,\ i = 1, 2, 3, 4 \bigr\}.$$

An optimal solution of $(\mathrm{LP}_1)$ is $x^{1} = (0, 0, 1, 3)^{T}$, and its image in the objective space is $Cx^{1} = (0, 0)^{T}$. An optimal solution of $(\mathrm{LP}_2)$ is $x^{2} = (1, 2, 0, 0)^{T}$, and its image in the objective space is $Cx^{2} = (3, -3)^{T}$. We have

$$R^{1} = \bigl\{ (0, 0, 1, 3)^{T},\ (1, 2, 0, 0)^{T} \bigr\}.$$

Step 2. Denote

$$S^{1} = \Bigl\{ (f_1, f_2, \alpha)^{T} \ \Big|\ \begin{pmatrix} 0 & 0 \\ 3 & -3 \end{pmatrix} \begin{pmatrix} f_1 \\ f_2 \end{pmatrix} \ge \begin{pmatrix} \alpha \\ \alpha \end{pmatrix},\ f_1 \ge 0,\ f_2 \ge 0,\ (f_1, f_2)^{T} \ne 0 \Bigr\}.$$

All extreme rays of $S^{1}$ are $(1, 0, 0)^{T}$, $(0, 1, -3)^{T}$ and $(1, 1, 0)^{T}$, and $P^{1}$ is

$$P^{1} = \bigl\{ (f_1, f_2)^{T} \mid f_1 \ge 0,\ f_2 \ge -3,\ f_1 + f_2 \ge 0 \bigr\}.$$

Step 3. Hence we get three weights $v^{1} = (1, 0)^{T}$, $v^{2} = (0, 1)^{T}$ and $v^{3} = (1, 1)^{T}$. When we use these weights to solve the weighted sum problems of (VP), only the weight $v^{3} = (1, 1)^{T}$ yields a new weak Pareto solution. Consider the weighted sum problem

$$(P(v^{3}))\qquad \min\ \bigl( (x_1 + x_2) + (x_1 - 2x_2) \bigr) \quad \text{s.t. } x \in R.$$

Its optimal solution is $\bar x = (0, 1, 0, 1)^{T}$, and its image in the objective space is $C\bar x = (1, -2)^{T}$.

Step 4. Since $C\bar x = (1, -2)^{T} \notin P^{1}$, we have $I_1 \ne \emptyset$; go to Step 5.

Step 5. Now

$$R^{2} = \{x^{1}, x^{2}, x^{3}\} = \bigl\{ (0, 0, 1, 3)^{T},\ (1, 2, 0, 0)^{T},\ (0, 1, 0, 1)^{T} \bigr\}, \qquad Cx^{3} = (1, -2)^{T}.$$

Step 2. Denote

$$S^{2} = \Bigl\{ (f_1, f_2, \alpha)^{T} \ \Big|\ \begin{pmatrix} 0 & 0 \\ 3 & -3 \\ 1 & -2 \end{pmatrix} \begin{pmatrix} f_1 \\ f_2 \end{pmatrix} \ge \begin{pmatrix} \alpha \\ \alpha \\ \alpha \end{pmatrix},\ f_1 \ge 0,\ f_2 \ge 0,\ (f_1, f_2)^{T} \ne 0 \Bigr\}.$$

All extreme rays of $S^{2}$ are

$$\begin{pmatrix} v^{1} \\ \alpha_1 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \quad \begin{pmatrix} v^{2} \\ \alpha_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \\ -3 \end{pmatrix}, \quad \begin{pmatrix} v^{3} \\ \alpha_3 \end{pmatrix} = \begin{pmatrix} 2 \\ 1 \\ 0 \end{pmatrix}, \quad \begin{pmatrix} v^{4} \\ \alpha_4 \end{pmatrix} = \begin{pmatrix} 1 \\ 2 \\ -3 \end{pmatrix},$$

and

$$P^{2} = \bigl\{ (f_1, f_2)^{T} \mid f_1 \ge 0,\ f_2 \ge -3,\ 2f_1 + f_2 \ge 0,\ f_1 + 2f_2 \ge -3 \bigr\}.$$

Hence we get four weights

$$v^{1} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad v^{2} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \quad v^{3} = \begin{pmatrix} 2 \\ 1 \end{pmatrix}, \quad v^{4} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}.$$

Step 3. Use these weights to solve the weighted sum problems of (VP). An optimal solution of $(P(v^{1}))$ is $\bar x^{1} = (0, 0, 1, 3)^{T}$, with image $C\bar x^{1} = (0, 0)^{T} \in P^{2}$. An optimal solution of $(P(v^{2}))$ is $\bar x^{2} = (1, 2, 0, 0)^{T}$, with image $C\bar x^{2} = (3, -3)^{T} \in P^{2}$. An optimal solution of $(P(v^{3}))$ is $\bar x^{3} = (0, 0, 1, 3)^{T}$, with image $C\bar x^{3} = (0, 0)^{T} \in P^{2}$. An optimal solution of $(P(v^{4}))$ is $\bar x^{4} = (1, 2, 0, 0)^{T}$, with image $C\bar x^{4} = (3, -3)^{T} \in P^{2}$.

Step 4. Since all the images of these optimal solutions lie in $P^{2}$, we have $I_2 = \emptyset$ and the algorithm stops. Let $S = S^{2}$ and $P = P^{2}$.

From Theorem 5, we can easily obtain the set $R_{wp}$:

$$R_{wp} = \bigcup_{l=1}^{4} R^{l}_{wp}, \qquad R^{l}_{wp} = \bigl\{ x \mid x \in R,\ (v^{l})^{T} C x = \alpha_{l} \bigr\},\ l = 1, 2, 3, 4.$$

For instance,

$$R^{1}_{wp} = \bigl\{ (x_1, x_2, x_3, x_4)^{T} \mid x \in R,\ (v^{1})^{T} C x = \alpha_{1} \bigr\}
= \bigl\{ x \ge 0 \mid -x_1 + x_2 + x_3 = 1,\ -x_1 + 2x_2 + x_4 = 3,\ x_1 + x_2 = 0 \bigr\}.$$

We can obtain all extreme points and extreme rays of $R^{l}_{wp}$, $l = 1, 2, 3, 4$: $R^{1}_{wp}$ contains only the point $(0, 0, 1, 3)^{T}$; $R^{2}_{wp}$ has the extreme point $(1, 2, 0, 0)^{T}$ and the extreme ray $(2, 1, 1, 0)^{T}$; $R^{3}_{wp}$ has the two extreme points $(0, 0, 1, 3)^{T}$ and $(0, 1, 0, 1)^{T}$; $R^{4}_{wp}$ has the two extreme points $(1, 2, 0, 0)^{T}$ and $(0, 1, 0, 1)^{T}$.

According to Theorem 5, the weak Pareto solution set of (VP) can therefore be represented as the union $R_{wp} = \bigcup_{l=1}^{4} R^{l}_{wp}$ of the following four sets:

$$R^{1}_{wp} = \bigl\{ (0, 0, 1, 3)^{T} \bigr\},$$
$$R^{2}_{wp} = \bigl\{ (1, 2, 0, 0)^{T} + \lambda (2, 1, 1, 0)^{T} \mid \lambda \ge 0 \bigr\},$$
$$R^{3}_{wp} = \bigl\{ \lambda_1 (0, 0, 1, 3)^{T} + \lambda_2 (0, 1, 0, 1)^{T} \mid \lambda_1 + \lambda_2 = 1,\ \lambda_1 \ge 0,\ \lambda_2 \ge 0 \bigr\},$$
$$R^{4}_{wp} = \bigl\{ \lambda_1 (1, 2, 0, 0)^{T} + \lambda_2 (0, 1, 0, 1)^{T} \mid \lambda_1 + \lambda_2 = 1,\ \lambda_1 \ge 0,\ \lambda_2 \ge 0 \bigr\}.$$

In order to obtain the Pareto solution set $R_{pa}$, we test $v^{1}$, $v^{2}$, $v^{3}$ and $v^{4}$ according to the procedure in Section 5. First, since $v^{3} = (2, 1)^{T} > 0$ and $v^{4} = (1, 2)^{T} > 0$, we know that $R^{3}_{wp} \subseteq R_{pa}$ and $R^{4}_{wp} \subseteq R_{pa}$. Second, since $v^{1} + v^{2} = (1, 1)^{T} > 0$, we have $R^{1}_{wp} \cap R^{2}_{wp} \subseteq R_{pa}$, and

$$R_{pa} = R^{3}_{wp} \cup R^{4}_{wp} \cup \bigl( R^{1}_{wp} \cap R^{2}_{wp} \bigr),$$

where

$$R^{1}_{wp} \cap R^{2}_{wp} = \bigl\{ (x_1, x_2, x_3, x_4)^{T} \mid x \in R,\ (v^{1})^{T} C x = \alpha_{1},\ (v^{2})^{T} C x = \alpha_{2} \bigr\}
= \bigl\{ x \in R \mid x_1 + x_2 = 0,\ x_1 - 2x_2 = -3 \bigr\}.$$

We find that $R^{1}_{wp} \cap R^{2}_{wp}$ has no extreme point or extreme ray, that is, $R^{1}_{wp} \cap R^{2}_{wp} = \emptyset$. Therefore

$$R_{pa} = R^{3}_{wp} \cup R^{4}_{wp}.$$

(A short numerical check of this example is sketched below.)
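For readers who want to reproduce the example numerically, the snippet below (ours, not from the paper) runs the earlier weak_pareto_weights sketch on this data, with the signs as reconstructed above. Up to positive scaling of each ray and up to ordering, the printed weight-level pairs should match $v^{1}, \dots, v^{4}$ and $\alpha_1, \dots, \alpha_4$.

```python
import numpy as np

# Data of Example 1 (objectives x1 + x2 and x1 - 2*x2; constraints as reconstructed).
C = np.array([[1.0,  1.0, 0.0, 0.0],
              [1.0, -2.0, 0.0, 0.0]])
A = np.array([[-1.0, 1.0, 1.0, 0.0],       # -x1 +   x2 + x3 = 1
              [-1.0, 2.0, 0.0, 1.0]])      # -x1 + 2*x2 + x4 = 3
b = np.array([1.0, 3.0])

V, alpha = weak_pareto_weights(C, A, b)    # sketch from Section 4
for v, a in zip(V, alpha):
    scale = np.abs(v).max()
    print(np.round(v / scale, 3), round(a / scale, 3))
# Up to positive scaling and ordering, the pairs correspond to
# (1,0) with 0, (0,1) with -3, (2,1) with 0, and (1,2) with -3.
```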

7. Conclusion

This paper investigated the structure of the weak Pareto and Pareto solutions of a multiobjective linear programming (MOLP) problem and presented a method for constructing these solutions. We first considered the MOLP in its objective space and showed that, by choosing weights properly and solving the weighted sum problems of the MOLP associated with these weights, all weak Pareto and Pareto solutions of the MOLP problem, and their relationships, can be obtained in the objective space. Since the mapping from the constraint space to the objective space is an onto mapping, all weak Pareto and Pareto solutions in the constraint space can then be obtained accordingly. This method avoids the degeneracy difficulty and makes the representation of the efficient solution set reasonably easy and clear.

From Property 1, we know that the Pareto solution set (respectively, the weak Pareto solution set) of a MOLP (VP) can be obtained by solving the weighted sum problem $(P(v))$ for all weights $v > 0$ (respectively, $v \ge 0$, $v \ne 0$). This paper presented a constructive method showing that the weak Pareto and Pareto solutions can be determined by solving only a finite number of linear weighted sum problems. Thus we have provided a reliable procedure for decision makers to deal with multiple decision alternatives. The method can be modified to study general convex multiple objective programming problems.

References

[1] P. Armand, C. Malivert, Determination of the efficient set in multiobjective linear programming, J. Optim. Theory Appl. 70 (1991) 467-489.
[2] P. Armand, Finding all maximal efficient faces in multiobjective linear programming, Math. Programming 61 (1993) 357-375.
[3] H.P. Benson, Complete efficiency and the initialization of algorithms for multiple objective programming, Oper. Res. Lett. 10 (1991) 481-487.
[4] H.P. Benson, Hybrid approach for solving multi-objective linear programs in outcome space, J. Optim. Theory Appl. 98 (1998) 17-35.
[5] H.P. Benson, E. Sun, Outcome space partition of the weight set in multiobjective linear programming, J. Optim. Theory Appl. 105 (2000) 17-36.
[6] A. Charnes, W.W. Cooper, Z.M. Huang, D.B. Sun, Relations between half-space and finitely generated cones in polyhedral cone-ratio DEA models, Internat. J. Systems Sci. 22 (1991) 2057-2077.
[7] G.B. Dantzig, Linear Programming and Extensions, Princeton Univ. Press, Princeton, NJ, 1963.
[8] J.P. Dauer, Analysis of the objective space in multiobjective linear programming, J. Math. Anal. Appl. 126 (1987) 579-593.
[9] J.P. Dauer, O.A. Saleh, Constructing the set of efficient objective values in multiple objective linear programs, European J. Oper. Res. 46 (1990) 358-365.
[10] J.P. Dauer, R.J. Gallagher, A combined constraint-space, objective-space approach for determining high-dimensional maximal efficient faces of multiple objective linear programs, European J. Oper. Res. 88 (1996) 368-381.
[11] J.S. Dyer, P.C. Fishburn, R.E. Steuer, J. Wallenius, S. Zionts, Multiple criteria decision making, multiattribute utility theory: the next ten years, Management Sci. 38 (1992) 645-654.
[12] J.G. Ecker, N.S. Hegner, I.A. Kouada, Generating all maximal efficient faces for multiple objective linear programs, J. Optim. Theory Appl. 30 (1980) 353-381.
[13] R.J. Gallagher, O.A. Saleh, A representation of an efficiency equivalent polyhedron for the objective set of a multiobjective linear program, European J. Oper. Res. 80 (1995) 204-212.
[14] A.M. Geoffrion, Proper efficiency and the theory of vector maximization, J. Math. Anal. Appl. 22 (1968) 618-630.
[15] M. Gravel, J.M. Martel, R. Nadeau, W. Price, R. Tremblay, A multicriterion view of optimal resource allocation in job-shop production, European J. Oper. Res. 61 (1992) 230-244.
[16] H. Isermann, The enumeration of the set of all efficient solutions for a linear multiple objective program, Oper. Res. Quart. 28 (1977) 711-725.
[17] N.T.B. Kim, D.T. Luc, Normal cones to a polyhedral convex set and generating efficient faces in multiobjective linear programming, Acta Math. Vietnam. 25 (2000) 101-124.
[18] T.M. Leschine, H. Wallenius, W.A. Verdini, Interactive multiobjective analysis and assimilative capacity-based ocean disposal decision, European J. Oper. Res. 56 (1992) 278-289.
[19] J. Philip, Algorithms for the vector maximization problem, Math. Programming 2 (1972) 207-228.
[20] D. Prabuddha, J.B. Ghosh, C.E. Wells, On the minimization of completion time variance with a bicriteria extension, Oper. Res. 40 (1992) 1148-1155.
[21] S. Sayin, An algorithm based on facial decomposition for finding the efficient set in multiple objective linear programming, Oper. Res. Lett. 19 (1996) 87-94.
[22] R.E. Steuer, Multiple Criteria Optimization: Theory, Computation, and Application, Wiley, New York, 1986.
[23] Q.L. Wei, G. Yu, P. Brockett, L. Zhou, Structural analysis of the Pareto efficient solutions in multiobjective programming by using data envelopment analysis models, Research Report 710, Center for Cybernetic Studies, The University of Texas at Austin, 1995.
[24] Q.L. Wei, H. Yan, An algebra-based vertex identification process on polytopes, Beijing Math. 3 (1997) 40-48.

[25] Q.L. Wei, H. Yan, A method of transferring cones of intersection form to cones of sum form and its applications in data envelopment analysis models, Internat. J. Systems Sci. 31 (2000) 629-638.
[26] P.L. Yu, M. Zeleny, The set of all nondominated solutions in linear cases and a multicriteria simplex method, J. Math. Anal. Appl. 49 (1975) 430-468.
[27] M. Zeleny, Linear Multiobjective Programming, Springer-Verlag, New York, 1974.