Multiobjective optimization methods
Jussi Hakanen, post-doctoral researcher, jussi.hakanen@jyu.fi
Spring 2014, TIES483 Nonlinear optimization
No-preference methods
- The DM is not available (e.g. online optimization) or no preference information is available
- Some PO solution is computed, without taking into account which problem is being solved
- Fast methods; one PO solution is enough; no communication with the DM
Method of global criterion

\min_{x \in S} \left( \sum_{i=1}^{k} |f_i(x) - z_i^*|^p \right)^{1/p}

- The distance to the ideal objective vector z^* is minimized
- Different metrics can be used, e.g. the L_p metric with 1 \le p \le \infty
- A single-objective optimization problem is solved
Method of global criterion
[Figure: level sets of the L_1, L_2 and L_\infty metrics around the ideal objective vector]
Method of global criterion
- When p = \infty (maximum metric), the problem is a nonsmooth optimization problem
- If p < \infty, the solution obtained is PO
- If p = \infty, the solution obtained is weakly PO
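As a sketch of how the method of global criterion can be applied in practice, the snippet below minimizes the L_2 distance to the ideal objective vector for a toy bi-objective problem; the objectives f1(x) = x^2, f2(x) = (x - 2)^2 and the feasible set S = [0, 2] are illustrative assumptions, not from the slides:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative bi-objective problem: both objectives are minimized on S = [0, 2]
def objectives(x):
    return np.array([x[0]**2, (x[0] - 2.0)**2])

z_star = np.array([0.0, 0.0])  # ideal objective vector: each f_i minimized alone

def global_criterion(x, p=2):
    # L_p distance from f(x) to the ideal objective vector
    return np.sum(np.abs(objectives(x) - z_star)**p)**(1.0 / p)

res = minimize(global_criterion, x0=[1.5], bounds=[(0.0, 2.0)])
print(res.x)  # by symmetry the compromise solution is x = 1
```

Changing p changes which compromise is found; with p = \infty the criterion becomes nonsmooth, so a gradient-based solver is no longer appropriate.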
A posteriori methods
- Idea: 1) compute different PO solutions, 2) the DM selects the most preferred one
- The PO set (or a part of it) is approximated
Benefits:
- Well suited for problems with 2 objectives, since the PO solutions can be easily visualized for the DM
- Gives an understanding of the whole PO set
A posteriori methods
Drawbacks:
- Approximating the PO set is often time-consuming
- The DM has to choose the most preferred solution from among a large number of solutions
- Visualizing the solutions is difficult when the number of objectives is high
Weighting method

\min_{x \in S} \sum_{i=1}^{k} w_i f_i(x), \quad \text{where } \sum_{i=1}^{k} w_i = 1,\ w_i \ge 0,\ i = 1, \dots, k

- A weighted sum of the objectives is optimized; different PO solutions can be obtained by changing the weights w_i
- One of the most well-known methods
- Gass & Saaty (1955), Zadeh (1963)
Weighting method
Benefits:
- A solution obtained with positive weights is PO
- Easy to solve (simple objective function, no additional constraints)
Drawbacks:
- Cannot find solutions in the non-convex parts of the PO set
- The PO solution obtained does not necessarily reflect the DM's preferences
Convex / non-convex PO set
- The weights determine the slope of the level sets of the weighted-sum objective; the slope changes when the weights are changed
- A non-convex part of the PO set cannot be reached with any weights!
[Figure: level sets for w = (0.5, 0.5) and w = (1/3, 2/3) on a convex and on a non-convex PO set; f_1 and f_2 are minimized]
Weighting method
- Result 1: The solution given by the weighting method is weakly PO
- Result 2: The solution given by the weighting method is PO if all the weights are strictly positive
- Result 3: Let x^* be a PO solution of a convex multiobjective optimization problem. Then there exists a weighting vector w = (w_1, \dots, w_k)^T such that x^* is a solution obtained with the weighting method.
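A minimal sketch of the weighting method on an illustrative convex bi-objective problem (f1(x) = x^2, f2(x) = (x - 2)^2, S = [0, 2]; these are assumptions for demonstration, not from the slides). Changing the weights traces out different PO solutions:

```python
import numpy as np
from scipy.optimize import minimize

def weighted_sum(x, w):
    f = np.array([x[0]**2, (x[0] - 2.0)**2])  # the two objectives, both minimized
    return np.dot(w, f)

# For this convex problem the weighted-sum minimizer is x = 2 * w2
for w in ([0.5, 0.5], [0.25, 0.75], [0.75, 0.25]):
    res = minimize(weighted_sum, x0=[1.0], args=(np.array(w),),
                   bounds=[(0.0, 2.0)])
    print(w, res.x)
```

For a non-convex PO set, no choice of weights would reach the non-convex part, in line with the drawback above.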
Example: where to go for a vacation (adapted from Prof. Pekka Korhonen)

Place     Price   Hiking   Fishing   Surfing   Weighted sum (max)
A         1       10       10        10        6.4
B         5       5        5         5         5.0
C         10      1        1         1         4.6
weight    0.4     0.2      0.2       0.2

The place with the best value for the objective function (A) is the worst with respect to the most important objective (price)!
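The weighted sums in the table can be recomputed directly; place A gets the best overall value even though it is worst on the most important objective (a plain check of the slide's arithmetic):

```python
# Scores per place: [price, hiking, fishing, surfing] (from the table above)
scores = {"A": [1, 10, 10, 10], "B": [5, 5, 5, 5], "C": [10, 1, 1, 1]}
weights = [0.4, 0.2, 0.2, 0.2]  # price is the most important objective

totals = {place: sum(w * v for w, v in zip(weights, vals))
          for place, vals in scores.items()}
print(totals)  # A scores 6.4, B scores 5.0, C scores 4.6 (up to float rounding)
```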
ε-constraint method

\min_{x \in S} f_j(x) \quad \text{s.t. } f_i(x) \le \varepsilon_i,\ i \ne j

- Choose one of the objectives to be optimized; give the other objectives upper bounds and treat them as constraints
- Different PO solutions can be obtained by changing the bounds and/or the objective to be optimized
- Haimes, Lasdon & Wismer (1971)
ε-constraint method
From Miettinen: Nonlinear optimization, 2007 (in Finnish)
[Figure: PO solutions z^2, z^3, z^4 obtained for different upper bounds ε_2, ε_3, ε_4 on f_2; the bound ε_1 gives no feasible solutions]
ε-constraint method
Benefits:
- Every PO solution can be found (also for non-convex problems)
- Easy to implement
Drawbacks:
- How to choose the upper bounds? A poor choice does not necessarily give a feasible problem
- How to choose the objective to be optimized?
ε-constraint method
- Result 1: A solution obtained with the ε-constraint method is weakly PO
- Result 2: A unique solution obtained with the ε-constraint method is PO
- Result 3: A solution x^* \in S is PO if and only if it is a solution given by the ε-constraint method for every j = 1, \dots, k with ε_i = f_i(x^*), i \ne j (thus every PO solution can be found)
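A sketch of one ε-constraint subproblem on the same kind of illustrative problem as before (f1(x) = x^2 is optimized, f2(x) = (x - 2)^2 is bounded; the objectives and the bound are assumptions for demonstration):

```python
import numpy as np
from scipy.optimize import minimize

f1 = lambda x: x[0]**2            # objective chosen to be optimized
f2 = lambda x: (x[0] - 2.0)**2    # objective turned into a constraint

eps = 0.25  # upper bound for f2
cons = [{"type": "ineq", "fun": lambda x: eps - f2(x)}]  # f2(x) <= eps
res = minimize(f1, x0=[1.8], bounds=[(0.0, 2.0)], constraints=cons,
               method="SLSQP")
print(res.x)  # f2 <= 0.25 forces x >= 1.5, so the optimum is x = 1.5
```

Sweeping eps over a grid of values produces different PO solutions, including ones in non-convex parts of the front.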
ε-constraint method: PO vs. weakly PO
[Figure: the bound ε_1 yields a weakly PO solution, while the bound ε_2 = f_2(x^*) yields a PO solution; f_1 and f_2 are minimized]
Equally spaced PO solutions?
- With the weighting method, even changing the weights systematically does not give an evenly spread set: in the figure, the PO solutions lie nearer to each other towards the minimum of f_2
- How to obtain an equally spaced set?
[Figure: weighting-method solutions on a PO front; f_1 and f_2 are minimized]
Normal Boundary Intersection (NBI)
- Find the extreme solutions of the PO set
- Construct a plane passing through the extreme solutions; fix equally spaced points in the plane
- Search in the direction orthogonal to the plane
[Figure: NBI construction for two objectives; f_1 and f_2 are minimized]
Normal Boundary Intersection (NBI)
Idea: produce an equally spaced approximation of the PO set. Solutions are produced by solving

\max_{x \in S,\ \lambda \in \mathbb{R}} \lambda \quad \text{s.t. } Pw - \lambda Pe = f(x) - z^*,

where P is the payoff matrix, w is a vector of weights (\sum_{i=1}^{k} w_i = 1, w_i \ge 0) and e = (1, \dots, 1)^T.
Das & Dennis, SIAM Journal on Optimization, 8, 1998
Normal Boundary Intersection (NBI)
Properties:
- Equally spaced solutions approximating the PO set
- Computation time increases significantly when the number of objectives increases
- Can produce non-PO solutions for non-convex problems
[Figure: NBI solutions on a PO front; f_1 and f_2 are minimized]
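A sketch of the NBI subproblem for the illustrative bi-objective problem f1(x) = x^2, f2(x) = (x - 2)^2 on S = [0, 2]; the extreme solutions are x = 0 and x = 2, which fixes the payoff matrix (everything here is an illustrative assumption, not code from the references):

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    return np.array([x**2, (x - 2.0)**2])

z_star = np.array([0.0, 0.0])
# Payoff matrix: column i is f evaluated at the minimizer of f_i, minus z*
Phi = np.array([[0.0, 4.0],
                [4.0, 0.0]])
e = np.ones(2)

def nbi_subproblem(w):
    # Variables v = (x, lambda); maximize lambda subject to
    # Phi w - lambda * Phi e = f(x) - z*
    cons = [{"type": "eq",
             "fun": lambda v: Phi @ w - v[1] * (Phi @ e) - (f(v[0]) - z_star)}]
    res = minimize(lambda v: -v[1], x0=[1.0, 0.0],
                   bounds=[(0.0, 2.0), (None, None)],
                   constraints=cons, method="SLSQP")
    return res.x[0]

# Equally spaced weights give (roughly) equally spaced solutions
for w1 in (0.25, 0.5, 0.75):
    print(w1, nbi_subproblem(np.array([w1, 1.0 - w1])))
```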
Equally spaced PO solutions?
[Figure: solutions produced by the weighting method vs. by Normal Boundary Intersection on the same PO front; f_1 and f_2 are minimized]
NBI gives a more equally spaced set of solutions
A priori methods
- Idea: 1) first ask the DM for preferences, 2) optimize using the preferences
- Only those PO solutions that are of interest to the DM are produced
Benefits:
- The computed PO solutions are based on the preferences of the DM (no unnecessary solutions)
Drawbacks:
- It may be difficult for the DM to express preferences before (s)he has seen any solutions
Lexicographic ordering
- Order the objectives according to their importance
- Optimize first w.r.t. the most important one, then optimize the second most important one within the set of optimal solutions of the first one, etc.
- Requires the importance order from the DM before optimization
- The solution obtained is PO
Lexicographic ordering
From Miettinen: Nonlinear optimization, 2007 (in Finnish)
[Figure: 2 objectives, the 1st more important; optimizing w.r.t. the 1st yields z^1 and z^2, and optimizing w.r.t. the 2nd then selects the better one, z^1]
In practice, some tolerance is used for the optimal values
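Lexicographic ordering with a tolerance can be sketched as two single-objective solves; here f1 is assumed to have a whole interval of minimizers so that the second stage has something to choose from (the objectives are illustrative assumptions, not from the slides):

```python
from scipy.optimize import minimize

# Most important objective: flat minimum on [-1, 1] with optimal value 0
f1 = lambda x: max(0.0, abs(x[0]) - 1.0)**2
# Second objective: prefers x close to 2
f2 = lambda x: (x[0] - 2.0)**2

# Step 1: optimize the most important objective
res1 = minimize(f1, x0=[0.0], bounds=[(-2.0, 2.0)])

# Step 2: optimize f2 while staying (almost) optimal in f1;
# in practice some tolerance is used for the optimal value
tol = 1e-4
cons = [{"type": "ineq", "fun": lambda x: res1.fun + tol - f1(x)}]
res2 = minimize(f2, x0=res1.x, bounds=[(-2.0, 2.0)],
                constraints=cons, method="SLSQP")
print(res2.x)  # close to x = 1, the best point in f1's optimal set
```

Without the tolerance, any numerical noise in the first stage's optimal value could make the second stage infeasible.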
Interactive methods
Idea: the DM is utilized actively during the solution process. The solution process is iterative:
1. Initialization: compute some PO solution(s)
2. Show the PO solution(s) to the DM
3. Is the DM satisfied? If not, ask the DM to give new preferences; otherwise stop: a most preferred solution has been found
4. Compute new PO solution(s) taking the new preferences into account; go to step 2
The solution process ends when the DM is satisfied with the PO solution obtained
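The iterative process above can be sketched as a generic driver loop, with the DM's answers mocked by callbacks (all names and the mock DM here are illustrative assumptions):

```python
def interactive_solve(compute_po_solution, dm_is_satisfied, ask_preferences,
                      initial_preferences, max_iters=100):
    prefs = initial_preferences
    solution = compute_po_solution(prefs)      # step 1: initialization
    for _ in range(max_iters):
        if dm_is_satisfied(solution):          # steps 2-3: show and ask the DM
            return solution                    # a most preferred solution found
        prefs = ask_preferences(solution)      # step 3: new preferences
        solution = compute_po_solution(prefs)  # step 4: new PO solution
    return solution

# Mock DM: accepts any solution with value at least 0.9
result = interactive_solve(
    compute_po_solution=lambda p: p,           # stand-in for a real solver
    dm_is_satisfied=lambda s: s >= 0.9,
    ask_preferences=lambda s: s + 0.25,
    initial_preferences=0.5)
print(result)  # 1.0
```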
Interactive methods
Benefits:
- Only those solutions that are of interest to the DM are computed
- The DM is able to steer the solution process with his/her preferences
- The DM can learn about the interdependencies between the conflicting objectives through the solutions obtained, which helps in adjusting the preferences
Drawbacks:
- The DM has to invest a lot of time in the solution process
- If computing PO solutions takes time, the DM does not necessarily remember what happened in the early phases
Reference point method
- An interactive method, based on the use of a reference point
- A reference point is an intuitive way to express preferences
- The DM gives a reference point that is used in scalarizing the problem
- Different PO solutions are obtained by changing the reference point
Wierzbicki, The Use of Reference Objectives in Multiobjective Optimization, In: Multiple Criteria Decision Making, Theory and Applications, Springer, 1980
Reference point method

\min_{x \in S} \max_{i=1,\dots,k} w_i (f_i(x) - \bar z_i)

- The reference point \bar z consists of aspiration levels for the objectives; it may or may not lie in the image of the feasible region (Z = f(S))
- The weights affect the solution obtained but do not come from the DM
Effect of the weights
[Figure: solutions obtained from the same reference point \bar z with different weight choices, e.g. w_i = 1 / (z_i^{nad} - z_i^*); f_1 and f_2 are minimized]
Reference point method
Results:
- The reference point method produces weakly PO solutions
- Every weakly PO solution can be found
- The scalarization of the reference point method can be modified so that the solution obtained is PO
Reference point method
- The scalarized problem is not differentiable due to the min-max form
- It can be reformulated in a differentiable form (if the objectives are differentiable) using an additional variable and extra constraints:

\min \delta \quad \text{s.t. } w_i (f_i(x) - \bar z_i) \le \delta,\ i = 1, \dots, k, \quad x \in S,\ \delta \in \mathbb{R}
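The differentiable reformulation can be sketched as follows, again on the illustrative problem f1(x) = x^2, f2(x) = (x - 2)^2 on S = [0, 2], with an assumed reference point (0.5, 0.5) and unit weights:

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: np.array([x**2, (x - 2.0)**2])
z_ref = np.array([0.5, 0.5])  # aspiration levels given by the DM
w = np.array([1.0, 1.0])      # fixed weights, not coming from the DM

# Variables v = (x, delta): minimize delta s.t. w_i (f_i(x) - z_ref_i) <= delta
cons = [{"type": "ineq",
         "fun": lambda v: v[1] - w * (f(v[0]) - z_ref)}]
res = minimize(lambda v: v[1], x0=[1.0, 1.0],
               bounds=[(0.0, 2.0), (None, None)],
               constraints=cons, method="SLSQP")
print(res.x)  # x = 1 balances both weighted deviations, with delta = 0.5
```

The extra variable delta replaces the nonsmooth max, so a smooth constrained solver such as SLSQP applies directly.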
Satisficing Trade-Off Method (STOM)
- An interactive method, based on classification of the objective functions
- Very similar to the idea of the reference point method
Nakayama & Sawaragi, Satisficing Trade-Off Method for Multiobjective Programming, In: Interactive Decision Analysis, Springer-Verlag, 1984
Satisficing Trade-Off Method (STOM)
The DM classifies the objectives into 3 classes at the current PO solution:
- f_i whose values should be improved
- f_i whose value is satisfactory at the moment
- f_i whose value is allowed to get worse
A reference point is formed based on the classification:
- The DM gives aspiration levels for the functions in the first class
- Aspiration levels for the functions in the second class are their current values
- Aspiration levels for the functions in the third class can be computed using automatic trade-off (help for the DM)
Satisficing Trade-Off Method (STOM)

\min_{x \in S} \max_{i=1,\dots,k} \frac{f_i(x) - z_i^*}{\bar z_i - z_i^*} + \rho \sum_{i=1}^{k} \frac{f_i(x)}{\bar z_i - z_i^*}

- The aspiration levels must be greater than the components of the ideal objective vector
- A solution of the scalarized problem in STOM is PO (if the augmentation term is used)
Satisficing Trade-Off Method (STOM)
[Figure: STOM solution for a reference point \bar z with weights w_i = 1 / (\bar z_i - z_i^*); f_1 and f_2 are minimized]
NIMBUS method
- An interactive method, based on classification of the objectives
- Classification: consider the current PO solution and assign every objective to one of the classes
Miettinen, Nonlinear Multiobjective Optimization, Kluwer Academic Publishers, 1999
Miettinen & Mäkelä, Synchronous Approach in Interactive Multiobjective Optimization, European Journal of Operational Research, 170, 2006
NIMBUS method
The 5 classes consist of objectives f_i whose values
- should be improved as much as possible (i ∈ I^imp)
- should be improved until the aspiration level \bar z_i (i ∈ I^asp)
- are satisfactory at the moment (i ∈ I^sat)
- are allowed to get worse until the bound ε_i (i ∈ I^bound)
- can change freely at the moment (i ∈ I^free)
NIMBUS method
A classification is feasible if at least one objective is to be improved and at least one is allowed to get worse. A scalarized problem is formed based on the classification (x^c is the current PO solution):

\min_{x \in S} \max_{i \in I^{imp},\ j \in I^{asp}} \left[ \frac{f_i(x) - z_i^*}{z_i^{nad} - z_i^*},\ \frac{f_j(x) - \bar z_j}{z_j^{nad} - z_j^*} \right] + \rho \sum_{i=1}^{k} \frac{f_i(x)}{z_i^{nad} - z_i^*}

\text{s.t. } f_i(x) \le f_i(x^c),\ i \in I^{imp} \cup I^{asp} \cup I^{sat}, \quad f_i(x) \le \varepsilon_i,\ i \in I^{bound}
NIMBUS method
Results:
- A solution of the scalarized problem in the NIMBUS method is weakly PO without the augmentation term
- It is PO if the augmentation term is used
In the synchronous NIMBUS method, 4 different scalarizations are used:
- Different solutions can be obtained for the same preference information
- There is no single correct way to scalarize the problem, so the DM gets to choose from the solutions obtained
WWW-NIMBUS: an implementation of the NIMBUS method operating on the Internet
- The 1st multiobjective optimization software operating on the Internet (2000)
- All the computations are done on servers at JYU; only a browser is needed
- The latest version is always available
- Graphical user interface based on forms
- Freely available for academic purposes: http://nimbus.it.jyu.fi/
http://www.mcdmsociety.org/
- Newsletter
- 23rd International Conference on Multiple Criteria Decision Making, 3-7 August 2015, Hamburg (Germany)
- Membership does not cost you anything!
- Dagstuhl Seminar on Learning in Multiobjective Optimization, January 23-27, 2012