TIES598 Nonlinear Multiobjective Optimization A priori and a posteriori methods spring 2017 Jussi Hakanen jussi.hakanen@jyu.fi
Contents
- A priori methods
- A posteriori methods
- Some example methods
Learning outcomes
- To understand different approaches to solving multiobjective optimization problems
- To understand the differences between a priori and a posteriori approaches
- To be able to apply the methods presented in solving multiobjective optimization problems
Reminder: Pareto optimality
- All the objectives don't have the same optimal solution, so the notion of optimality needs to be modified
- Pareto optimality (PO): a solution is Pareto optimal if none of the objectives can be improved without impairing at least one of the others
Weak Pareto optimality
- Pareto optimality can be difficult to guarantee
- Weakly PO: no other solution improves all objectives simultaneously (at a weakly PO but not PO solution, some objective can still be improved without worsening the others)
- A PO solution is also weakly PO
- PO solutions are better but more difficult to compute than weakly PO ones
[Figure: the weakly PO solutions on a Pareto front in the (f_1, f_2) plane]
Reminder: Solution approaches
- Plenty of methods have been developed for MOO
- We concentrate on methods that aim at finding the most preferred PO solution
- MOO methods can be categorized based on the role of the DM:
  - No-preference methods (no DM)
  - A priori methods
  - A posteriori methods
  - Interactive methods
Properties of a good MOO method
- Methods based on scalarization usually produce one solution at a time
- A good method should have the following properties:
  - produces (weakly) PO solutions
  - is able to find any (weakly) PO solution (by using suitable parameter values of the method)
  - its parameters are meaningful for the DM
Normalization of objectives
- In many of the methods, normalization of the objectives is necessary
- If the scales are very different, small changes in a large-scale objective can dominate large changes in a small-scale objective
- We can normalize the objectives using the nadir and ideal values and setting the normalized objective as
  f~_i(x) = (f_i(x) - z_i*) / (z_i^nad - z_i*)
- Other ways of normalization exist as well
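The normalization above can be sketched as follows; the ideal and nadir values used here are made-up illustration numbers, not from the slides:

```python
# Normalize objective values to roughly [0, 1] using known ideal z* and
# nadir z^nad values (the vectors below are assumed for illustration).
z_ideal = [0.0, 10.0]
z_nadir = [4.0, 250.0]

def normalize(fx):
    # f~_i(x) = (f_i(x) - z_i*) / (z_i^nad - z_i*)
    return [(v, lo, hi) for v, lo, hi in zip(fx, z_ideal, z_nadir)] and \
           [(v - lo) / (hi - lo) for v, lo, hi in zip(fx, z_ideal, z_nadir)]

print(normalize([2.0, 130.0]))  # -> [0.5, 0.5]
```

After normalization, a change of 0.1 means the same relative move for both objectives, even though their original ranges differ by two orders of magnitude.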
Calculating ideal and nadir
- Ideal objective vector z* ∈ R^k: best values of each objective when optimized independently, z_i* = min_{x ∈ S} f_i(x)
- Nadir objective vector z^nad ∈ R^k: worst values of each objective within the Pareto front, z_i^nad = max_{x ∈ Pareto front} f_i(x)
  - Easy to obtain for bi-objective problems; estimation required for k ≥ 3
- Pay-off table (x*,i = optimal solution for f_i; the diagonal holds the ideal values; the nadir value is estimated as the worst value in each column):

            f_1          f_2          ...   f_k
  x*,1      f_1(x*,1)    f_2(x*,1)    ...   f_k(x*,1)
  x*,2      f_1(x*,2)    f_2(x*,2)    ...   f_k(x*,2)
  ...
  x*,k      f_1(x*,k)    f_2(x*,k)    ...   f_k(x*,k)
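The pay-off table construction can be sketched on a hypothetical bi-objective problem (f1 = x², f2 = (x − 2)², S = [−5, 5] — these are not from the slides), with a dense grid standing in for a single-objective solver:

```python
import numpy as np

# Hypothetical test problem: minimize f1 = x^2 and f2 = (x - 2)^2 over [-5, 5]
objectives = [lambda x: x**2, lambda x: (x - 2.0)**2]

xs = np.linspace(-5.0, 5.0, 100001)                   # grid stands in for a solver
x_opt = [xs[np.argmin(fi(xs))] for fi in objectives]  # individual optima x*,i

# Pay-off table: row i holds all objective values at the optimum of f_i
payoff = np.array([[fj(xi) for fj in objectives] for xi in x_opt])
z_ideal = payoff.diagonal()   # ideal values sit on the diagonal
z_nadir = payoff.max(axis=0)  # column-wise worst value = nadir estimate

print(z_ideal, z_nadir)
```

For this problem the Pareto set is [0, 2], so the pay-off table gives the exact nadir (4, 4); for k ≥ 3 the column-wise maximum is only an estimate and can over- or underestimate the true nadir.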
No-preference methods
- No DM/preferences available, e.g. online optimization
- Idea: compute some PO solution, e.g. the PO solution closest to the ideal objective vector
- Benefits:
  - Easy to implement and fast to solve
  - No communication with the DM needed (?)
- Drawbacks:
  - No way to influence what kind of PO solution is obtained
  - Problem characteristics not taken into account
Method of global criterion
- Distance from the feasible objective region Z = f(S) to the ideal objective vector z* is minimized:
  min_{x ∈ S} ( Σ_{i=1}^k |f_i(x) - z_i*|^p )^{1/p}
- A single-objective optimization problem is solved
- Different metrics can be used, e.g. the L_p metric, where 1 ≤ p ≤ ∞
  - If p < ∞, the solution obtained is PO
  - If p = ∞, the solution obtained is weakly PO
[Figure: level sets of the L_1, L_2 and L_∞ metrics around the ideal objective vector z* in the (f_1, f_2) plane]
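A minimal sketch of the method on a hypothetical bi-objective problem (f1 = x², f2 = (x − 2)², S = [−5, 5], ideal z* = (0, 0) — an assumed example, with grid search standing in for a solver):

```python
import numpy as np

# Hypothetical test problem: f1 = x^2, f2 = (x - 2)^2, ideal z* = (0, 0)
xs = np.linspace(-5.0, 5.0, 100001)
F = np.stack([xs**2, (xs - 2.0)**2], axis=1)
z_ideal = np.array([0.0, 0.0])

results = {}
for p in (1, 2, np.inf):
    # L_p distance from each objective vector to the ideal point
    d = np.linalg.norm(F - z_ideal, ord=p, axis=1)
    results[p] = xs[np.argmin(d)]
    print(p, round(float(results[p]), 3))
```

Because this test problem is symmetric about x = 1, all three metrics return (approximately) the same solution; on less symmetric problems different p values generally pick different PO solutions.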
A priori methods
- Idea: 1) ask preferences from the DM, 2) compute a PO solution accordingly
- Only such PO solutions are produced that are of interest to the DM
- Benefits:
  - Computed PO solutions are based on the preferences of the DM (no unnecessary solutions)
- Drawbacks:
  - May be difficult for the DM to express preferences before seeing any PO solutions
A posteriori methods
- Idea: 1) compute different PO solutions approximating the Pareto front (or part of it), 2) ask the DM to select the most preferred one
- E.g. evolutionary multiobjective optimization methods
- Benefits:
  - Well suited for 2 objectives: PO solutions are easy to visualize
  - Understanding of the whole Pareto front
- Drawbacks:
  - Approximating the Pareto front is often time consuming
  - The DM has to choose among a large number of solutions
  - Visualization is difficult for a high number of objectives
Weighting method

  min_{x ∈ S} Σ_{i=1}^k w_i f_i(x),  where Σ_{i=1}^k w_i = 1 and w_i ≥ 0, i = 1, ..., k

- The weighted sum of the objectives is optimized
- Different PO solutions can be obtained by changing the weights w_i
- Either an a priori or an a posteriori method
- One of the most well-known methods: Gass & Saaty (Naval Research Logistics, 1955), Zadeh (IEEE Transactions on Automatic Control, 1963)
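The weighting method can be sketched on the same kind of hypothetical bi-objective problem as before (f1 = x², f2 = (x − 2)², S = [−5, 5] — an assumed example, grid search standing in for a solver):

```python
import numpy as np

# Hypothetical test problem: minimize f1 = x^2 and f2 = (x - 2)^2 over [-5, 5]
xs = np.linspace(-5.0, 5.0, 100001)
F = np.stack([xs**2, (xs - 2.0)**2], axis=1)

def weighting_method(w):
    # minimize the weighted sum w1*f1(x) + w2*f2(x) over the grid
    return xs[np.argmin(F @ np.asarray(w))]

for w in ([0.5, 0.5], [0.9, 0.1], [0.1, 0.9]):
    print(w, round(float(weighting_method(w)), 3))
```

Sweeping the weights traces out the PO solutions x = 2*w2/(w1 + w2) of this convex problem; on a non-convex problem the sweep would skip over the non-convex part of the front, as the next slide shows.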
Weighting method
- Benefits:
  - A solution obtained with positive weights is PO
  - Easy to solve (simple objective function, no additional constraints)
- Drawbacks:
  - Can't find all PO solutions: solutions in non-convex parts of the Pareto front are unreachable (all PO solutions can be found only for convex problems)
  - The PO solution obtained does not necessarily reflect the preferences
Example of proving Pareto optimality
- Show that the solution of
  min Σ_{i=1}^k w_i f_i(x)  s.t. x ∈ S
  is Pareto optimal when w_i > 0 for all i = 1, ..., k and Σ_{i=1}^k w_i = 1
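One way to approach this exercise is the standard contradiction argument; the following sketch is not taken verbatim from the course material:

```latex
Let $x^*$ solve $\min_{x \in S} \sum_{i=1}^k w_i f_i(x)$ with $w_i > 0$ for all $i$.
Suppose $x^*$ is not Pareto optimal. Then there exists $x \in S$ with
$f_i(x) \le f_i(x^*)$ for all $i$ and $f_j(x) < f_j(x^*)$ for some $j$.
Since every $w_i > 0$, multiplying these inequalities by the weights and summing gives
\[
  \sum_{i=1}^k w_i f_i(x) < \sum_{i=1}^k w_i f_i(x^*),
\]
contradicting the optimality of $x^*$. Hence $x^*$ is Pareto optimal.
```

Note that strict positivity of the weights is essential: with some w_i = 0, the argument only yields weak Pareto optimality.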
Example
- Where to go for a vacation (adapted from Emeritus Prof. Pekka Korhonen); higher values are better:

  Place    Price  Hiking  Fishing  Surfing | Weighted score
  A        1      10      10       10      | 6.4
  B        5      5       5        5       | 5.0
  C        10     1       1        1       | 4.6
  weight   0.4    0.2     0.2      0.2     |

- The place with the best value for the most important objective function (C, for price) has the worst total score!
- The compromise B can't be optimal for any weights:
  - weight 0.5 for price and 0.167 for the others: A and C both get the best score
  - weight 0.6 for price and 0.133 for the others: C gets the best score
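The weighted scores in the table can be reproduced directly (place values taken from the table above; the fractional weights below are exact versions of the slide's rounded 0.167 and 0.133):

```python
# Vacation example: value of each place for Price, Hiking, Fishing, Surfing
places = {"A": [1, 10, 10, 10], "B": [5, 5, 5, 5], "C": [10, 1, 1, 1]}

def score(values, weights):
    # total score = weighted sum of the objective values (higher is better)
    return sum(w * v for w, v in zip(weights, values))

for w in ([0.4, 0.2, 0.2, 0.2],              # slide's weights: A best, C worst
          [0.5, 1 / 6, 1 / 6, 1 / 6],        # price 0.5: A and C tie for best
          [0.6, 0.4 / 3, 0.4 / 3, 0.4 / 3]): # price 0.6: C alone is best
    print({p: round(score(v, w), 2) for p, v in places.items()})
```

B always scores exactly 5 (all its values are 5 and the weights sum to 1), while the scores of A and C average 5.5, so one of them always beats B — this is why the compromise can never win under any weighting.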
Convex / non-convex Pareto front
- The weights determine the slope of the level sets of the weighted-sum objective function
- The slope changes by changing the weights
- A non-convex part of the Pareto front can't be reached with any weights!
[Figure: level lines for e.g. w_1 = 0.5, w_2 = 0.5 and w_1 = 1/3, w_2 = 2/3 on a convex and a non-convex Pareto front in the (f_1, f_2) plane, both objectives minimized]
ε-constraint method

  min_{x ∈ S} f_j(x)  s.t. f_i(x) ≤ ε_i for all i ≠ j

- Choose one objective to be optimized; give the others upper bounds and treat them as constraints
- Different PO solutions can be obtained by changing the bounds and/or the objective to be optimized
- Either an a priori or an a posteriori method
- Haimes, Lasdon & Wismer (IEEE Transactions on Systems, Man and Cybernetics, 1971)
Example (figure from Miettinen: Nonlinear optimization, 2007, in Finnish)
- PO solutions z^2, z^3, z^4 are obtained for different upper bounds ε_2, ε_3, ε_4 on f_2; for the bound ε_1 there are no feasible solutions
[Figure: the obtained solutions on a Pareto front in the (f_1, f_2) plane]
ε-constraint method
- Benefits:
  - Every PO solution can be found*
  - The solution is weakly PO (a unique solution is PO)
  - Easy to implement
- Drawbacks:
  - How to choose the upper bounds? They do not necessarily give a feasible problem, and there is no support for adjusting them
  - How to choose the objective to be optimized?
  - Scalability for a high number of objectives
* A solution x* ∈ S is PO if and only if it solves the ε-constraint problem for every j = 1, ..., k with ε_i = f_i(x*), i ≠ j
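A minimal sketch of the ε-constraint method on a hypothetical bi-objective problem (f1 = x², f2 = (x − 2)², S = [−5, 5], both minimized — an assumed example, with a dense grid standing in for a constrained solver):

```python
import numpy as np

# Hypothetical test problem: minimize f1 = x^2 and f2 = (x - 2)^2 over [-5, 5]
xs = np.linspace(-5.0, 5.0, 100001)
f1, f2 = xs**2, (xs - 2.0)**2

def eps_constraint(eps):
    # minimize f1 subject to f2(x) <= eps (f2 treated as a constraint)
    feasible = f2 <= eps
    return xs[feasible][np.argmin(f1[feasible])]

for eps in (4.0, 1.0, 0.25):
    print(eps, round(float(eps_constraint(eps)), 3))
```

Tightening the bound on f2 moves the solution along the Pareto set [0, 2] toward the optimum of f2, illustrating how the bounds parameterize the front; an eps below 0 would make the problem infeasible, which is the drawback noted above.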
Reference point method
- Based on a reference point z̄ given by the DM
- Different PO solutions are obtained by changing the reference point
- Satisficing vs. optimal decision making
- Wierzbicki (In: Multiple Criteria Decision Making, Theory and Applications, 1980)
Reference point method

  min_{x ∈ S} [ max_{i=1,...,k} w_i (f_i(x) - z̄_i) + ρ Σ_{i=1}^k w_i (f_i(x) - z̄_i) ]

- Reference point z̄ = (z̄_1, ..., z̄_k)^T
  - Consists of aspiration levels for the objectives
  - Can be feasible or not
- Weights w = (w_1, ..., w_k)^T, w_i ≥ 0
  - Used for normalizing the objectives, are not coming from the DM
- The augmentation term guarantees Pareto optimality
  - Small ρ > 0, typically of the order 10^-3
  - The solution is only weakly PO without the augmentation term
Effect of the weights
- Weight choices such as w_i = 1/(z_i^nad - z_i*) can be used for normalization
- Different weights can produce different PO solutions for the same reference point
[Figure: the image Z = f(S) of the feasible region with the ideal objective vector z*, the nadir objective vector z^nad, and two reference points z̄^1, z̄^2, both objectives minimized]
Reference point method
- Benefits:
  - Produces only PO solutions (when the augmentation term is included)
  - Every PO solution can be obtained
  - A reference point is an intuitive way to express preferences
- Drawbacks:
  - An aspiration level needs to be given for all objectives
  - No support for specifying the levels
More information
- V. Chankong & Y. Haimes, Multiobjective Decision Making: Theory and Methodology, 1983
- Y. Sawaragi, H. Nakayama & T. Tanino, Theory of Multiobjective Optimization, 1985
- R.E. Steuer, Multiple Criteria Optimization: Theory, Computation and Applications, 1986
- K. Miettinen, Nonlinear Multiobjective Optimization, 1999
- M. Ehrgott, Multicriteria Optimization, 2005
- K. Miettinen, Introduction to Multiobjective Optimization: Noninteractive Approaches, In: J. Branke, K. Deb, K. Miettinen & R. Slowinski (eds.): Multiobjective Optimization: Interactive and Evolutionary Approaches, 2008
Material for discussion on March 30th
- I. Das & J. Dennis, Normal-Boundary Intersection: A New Method for Generating the Pareto Surface in Nonlinear Multicriteria Optimization Problems, SIAM Journal on Optimization, 1998, 8, 631-657
- M. Tamiz, D. Jones & C. Romero, Goal programming for decision making: An overview of the current state-of-the-art, European Journal of Operational Research, 1998, 111, 569-581