Multiobjective optimization methods


Multiobjective optimization methods
Jussi Hakanen, Post-doctoral researcher, jussi.hakanen@jyu.fi
TIES483 Nonlinear optimization, spring 2014

No-preference methods
- The DM is not available (e.g. online optimization), or no preference information is available
- Some PO solution is computed; the method does not take into account which problem is being solved
- Fast methods, suitable when one PO solution is enough and no communication with the DM is possible

Method of global criterion

    \min_{x \in S} \left( \sum_{i=1}^k |f_i(x) - z_i^\star|^p \right)^{1/p}

- The distance to the ideal objective vector z^\star is minimized
- Different metrics can be used, e.g. the L_p metric with 1 \le p \le \infty
- A single-objective optimization problem is solved

Method of global criterion
[figure: solutions obtained with the L_1, L_2 and L_\infty metrics, measured from the ideal objective vector]

Method of global criterion
- When p = \infty (the maximum metric), the problem is a nonsmooth optimization problem
- If p < \infty, the solution obtained is PO
- If p = \infty, the solution obtained is weakly PO
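The scalarization above can be sketched numerically. A minimal sketch in Python on a made-up bi-objective toy problem (f_1(x) = x^2, f_2(x) = (x - 2)^2, S = [0, 2], ideal point z^\star = (0, 0); our own choice, not from the slides), using a plain grid search in place of a real single-objective solver:

```python
# Toy bi-objective problem (illustrative assumption, not from the lecture):
#   f1(x) = x^2, f2(x) = (x - 2)^2, feasible set S = [0, 2], ideal z* = (0, 0).

def f(x):
    return (x**2, (x - 2.0) ** 2)

Z_STAR = (0.0, 0.0)

def global_criterion(p=2, n=2001):
    """Minimize the L_p distance to the ideal point over a grid on S = [0, 2]."""
    best_x, best_val = None, float("inf")
    for i in range(n):
        x = 2.0 * i / (n - 1)
        val = sum(abs(fi - zi) ** p for fi, zi in zip(f(x), Z_STAR)) ** (1.0 / p)
        if val < best_val:
            best_x, best_val = x, val
    return best_x

# By symmetry of the toy problem, the L_2 solution is x = 1, i.e. f = (1, 1).
x_opt = global_criterion(p=2)
```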

A posteriori methods
Idea: 1) compute different PO solutions, 2) the DM selects the most preferred one
- The PO set (or part of it) is approximated
Benefits
- Well suited for problems with 2 objectives, since the PO solutions can easily be visualized for the DM
- Gives an understanding of the whole PO set

A posteriori methods
Drawbacks
- Approximating the PO set is often time consuming
- The DM has to choose the most preferred solution from a large number of solutions
- Visualizing the solutions is difficult when the number of objectives is high

Weighting method

    \min_{x \in S} \sum_{i=1}^k w_i f_i(x),  where  \sum_{i=1}^k w_i = 1,  w_i \ge 0,  i = 1, \dots, k

- A weighted sum of the objectives is optimized; different PO solutions can be obtained by changing the weights w_i
- One of the most well-known methods; Gass & Saaty (1955), Zadeh (1963)

Weighting method
Benefits
- A solution obtained with positive weights is PO
- Easy to solve (simple objective function, no additional constraints)
Drawbacks
- Cannot find solutions from non-convex parts of the PO set
- The PO solution obtained does not necessarily reflect the preferences
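A minimal sketch of the weighting method on a made-up convex toy problem (f_1(x) = x^2, f_2(x) = (x - 2)^2, S = [0, 2]; our own example, not from the slides). Changing the weights traces out different PO solutions:

```python
def weighted_sum(w1, w2, n=2001):
    """Minimize w1*f1 + w2*f2 over a grid on S = [0, 2] for the toy problem
    f1(x) = x^2, f2(x) = (x - 2)^2 (illustrative assumption)."""
    best_x, best_val = None, float("inf")
    for i in range(n):
        x = 2.0 * i / (n - 1)
        val = w1 * x**2 + w2 * (x - 2.0) ** 2
        if val < best_val:
            best_x, best_val = x, val
    return best_x

# For this convex problem the minimizer is x = 2*w2/(w1 + w2):
# equal weights give x = 1, and w = (0.25, 0.75) gives x = 1.5.
```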

Convex / non-convex PO set
- The weights determine the slope of the level sets of the weighted-sum objective; the slope changes by changing the weights
- A non-convex part of the PO set cannot be reached with any weights!
[figure: level sets for e.g. w = (0.5, 0.5) and w = (1/3, 2/3) on a convex and on a non-convex PO set, both objectives to be minimized]

Weighting method
- Result 1: The solution given by the weighting method is weakly PO
- Result 2: The solution given by the weighting method is PO if all the weights are strictly positive
- Result 3: Let x^* be a PO solution of a convex multiobjective optimization problem. Then there exists a weighting vector w = (w_1, \dots, w_k)^T such that x^* is the solution obtained with the weighting method.

Example: where to go for a vacation (adopted from Prof. Pekka Korhonen)

    Place    Price  Hiking  Fishing  Surfing | Weighted sum (max)
    A          1      10      10       10    |   6.4
    B          5       5       5        5    |   5.0
    C         10       1       1        1    |   4.6
    weight    0.4     0.2     0.2      0.2

The place with the best value for the objective function is the worst with respect to the most important objective!
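The weighted sums in the table can be checked directly; a small sketch reproducing the scores (6.4, 5.0, 4.6):

```python
# Vacation example: weights (0.4, 0.2, 0.2, 0.2) on (Price, Hiking, Fishing, Surfing).
options = {
    "A": [1, 10, 10, 10],
    "B": [5, 5, 5, 5],
    "C": [10, 1, 1, 1],
}
weights = [0.4, 0.2, 0.2, 0.2]

scores = {name: sum(w * v for w, v in zip(weights, vals))
          for name, vals in options.items()}

# "A" wins the weighted sum despite having the worst Price,
# the criterion with the largest weight.
best = max(scores, key=scores.get)
```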

ε-constraint method

    \min_{x \in S} f_j(x)  s.t.  f_i(x) \le \varepsilon_i,  i \ne j

- Choose one of the objectives to be optimized, give the other objectives upper bounds and consider them as constraints
- Different PO solutions can be obtained by changing the bounds and/or the objective to be optimized
- Haimes, Lasdon & Wismer (1971)

ε-constraint method (from Miettinen: Nonlinear optimization, 2007, in Finnish)
[figure: PO solutions for different upper bounds ε_1, …, ε_4 on f_2; ε_1 gives no solutions, ε_2, ε_3, ε_4 give z^2, z^3, z^4]

ε-constraint method
Benefits
- Every PO solution can be found (also for non-convex problems)
- Easy to implement
Drawbacks
- How to choose the upper bounds? Poorly chosen bounds can make the problem infeasible
- How to choose the objective to be optimized?

ε-constraint method
- Result 1: A solution obtained with the ε-constraint method is weakly PO
- Result 2: A unique solution obtained with the ε-constraint method is PO
- Result 3: A solution x^* \in S is PO if and only if it is the solution given by the ε-constraint method for every j = 1, \dots, k, where \varepsilon_i = f_i(x^*), i \ne j (every PO solution can be found)
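A minimal sketch of the ε-constraint method on a made-up toy problem (f_1(x) = x^2, f_2(x) = (x - 2)^2, S = [0, 2]; our own choice, not from the slides): minimize f_1 with f_2 bounded by ε, via grid search:

```python
def eps_constraint(eps, n=4001):
    """Minimize f1(x) = x^2 subject to f2(x) = (x - 2)^2 <= eps on S = [0, 2]
    (toy problem, illustrative assumption)."""
    best_x, best_f1 = None, float("inf")
    for i in range(n):
        x = 2.0 * i / (n - 1)
        f1, f2 = x**2, (x - 2.0) ** 2
        if f2 <= eps and f1 < best_f1:
            best_x, best_f1 = x, f1
    return best_x

# The constraint forces x >= 2 - sqrt(eps), so the solution is x = 2 - sqrt(eps):
# eps = 1.0 gives x = 1.0, eps = 0.25 gives x = 1.5.
```

Changing ε sweeps along the PO set, which is exactly how the method enumerates different PO solutions.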

ε-constraint method: PO vs. weakly PO
[figure: for the bound ε_1 the solution is only weakly PO; for ε_2 = f_2(x^*) the solution is PO]

Equally spaced PO solutions?
- The weighting method: change the weights systematically
- In the figure, the PO solutions get closer to each other towards the minimum of f_2
- How to obtain an equally spaced set?
[figure: unevenly spaced PO solutions from the weighting method on a bi-objective PO set]

Normal Boundary Intersection (NBI)
- Find the extreme solutions of the PO set
- Construct a plane passing through the extreme solutions and fix equally spaced points in the plane
- Search orthogonally to the plane
[figure: the NBI construction on a bi-objective PO set]

Normal Boundary Intersection (NBI)
Idea: produce an equally spaced approximation of the PO set. Solutions are produced by solving

    \max_{x \in S, \lambda} \lambda  s.t.  Pw - \lambda P e = f(x) - z^\star,

where P is the payoff matrix, w is the vector of k weights (\sum_{i=1}^k w_i = 1, w_i \ge 0) and e = (1, \dots, 1)^T.
Das & Dennis, SIAM Journal on Optimization, 8, 1998

Normal Boundary Intersection (NBI)
Properties
- Equally spaced solutions approximating the PO set
- Computation time increases significantly when the number of objectives increases
- Can produce non-PO solutions for non-convex problems
[figure: NBI solutions on a bi-objective PO set]
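For two objectives the NBI subproblem can be made concrete. A minimal sketch on a made-up toy problem (f_1(x) = x^2, f_2(x) = (x - 2)^2, S = [0, 2]; our own choice): the extreme solutions are (0, 4) and (4, 0), so z^\star = (0, 0) and the payoff matrix is P = [[0, 4], [4, 0]]. For w = (b, 1 - b) the point on the line joining the extremes is Pw = (4(1 - b), 4b), and searching along the quasi-normal direction -(1, 1) amounts to finding the x where f_1(x) - 4(1 - b) = f_2(x) - 4b, here by bisection:

```python
# NBI sketch for the toy problem f1 = x^2, f2 = (x - 2)^2 on S = [0, 2]
# (illustrative assumption, not from the lecture).

def nbi_point(b, tol=1e-10):
    """Solve the bi-objective NBI subproblem for the weight w = (b, 1 - b)."""
    p1, p2 = 4.0 * (1.0 - b), 4.0 * b  # the point Pw on the line of extremes
    def g(x):
        # difference of the two components along the quasi-normal; linear in x here
        return (x**2 - p1) - ((x - 2.0) ** 2 - p2)
    lo, hi = 0.0, 2.0  # g(lo) < 0 < g(hi), so bisection applies
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Equally spaced weights give equally spaced solutions x = 2*(1 - b):
points = [nbi_point(b) for b in (0.25, 0.5, 0.75)]
```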

Equally spaced PO solutions?
[figure: the weighting method vs. Normal Boundary Intersection on the same bi-objective PO set]
NBI gives more equally spaced solutions

A priori methods
Idea: 1) first ask for the preferences of the DM, 2) optimize using those preferences
- Only such PO solutions are produced that are of interest to the DM
Benefits
- The computed PO solutions are based on the preferences of the DM (no unnecessary solutions)
Drawbacks
- It may be difficult for the DM to express preferences before (s)he has seen any solutions

Lexicographic ordering
- Order the objectives according to their importance
- Optimize first w.r.t. the most important objective, then optimize the second most important one within the set of optimal solutions of the first, etc.
- Requires the importance order from the DM before optimization
- The solution obtained is PO

Lexicographic ordering (from Miettinen: Nonlinear optimization, 2007, in Finnish)
- 2 objectives, the 1st more important
- Optimizing w.r.t. the 1st objective gives z^1 and z^2
- Optimizing w.r.t. the 2nd objective among these chooses the better one, z^1
- In practice, some tolerance is used for the optimal values
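The two-stage procedure, with a tolerance on the first-stage optimal value, can be sketched on a made-up problem where the first objective has a whole interval of optima, so the second objective breaks the tie (our own example, not from the slides):

```python
# Toy problem on S = [0, 2] (illustrative assumption):
#   f1(x) = max(0, |x - 1| - 0.5)   -- zero on the whole interval [0.5, 1.5]
#   f2(x) = (x - 2)^2               -- prefers larger x
def f1(x): return max(0.0, abs(x - 1.0) - 0.5)
def f2(x): return (x - 2.0) ** 2

def lexicographic(n=4001, tol=1e-9):
    """Optimize f1 first, then f2 within the (tolerance-relaxed) optima of f1."""
    xs = [2.0 * i / (n - 1) for i in range(n)]
    best1 = min(f1(x) for x in xs)
    candidates = [x for x in xs if f1(x) <= best1 + tol]  # near-optimal for f1
    return min(candidates, key=f2)

# f1 is optimal on [0.5, 1.5]; within that set f2 picks x = 1.5.
```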

Interactive methods
Idea: the DM is utilized actively during the solution process. The solution process is iterative:
1. Initialization: compute some PO solution(s)
2. Show the PO solution(s) to the DM
3. Is the DM satisfied? If not, ask the DM to give new preferences. Otherwise stop: a most preferred solution has been found.
4. Compute new PO solution(s) taking the new preferences into account. Go to step 2.
The solution process ends when the DM is satisfied with the PO solution obtained.

Interactive methods
Benefits
- Only such solutions are computed that are of interest to the DM
- The DM is able to steer the solution process with his/her preferences
- The DM can learn about the interdependences between the conflicting objectives through the solutions obtained; this helps in adjusting the preferences
Drawbacks
- The DM has to invest a lot of time in the solution process
- If computing PO solutions takes time, the DM does not necessarily remember what happened in the early phases

Reference point method
- Interactive method based on the use of a reference point
- A reference point is an intuitive way to express preferences
- The DM gives a reference point that is used in scalarizing the problem; different PO solutions are obtained by changing the reference point
- Wierzbicki, The Use of Reference Objectives in Multiobjective Optimization, In: Multiple Criteria Decision Making, Theory and Applications, Springer, 1980

Reference point method

    \min_{x \in S} \max_{i=1,\dots,k} w_i (f_i(x) - \bar{z}_i)

- The reference point \bar{z} consists of aspiration levels for the objectives; it can lie in the image of the feasible region (Z = f(S)) or outside it
- The weights affect the solution obtained, but they do not come from the DM

Effect of the weights
[figure: solutions obtained with different weights, e.g. w_i = 1 / (z_i^{nad} - z_i^\star), for a reference point lying between z^\star and z^{nad}]

Reference point method
Results:
- The reference point method produces weakly PO solutions
- Every weakly PO solution can be found
- The scalarization of the reference point method can be changed so that the solution obtained is PO

Reference point method
- The scalarized problem is not differentiable due to the min-max form
- It can be reformulated into a differentiable form (if the objectives are differentiable) with an additional variable and extra constraints:

    \min \delta  s.t.  w_i (f_i(x) - \bar{z}_i) \le \delta,  i = 1, \dots, k,  x \in S,  \delta \in \mathbb{R}
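A minimal sketch of the achievement scalarizing function on a made-up toy problem (f_1(x) = x^2, f_2(x) = (x - 2)^2, S = [0, 2], weights w = (1, 1); the reference points below are arbitrary choices for illustration, not from the slides):

```python
def ach(x, ref, w=(1.0, 1.0)):
    """Achievement scalarizing function max_i w_i * (f_i(x) - refbar_i)
    for the toy problem f1 = x^2, f2 = (x - 2)^2 (illustrative assumption)."""
    fs = (x**2, (x - 2.0) ** 2)
    return max(wi * (fi - ri) for wi, fi, ri in zip(w, fs, ref))

def reference_point(ref, n=8001):
    """Minimize the achievement function over a grid on S = [0, 2]."""
    xs = [2.0 * i / (n - 1) for i in range(n)]
    return min(xs, key=lambda x: ach(x, ref))

# The minimum sits where the two terms of the max cross:
# ref = (0.2, 1.0) gives x = 0.8, the symmetric ref = (0.5, 0.5) gives x = 1.0.
```

Moving the reference point moves the solution along the PO set, which is how the DM steers the interactive process.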

Satisficing Trade-Off Method (STOM)
- Interactive method based on classification of the objective functions
- Very similar to the idea of the reference point method
- Nakayama & Sawaragi, Satisficing Trade-Off Method for Multiobjective Programming, In: Interactive Decision Analysis, Springer-Verlag, 1984

Satisficing Trade-Off Method (STOM)
The DM classifies the objectives into 3 classes at the current PO solution:
- f_i whose values should be improved
- f_i whose value is satisfactory at the moment
- f_i whose value is allowed to get worse
A reference point is formed based on the classification:
- The DM gives aspiration levels for the functions in the first class
- Aspiration levels for the functions in the second class are their current values
- Aspiration levels for the functions in the third class can be computed by using automatic trade-off (help for the DM)

Satisficing Trade-Off Method (STOM)

    \min_{x \in S} \max_{i=1,\dots,k} \frac{f_i(x) - z_i^\star}{\bar{z}_i - z_i^\star} + \rho \sum_{i=1}^k \frac{f_i(x)}{\bar{z}_i - z_i^\star}

- The aspiration levels must be greater than the components of the ideal objective vector
- A solution of the scalarized problem in STOM is PO (if the augmentation term is used)

Satisficing Trade-Off Method (STOM)
[figure: STOM solution for a reference point \bar{z} with weights w_i = 1 / (\bar{z}_i - z_i^\star), shown between z^\star and z^{nad}]

NIMBUS method
- Interactive method based on classification of the objectives
- Classification: consider the current PO solution and assign every objective to one of the classes
- Miettinen, Nonlinear Multiobjective Optimization, Kluwer Academic Publishers, 1999
- Miettinen & Mäkelä, Synchronous Approach in Interactive Multiobjective Optimization, European Journal of Operational Research, 170, 2006

NIMBUS method
The 5 classes consist of objectives f_i whose values
- should be improved as much as possible (i ∈ I^imp)
- should be improved until the aspiration level \bar{z}_i (i ∈ I^asp)
- are satisfactory at the moment (i ∈ I^sat)
- are allowed to get worse until the bound ε_i (i ∈ I^bound)
- can change freely at the moment (i ∈ I^free)

NIMBUS method
- A classification is feasible if some objective is allowed to improve (I^imp ∪ I^asp ≠ ∅) and some objective is allowed to get worse (I^bound ∪ I^free ≠ ∅)
- A scalarized problem is formed based on the classification (x^c is the current PO solution):

    \min_{x \in S} \max_{i \in I^{imp}, j \in I^{asp}} \left[ \frac{f_i(x) - z_i^\star}{z_i^{nad} - z_i^\star}, \frac{f_j(x) - \bar{z}_j}{z_j^{nad} - z_j^\star} \right] + \rho \sum_{i=1}^k \frac{f_i(x)}{z_i^{nad} - z_i^\star}
    s.t.  f_i(x) \le f_i(x^c)  for i \in I^{imp} \cup I^{asp} \cup I^{sat},
          f_i(x) \le \varepsilon_i  for i \in I^{bound}

NIMBUS method
Results:
- The solution of the scalarized problem in the NIMBUS method is weakly PO without the augmentation term; it is PO if the augmentation term is used
- In the synchronous NIMBUS method, 4 different scalarizations are used: different solutions can be obtained for the same preference information
- There is no single way to scalarize the problem; the DM gets to choose from the solutions obtained

WWW-NIMBUS: implementation of the NIMBUS method operating on the Internet
- The 1st multiobjective optimization software operating on the Internet (2000)
- All the computations are done on servers at JYU; only a browser is needed
- Always the latest version available
- Graphical user interface based on forms
- Freely available for academic purposes: http://nimbus.it.jyu.fi/

http://www.mcdmsociety.org/
- Newsletter
- 23rd International Conference on Multiple Criteria Decision Making, 3-7 August 2015, Hamburg (Germany)
- Membership does not cost you anything!
- Dagstuhl Seminar on Learning in Multiobjective Optimization, January 23-27, 2012