Multiple Criteria Optimization: Some Introductory Topics


Ralph E. Steuer
Department of Banking & Finance
University of Georgia
Athens, Georgia 30602-6253 USA
Finland 2010

rsteuer@uga.edu

max{ f_1(x) = z_1 }
...
max{ f_k(x) = z_k }
s.t. x ∈ S

(figure: Tchebycheff contour and probing direction over Z, the feasible region in criterion space)

Production planning:
min{ cost }
min{ fuel consumption }
min{ production in a given geographical area }

River basin management:
achieve{ BOD standards }
min{ nitrate standards }
min{ pollution removal costs }
achieve{ municipal water demands }
min{ groundwater pumping }

Oil refining:
min{ cost }
min{ imported crude }
min{ environmental pollution }
min{ deviations from demand slate }

Sausage blending:
min{ cost }
max{ protein }
min{ fat }
min{ deviations from moisture target }

Portfolio selection in finance:
min{ variance }
max{ expected return }
max{ dividends }
max{ liquidity }
max{ social responsibility }

Discrete Alternative Methods vs. Multiple Criteria Optimization:

max{ f_1(x) = z_1 }
...
max{ f_k(x) = z_k }
s.t. x ∈ S

1. Decision Space vs. Criterion Space
2. Contenders for Optimality
3. Criterion and Semi-Positive Polar Cones
4. Graphical Detection of the Efficient Set
5. Graphical Detection of the Nondominated Set
6. Nondominated Set Detection with Min and Max Objectives
7. Image/Inverse Image Relationship and Collapsing
8. Unsupported Nondominated Criterion Vectors

In the general case, we write

max{ f_1(x) = z_1 }
...
max{ f_k(x) = z_k }
s.t. x ∈ S

But if all objectives and constraints are linear, we write

max{ c^1 x = z_1 }
...
max{ c^k x = z_k }
s.t. x ∈ S

in which case we have a multiple objective linear program (MOLP).

1. Decision Space vs. Criterion Space

max{ x_1 − x_2 = z_1 }
max{ −x_1 + 2x_2 = z_2 }
s.t. x ∈ S

(figure: S in decision space with x^2 = (1, 2), x^3 = (3, 3), x^4 = (4, 1) and gradients c^1, c^2; its image Z in criterion space with z^2 = (−1, 3), z^3 = (0, 3), z^4 = (3, −2))

Terms used interchangeably: criterion, objective, outcome, attribute, evaluation.
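As a quick illustration of the decision-space-to-criterion-space mapping (a sketch, not part of the slides; the objective coefficients are inferred from the plotted points, so treat C as an assumption):

```python
# Map decision-space points x into criterion space via z = Cx,
# using the example's two linear objectives z1 = x1 - x2, z2 = -x1 + 2*x2.
C = [(1, -1),   # gradient c^1 of the first objective
     (-1, 2)]   # gradient c^2 of the second objective

def image(x):
    """Return the criterion vector z = Cx of a decision point x."""
    return tuple(sum(c * xi for c, xi in zip(row, x)) for row in C)

for name, x in [("x2", (1, 2)), ("x3", (3, 3)), ("x4", (4, 1))]:
    print(name, "->", image(x))   # reproduces z2 = (-1, 3), z3 = (0, 3), z4 = (3, -2)
```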

Morphing of S into Z as we change the coordinate system. (figure: S in decision space)


2. Contenders for Optimality

Points (criterion vectors) in criterion space are either nondominated or dominated. Their inverse images in decision space are either efficient or inefficient. We are interested in nondominated criterion vectors and their efficient points because only they are contenders for optimality.

3. Criterion and Nonnegative Polar Cones

Criterion cone -- the convex cone generated by the gradients of the objective functions. (figure: S with gradients c^1, c^2)

The larger the criterion cone (i.e., the more conflict there is in the problem), the bigger the efficient set. (figure)

Nonnegative polar of the criterion cone -- the set of vectors that make an angle of 90° or less with all objective function gradients. In the case of an MOLP, it is given by

{ y ∈ R^n : c^i y ≥ 0, i = 1, ..., k }

Translated to a point, it contains all points that dominate its vertex. (figure)
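The membership test for the nonnegative polar cone is a handful of dot products. A minimal sketch (the two gradients below are illustrative assumptions):

```python
# y belongs to the nonnegative polar cone iff c^i . y >= 0 for every
# objective gradient c^i, i.e., y makes an angle of 90 degrees or less
# with all of them.
def in_polar_cone(y, gradients):
    dot = lambda a, b: sum(ai * bi for ai, bi in zip(a, b))
    return all(dot(c, y) >= 0 for c in gradients)

grads = [(1, -1), (-1, 2)]          # assumed objective gradients
print(in_polar_cone((1, 1), grads))   # both dot products are >= 0
print(in_polar_cone((1, 0), grads))   # fails against (-1, 2)
```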

4. Graphical Detection of the Efficient Set -- Example 1

(figure) Observe the criterion cone.

(figure) Form the nonnegative polar cone.

(figure) Move it around.

(figure) Efficient set E = γ[x^1, x^2] ∪ γ[x^2, x^3]; set of efficient extreme points E_x = {x^1, x^2, x^3}.


(figure) A point is efficient only when the translated cone intersects S nowhere other than at its vertex.

Graphical Detection of the Efficient Set -- Example 2

(figure) Observe the criterion cone.

(figure) Form the nonnegative polar cone.


(figure) E = [x^1, x^2) ∪ {x^3} ∪ (x^4, x^5]. (Observe that x^2 and x^4 are not efficient.)

Graphical Detection of the Efficient Set -- Example 3

(figure: points x^1, ..., x^6) Note the small size of the criterion cone and that S consists of only 6 points.

(figure) A small criterion cone results in a large nonnegative polar cone. (This makes it harder for points to be efficient.)

(figure) Moving the nonnegative polar cone around: E = {x^6}.

5. Graphical Detection of the Nondominated Set

To determine whether a criterion vector in Z is nondominated, translate the nonnegative orthant of R^k to that point. (figure: both objectives max) Move the nonnegative orthant around.

(figure) Try to identify the entire nondominated set.

(figure) Nondominated set N = [z^1, z^2] ∪ γ[z^2, z^3].

(figure: second example, both objectives max) Now, move the nonnegative orthant around.


(figure: points z^1, ..., z^6) N = [z^1, z^2) ∪ [z^3, z^4] ∪ (z^5, z^6].

6. Nondominated Set Detection with Min and Max Objectives

(figure: both objectives max -- translate the nonnegative orthant)


(figure: objectives min z_1, max z_2 -- the translated orthant is reflected accordingly)


(figure: objectives min z_1, min z_2)


(figure: objectives max z_1, min z_2)


Let z̄ ∈ Z. Then z̄ is nondominated if and only if there does not exist another z ∈ Z such that z_i ≥ z̄_i for all i and z_j > z̄_j for at least one j. Otherwise, z̄ is dominated.

Let x̄ ∈ S. Then x̄ is efficient if and only if its criterion vector z̄ is nondominated. Otherwise, x̄ is inefficient.

In other words:
the image of an efficient point is a nondominated criterion vector;
the inverse image of a nondominated criterion vector is an efficient point.
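The definition above translates directly into a nondominance filter over a finite set of criterion vectors (a sketch with made-up data, all objectives max):

```python
def dominates(za, zb):
    """za dominates zb: za >= zb componentwise, strictly in at least one place."""
    return all(a >= b for a, b in zip(za, zb)) and any(a > b for a, b in zip(za, zb))

def nondominated(Z):
    """Keep exactly the vectors not dominated by any other vector in Z."""
    return [z for z in Z if not any(dominates(w, z) for w in Z if w != z)]

Z = [(3, 1), (1, 3), (2, 2), (2, 1), (0, 0)]
print(nondominated(Z))   # (2, 1) and (0, 0) are dominated and dropped
```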

7. Image/Inverse Image Relationship and Collapsing

max{ 3x_1 + x_2 − 2x_3 = z_1 }
max{ −x_1 + x_2 + x_3 = z_2 }
s.t. x ∈ S = unit cube

(figure: S with vertices x^1, ..., x^8, e.g. x^7 = (1, 1, 0); its image Z with z^4 = (−2, 1), z^7 = (4, 0), z^8 = (2, 1))

The dimensionality of S is n, but the dimensionality of Z is k.

8. Unsupported Nondominated Criterion Vectors

A nondominated criterion vector is either supported or unsupported. It is unsupported if it is dominated by a convex combination of other feasible criterion vectors. Unsupported nondominated criterion vectors are typically hard to compute.

x 2 z 2 c 1 x 1 2 15 c 2 max{ x + 9 x = z} 1 2 1 max{ 3x 8 x = z } st.. x S 1 2 2 Finland 2010 58


(figure: efficient points x^1, ..., x^5 and their images z^1, ..., z^5) N = {z^1, z^2, z^3, z^4, z^5}; N_supp ⊆ N; N_unsupp = N \ N_supp.

(figure in criterion space) N_supp = {z^1}; N_unsupp = {z^2}.

Multiple Criteria Optimization: An Introduction (Continued)
Ralph E. Steuer
Department of Banking & Finance
University of Georgia
Athens, Georgia 30602-6253 USA

Recall
9. Ideal Way?
10. Contours, Upper Level Sets and Quasiconcavity
11. More-Is-Always-Better-Than-Less vs. Quasiconcavity
12. ADBASE
13. Size of the Nondominated Set
14. Criterion Value Ranges over the Nondominated Set
15. Nadir Criterion Values
16. Payoff Tables
17. Filtering
18. Stamp/Coin Example
19. Weighted-Sums Method
20. e-Constraint Method

Recall:

max{ x_1 − x_2 = z_1 }
max{ −x_1 + 2x_2 = z_2 }
s.t. x ∈ S

(figure: S with x^2 = (1, 2), x^3 = (3, 3), x^4 = (4, 1); Z with z^2 = (−1, 3), z^3 = (0, 3), z^4 = (3, −2))

9. Ideal Way?

Assess a decision maker's utility function U : R^k → R and solve

max{ U(z_1, ..., z_k) }
s.t. f_i(x) = z_i, i = 1, ..., k
     x ∈ S

Perhaps not such a good idea, for four reasons:
1. Difficulty in assessing U.
2. U is almost certainly nonlinear.
3. Generates only one solution.
4. Does not allow for learning.

10. Contours, Upper Level Sets, and Quasiconcavity

A utility function U is quasiconcave if all of its upper level sets are convex. (figure: contours of U at levels 2, 4, 6, 8)


Quasiconcave functions have at most one top. (figure)
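The upper-level-set definition can be spot-checked numerically: U is quasiconcave iff along every segment its value never falls below the smaller endpoint value. A small one-variable sketch with made-up functions (not the slides' U):

```python
# Check U(t*a + (1-t)*b) >= min(U(a), U(b)) on sampled segments.
def quasiconcave_on(U, points, steps=50):
    for a in points:
        for b in points:
            for s in range(steps + 1):
                t = s / steps
                if U(t * a + (1 - t) * b) < min(U(a), U(b)) - 1e-12:
                    return False
    return True

xs = [i / 10 for i in range(-30, 31)]
print(quasiconcave_on(lambda z: -(z * z), xs))   # concave, hence quasiconcave
print(quasiconcave_on(lambda z: z * z, xs))      # a "bowl" with two tops fails
```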

11. More-Is-Always-Better-Than-Less (i.e., Coordinate-Wise Increasing) vs. Quasiconcavity

(figure: contours of U over Z)

More-is-always-better-than-less does not imply that all local optima are global optima. (figure: z^1 is a local optimum, but z^2 is the global optimum.)

More-is-always-better-than-less does not imply quasiconcavity. (figure: z^3 and z^4 on a nonconvex upper level set of U)


Assuming that U is coordinate-wise increasing:
Nondominated set N -- the set of all potentially optimal criterion vectors.
Efficient set E -- the set of all potentially optimal solutions.

12. ADBASE

In an MOLP, of course, the efficient set is a portion of the surface of S, and the nondominated set is a portion of the surface of Z. ADBASE is for MOLPs: it computes all of the extreme points of S that are efficient, and hence all of the vertices of Z that are nondominated.

13. Size of the Efficient and Nondominated Sets

MOLP problem size   ave efficient extreme pts   ave CPU time
3 x 100 x 150                13,415                    36
3 x 250 x 375               285,693                 5,573
4 x 50 x 75                  19,921                    24
5 x 35 x 45                  15,484                    14
5 x 60 x 90                 414,418                 1,223

14. Criterion Value Ranges over the Nondominated Set

If we know the nondominated set ahead of time, we can warm up the decision maker with the following information. (figure: bar chart of criterion value ranges for Obj1-Obj5 on a 0-100 scale) The lower bounds on the ranges are called nadir criterion values.

15. Nadir Criterion Values

If we don't know the nondominated set ahead of time, the true nadir criterion vector can be difficult to obtain. (figure: ranges for Obj1-Obj5)

z^max = (110, 90, 100, 50, 40)
estimated z^nad = (30, 10, 60, 20, 20)

16. Payoff Table

Obtained by individually maximizing each objective over S. But the minimum column values often over-estimate the nadir values.

Obj1   Obj2   Obj3
 15      1    −11
 12      8    −16
 12     −4     −4

(figure: S with gradients c^1 = (3, 0), c^2 = (−1, 3), c^3 = (−1, −3))

In this problem E = S. The true nadir value for Obj1 is 0, not 12. (figure)

The larger the problem, the greater the likelihood that the payoff-table column minimum values will be wrong. Beyond about 5 x 20 x 30, most will be wrong.
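The payoff-table construction can be sketched over a finite stand-in for the efficient set: maximize each objective individually, then take column minima as nadir estimates. The point set below is hypothetical but arranged to reproduce the slides' warning that the estimate (12 for Obj1) can miss the true nadir (0):

```python
def payoff_table(Z):
    k = len(Z[0])
    # One row per objective: the vector that maximizes that objective.
    rows = [max(Z, key=lambda z: z[i]) for i in range(k)]
    # Column minima serve as (often optimistic) nadir estimates.
    estimated_nadir = tuple(min(r[i] for r in rows) for i in range(k))
    return rows, estimated_nadir

Z = [(15, 1, -11), (12, 8, -16), (12, -4, -4), (0, 5, -20)]
rows, nadir = payoff_table(Z)
print(rows)                      # the three maximizers, as in the slides' table
print("estimated nadir:", nadir) # (12, -4, -16), yet min z1 over Z is really 0
```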

17. Filtering

Reducing 8 vectors down to a dispersed subset of size 5. (figure: points z^1, ..., z^8)

The first point is always retained by the filter. (figure)

z^2 retained by the filter, but z^3 and z^5 discarded. (figure)

z^4 retained by the filter, but z^8 discarded. (figure)

z^6 retained by the filter, but z^7 discarded. (figure) We wanted 5 but got 4: reduce the neighborhood, then filter again. After a number of iterations, this converges to the desired size.
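The filtering scheme just described can be sketched as a first-retained, radius-based pass with a shrinking neighborhood (the distance, radius, and shrink factor here are assumptions, not the slides' values):

```python
# Keep the first vector; keep each later vector only if it lies strictly
# outside Chebyshev radius r of everything kept so far.
def filter_once(Z, r):
    kept = []
    for z in Z:
        if all(max(abs(a - b) for a, b in zip(z, w)) > r for w in kept):
            kept.append(z)
    return kept

# Shrink r and retry until at least `size` vectors survive.
def filter_to_size(Z, size, r=4.0, shrink=0.8, max_iter=100):
    for _ in range(max_iter):
        kept = filter_once(Z, r)
        if len(kept) >= size:
            return kept[:size]
        r *= shrink
    return kept

Z = [(0, 0), (9, 1), (1, 8), (1, 1), (5, 5), (8, 8), (2, 7), (6, 2)]
print(filter_to_size(Z, 5))
```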

18. Stamp/Coin Example

(figure: criterion space Z with axes Coins and Stamps; nondominated points z^2, z^3)


19. Weighted-Sums Method

max{ c^1 x = z_1 }
...                      becomes      max{ λ^T C x }
max{ c^k x = z_k }                    s.t. x ∈ S
s.t. x ∈ S

But how do we pick the weights? They are a function of:
1. the decision maker's preferences;
2. the scale in which the objectives are measured (e.g., cubic feet versus board feet of lumber);
3. the shape of the feasible region.

We may also get flip-flopping behavior.

The purpose of the weighted-sums approach is to obtain information from the DM to create a λ-vector that causes the composite gradient λ^T C in the weighted-sums program

max{ λ^T C x }
s.t. x ∈ S

to point in the same direction as the utility function gradient.
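A tiny sketch of the scale sensitivity discussed next (the candidate points, objectives, and the factor-of-12 cubic-feet-to-board-feet conversion are all assumptions): even with the same 50/50 weights, rescaling one objective changes the composite gradient and can change the winner.

```python
# Score each candidate by lambda^T C x and keep the argmax.
def weighted_sum_best(points, C, lam):
    def score(x):
        zs = [sum(c * xi for c, xi in zip(row, x)) for row in C]
        return sum(l * z for l, z in zip(lam, zs))
    return max(points, key=score)

points = [(4, 0), (3, 3), (0, 4)]
C = [(1, 0), (0, 1)]               # two objectives: z1 = x1, z2 = x2
print(weighted_sum_best(points, C, (0.5, 0.5)))        # balanced winner
C_rescaled = [(1, 0), (0, 12)]     # second objective re-expressed in board feet
print(weighted_sum_best(points, C_rescaled, (0.5, 0.5)))  # different winner
```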

(figure) The boss says to go with 50/50 weights.

(figure: composite gradient λ^T C over S) The boss likes the resulting solution and is proud of his 50/50 weights. Then he asks that the second objective be changed from cubic feet to board feet of timber production.

(figure) With 50/50 weights, this causes the composite gradient to point in a different direction. We get a completely different solution.

The boss then says to use 60/40 weights.

Then a constraint needs to be changed slightly. Again we get a completely different solution.

(figure: quasiconcave utility contours over Z) Even assuming we get perfect information from the DM, the weighted-sums method can iterate forever!

20. e-Constraint Method

max{ c^1 x = z_1 }
...                      becomes      max{ c^j x = z_j }
max{ c^k x = z_k }                    s.t. c^i x ≥ e_i for all i ≠ j
s.t. x ∈ S                                 x ∈ S

Basically trial-and-error.
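Over a finite candidate set the e-constraint idea reduces to a filter-then-maximize step; a sketch with invented data (the floors e_i are the trial-and-error knobs):

```python
# Maximize objective j subject to every other objective meeting its floor e[i].
def e_constraint(Z, j, e):
    feasible = [z for z in Z
                if all(z[i] >= e[i] for i in range(len(z)) if i != j)]
    return max(feasible, key=lambda z: z[j]) if feasible else None

Z = [(10, 1), (7, 4), (5, 6), (2, 9)]
print(e_constraint(Z, 0, (0, 4)))   # floor z2 >= 4  ->  (7, 4)
print(e_constraint(Z, 0, (0, 6)))   # tighten to z2 >= 6  ->  (5, 6)
```

Raising the floor e_2 trades first-objective value for second-objective value, which is exactly the trial-and-error loop the slides describe.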

(figure: maximize c^p x subject to c^m x ≥ e_m and c^e x ≥ e_e over S, with gradients c^p, c^m, c^e)

22. Overall Interactive Algorithmic Structure
23. Vector-Maximum/Filtering
24. Goal Programming
25. Lp-Metrics
26. Weighted Lp-Metrics
27. Reference Criterion Vector
28. Wierzbicki's Aspiration Criterion Vector Method
29. Lexicographic Tchebycheff Sampling Program
30. Tchebycheff Procedure (overview)
31. Tchebycheff Procedure (in more detail)
32. Tchebycheff Vertex λ-Vector
33. How to Compute Dispersed Probing Rays
34. Projected Line Search Method
35. List of Interactive Procedures

22. Overall Interactive Algorithmic Structure

start → set controlling parameters for the 1st iteration → solve optimization problem(s) → examine criterion vector results → done? If yes, stop; otherwise reset the controlling parameters for the next iteration and repeat.

Controlling parameters: weighting vector, e_i RHS values, aspiration vector, others.

23. Vector-Maximum/Filtering

Let the number of solutions shown be 8 and the convergence rate be 1/6.
Solve an MOLP for, say, 66,000 nondominated extreme points.
Filter to obtain the 8 most different among the 66,000. The decision maker selects z^(1), the most preferred of the 8.
Filter to obtain the 8 most different among the 11,000 closest to z^(1). The decision maker selects z^(2), the most preferred of the new 8.
Filter to obtain the 8 most different among the 1,833 closest to z^(2). The decision maker selects z^(3), the most preferred of the new 8.
And so forth.

24. Goal Programming

max{ c^1 x = z_1 }                min{ w_1⁻ d_1⁻ + w_2⁻ d_2⁻ + w_2⁺ d_2⁺ + w_3⁺ d_3⁺ }
achieve{ c^2 x = z_2 }   becomes  s.t. c^1 x + d_1⁻ ≥ t_1
min{ c^3 x = z_3 }                     c^2 x + d_2⁻ − d_2⁺ = t_2
s.t. x ∈ S                             c^3 x − d_3⁺ ≤ t_3
                                       x ∈ S, all d_i⁻, d_i⁺ ≥ 0

We must choose a target vector and then select deviational-variable weights. Goal programming uses the weighted L_1-metric.
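The deviational variables can be computed directly for any achieved values (a sketch; the targets and weights are illustrative, and which deviations get penalized is a modeling choice):

```python
# Underachievement d- and overachievement d+ of a target.
def deviations(value, target):
    return max(0.0, target - value), max(0.0, value - target)

# Weighted L1 goal-programming objective: sum of penalized deviations.
def weighted_l1(values, targets, weights_minus, weights_plus):
    total = 0.0
    for v, t, wm, wp in zip(values, targets, weights_minus, weights_plus):
        dm, dp = deviations(v, t)
        total += wm * dm + wp * dp
    return total

print(deviations(8.0, 10.0))                        # 2 units under target
# Penalize underachievement of goal 1 and overachievement of goal 2.
print(weighted_l1([8, 12], [10, 10], [1, 0], [0, 1]))
```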

(figures: Z with target vector t = (t_1, t_2); the weighted L_1 contours about t expand until they touch Z, yielding the resulting criterion vector)

25. Lp-Metrics

‖ z** − z ‖_p = [ Σ_{i=1}^k | z_i** − z_i |^p ]^{1/p},  p ∈ {1, 2, ...}
‖ z** − z ‖_∞ = max_{1≤i≤k} | z_i** − z_i |

(figure: contours of the metrics about z**)


26. Weighted Lp-Metrics

‖ z** − z ‖_p^λ = [ Σ_{i=1}^k λ_i | z_i** − z_i |^p ]^{1/p},  p ∈ {1, 2, ...}
‖ z** − z ‖_∞^λ = max_{1≤i≤k} λ_i | z_i** − z_i |

(figure: contours of the weighted metrics about z**)

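The weighted L_p and weighted L_∞ (Tchebycheff) distances are straightforward to compute; a sketch with illustrative data:

```python
# Weighted L_p distance from z_star, per the formulas above.
def weighted_lp(z_star, z, lam, p):
    return sum(l * abs(a - b) ** p
               for l, a, b in zip(lam, z_star, z)) ** (1.0 / p)

# Weighted L_infinity (Tchebycheff) distance.
def weighted_linf(z_star, z, lam):
    return max(l * abs(a - b) for l, a, b in zip(lam, z_star, z))

z_star, z, lam = (10.0, 10.0), (7.0, 6.0), (0.5, 0.5)
print(weighted_lp(z_star, z, lam, 1))    # 0.5*3 + 0.5*4 = 3.5
print(weighted_lp(z_star, z, lam, 2))
print(weighted_linf(z_star, z, lam))     # max(1.5, 2.0) = 2.0
```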

27. Reference Criterion Vector

Constructed so as to dominate every point in the nondominated set. (figure: Z)

z_i^ref = z_i^max + ε_i; usually it is good enough to round up to the next largest integer. (figure: z^ref = (5, 4))
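A minimal sketch of the construction (the nondominated vectors below are invented; they are chosen so the result matches the slide's z^ref = (5, 4)):

```python
import math

# Componentwise max over the known nondominated vectors, bumped by eps,
# then rounded up to the next largest integer as the slides suggest.
def reference_vector(N, eps=0.0, round_up=True):
    zmax = [max(z[i] for z in N) for i in range(len(N[0]))]
    zref = [v + eps for v in zmax]
    return tuple(math.ceil(v) if round_up else v for v in zref)

N = [(4.2, 1.0), (3.0, 3.1), (1.0, 2.5)]
print(reference_vector(N, eps=0.1))   # (5, 4), which dominates every point in N
```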

28. Wierzbicki's Reference Point Procedure

(figure: Z, then Z with reference point z^ref)

First iteration: (figures) the DM supplies aspiration vector q^(1); the resulting solution is z^(1).

Second iteration: (figures) aspiration vector q^(2) yields z^(2).

Third iteration: (figures) aspiration vector q^(3) yields z^(3).

29. Lexicographic Tchebycheff Sampling Program

The geometry is carried out by the lexicographic Tchebycheff sampling program

lex min{ α, −Σ_{i=1}^k z_i }
s.t. α ≥ λ_i ( z_i^ref − z_i ), i = 1, ..., k
     f_i(x) = z_i, i = 1, ..., k
     x ∈ S

Minimizing α causes the nonnegative-orthant contour to slide up the probing ray until it last touches the feasible region Z. The perturbation term Σ_{i=1}^k z_i is there to break ties. The direction of the probing ray emanating from z^ref is given by ( 1/λ_1, ..., 1/λ_k ).
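Over a finite set Z the sampling program can be mimicked by brute force (a sketch, not the LP formulation: the data, z^ref, and λ are assumptions):

```python
# Lexicographic Tchebycheff selection: first minimize
# alpha(z) = max_i lambda_i * (zref_i - z_i), then break ties by
# maximizing sum(z) (encoded as minimizing -sum(z) in the sort key).
def lex_tcheby(Z, zref, lam):
    def alpha(z):
        return max(l * (r - v) for l, r, v in zip(lam, zref, z))
    return min(Z, key=lambda z: (alpha(z), -sum(z)))

zref = (5, 5)
Z = [(4, 0), (3, 3), (0, 4), (3, 0)]
print(lex_tcheby(Z, zref, (0.5, 0.5)))          # (3, 3): smallest alpha
# The perturbation term matters when alphas tie:
print(lex_tcheby([(4, 0), (4, 1)], zref, (1.0, 0.0)))   # (4, 1) wins the tie
```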


(figure) Here there are two lexicographic minimum solutions, but both are nondominated.

30. Tchebycheff Method (Overview)

(figure: Z with reference point z^ref)

First iteration: (figures) z^(1) is obtained.

Second iteration: (figures) z^(2) is obtained.

Third iteration: (figures) z^(3) is obtained.

start → set controlling parameters for the 1st iteration → solve optimization problem(s) → examine criterion vector results → done? If yes, stop; otherwise reset the controlling parameters for the next iteration and repeat.

Controlling parameters: target vector, weights, q^(i) aspiration vectors, λ^i multipliers.

31. Tchebycheff Method (in more detail)

Let P = number of solutions to be presented to the DM at each iteration = 4.
Let r = reduction factor = 0.5.
Let t = number of iterations = 4.

(figure) Now, form the reference criterion vector z^ref.

(figure) Now, form Λ^(1) and obtain 4 dispersed λ-vectors from it.

(figure) Now, solve four lexicographic Tchebycheff sampling programs (one for each probing ray).

(figure) Now, select the most preferred, designating it z^(1).

(figure) Now, form Λ^(2) and obtain 4 dispersed λ-vectors from it.

32. Tchebycheff Vertex λ-Vector

(figure: z^ref and z^(1))

33. How to Compute Dispersed Probing Rays

(figure: z^ref and z^(1))

λ_i^(1) = ( 1 / ( z_i^ref − z_i^(1) ) ) / Σ_{j=1}^k ( 1 / ( z_j^ref − z_j^(1) ) )
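The vertex λ-vector formula above (a hedged reconstruction from the garbled slide) is a one-liner to evaluate; with z^ref = (5, 4) and z^(1) = (3, 3), the resulting probing ray direction (1/λ_1, 1/λ_2) is proportional to z^ref − z^(1), so the ray passes through z^(1):

```python
# lambda_i proportional to 1/(zref_i - z_i), normalized to sum to 1.
def vertex_lambda(zref, z):
    inv = [1.0 / (r - v) for r, v in zip(zref, z)]
    s = sum(inv)
    return tuple(w / s for w in inv)

lam = vertex_lambda((5.0, 4.0), (3.0, 3.0))
print(lam)        # (1/3, 2/3)
print(sum(lam))   # normalized weights sum to 1
```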

(figure) Now, solve four lexicographic Tchebycheff sampling programs.

(figure) Now, select the most preferred, designating it z^(2).

(figure) Now, form Λ^(3) and obtain 4 dispersed λ-vectors from it.

(figure) Now, solve four lexicographic Tchebycheff sampling programs.

(figure) Now, select the most preferred, designating it z^(3).

(figure) Now, form Λ^(4) and obtain 4 dispersed λ-vectors from it.

(figure) And so forth.

34. Projected Line Search Method

(figure: starting at z^(1)) Like driving across the surface of the moon.

(figures) From z^(1), probe in the direction of q^(2) to obtain z^(2); then from z^(2), probe toward q^(3) to obtain z^(3).

Drive straight awhile, turn, drive straight awhile, turn, drive straight awhile, and so forth.

35. List of Interactive Procedures

1. Weighted-sums (traditional)
2. e-constraint method (traditional)
3. Goal programming (mostly US, 1960s)
4. STEM (France & Russia, 1971)
5. Geoffrion-Dyer-Feinberg procedure (US, 1972)
6. Vector-maximum/filtering (US, 1976)
7. Zionts-Wallenius procedure (US & Finland, 1976)
8. Wierzbicki's reference point method (Poland, 1980)
9. Tchebycheff method (US & Canada, 1983)
10. Satisficing trade-off method (Japan, 1984)
11. Pareto Race (Finland, 1986)
12. AIM (US & South Africa, 1995)
13. NIMBUS (Finland, 1998)

The End