We consider multiobjective optimization problems of the form
\[
\min_{x \in \Omega} f(x) \tag{MOP}
\]
with a feasible set $\Omega \subseteq \mathbb{R}^n$ and a vector-valued objective function $f(x) = (f_1(x), \ldots, f_q(x))^\top$, where $f_i\colon \mathbb{R}^n \to \mathbb{R}$ for $i = 1, \ldots, q$. The Pareto front of (MOP) is denoted by $\mathcal{P}_F$.
[Figure: values of $f_3$ for the computed points; parameter choices $\eta_1$ and $\eta_2 = 0.9$.]
\[
\min_{x \in \Omega}\begin{pmatrix} f_1(x)\\ f_2(x)\end{pmatrix}
= \min_{x \in [-5,\,\cdot\,]^2}\begin{pmatrix} x_1^2 + x_2^2\\ (x_1 - 5)^2 + (x_2 - 5)^2\end{pmatrix}
\]
\[
\min_{x \in \Omega}\begin{pmatrix} f_1(x)\\ f_2(x)\end{pmatrix}
= \min_{x \in (0,\,\cdot\,]^2}\begin{pmatrix} -\ln(x_1) - \ln(x_2)\\ x_1^2 + x_2^2\end{pmatrix}
\]
\[
\min_{x \in \Omega}\begin{pmatrix} f_1(x)\\ f_2(x)\end{pmatrix}
= \min_{x \in [0,3]^3}\begin{pmatrix} \sum_{i=1}^{n} x_i^2 + n\\[2pt] \sum_{i=1}^{n} x_i - \sum_{i=1}^{n} x_i^3\end{pmatrix}
\]
Figure 4: Multistart approach for (T6) for MHT (left) and DMS (right), with $f$ declared as an expensive function

The unique efficient point for this optimization problem is $\bar{x}$ with $f_1(\bar{x}) = f_2(\bar{x})$. For all considered starting points, all three algorithms compute this unique nondominated point, respectively a point with vanishing distance to it. This is shown in Figure 5 for one instance.

Figure 5: Test run for (T7) for MHT (top left), EFOS (top right) and DMS (bottom)
\[
\min_{x \in \Omega}\begin{pmatrix} f_1(x)\\ f_2(x)\end{pmatrix}
= \min_{x \in [0,1]^2}\begin{pmatrix} x_1\\ g(x)\,h(x)\end{pmatrix}
\]
with
\[
g(x) = 1 + x_2, \qquad h(x) = 1 - \sqrt{x_1/g(x)} - \bigl(x_1/g(x)\bigr)\sin(8\pi x_1).
\]

[Figure: computed nondominated points for this problem.]
For $n = 4$ we consider
\[
\min_{x \in \Omega}\begin{pmatrix} f_1(x)\\ f_2(x)\end{pmatrix}
= \min_{x \in [0,1]^4}\begin{pmatrix} x_1\\ g(x)\bigl(1 - (x_1/g(x))^2\bigr)\end{pmatrix}
\]
with $g(x) = 1 + 3\sum_{i=2}^{4} x_i$.
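A minimal Python sketch of this Jin-type test problem may be helpful for experimenting with the solvers' behavior. Note the constant $3$ in $g$ and the box $[0,1]^4$ are reconstructions from a garbled source, so treat them as assumptions:

```python
# Hedged sketch of the Jin-type test problem as reconstructed above.
# The factor 3 in g and the box [0,1]^4 are assumptions, not confirmed values.
import numpy as np

def jin(x):
    """Evaluate both objectives at a point x in [0,1]^4 (assumed box)."""
    x = np.asarray(x, dtype=float)
    g = 1.0 + 3.0 * np.sum(x[1:])        # g(x) = 1 + 3 * sum_{i=2}^{4} x_i
    f1 = x[0]
    f2 = g * (1.0 - (f1 / g) ** 2)       # tradeoff curve f2 = 1 - f1^2 for x_2 = ... = x_4 = 0
    return np.array([f1, f2])

print(jin([0.5, 0.0, 0.0, 0.0]))  # a nondominated point: [0.5, 0.75]
```

For points with $x_2 = x_3 = x_4 = 0$ we have $g(x) = 1$, so the image of these points traces the curve $f_2 = 1 - f_1^2$.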
starting points. DMS and MHT compute nondominated points for all instances of this problem. The multistart approach with randomly chosen starting points in Figure 8 shows that both compute different nondominated points given different starting points. However, the points computed by DMS are better spread than the points computed by MHT.

Figure 8: Multistart approach for (Jin) for MHT (left) and DMS (right)

Regarding the required function evaluations, no clear statement can be made as to which algorithm needs fewer (DMS 9-6, MHT 3-69). In all runs, MHT needs many function evaluations at the end of the procedure. Due to the nonconvexity and the local search strategy, this number of function evaluations is needed to ensure that the stopping criterion is fulfilled. This is shown exemplarily for one specific run in Figure 9. This instance again illustrates the coordinate search of DMS.

Figure 9: Test run for (Jin) for MHT (left) and DMS (right)

As a last nonconvex test problem we consider (FF) with $n = 3$ from [ ], given by
\[
\min_{x \in \Omega}\begin{pmatrix} f_1(x)\\ f_2(x)\end{pmatrix}
= \min_{x \in [-4,4]^3}\begin{pmatrix} 1 - \exp\Bigl(-\sum_{i=1}^{3}\bigl(x_i - \tfrac{1}{\sqrt{n}}\bigr)^2\Bigr)\\[4pt] 1 - \exp\Bigl(-\sum_{i=1}^{3}\bigl(x_i + \tfrac{1}{\sqrt{n}}\bigr)^2\Bigr)\end{pmatrix}
\tag{FF}
\]
All three algorithms compute the point $\bar{x} = (.6, .38, .833)$ with $f_1(\bar{x}) = .996$ and $f_2(\bar{x}) = .53$, respectively a point with distance $t = 3.398 \cdot 10^{-7}$ to it; the criticality measure at this point is $\omega(\bar{x}) = 8.6 \cdot 10^{-3}$. Recall that a quadratic interpolation model in $\mathbb{R}^n$ requires $p = (n+1)(n+2)/2$ interpolation points, so the number of expensive function evaluations needed for the models grows quadratically with $n$.
To study the dependence on the dimension $n$ we consider
\[
\min_{x \in \Omega}\begin{pmatrix} f_1(x)\\ f_2(x)\end{pmatrix}
= \min_{x \in [0,1]^n}\begin{pmatrix} \frac{1}{n}\sum_{i=1}^{n} x_i^2\\[4pt] \frac{1}{n}\sum_{i=1}^{n} (x_i - 1)^2\end{pmatrix}
\]
for $n \in \{2, 3, 4, 5, 10, 20, 30, 40, 50\}$.

[Figure: computed nondominated points for $n = 5$, $n = 10$ and larger values of $n$.]
[Figure: percentage of solved test instances versus the number of expensive function evaluations for MHT, DMS and EFOS (four test settings).]
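Curves of this kind can be computed from the raw run results. A minimal sketch, where the data layout (evaluations needed per instance, `None` for unsolved) is an assumption about how the results are stored:

```python
# Sketch of computing a "percentage of solved test instances" profile.
# evals_to_solve[t] is the number of expensive function evaluations a solver
# needed on test instance t, or None if the instance was not solved.

def data_profile(evals_to_solve, budgets):
    """For each budget b, return the fraction of instances solved within b evaluations."""
    n = len(evals_to_solve)
    return [sum(1 for e in evals_to_solve if e is not None and e <= b) / n
            for b in budgets]

# toy example with three instances and three evaluation budgets
profile = data_profile([30, 120, None], budgets=[50, 100, 150])
print(profile)  # fractions solved at budgets 50, 100, 150
```

Plotting one such curve per solver over a common grid of budgets reproduces the comparison shown in the figures.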
The criticality measure $\omega\colon \mathbb{R}^n \to \mathbb{R}$ is defined by
\[
\omega(x) := -\min_{\|d\| \le 1}\ \max_{i=1,\ldots,q} \nabla f_i(x)^\top d
\]
with continuously differentiable $f_i\colon \mathbb{R}^n \to \mathbb{R}$, $i = 1, \ldots, q$. It holds that $\omega(x) \ge 0$ for all $x \in \mathbb{R}^n$, and a point $\bar{x} \in \mathbb{R}^n$ is Pareto critical for (MOP) if and only if $\omega(\bar{x}) = 0$. A test instance is counted as solved once a point $\bar{x}$ with $\omega(\bar{x}) \le \varepsilon$ is found, for a given tolerance $\varepsilon$.

[Figure: percentage of solved test instances versus the number of expensive function evaluations for MHT, DMS and EFOS.]
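If the norm in the definition of $\omega$ is taken to be the maximum norm (an assumption; the source does not fix the norm), the inner problem is a linear program: minimize $t$ subject to $\nabla f_i(x)^\top d \le t$ and $-1 \le d_j \le 1$. A minimal sketch, not the authors' implementation:

```python
# Sketch: evaluate omega(x) = -min_{||d||_inf <= 1} max_i grad_i^T d via an LP.
# Variables are (d, t); we minimize t subject to G d - t <= 0, -1 <= d <= 1.
import numpy as np
from scipy.optimize import linprog

def omega(gradients):
    """gradients: (q, n) array holding the objective gradients at a point x."""
    G = np.asarray(gradients, dtype=float)
    q, n = G.shape
    c = np.zeros(n + 1)
    c[-1] = 1.0                                  # minimize t
    A_ub = np.hstack([G, -np.ones((q, 1))])      # grad_i^T d - t <= 0
    b_ub = np.zeros(q)
    bounds = [(-1.0, 1.0)] * n + [(None, None)]  # ||d||_inf <= 1, t free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return -res.fun

print(omega([[1.0, 0.0], [-1.0, 0.0]]))  # opposing gradients: Pareto critical, omega = 0
print(omega([[1.0, 0.0], [0.0, 1.0]]))   # a common descent direction exists, omega = 1
```

A positive value of $\omega$ certifies a direction of simultaneous descent for all objectives; $\omega = 0$ means no such direction exists.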
MOP: $n = 2$, $q = 2$, $\Omega = \mathbb{R}^2$, $f_1(x) = x_1^2 + x_2^2 - x_1$, $f_2(x) = x_1^2 + x_2^2 - x_2$

MOP: $n = 2$, $q = 2$, $\Omega = \mathbb{R}^2$, $f_1(x) = \sin x_1$, $f_2(x) = \exp\bigl(-(1 - x_1)^2 - (1 - x_2)^2\bigr)$

MOP: $n = 1$, $q = 2$, $\Omega = [\,\cdot\,, \cdot\,]$, $f_1(x) = x^2 + 1$, $f_2(x) = x^2 + x$

MOP: $n \in \{2, 3, 4, 5, 10, 20, 30, 40, 50\}$, $q = 2$, $\Omega = [0,1]^n$, $f_1(x) = \frac{1}{n}\sum_{i=1}^{n} x_i^2$, $f_2(x) = \frac{1}{n}\sum_{i=1}^{n} (x_i - 1)^2$

MOP: $n = 2$, $q = 2$, $\Omega = (0, 3] \times [\,\cdot\,, 3]$, $f_1(x) = x_1 \ln(x_1) + x_2$, $f_2(x) = x_1^2 + x_2^2$
MOP: $n = 2$, $q = 2$, $\Omega = (0,\,\cdot\,]^2$, $f_1(x) = -\ln(x_1) - \ln(x_2)$, $f_2(x) = x_1^2 + x_2^2$

MOP: $n = 3$, $q = 2$, $\Omega = [0, 3]^3$, $f_1(x) = \sum_{i=1}^{n} x_i^2 + n$, $f_2(x) = \sum_{i=1}^{n} x_i - \sum_{i=1}^{n} x_i^3$

MOP: $n = 3$, $q = 3$, $\Omega = (0,\,\cdot\,] \times [\,\cdot\,, \cdot\,] \times [\,\cdot\,, \cdot\,]$, $f_1(x) = \sum_{i=1}^{n} x_i^3 - \sum_{i=1}^{n} x_i$, $f_2(x) = \sum_{i=1}^{n} (x_i - 1)^2 + x_n^2$, $f_3(x) = -\ln(x_1) + 5\sum_{i=1}^{n} x_i$

A quadratic polynomial model in $\mathbb{R}^n$ is an element of the space $\mathcal{P}_n^2$ of polynomials of degree at most two, which has dimension $p = (n+1)(n+2)/2$. Let $\psi = \{\psi_1, \psi_2, \ldots, \psi_p\}$ be a basis of $\mathcal{P}_n^2$. Then every model $m \in \mathcal{P}_n^2$ can be written as
\[
m(x) = \sum_{j=1}^{p} \alpha_j\, \psi_j(x)
\]
with coefficients $\alpha \in \mathbb{R}^p$, which are determined by the interpolation conditions
\[
m(y^i) = f(y^i), \qquad i = 1, 2, \ldots, p.
\]
In matrix form, these conditions read
\[
M(\psi, Y)\,\alpha_\psi = f(Y), \qquad \bigl(M(\psi, Y)\bigr)_{ij} = \psi_j(y^i), \quad i, j = 1, \ldots, p,
\]
with $f(Y) = (f(y^1), f(y^2), \ldots, f(y^p))^\top$ and $\alpha_\psi = (\alpha_1, \alpha_2, \ldots, \alpha_p)^\top$. A set $Y = \{y^1, y^2, \ldots, y^p\} \subset \mathbb{R}^n$ is called poised if the matrix $M(\psi, Y)$ is nonsingular. If $M(\psi, Y)$ is nonsingular for one basis $\psi$ of $\mathcal{P}_n^2$, then $M(\psi, Y)$ is nonsingular for every basis $\psi$ of $\mathcal{P}_n^d$.

Given $f\colon \mathbb{R}^n \to \mathbb{R}$ and a poised set $Y = \{y^1, y^2, \ldots, y^p\} \subset \mathbb{R}^n$, the associated Lagrange polynomials $L = \{l_1, l_2, \ldots, l_p\}$ form a basis of $\mathcal{P}_n^2$ and satisfy
\[
l_i(y^j) = \begin{cases} 1, & i = j,\\ 0, & i \neq j. \end{cases}
\]
They can be computed by the following algorithm.

Algorithm (computation of Lagrange polynomials):
Initialization: set $\psi := \{\psi_1, \ldots, \psi_p\} = \{1, x_1, x_2, \ldots, x_n, x_1^2, x_1 x_2, \ldots, x_{n-1} x_n, x_n^2\}$ and $l_i := \psi_i$ for $i = 1, 2, \ldots, p$; require $|Y| = p$.
For $i = 1, 2, \ldots, p$:
1. Point selection: find $j_i = \operatorname{argmax}_{i \le j \le p} |l_i(y^j)|$. If $l_i(y^{j_i}) = 0$, stop: $Y$ is not poised. Otherwise swap the points $y^i$ and $y^{j_i}$ in $Y$.
2. Normalization: $l_i(x) := l_i(x)/l_i(y^i)$.
3. Orthogonalization: $l_j(x) := l_j(x) - l_j(y^i)\, l_i(x)$ for $j = 1, 2, \ldots, p$, $j \neq i$.

For a poised set $Y = \{y^1, y^2, \ldots, y^p\} \subset \mathbb{R}^n$, the interpolation model can then be written as $m(x) = \sum_{i=1}^{p} f(y^i)\, l_i(x)$.
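The Lagrange construction above can be sketched in a few lines by storing each polynomial as a coefficient vector with respect to the quadratic monomial basis. This is an illustrative sketch, not the paper's implementation; the basis ordering is an assumption:

```python
# Sketch: compute Lagrange polynomials for quadratic interpolation in R^n.
# Each polynomial is a coefficient vector w.r.t. the monomial basis psi,
# so l(y) = coeffs @ psi_eval(y).
import numpy as np
from itertools import combinations_with_replacement

def psi_eval(y):
    """Evaluate the basis {1, x_1, ..., x_n, x_i x_j (i <= j)} at a point y."""
    y = np.asarray(y, dtype=float)
    quad = [y[i] * y[j] for i, j in combinations_with_replacement(range(len(y)), 2)]
    return np.concatenate(([1.0], y, quad))

def lagrange_polynomials(Y):
    """Y: (p, n) array of interpolation points with p = (n+1)(n+2)/2."""
    Y = np.array(Y, dtype=float)
    p = Y.shape[0]
    L = np.eye(p)                                   # row i: coefficients of l_i (init: l_i = psi_i)
    for i in range(p):
        vals = np.array([L[i] @ psi_eval(Y[j]) for j in range(p)])
        j_i = i + int(np.argmax(np.abs(vals[i:]))) # point selection
        if abs(vals[j_i]) < 1e-12:
            raise ValueError("Y is not poised")
        Y[[i, j_i]] = Y[[j_i, i]]                  # swap y^i and y^{j_i}
        L[i] /= L[i] @ psi_eval(Y[i])              # normalization
        for j in range(p):                         # orthogonalization
            if j != i:
                L[j] -= (L[j] @ psi_eval(Y[i])) * L[i]
    return L, Y

# n = 1: p = 3 points; verify l_i(y^j) = delta_ij
L, Y = lagrange_polynomials([[0.0], [1.0], [2.0]])
V = np.array([[L[i] @ psi_eval(Y[j]) for j in range(3)] for i in range(3)])
print(np.allclose(V, np.eye(3)))  # True
```

The loop mirrors the three algorithm steps: a pivot-like point selection guards against non-poised sets, normalization fixes $l_i(y^i) = 1$, and orthogonalization zeroes $l_j(y^i)$ for all other polynomials.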
Lemma: Let $f\colon \mathbb{R}^n \to \mathbb{R}$ and let $Y = \{y^1, y^2, \ldots, y^p\} \subset \mathbb{R}^n$ be poised with Lagrange polynomials $L = \{l_1, \ldots, l_p\}$. Then $m_L\colon \mathbb{R}^n \to \mathbb{R}$, $m_L(x) = \sum_{i=1}^{p} f(y^i)\, l_i(x)$, is the unique polynomial in $\mathcal{P}_n^2$ with $f = m_L$ on $Y$.

Proof: Since the Lagrange polynomials form a basis of $\mathcal{P}_n^2$, every $m \in \mathcal{P}_n^2$ can be written with coefficients $\alpha_i \in \mathbb{R}$, $i \in \{1, \ldots, p\}$, as $m(x) = \sum_{i=1}^{p} \alpha_i l_i(x)$ for all $x \in \mathbb{R}^n$. Evaluating at the interpolation points gives $m(y^j) = \sum_{i=1}^{p} \alpha_i l_i(y^j) = \alpha_j$. Moreover, $m_L(y^j) = \sum_{i=1}^{p} f(y^i)\, l_i(y^j) = f(y^j)$ for all $y^j \in Y$, $j = 1, \ldots, p$, so $m_L$ interpolates $f$ on $Y$. If $m$ also interpolates $f$ on $Y$, then $\alpha_j = f(y^j)$ for all $j \in \{1, \ldots, p\}$, hence $f = m_L$ on $Y$ and $m_L$ is unique.

If the current set of points is not poised, it can be completed by a model improvement algorithm on a closed ball $B$.

Algorithm (model improvement):
Initialization: set $l_i := \psi_i$ for $i = 1, 2, \ldots, p$; given a set $Y \subset B$ with $|Y| = p_{\mathrm{ini}}$ points.
For $i = 1, 2, \ldots, p$:
1. Point selection: if $i \le p_{\mathrm{ini}}$, find $j_i = \operatorname{argmax}_{i \le j \le p_{\mathrm{ini}}} |l_i(y^j)|$. If $i \le p_{\mathrm{ini}}$ and $|l_i(y^{j_i})| > 0$, swap the points $y^i$ and $y^{j_i}$ in $Y$. Otherwise (if $i > p_{\mathrm{ini}}$ or $l_i(y^{j_i}) = 0$), compute a new point $y^i \in \operatorname{argmax}_{x \in B} |l_i(x)|$ and add it to $Y$.
2. Normalization: $l_i(x) := l_i(x)/l_i(y^i)$.
3. Orthogonalization: $l_j(x) := l_j(x) - l_j(y^i)\, l_i(x)$ for $j = 1, 2, \ldots, p$, $j \neq i$.

Within the trust region method, $B$ is chosen as the current trust region $B_k$ around the iterate $x^k$.
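Equivalently, the interpolation model can be obtained by solving the linear system $M(\psi, Y)\,\alpha_\psi = f(Y)$ directly. A self-contained numpy sketch under illustrative choices (the point set and test function are not from the paper):

```python
# Sketch: build a quadratic interpolation model by solving M(psi, Y) alpha = f(Y)
# for the monomial basis. Point set and test function are illustrative only.
import numpy as np
from itertools import combinations_with_replacement

def basis(y):
    """Monomial basis {1, x_1, ..., x_n, x_i x_j (i <= j)} evaluated at y."""
    y = np.asarray(y, dtype=float)
    quad = [y[i] * y[j] for i, j in combinations_with_replacement(range(len(y)), 2)]
    return np.concatenate(([1.0], y, quad))

def quadratic_model(Y, fvals):
    M = np.array([basis(y) for y in Y])   # (M)_ij = psi_j(y^i)
    alpha = np.linalg.solve(M, fvals)     # Y poised  <=>  M nonsingular
    return lambda x: alpha @ basis(x)

# n = 2, so p = (n+1)(n+2)/2 = 6 points; f is itself quadratic,
# so by uniqueness the model must reproduce f exactly everywhere.
f = lambda x: 1 + 2 * x[0] - x[1] + 3 * x[0] * x[1]
Y = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 0), (0, 2)]
m = quadratic_model(Y, np.array([f(y) for y in Y]))
print(abs(m((0.5, -0.7)) - f((0.5, -0.7))) < 1e-10)  # True
```

The test exploits exactly the uniqueness statement of the lemma: since $f \in \mathcal{P}_n^2$ interpolates itself on the poised set $Y$, the model must coincide with $f$.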