A Filled Function Method with One Parameter for R^n Constrained Global Optimization
Weixiang Wang, Youlin Shang, Liansheng Zhang

Abstract. For the R^n constrained global optimization problem, a new auxiliary function with one parameter is proposed for escaping the current local minimizer. First, a new definition of a filled function is given, and under mild assumptions the proposed auxiliary function is proved to be a filled function. A new algorithm is then designed on the basis of the theoretical analysis, and preliminary numerical results are reported to demonstrate the efficiency of this global method for R^n constrained global optimization.

Key Words. Local minimizer, global minimizer, constrained global optimization, filled function method.

Mathematics Subject Classification (2000): 90C30, 49M30

1. Introduction

Optimization theory and methods have been applied to real-life problems for many years. Yet most classical optimization techniques find only local optimizers. Recently, the focus of research in optimization theory and methods has shifted to global optimization. Numerous algorithms have been developed for finding a global optimal solution to problems with nonlinear, nonconvex structure; see the surveys by Horst and Pardalos [6] and Ge [1], as well as the introductory textbook by Horst et al. [3]. These global optimization methods can be classified into two groups: stochastic methods (see [5]) and deterministic methods (see [1], [3], [4]). Filled function and tunneling function methods belong to the latter category. Many papers in the open literature are devoted to filled function methods for unconstrained global optimization, for example [1], [2], [7], [8], [9], [10], [11]. However, to the best of our knowledge, far fewer works address finding a global minimizer of the R^n constrained global optimization problem.
In this paper, we consider the following global optimization problem:

(P)    min_{x ∈ S} f(x)    (1)

where S = {x ∈ R^n : g(x) ≤ 0} and g(x) = (g_1(x), ..., g_m(x)), and we propose a new filled function method for solving problem (P). The method iterates from one local minimizer to a better one.

(Footnotes: This work was supported by the National Natural Science Foundation (No. ). Corresponding author: Department of Mathematics, Shanghai University, 99 Shangda Road, Baoshan, Shanghai, China; e-mail: zhesx@126.com. Department of Mathematics, Henan University of Science and Technology, Luoyang, China; Department of Applied Mathematics, Tongji University, Shanghai, China. Department of Mathematics, Shanghai University, 99 Shangda Road, Baoshan, Shanghai, China.)

In each
iteration, we construct a filled function, and a local minimizer of the filled function then leads to a new solution with a reduced objective function value. To proceed, this paper requires the following assumptions.

Assumption 1. Problem (P) has at least one global minimizer. The sets of global minimizers and local minimizers are denoted by G(P) and L(P), respectively.

Assumption 2. For any x* ∈ L(P), the set L_{x*} = {x ∈ L(P) : f(x) = f(x*)} is closed and bounded. The number of minimizers of (P) may be infinite, but the number of distinct values attained at minimizers of (P) is finite.

Assumption 3. The functions f(x), ∇f(x), g_i(x), ∇g_i(x), i = 1, ..., m, are Lipschitz continuous over R^n, i.e., there exist constants L_f > 0, L_{∇f} > 0 and L_{g_i} > 0, L_{∇g_i} > 0, i = 1, ..., m, such that for any x, y ∈ R^n:

|f(x) − f(y)| ≤ L_f ‖x − y‖,  ‖∇f(x) − ∇f(y)‖ ≤ L_{∇f} ‖x − y‖,
|g_i(x) − g_i(y)| ≤ L_{g_i} ‖x − y‖,  ‖∇g_i(x) − ∇g_i(y)‖ ≤ L_{∇g_i} ‖x − y‖,  i = 1, ..., m.

Moreover, we give the following new definition of a filled function. Suppose x* is a current local minimizer of (P). A function P(x, x*) is said to be a filled function of f(x) at x* if it satisfies the following properties:

(P1) x* is a strict local maximizer of P(x, x*) on R^n;

(P2) ∇P(x, x*) ≠ 0 for any x ∈ S_1\{x*} or x ∈ R^n\S, where S_1 = {x ∈ S : f(x) ≥ f(x*)};

(P3) if x* is not a global minimizer, then there exists at least one point x_1' ∈ S_2 = {x ∈ S : f(x) < f(x*)} such that x_1' is a local minimizer of P(x, x*).

Adopting the concept of the filled function, a global optimization problem can be solved via a two-phase cycle. In Phase I, we start from a given point and use any local minimization procedure to locate a local minimizer x* of f(x). In Phase II, we construct a filled function at x* and minimize it in order to identify a point x' ∈ S with f(x') < f(x*). If such an x' is found, we use it as the initial point in Phase I again and find a better minimizer x** of f(x) with f(x**) < f(x*). This process repeats until minimizing the filled function no longer yields a better solution.
The current local minimizer is then taken as a global minimizer of f(x).

This paper is organized as follows. Following this introduction, Section 2 proposes a new filled function and investigates its properties. In Section 3, a solution algorithm is suggested and preliminary numerical experiments are performed. Finally, conclusions are drawn in Section 4.

2. A new filled function and its properties

Consider the following one-parameter filled function for problem (P):

T(x, x*, ρ) = −‖x − x*‖⁴ − ρ([max(0, f(x) − f(x*))]³ + Σ_{i=1}^m [max(0, g_i(x) − g_i(x*))]³) + (1/ρ)(exp{[min(0, max(f(x) − f(x*), g_i(x), i = 1, ..., m))]³} − 1)    (2)

where x* is a current local minimizer of f(x) and ‖·‖ denotes the Euclidean norm. The following theorems show that T(x, x*, ρ) is a filled function when ρ > 0 is small enough.
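As a concrete illustration, equation (2) transcribes almost directly into code. The sketch below is our own minimal Python rendering (the paper's experiments are in Fortran 95); `f`, `g_list`, `x_star` and `rho` are supplied by the caller, and no claim is made about numerical robustness.

```python
import math

def filled_T(f, g_list, x_star, rho):
    """Build the one-parameter filled function T(., x*, rho) of Eq. (2)
    for the problem  min f(x)  s.t.  g_i(x) <= 0, i = 1, ..., m.
    Points are tuples/lists of floats; g_list holds the constraints g_i."""
    f_star = f(x_star)
    g_star = [g(x_star) for g in g_list]

    def T(x):
        # -||x - x*||^4 term
        dist4 = sum((xi - si) ** 2 for xi, si in zip(x, x_star)) ** 2
        # -rho * ( [max(0, f - f*)]^3 + sum_i [max(0, g_i - g_i*)]^3 )
        pen = max(0.0, f(x) - f_star) ** 3 + sum(
            max(0.0, g(x) - gs) ** 3 for g, gs in zip(g_list, g_star))
        # (1/rho) * ( exp{ [min(0, max(f - f*, g_1, ..., g_m))]^3 } - 1 )
        inner = min(0.0, max([f(x) - f_star] + [g(x) for g in g_list]))
        fill = (math.exp(inner ** 3) - 1.0) / rho
        return -dist4 - rho * pen + fill

    return T
```

For instance, with f(x) = x_1² + x_2², the single constraint g_1(x) = x_1² + x_2² − 4 ≤ 0 and x* = (0, 0) (the global minimizer of this toy problem), one gets T(x*, x*, ρ) = 0 and T(x, x*, ρ) < 0 at nearby feasible points, matching the behavior asserted by Theorems 2.1 and 2.4 below.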
Theorem 2.1 Assume that x* is a local minimizer of (P). Then x* is a strict local maximizer of T(x, x*, ρ).

Proof. Since x* is a local minimizer of (P), there exists a neighborhood N(x*, σ*) of x*, σ* > 0, such that f(x) ≥ f(x*), g(x*) ≤ 0 and g(x) ≤ 0 for all x ∈ N(x*, σ*) ∩ S. Therefore, for all x ∈ N(x*, σ*) ∩ S with x ≠ x*, it holds that

min[0, max(f(x) − f(x*), g(x))] = 0,

and we have

T(x, x*, ρ) = −‖x − x*‖⁴ − ρ([max(0, f(x) − f(x*))]³ + Σ_{i=1}^m [max(0, g_i(x) − g_i(x*))]³) < 0 = T(x*, x*, ρ).

For all x ∈ N(x*, σ*) ∩ (R^n\S), there exists at least one index i_0 ∈ {1, ..., m} such that g_{i_0}(x) > 0. Therefore min[0, max(f(x) − f(x*), g(x))] = 0 and again

T(x, x*, ρ) = −‖x − x*‖⁴ − ρ([max(0, f(x) − f(x*))]³ + Σ_{i=1}^m [max(0, g_i(x) − g_i(x*))]³) < 0 = T(x*, x*, ρ).

From the above discussion we conclude that x* is a strict local maximizer of T(x, x*, ρ), which completes the proof.

Remark: Obviously, for all x ∈ N(x*, σ*), σ* > 0, we have min[0, max(f(x) − f(x*), g(x))] = 0, and hence

∇T(x, x*, ρ) = −4‖x − x*‖²(x − x*) − 3ρ(∇f(x)[max(0, f(x) − f(x*))]² + Σ_{i=1}^m ∇g_i(x)[max(0, g_i(x) − g_i(x*))]²),

∇²T(x, x*, ρ) = −4‖x − x*‖²E − 8(x − x*)(x − x*)^T − 3ρ[∇²f(x)[max(0, f(x) − f(x*))]² + 2∇f(x)∇f(x)^T max(0, f(x) − f(x*)) + Σ_{i=1}^m ∇²g_i(x)[max(0, g_i(x) − g_i(x*))]² + 2Σ_{i=1}^m ∇g_i(x)∇g_i(x)^T max(0, g_i(x) − g_i(x*))],

where E is the identity matrix. For any d ∈ D = {d ∈ R^n : ‖d‖ = 1}, if

0 < ρ < 4 / (3M_f L_f² + 3Σ_{i=1}^m M_{g_i} L_{g_i}²),

then

d^T ∇²T(x, x*, ρ) d = −4‖x − x*‖² − 8((x − x*)^T d)² − 3ρ[d^T∇²f(x)d [max(0, f(x) − f(x*))]² + 2(d^T∇f(x))² max(0, f(x) − f(x*)) + Σ_{i=1}^m d^T∇²g_i(x)d [max(0, g_i(x) − g_i(x*))]² + 2Σ_{i=1}^m (d^T∇g_i(x))² max(0, g_i(x) − g_i(x*))]
≤ −4‖x − x*‖² − 3ρ[d^T∇²f(x)d [max(0, f(x) − f(x*))]² + Σ_{i=1}^m d^T∇²g_i(x)d [max(0, g_i(x) − g_i(x*))]²]
≤ −4‖x − x*‖² + 3ρ[M_f L_f² + Σ_{i=1}^m M_{g_i} L_{g_i}²]‖x − x*‖² < 0,
where M_f and M_{g_i}, i = 1, ..., m, denote the maxima of ‖∇²f(x)‖ and ‖∇²g_i(x)‖ on N(x*, σ*), respectively. Therefore ∇²T(x, x*, ρ) is negative definite for x ≠ x*, x* is an isolated local maximizer of T(x, x*, ρ), and ∇T(x, x*, ρ) ≠ 0 for all x ∈ N(x*, σ*)\{x*}.

Theorem 2.1 shows that the proposed filled function satisfies property (P1). The next theorem shows that, restricted to S_1\{x*} or R^n\S (rather than considered on all of R^n), T(x, x*, ρ) is continuously differentiable with nonvanishing gradient.

Theorem 2.2 Assume that x* is a local minimizer of (P) and that x ∈ S_1\{x*} or x ∈ R^n\S. Then, for ρ > 0 small enough, ∇T(x, x*, ρ) ≠ 0.

Proof. Since x* is a local minimizer of (P), by Theorem 2.1 there exists a neighborhood N(x*, σ*) such that f(x) ≥ f(x*) and g(x) ≤ 0 for any x ∈ N(x*, σ*) ∩ S. Consider the following three situations.

(1) x ∈ N(x*, σ*). In this case, by the Remark above, ∇T(x, x*, ρ) ≠ 0.

(2) x ∈ S\N(x*, σ*) and f(x) ≥ f(x*). In this case we have ‖x − x*‖ ≥ σ* and

∇T(x, x*, ρ) = −4‖x − x*‖²(x − x*) − 3ρ(∇f(x)[max(0, f(x) − f(x*))]² + Σ_{i=1}^m ∇g_i(x)[max(0, g_i(x) − g_i(x*))]²).

Hence

(x − x*)^T ∇T(x, x*, ρ) = −4‖x − x*‖⁴ − 3ρ[(x − x*)^T ∇f(x)(f(x) − f(x*))² + Σ_{i=1}^m (x − x*)^T ∇g_i(x)[max(0, g_i(x) − g_i(x*))]²]
= −4‖x − x*‖⁴ − 3ρ[(x − x*)^T (∇f(x) − ∇f(x*))(f(x) − f(x*))² + (x − x*)^T ∇f(x*)(f(x) − f(x*))² + Σ_{i=1}^m (x − x*)^T (∇g_i(x) − ∇g_i(x*))[max(0, g_i(x) − g_i(x*))]² + Σ_{i=1}^m (x − x*)^T ∇g_i(x*)[max(0, g_i(x) − g_i(x*))]²]
≤ −4‖x − x*‖⁴ + 3ρ[L_{∇f} L_f² ‖x − x*‖⁴ + L_f² ‖∇f(x*)‖ ‖x − x*‖⁴/σ* + Σ_{i=1}^m L_{∇g_i} L_{g_i}² ‖x − x*‖⁴ + Σ_{i=1}^m L_{g_i}² ‖∇g_i(x*)‖ ‖x − x*‖⁴/σ*]
= ‖x − x*‖⁴(−4 + 3ρ[L_{∇f} L_f² + L_f² ‖∇f(x*)‖/σ* + Σ_{i=1}^m (L_{∇g_i} L_{g_i}² + L_{g_i}² ‖∇g_i(x*)‖/σ*)]) < 0,

provided ρ > 0 is small enough.

(3) x ∈ R^n\(N(x*, σ*) ∪ S). In this case, by distinguishing the situations f(x) ≥ f(x*) and f(x) < f(x*) and arguing as in case (2), we again obtain ∇T(x, x*, ρ) ≠ 0.

Therefore, for any x ∈ S_1\{x*} or x ∈ R^n\S, it holds that ∇T(x, x*, ρ) ≠ 0. Theorem 2.2 shows that the proposed filled function satisfies property (P2).
Theorem 2.3 Assume that x* is not a global minimizer of (P) and that cl(int S) = cl S. Then, for ρ > 0 small enough, there exists a point x_0' ∈ S_2 such that x_0' is a local minimizer of T(x, x*, ρ).

Proof. Since x* is a local but not global minimizer of (P), there exists another local minimizer x_1' of f(x) such that f(x_1') < f(x*) and g(x_1') ≤ 0. Consider the following two cases.

Case (1): S is a bounded closed set. In this case, by the assumption cl(int S) = cl S, there exists a point x_2' ∈ O(x_1', δ) ∩ int S, with δ > 0 sufficiently small, such that f(x_2') < f(x*) and g(x_2') < 0. These two inequalities imply

exp{[min(0, max(f(x_2') − f(x*), g_i(x_2'), i = 1, ..., m))]³} − 1 < 0

and

T(x_2', x*, ρ) = −‖x_2' − x*‖⁴ − ρ Σ_{i=1}^m [max(0, g_i(x_2') − g_i(x*))]³ + (1/ρ)(exp{[max(f(x_2') − f(x*), g_i(x_2'), i = 1, ..., m)]³} − 1).

On the other hand, for any x ∈ ∂S there exists at least one index i_0 ∈ {1, ..., m} such that g_{i_0}(x) = 0. Therefore

min(0, max(f(x) − f(x*), g_i(x), i = 1, ..., m)) = 0

and

T(x, x*, ρ) = −‖x − x*‖⁴ − ρ(Σ_{i=1}^m [max(0, g_i(x) − g_i(x*))]³ + [max(0, f(x) − f(x*))]³).

All the above relations imply that, for ρ > 0 small enough, T(x, x*, ρ) > T(x_2', x*, ρ) for any x ∈ ∂S. Clearly, S is a closed bounded set and S\∂S is an open bounded set, so there must exist a point x_0' ∈ S\∂S such that

min_{x ∈ S} T(x, x*, ρ) = T(x_0', x*, ρ) = min_{x ∈ S\∂S} T(x, x*, ρ),

where obviously x_0' ∈ int S and f(x_0') < f(x*).

Case (2): S is an unbounded closed set.
In this case, by Assumption 2, L_{x_1'} = {x ∈ L(P) : f(x) = f(x_1')} is a closed bounded set. Denote B(θ, 1) = {x ∈ R^n : ‖x‖ ≤ 1} and let ε_0 > 0 be small enough; then L_{x_1'}^{ε_0} = L_{x_1'} + ε_0 B(θ, 1) is also a closed bounded set. By the condition cl(int S) = cl S, there exists a point x_2' ∈ L_{x_1'}^{ε_0} ∩ int S such that

f(x_1') ≤ f(x_2') < f(x*),  g(x_2') < 0.

These inequalities imply

T(x_2', x*, ρ) = −‖x_2' − x*‖⁴ − ρ Σ_{i=1}^m [max(0, g_i(x_2') − g_i(x*))]³ + (1/ρ)(exp{[max(f(x_2') − f(x*), g_i(x_2'), i = 1, ..., m)]³} − 1) < 0.

Therefore we must have

min_{x ∈ L_{x_1'}^{ε_0}} T(x, x*, ρ) = T(x_0', x*, ρ) ≤ T(x_2', x*, ρ) < 0.

Next we show that neither x_0' ∈ L_{x_1'}^{ε_0} ∩ ∂S nor x_0' ∈ L_{x_1'}^{ε_0} ∩ (R^n\S) can occur. By contradiction, if x_0' ∈ L_{x_1'}^{ε_0} ∩ ∂S or x_0' ∈ L_{x_1'}^{ε_0} ∩ (R^n\S), then there exists an index i_0 ∈ {1, ..., m} such that g_{i_0}(x_0') = 0 or g_{i_0}(x_0') > 0, which means

min(0, max(f(x_0') − f(x*), g_i(x_0'), i = 1, ..., m)) = 0

and

T(x_0', x*, ρ) = −‖x_0' − x*‖⁴ − ρ(Σ_{i=1}^m [max(0, g_i(x_0') − g_i(x*))]³ + [max(0, f(x_0') − f(x*))]³).

Since x_0' ∈ L_{x_1'}^{ε_0}, the quantities ‖x_0' − x*‖, max(0, g_i(x_0') − g_i(x*)) and max(0, f(x_0') − f(x*)) are bounded. So, if ρ > 0 is chosen small enough, we have T(x_0', x*, ρ) > T(x_2', x*, ρ), which contradicts T(x_0', x*, ρ) ≤ T(x_2', x*, ρ). Therefore x_0' ∈ L_{x_1'}^{ε_0} ∩ int S and f(x_0') < f(x*).

Theorem 2.3 shows that the proposed filled function satisfies property (P3); Theorems 2.1-2.3 together show that T(x, x*, ρ) satisfies properties (P1)-(P3).

Theorem 2.4 If x* is a global minimizer of (P), then T(x, x*, ρ) < 0 for all x ∈ S\{x*}.

Proof. Since x* is a global minimizer of (P), we have f(x) ≥ f(x*) for all x ∈ S\{x*}. Thus, by the computation in the proof of Theorem 2.1, T(x, x*, ρ) < 0 for all x ∈ S\{x*}.
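To make the escape property of Theorem 2.3 tangible, the following self-contained sketch runs the idea on a hypothetical one-dimensional instance (our own toy example, not from the paper): f(x) = x⁴ − x² + 0.2x on S = {x : x² − 4 ≤ 0} has a non-global local minimizer near x ≈ 0.65, and walking downhill on T from a point beside it lands at a feasible point with a strictly smaller objective value, i.e. a point of S_2. The crude grid search and fixed step are ad hoc stand-ins for a real local solver.

```python
import math

# Hypothetical 1-D instance: non-global local minimizer near 0.65,
# global minimizer near -0.75, feasible set S = [-2, 2].
f  = lambda x: x ** 4 - x ** 2 + 0.2 * x
g1 = lambda x: x ** 2 - 4.0

# Crude stand-in for Phase I: grid search for the local minimizer on [0, 1].
x_star = min((i * 1e-3 for i in range(1001)), key=f)
f_star, g_star = f(x_star), g1(x_star)

rho = 0.1
def T(x):
    """Filled function of Eq. (2), specialized to one variable / one constraint."""
    pen  = max(0.0, f(x) - f_star) ** 3 + max(0.0, g1(x) - g_star) ** 3
    fill = (math.exp(min(0.0, max(f(x) - f_star, g1(x))) ** 3) - 1.0) / rho
    return -(x - x_star) ** 4 - rho * pen + fill

# Walk downhill on T from a point beside x*; stop as soon as we reach a
# feasible point with f below f(x*) -- the point of S_2 promised by Thm 2.3.
x, h = x_star - 0.1, 0.01
while f(x) >= f_star and g1(x) <= 0.0:
    x = x - h if T(x - h) < T(x + h) else x + h
```

At termination x lies near −0.26: still feasible, past the barrier separating the two basins, and with f(x) < f(x*), so Phase I can be restarted from it.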
3. Solution algorithm and preliminary numerical implementation

The theoretical properties of the proposed filled function T(x, x*, ρ) were discussed in the last section. In this section we suggest an algorithm and perform preliminary numerical experiments to give an initial impression of its efficiency.

3.1. Algorithm

Initialization Step
1. Choose a tolerance ε > 0.
2. Choose a reduction fraction ρ̂ ∈ (0, 1).
3. Choose a lower bound ρ_L > 0 for ρ.
4. Select an initial point x_1 ∈ S.
5. Set k := 1.

Main Step
1. Starting from the initial point x_1 ∈ S, minimize f(x) to obtain the first local minimizer x_1* of f(x).
2. Choose a set of initial points {x_{k+1}^i : i = 1, 2, ..., m} such that x_{k+1}^i ∈ S\N(x_k*, σ_k) for some σ_k > 0.
3. Choose an initial value for ρ.
4. Set i := 1.
5. If i ≤ m, set x := x_{k+1}^i and go to Step 6. Otherwise, the algorithm is incapable of finding a better minimizer starting from the chosen initial points; it stops, and x_k* is taken as a global minimizer.
6. If f(x) < f(x_k*) and x ∈ S, use x as an initial point of a local minimization method to find x_{k+1}* with f(x_{k+1}*) < f(x_k*); set k := k + 1 and go to Step 2. Otherwise, go to Step 7.
7. If ‖∇T(x, x_k*, ρ)‖ ≥ nε, go to Inner Circular Step 1⁰; otherwise, go to Step 8.
8. Reduce ρ by setting ρ := ρ̂ρ.
(a) If ρ ≥ ρ_L, go to Step 4.
(b) Otherwise, go to Step 5.

Inner Circular Step
1⁰ Set x_{k'} := x, g_{k'} := ∇T(x_{k'}, x_k*, ρ), k' := 1.
2⁰ Let d_{k'} := −g_{k'}.
3⁰ Obtain the step size α_{k'} and let x_{k'+1} := x_{k'} + α_{k'}d_{k'}. If x_{k'+1} reaches the boundary of S during the minimization, set i := i + 1 and go to Main Step 5; otherwise, go to 4⁰.
4⁰ If f(x_{k'+1}) < f(x_k*), set x := x_{k'+1} and go to Main Step 6; otherwise, go to 5⁰.
5⁰ Let g_{k'+1} := ∇T(x_{k'+1}, x_k*, ρ) and go to 6⁰.
6⁰ If k' > n, set x := x_{k'+1} and go to 1⁰; otherwise, go to 7⁰.
7⁰ Let d_{k'+1} := −g_{k'+1} + β_{k'}d_{k'}, where β_{k'} = g_{k'+1}^T(g_{k'+1} − g_{k'}) / (g_{k'}^T g_{k'}). Set k' := k' + 1 and go to 3⁰.

The motivation and mechanism behind the algorithm are explained below.
A set of m = 4n + 4 initial points is chosen in Step 2 to minimize the filled function. The initial points are placed symmetrically about the current local minimizer. For example, when n = 2, the initial
points are:

x_k* ± σ(1, 0), x_k* ± σ(0, 1), x_k* ± σ(1, 1),
x_k* ± 0.5σ(1, 0), x_k* ± 0.5σ(0, 1), x_k* ± 0.5σ(1, 1),

where σ > 0 is a pre-selected step size.

Step 6 represents a transition from minimizing the filled function T(x, x*, ρ) to a local search on the original objective f(x) once a point x with f(x) < f(x_k*) is located. By Theorem 2.3, such a point x must lie in the set S_2, so we can use it as the initial point to minimize f(x) and obtain a better local minimizer.

Step 3⁰ finds a new x along the direction d_{k'} such that T(x, x*, ρ) decreases to a certain extent; the new x is then used to continue minimizing the filled function. In Step 3⁰ we use the fixed step size α_{k'} = 0.1.

Recall from Theorem 2.3 that ρ should be chosen small enough; otherwise T(x, x*, ρ) may have no better minimizer. Thus ρ is decreased successively in Step 8 whenever minimizing the filled function yields no better solution. If all the initial points have been tried, ρ has reached its lower bound ρ_L, and no better solution has been found, the current local minimizer is taken as a global minimizer.

In the Inner Circular Step we use the PRP (Polak-Ribiere-Polyak) conjugate gradient method to generate search directions; other local methods could be used instead.

Remark: Note that in the above algorithm the minimization of T(x, x*, ρ) is carried out only on the set S_1 or R^n\S, on which T(x, x*, ρ) is differentiable. Once a point x belonging to neither S_1 nor R^n\S is obtained, the minimization of T(x, x*, ρ) is terminated immediately and the algorithm returns to the minimization of f(x).
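The PRP direction update of the Inner Circular Step reduces to two small routines. The sketch below is a generic illustration in Python with finite-difference gradients (our own assumption for self-containedness; the algorithm itself evaluates ∇T directly):

```python
def num_grad(h, x, eps=1e-6):
    """Central-difference gradient of h at x (stand-in for an analytic gradient)."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += eps
        xm[i] -= eps
        g.append((h(xp) - h(xm)) / (2.0 * eps))
    return g

def prp_direction(g_new, g_old, d_old):
    """d_{k'+1} = -g_{k'+1} + beta_{k'} d_{k'}, with
    beta_{k'} = g_{k'+1}^T (g_{k'+1} - g_{k'}) / (g_{k'}^T g_{k'})."""
    beta = sum(gn * (gn - go) for gn, go in zip(g_new, g_old)) \
         / sum(go * go for go in g_old)
    return [-gn + beta * do for gn, do in zip(g_new, d_old)]
```

With the fixed step size α = 0.1 of Step 3⁰, two such iterations on the quadratic h(x) = x_1² + 4x_2² starting from (1, 1) already reduce h monotonically.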
Hence we only require T(x, x*, ρ) to be differentiable on the set S_1 or R^n\S, rather than on the whole of R^n.

3.2. Preliminary numerical experiments

Although the focus of this paper is more theoretical than computational, we tested the algorithm on several global minimization problems to get an initial feeling for its potential practical value. All numerical experiments were implemented in Fortran 95 under Windows XP on a Pentium 4 CPU at 2.80 GHz. The Fortran 95 routine NCONF from the IMSL module is used in the algorithm to find local minimizers of (P).

Problem 1.
min f(x) = x_1² + x_2² − cos(17x_1) − cos(17x_2) + 3
s.t. g_1(x) = (x_1 − 2)² + x_2² − … ≤ 0
g_2(x) = x_1² + (x_2 − 3)² − … ≤ 0
The global minimum solution: x* = ( , ), f* = .

Problem 2.
min f(x) = (4 − 2.1x_1² + x_1⁴/3)x_1² + x_1x_2 + (−4 + 4x_2²)x_2²
s.t. g(x) = −sin(4πx_1) + 2sin²(2πx_2) ≤ 0
−1 ≤ x_1, x_2 ≤ 1
The global minimum solution: x* = ( , ), f* = .

Problem 3.
min f(x) = −25(x_1 − 2)² − (x_2 − 2)² − (x_3 − 1)² − (x_4 − 4)² − (x_5 − 1)² − (x_6 − 4)²
s.t. g_1(x) = 4 − (x_3 − 3)² − x_4 ≤ 0
g_2(x) = 4 − (x_5 − 3)² − x_6 ≤ 0
g_3(x) = x_1 − 3x_2 − 2 ≤ 0
g_4(x) = −x_1 + x_2 − 2 ≤ 0
g_5(x) = x_1 + x_2 − 6 ≤ 0
g_6(x) = 2 − x_1 − x_2 ≤ 0
0 ≤ x_1, x_2; 1 ≤ x_3, x_5 ≤ 5; 0 ≤ x_4 ≤ 6; 0 ≤ x_6 ≤ 10
The global minimum solution: x* = ( , , , , , ), f* = .

The main iterative results are summarized in a table for each problem. The symbols used are as follows:
x_k^0: the k-th initial point;
k: the iteration number in finding the k-th local minimizer;
x_k*: the k-th local minimizer;
f(x_k*): the objective function value at the k-th local minimizer.

Table 1. Problem 1
k    x_k^0       x_k*     f(x_k*)
1    (2, 1.5)    ( , )
2    ( , )       ( , )
3    ( , )       ( , )
Table 2. Problem 2
k    x_k^0          x_k*     f(x_k*)
1    (0.5, −0.9)    ( , )
2    ( , )          ( , )
3    ( , )          ( , )

Table 3. Problem 3
k    x_k^0             x_k*              f(x_k*)
1    ( , , , , , )     ( , , , , , )
2    ( , , , , , )     ( , , , , , )
3    ( , , , , , )     ( , , , , , )

4. Conclusions

In this paper, we first gave a definition of a filled function for the constrained minimization problem and constructed a new filled function with one parameter. We then designed a solution algorithm based on this filled function. Finally, we performed numerical experiments to give an initial impression of the potential of this filled function method. Of course, the efficiency of the method relies on the efficiency of the underlying local minimization method for constrained optimization. The preliminary numerical results show that the algorithm can move successively from one local minimizer to a better one; in most cases, however, more time is spent verifying that the current point is a global minimizer than finding it. Recently, global optimality conditions have been derived in [12] for quadratic binary integer programming problems, but global optimality conditions for continuous variables remain, in general, an open problem. A usable criterion for global minimizers would provide a solid stopping condition for continuous filled function methods.

References

[1] Ge, R.P., A Filled Function Method for Finding a Global Minimizer of a Function of Several Variables, Mathematical Programming, Vol. 46.

[2] Ge, R.P. and Qin, Y.F., A Class of Filled Functions for Finding Global Minimizers of a Function of Several Variables, Journal of Optimization Theory and Applications, Vol. 54.

[3] Horst, R., Pardalos, P.M. and Thoai, N.V., Introduction to Global Optimization, Kluwer Academic Publishers, Dordrecht.

[4] Horst, R.
and Tuy, H., Global Optimization: Deterministic Approaches, Second Edition, Springer, Heidelberg.

[5] Kan, A.H.G.R. and Timmer, G.T., A Stochastic Approach to Global Optimization, Numerical Optimization, edited by P.T. Boggs, R.H. Byrd and R.B. Schnabel, SIAM, Philadelphia, Pennsylvania.
[6] Pardalos, P.M., Romeijn, H.E. and Tuy, H., Recent Developments and Trends in Global Optimization, Journal of Computational and Applied Mathematics, Vol. 124.

[7] Xu, Z., Huang, H.X., Pardalos, P.M. and Xu, C.X., Filled Functions for Unconstrained Global Optimization, Journal of Global Optimization, Vol. 20.

[8] Yang, Y.J. and Shang, Y.L., A New Filled Function Method for Unconstrained Global Optimization, Applied Mathematics and Computation, Vol. 173.

[9] Wang, X.L. and Zhou, G.B., A New Filled Function for Unconstrained Global Optimization, Applied Mathematics and Computation, Vol. 174.

[10] Yao, Y., Dynamic Tunneling Algorithm for Global Optimization, IEEE Transactions on Systems, Man and Cybernetics, Vol. 19.

[11] Zhang, L.S., Ng, C.K., Li, D. and Tian, W.W., A New Filled Function Method for Global Optimization, Journal of Global Optimization, Vol. 28.

[12] Zheng, Q. and Zhuang, D., Integral Global Minimization: Algorithms, Implementations and Numerical Tests, Journal of Global Optimization, Vol. 7.
More informationA new ane scaling interior point algorithm for nonlinear optimization subject to linear equality and inequality constraints
Journal of Computational and Applied Mathematics 161 (003) 1 5 www.elsevier.com/locate/cam A new ane scaling interior point algorithm for nonlinear optimization subject to linear equality and inequality
More informationOptimization Tutorial 1. Basic Gradient Descent
E0 270 Machine Learning Jan 16, 2015 Optimization Tutorial 1 Basic Gradient Descent Lecture by Harikrishna Narasimhan Note: This tutorial shall assume background in elementary calculus and linear algebra.
More informationStopping Rules for Box-Constrained Stochastic Global Optimization
Stopping Rules for Box-Constrained Stochastic Global Optimization I. E. Lagaris and I. G. Tsoulos Department of Computer Science, University of Ioannina P.O.Box 1186, Ioannina 45110 GREECE Abstract We
More informationMATH 4211/6211 Optimization Basics of Optimization Problems
MATH 4211/6211 Optimization Basics of Optimization Problems Xiaojing Ye Department of Mathematics & Statistics Georgia State University Xiaojing Ye, Math & Stat, Georgia State University 0 A standard minimization
More informationOn the simplest expression of the perturbed Moore Penrose metric generalized inverse
Annals of the University of Bucharest (mathematical series) 4 (LXII) (2013), 433 446 On the simplest expression of the perturbed Moore Penrose metric generalized inverse Jianbing Cao and Yifeng Xue Communicated
More informationResearch Article An Auxiliary Function Method for Global Minimization in Integer Programming
Mathematical Problems in Engineering Volume 2011, Article ID 402437, 13 pages doi:10.1155/2011/402437 Research Article An Auxiliary Function Method for Global Minimization in Integer Programming Hongwei
More informationMATH 5720: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 2018
MATH 57: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 18 1 Global and Local Optima Let a function f : S R be defined on a set S R n Definition 1 (minimizers and maximizers) (i) x S
More informationA class of Smoothing Method for Linear Second-Order Cone Programming
Columbia International Publishing Journal of Advanced Computing (13) 1: 9-4 doi:1776/jac1313 Research Article A class of Smoothing Method for Linear Second-Order Cone Programming Zhuqing Gui *, Zhibin
More informationON GAP FUNCTIONS OF VARIATIONAL INEQUALITY IN A BANACH SPACE. Sangho Kum and Gue Myung Lee. 1. Introduction
J. Korean Math. Soc. 38 (2001), No. 3, pp. 683 695 ON GAP FUNCTIONS OF VARIATIONAL INEQUALITY IN A BANACH SPACE Sangho Kum and Gue Myung Lee Abstract. In this paper we are concerned with theoretical properties
More information3 Guide function approach
CSMA 2017 13ème Colloque National en Calcul des Structures 15-19 Mai 2017, Presqu île de Giens (Var) Continuous Global Optimization Using A Guide Function Approach H. Gao 1,2, P. Breitkopf 2, M. Xiao 3
More informationCubic regularization of Newton s method for convex problems with constraints
CORE DISCUSSION PAPER 006/39 Cubic regularization of Newton s method for convex problems with constraints Yu. Nesterov March 31, 006 Abstract In this paper we derive efficiency estimates of the regularized
More informationLecture Note 1: Introduction to optimization. Xiaoqun Zhang Shanghai Jiao Tong University
Lecture Note 1: Introduction to optimization Xiaoqun Zhang Shanghai Jiao Tong University Last updated: September 23, 2017 1.1 Introduction 1. Optimization is an important tool in daily life, business and
More informationA Continuation Approach Using NCP Function for Solving Max-Cut Problem
A Continuation Approach Using NCP Function for Solving Max-Cut Problem Xu Fengmin Xu Chengxian Ren Jiuquan Abstract A continuous approach using NCP function for approximating the solution of the max-cut
More informationA NEW ITERATIVE METHOD FOR THE SPLIT COMMON FIXED POINT PROBLEM IN HILBERT SPACES. Fenghui Wang
A NEW ITERATIVE METHOD FOR THE SPLIT COMMON FIXED POINT PROBLEM IN HILBERT SPACES Fenghui Wang Department of Mathematics, Luoyang Normal University, Luoyang 470, P.R. China E-mail: wfenghui@63.com ABSTRACT.
More informationAN AUGMENTED LAGRANGIAN AFFINE SCALING METHOD FOR NONLINEAR PROGRAMMING
AN AUGMENTED LAGRANGIAN AFFINE SCALING METHOD FOR NONLINEAR PROGRAMMING XIAO WANG AND HONGCHAO ZHANG Abstract. In this paper, we propose an Augmented Lagrangian Affine Scaling (ALAS) algorithm for general
More informationConvex Optimization. Newton s method. ENSAE: Optimisation 1/44
Convex Optimization Newton s method ENSAE: Optimisation 1/44 Unconstrained minimization minimize f(x) f convex, twice continuously differentiable (hence dom f open) we assume optimal value p = inf x f(x)
More informationOn Penalty and Gap Function Methods for Bilevel Equilibrium Problems
On Penalty and Gap Function Methods for Bilevel Equilibrium Problems Bui Van Dinh 1 and Le Dung Muu 2 1 Faculty of Information Technology, Le Quy Don Technical University, Hanoi, Vietnam 2 Institute of
More informationarxiv: v1 [math.oc] 1 Jul 2016
Convergence Rate of Frank-Wolfe for Non-Convex Objectives Simon Lacoste-Julien INRIA - SIERRA team ENS, Paris June 8, 016 Abstract arxiv:1607.00345v1 [math.oc] 1 Jul 016 We give a simple proof that the
More informationNumerical Optimization
Unconstrained Optimization Computer Science and Automation Indian Institute of Science Bangalore 560 01, India. NPTEL Course on Unconstrained Minimization Let f : R n R. Consider the optimization problem:
More informationStability of efficient solutions for semi-infinite vector optimization problems
Stability of efficient solutions for semi-infinite vector optimization problems Z. Y. Peng, J. T. Zhou February 6, 2016 Abstract This paper is devoted to the study of the stability of efficient solutions
More informationConstrained Optimization and Lagrangian Duality
CIS 520: Machine Learning Oct 02, 2017 Constrained Optimization and Lagrangian Duality Lecturer: Shivani Agarwal Disclaimer: These notes are designed to be a supplement to the lecture. They may or may
More informationNew Class of duality models in discrete minmax fractional programming based on second-order univexities
STATISTICS, OPTIMIZATION AND INFORMATION COMPUTING Stat., Optim. Inf. Comput., Vol. 5, September 017, pp 6 77. Published online in International Academic Press www.iapress.org) New Class of duality models
More informationIterative common solutions of fixed point and variational inequality problems
Available online at www.tjnsa.com J. Nonlinear Sci. Appl. 9 (2016), 1882 1890 Research Article Iterative common solutions of fixed point and variational inequality problems Yunpeng Zhang a, Qing Yuan b,
More informationViscosity approximation method for m-accretive mapping and variational inequality in Banach space
An. Şt. Univ. Ovidius Constanţa Vol. 17(1), 2009, 91 104 Viscosity approximation method for m-accretive mapping and variational inequality in Banach space Zhenhua He 1, Deifei Zhang 1, Feng Gu 2 Abstract
More informationTwo-parameter regularization method for determining the heat source
Global Journal of Pure and Applied Mathematics. ISSN 0973-1768 Volume 13, Number 8 (017), pp. 3937-3950 Research India Publications http://www.ripublication.com Two-parameter regularization method for
More informationA Derivative-Free Algorithm for Constrained Global Optimization Based on Exact Penalty Functions
J Optim Theory Appl (2015) 164:862 882 DOI 10.1007/s10957-013-0487-1 A Derivative-Free Algorithm for Constrained Global Optimization Based on Exact Penalty Functions Gianni Di Pillo Stefano Lucidi Francesco
More informationSelected Topics in Optimization. Some slides borrowed from
Selected Topics in Optimization Some slides borrowed from http://www.stat.cmu.edu/~ryantibs/convexopt/ Overview Optimization problems are almost everywhere in statistics and machine learning. Input Model
More informationA DIMENSION REDUCING CONIC METHOD FOR UNCONSTRAINED OPTIMIZATION
1 A DIMENSION REDUCING CONIC METHOD FOR UNCONSTRAINED OPTIMIZATION G E MANOUSSAKIS, T N GRAPSA and C A BOTSARIS Department of Mathematics, University of Patras, GR 26110 Patras, Greece e-mail :gemini@mathupatrasgr,
More informationFIRST- AND SECOND-ORDER OPTIMALITY CONDITIONS FOR MATHEMATICAL PROGRAMS WITH VANISHING CONSTRAINTS 1. Tim Hoheisel and Christian Kanzow
FIRST- AND SECOND-ORDER OPTIMALITY CONDITIONS FOR MATHEMATICAL PROGRAMS WITH VANISHING CONSTRAINTS 1 Tim Hoheisel and Christian Kanzow Dedicated to Jiří Outrata on the occasion of his 60th birthday Preprint
More informationUnconstrained optimization
Chapter 4 Unconstrained optimization An unconstrained optimization problem takes the form min x Rnf(x) (4.1) for a target functional (also called objective function) f : R n R. In this chapter and throughout
More informationGlobal optimization on Stiefel manifolds some particular problem instances
6 th International Conference on Applied Informatics Eger, Hungary, January 27 31, 2004. Global optimization on Stiefel manifolds some particular problem instances János Balogh Department of Computer Science,
More informationOn a Polynomial Fractional Formulation for Independence Number of a Graph
On a Polynomial Fractional Formulation for Independence Number of a Graph Balabhaskar Balasundaram and Sergiy Butenko Department of Industrial Engineering, Texas A&M University, College Station, Texas
More informationComplexity of gradient descent for multiobjective optimization
Complexity of gradient descent for multiobjective optimization J. Fliege A. I. F. Vaz L. N. Vicente July 18, 2018 Abstract A number of first-order methods have been proposed for smooth multiobjective optimization
More informationApplied Lagrange Duality for Constrained Optimization
Applied Lagrange Duality for Constrained Optimization February 12, 2002 Overview The Practical Importance of Duality ffl Review of Convexity ffl A Separating Hyperplane Theorem ffl Definition of the Dual
More informationBounded uniformly continuous functions
Bounded uniformly continuous functions Objectives. To study the basic properties of the C -algebra of the bounded uniformly continuous functions on some metric space. Requirements. Basic concepts of analysis:
More informationSECOND-ORDER CHARACTERIZATIONS OF CONVEX AND PSEUDOCONVEX FUNCTIONS
Journal of Applied Analysis Vol. 9, No. 2 (2003), pp. 261 273 SECOND-ORDER CHARACTERIZATIONS OF CONVEX AND PSEUDOCONVEX FUNCTIONS I. GINCHEV and V. I. IVANOV Received June 16, 2002 and, in revised form,
More informationAn Alternative Proof of Primitivity of Indecomposable Nonnegative Matrices with a Positive Trace
An Alternative Proof of Primitivity of Indecomposable Nonnegative Matrices with a Positive Trace Takao Fujimoto Abstract. This research memorandum is aimed at presenting an alternative proof to a well
More informationA projection-type method for generalized variational inequalities with dual solutions
Available online at www.isr-publications.com/jnsa J. Nonlinear Sci. Appl., 10 (2017), 4812 4821 Research Article Journal Homepage: www.tjnsa.com - www.isr-publications.com/jnsa A projection-type method
More information4y Springer NONLINEAR INTEGER PROGRAMMING
NONLINEAR INTEGER PROGRAMMING DUAN LI Department of Systems Engineering and Engineering Management The Chinese University of Hong Kong Shatin, N. T. Hong Kong XIAOLING SUN Department of Mathematics Shanghai
More informationSubdifferential representation of convex functions: refinements and applications
Subdifferential representation of convex functions: refinements and applications Joël Benoist & Aris Daniilidis Abstract Every lower semicontinuous convex function can be represented through its subdifferential
More informationContinuity. Matt Rosenzweig
Continuity Matt Rosenzweig Contents 1 Continuity 1 1.1 Rudin Chapter 4 Exercises........................................ 1 1.1.1 Exercise 1............................................. 1 1.1.2 Exercise
More informationA Novel Inexact Smoothing Method for Second-Order Cone Complementarity Problems
A Novel Inexact Smoothing Method for Second-Order Cone Complementarity Problems Xiaoni Chi Guilin University of Electronic Technology School of Math & Comput Science Guilin Guangxi 541004 CHINA chixiaoni@126.com
More informationAnalysis II - few selective results
Analysis II - few selective results Michael Ruzhansky December 15, 2008 1 Analysis on the real line 1.1 Chapter: Functions continuous on a closed interval 1.1.1 Intermediate Value Theorem (IVT) Theorem
More informationOptimization and Optimal Control in Banach Spaces
Optimization and Optimal Control in Banach Spaces Bernhard Schmitzer October 19, 2017 1 Convex non-smooth optimization with proximal operators Remark 1.1 (Motivation). Convex optimization: easier to solve,
More informationApplications of Linear Programming
Applications of Linear Programming lecturer: András London University of Szeged Institute of Informatics Department of Computational Optimization Lecture 9 Non-linear programming In case of LP, the goal
More informationA proximal-like algorithm for a class of nonconvex programming
Pacific Journal of Optimization, vol. 4, pp. 319-333, 2008 A proximal-like algorithm for a class of nonconvex programming Jein-Shan Chen 1 Department of Mathematics National Taiwan Normal University Taipei,
More informationminimize x subject to (x 2)(x 4) u,
Math 6366/6367: Optimization and Variational Methods Sample Preliminary Exam Questions 1. Suppose that f : [, L] R is a C 2 -function with f () on (, L) and that you have explicit formulae for
More informationA Solution Method for Semidefinite Variational Inequality with Coupled Constraints
Communications in Mathematics and Applications Volume 4 (2013), Number 1, pp. 39 48 RGN Publications http://www.rgnpublications.com A Solution Method for Semidefinite Variational Inequality with Coupled
More informationComplexity Analysis of Interior Point Algorithms for Non-Lipschitz and Nonconvex Minimization
Mathematical Programming manuscript No. (will be inserted by the editor) Complexity Analysis of Interior Point Algorithms for Non-Lipschitz and Nonconvex Minimization Wei Bian Xiaojun Chen Yinyu Ye July
More informationAn Efficient Modification of Nonlinear Conjugate Gradient Method
Malaysian Journal of Mathematical Sciences 10(S) March : 167-178 (2016) Special Issue: he 10th IM-G International Conference on Mathematics, Statistics and its Applications 2014 (ICMSA 2014) MALAYSIAN
More information12. Interior-point methods
12. Interior-point methods Convex Optimization Boyd & Vandenberghe inequality constrained minimization logarithmic barrier function and central path barrier method feasibility and phase I methods complexity
More informationi=1 α i. Given an m-times continuously
1 Fundamentals 1.1 Classification and characteristics Let Ω R d, d N, d 2, be an open set and α = (α 1,, α d ) T N d 0, N 0 := N {0}, a multiindex with α := d i=1 α i. Given an m-times continuously differentiable
More informationResearch Article Existence and Duality of Generalized ε-vector Equilibrium Problems
Applied Mathematics Volume 2012, Article ID 674512, 13 pages doi:10.1155/2012/674512 Research Article Existence and Duality of Generalized ε-vector Equilibrium Problems Hong-Yong Fu, Bin Dan, and Xiang-Yu
More informationNew Inexact Line Search Method for Unconstrained Optimization 1,2
journal of optimization theory and applications: Vol. 127, No. 2, pp. 425 446, November 2005 ( 2005) DOI: 10.1007/s10957-005-6553-6 New Inexact Line Search Method for Unconstrained Optimization 1,2 Z.
More informationThe Generalized Viscosity Implicit Rules of Asymptotically Nonexpansive Mappings in Hilbert Spaces
Applied Mathematical Sciences, Vol. 11, 2017, no. 12, 549-560 HIKARI Ltd, www.m-hikari.com https://doi.org/10.12988/ams.2017.718 The Generalized Viscosity Implicit Rules of Asymptotically Nonexpansive
More informationLCPI method to find optimal solutions of nonlinear programming problems
ISSN 1 746-7233, Engl, UK World Journal of Modelling Simulation Vol. 14 2018) No. 1, pp. 50-57 LCPI method to find optimal solutions of nonlinear programming problems M.H. Noori Skari, M. Ghaznavi * Faculty
More information