On Stability of Fuzzy Multi-Objective Constrained Optimization Problem Using a Trust-Region Algorithm


Int. Journal of Math. Analysis, Vol. 6, 2012, no. 28, 1367 - 1382

On Stability of Fuzzy Multi-Objective Constrained Optimization Problem Using a Trust-Region Algorithm

Bothina El-Sobky
Department of Mathematics, Faculty of Science, Alexandria University, Alexandria, Egypt
bothinaelsobky@yahoo.com

Yusria Abo-Elnaga
Department of Basic Science, Higher Technological Institute, Tenth of Ramadan City, Egypt

Abstract

In this paper, a fuzzy multi-objective constrained optimization problem (FMCOP) is converted to a single-objective constrained optimization problem with equality and inequality constraints (SCOP) by using the $\alpha$-level set of the fuzzy vector and a weighting approach. A trust-region algorithm for solving problem (SCOP) is introduced to obtain $\alpha$-pareto optimal solutions of problem (FMCOP). An $\varepsilon$-$\alpha$ stability set for problem (FMCOP), which represents a range of $\alpha$-pareto optimal solutions, is obtained. A numerical example is given at the end to clarify the proposed approach.

Keywords: Multi-objective, Fuzzy environment, Single-objective, Trust-region, Stability, Active set

1 Introduction

A fuzzy multi-objective optimization problem is

$$\begin{aligned} \text{minimize} \quad & f(x, \tilde{p}) = (f_1(x, \tilde{p}_1), \ldots, f_m(x, \tilde{p}_m))^T \\ \text{subject to} \quad & c(x) \le 0, \end{aligned} \tag{1.1}$$

where $x \in \Omega = \{x \in R^n : c(x) \le 0\}$ and $\tilde{p} = (\tilde{p}_1, \ldots, \tilde{p}_m)^T$ is a vector of fuzzy parameters, each of which can be characterized as a fuzzy number; see Sakawa [15]. The functions $f_i(x, \tilde{p}_i)$, $i = 1, 2, \ldots, m$, and $c(x) \in R^{m_e}$ are twice continuously differentiable.

In this paper, we convert the above problem to a single-objective constrained optimization problem with equality and inequality constraints (SCOP) by using the $\alpha$-level set of the fuzzy vector and a weighting approach. To obtain $\alpha$-pareto optimal solutions of problem (FMCOP), we solve the single-objective constrained optimization problem by a trust-region algorithm. Trust-region algorithms have proved to be robust techniques for solving unconstrained and constrained single-objective optimization problems; see [3], [5], [6], [7], [8], [9], [13], and [19].

In earlier work, Orlovski [12] formulated the multi-objective nonlinear programming problem with fuzzy parameters. Many authors, such as [10], [11], [14], [15], [16], [17], [18], studied the stability of fuzzy multi-objective optimization problems. In many applications of the fuzzy multi-objective optimization problem, however, the decision maker needs not only a single $\alpha$-pareto optimal solution but a whole set of $\alpha$-pareto optimal solutions, which we call the $\varepsilon$-$\alpha$ stability set. Our approach allows the decision maker to control the $\alpha$-pareto optimal set by choosing an appropriate value of $\varepsilon$ according to his needs. In this paper, the $\varepsilon$-$\alpha$ stability set for problem (FMCOP) is obtained; it is the set of all $\alpha$-pareto optimal solutions of problem (FMCOP) that lie within a prescribed range of a given $\alpha$-pareto optimal solution.

In the next section, some basic fuzzy concepts and the conversion of problem (FMCOP) to problem (SCOP) are discussed. In Section 3, the trust-region algorithm for solving the single-objective optimization problem is introduced to obtain $\alpha$-pareto optimal solutions of problem (FMCOP). In Section 4, the $\varepsilon$-$\alpha$ stability set is introduced.

In Section 5, a numerical example for the multi-objective optimization problem is given to clarify the proposed approach. Finally, Section 6 contains concluding remarks.

The following notations are used throughout the rest of the paper. Subscripted functions denote function values at particular points; for example, $f_k = f(x_k)$, $g_k = g(x_k)$, $h_k = h(x_k)$, $\nabla f_k = \nabla_x f(x_k)$, $\nabla g_k = \nabla_x g(x_k)$, $\nabla h_k = \nabla_x h(x_k)$, $l_k = l(x_k, \lambda_k, \nu_k)$, $\nabla l_k = \nabla_x l(x_k, \lambda_k, \nu_k)$, $U_k = U(x_k)$, and so on. However, the arguments of the functions are not abbreviated when emphasizing the dependence of the functions on their arguments. The $i$th component of a vector $\nu_k$ is denoted by $(\nu_k)_i$. Finally, all norms used in this paper are $l_2$ norms.

2 Theoretical Fuzzy Foundations

Fuzzy set theory was developed for solving problems in which descriptions of activities and observations are imprecise, vague, and uncertain. The term "fuzzy" refers to the situation in which there are no well-defined boundaries of the set of activities or observations to which the descriptions apply. A fuzzy set is a class of objects with membership grades. A membership function, which assigns to each object a grade of membership, is associated with each fuzzy set. Usually the membership grades lie in $[0, 1]$. When the grade of membership of an object in a set is one, the object is absolutely in that set; when the grade is zero, the object is absolutely not in that set. Borderline cases are assigned numbers between zero and one.

A fuzzy number is defined differently by many authors, such as [14], [18]. The most frequently used definition is of trapezoidal type, as follows:

Definition 2.1 (Fuzzy number) A real fuzzy number $\tilde{p}$ is a continuous fuzzy subset of the real line $R$ whose membership function $\mu_{\tilde{p}}(\beta)$ satisfies:

1. $\mu_{\tilde{p}}(\beta) : R \to [0, 1]$ is continuous,
2. $\mu_{\tilde{p}}(\beta) = 0$ for all $\beta \in (-\infty, \beta_1]$,
3. $\mu_{\tilde{p}}(\beta)$ is strictly increasing on $[\beta_1, \beta_2]$,

4. $\mu_{\tilde{p}}(\beta) = 1$ for all $\beta \in [\beta_2, \beta_3]$,
5. $\mu_{\tilde{p}}(\beta)$ is strictly decreasing on $[\beta_3, \beta_4]$,
6. $\mu_{\tilde{p}}(\beta) = 0$ for all $\beta \in [\beta_4, \infty)$.

For more details see [14]. Here, the vector of fuzzy parameters $\tilde{p}$ involved in problem (1.1) is a vector of fuzzy numbers whose membership function is $\mu_{\tilde{p}}(p)$. Throughout this paper, a membership function of the following form will be used:

$$\mu_{\tilde{p}}(\beta) = \begin{cases} 0 & \text{if } -\infty < \beta \le \beta_1, \\ \dfrac{\beta - \beta_1}{\beta_2 - \beta_1} & \text{if } \beta_1 \le \beta \le \beta_2, \\ 1 & \text{if } \beta_2 \le \beta \le \beta_3, \\ \dfrac{\beta_4 - \beta}{\beta_4 - \beta_3} & \text{if } \beta_3 \le \beta \le \beta_4, \\ 0 & \text{if } \beta_4 \le \beta < \infty. \end{cases} \tag{2.1}$$

Definition 2.2 ($\alpha$-level set) The $\alpha$-level set of the vector of fuzzy parameters $\tilde{p}$ in problem (1.1) is the ordinary set $L_\alpha(\tilde{p})$ of parameter vectors whose degree of membership exceeds the level $\alpha \in [0, 1]$:

$$L_\alpha(\tilde{p}) = \{p \in R^m : \mu_{\tilde{p}}(p) \ge \alpha\}. \tag{2.2}$$

For more details see [14]. For a certain degree $\alpha \in [0, 1]$, estimated by the decision maker, problem (1.1) can be written as the following $\alpha$-multi-objective constrained optimization problem:

$$\begin{aligned} \text{minimize} \quad & f(x, p) = (f_1(x, p_1), \ldots, f_m(x, p_m)), \\ \text{subject to} \quad & c(x) \le 0, \quad p \in L_\alpha(\tilde{p}), \end{aligned} \tag{2.3}$$

where $p = (p_1, \ldots, p_m)^T$. It should be emphasized that in problem (2.3) the parameters $p_i$, $i = 1, \ldots, m$, are treated as decision variables rather than as constants.
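As an illustration of (2.1) and (2.2), the following minimal Python sketch evaluates the trapezoidal membership function and the resulting $\alpha$-cut interval; the function names are ours, and the example values are the fuzzy numbers used in Section 5.

```python
def trapezoidal_membership(beta, b1, b2, b3, b4):
    """Membership function (2.1) of a trapezoidal fuzzy number (b1, b2, b3, b4)."""
    if beta <= b1 or beta >= b4:
        return 0.0
    if beta < b2:
        return (beta - b1) / (b2 - b1)    # strictly increasing part
    if beta <= b3:
        return 1.0                        # plateau
    return (b4 - beta) / (b4 - b3)        # strictly decreasing part

def alpha_cut(alpha, b1, b2, b3, b4):
    """The alpha-level interval (2.2): {p : mu(p) >= alpha}, for 0 < alpha <= 1."""
    return (b1 + alpha * (b2 - b1), b4 - alpha * (b4 - b3))

# Fuzzy numbers of the test problem in Section 5, at alpha = 0.36:
print(alpha_cut(0.36, 1, 2, 4, 5))    # (1.36, 4.64)
print(alpha_cut(0.36, 3, 5, 9, 10))   # (3.72, 9.64)
```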

On the basis of the $\alpha$-level sets of the fuzzy numbers, the concept of an $\alpha$-pareto optimal solution to problem (2.3) is given by the following definition.

Definition 2.3 ($\alpha$-pareto optimal solution) A point $(x^*, p^*)$ with $x^* \in \Omega$ and $p^* \in L_\alpha(\tilde{p})$ is said to be an $\alpha$-pareto optimal solution to problem (2.3) if and only if there do not exist $x \in \Omega$ and $p \in L_\alpha(\tilde{p})$ such that $f(x, p) < f(x^*, p^*)$; the corresponding value $p^*$ is called the $\alpha$-optimal parameter of $\tilde{p}$.

By using a weighting approach, we transform problem (2.3) into the following single-objective constrained optimization problem (SCOP):

$$\begin{aligned} \text{minimize} \quad & f(x, p, w) = \sum_{i=1}^m w_i f_i(x, p_i), \\ \text{subject to} \quad & c_j(x) \le 0, \quad j = 1, \ldots, m_e, \\ & p_{i1} \le p_i \le p_{i2}, \quad i = 1, \ldots, m, \\ & w_i \ge 0, \quad i = 1, \ldots, m, \\ & \textstyle\sum_{i=1}^m w_i = 1, \end{aligned} \tag{2.4}$$

where $[p_{i1}, p_{i2}] = L_\alpha(\tilde{p}_i)$.

In the following section, we introduce the trust-region algorithm for solving the (SCOP) problem (2.4) to obtain an $\alpha$-pareto optimal solution $(x^*, p^*)$ of problem (FMCOP). In this algorithm, an active-set strategy is used together with a reduced Hessian technique to convert the computation of the trial step into two easy trust-region subproblems similar to those of the unconstrained case. A dogleg method is used to compute the trial step.

3 A trust-region algorithm for solving the (SCOP) problem

In this section, we consider the solution of the (SCOP) problem (2.4), which can be rewritten as follows:

$$\begin{aligned} \text{minimize} \quad & f(\bar{x}) \\ \text{subject to} \quad & h(\bar{x}) = 0, \\ & g(\bar{x}) \le 0, \end{aligned} \tag{3.1}$$

where $\bar{x} = (x, p, w)^T \in R^{n+2m}$, $f(\bar{x}) = f(x, p, w)$, $h(\bar{x}) = \sum_{i=1}^m w_i - 1$, and $g(\bar{x}) = (c_j(x),\ p_{i1} - p_i,\ p_i - p_{i2},\ -w_i)^T$. The functions $f(\bar{x}) : R^{n+2m} \to R$, $h(\bar{x}) : R^{n+2m} \to R$, and $g(\bar{x}) : R^{n+2m} \to R^{3m+m_e}$ are twice continuously differentiable.
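The reformulation (3.1) only repackages problem (2.4). The sketch below, with illustrative names and NumPy, shows one way to assemble $\bar{x} = (x, p, w)$ and the maps $f$, $h$, $g$; it is a sketch under these conventions, not the authors' MATLAB implementation.

```python
import numpy as np

def make_scop(f_list, c, n, m, p_bounds):
    """f_list: callables f_i(x, p_i); c: callable returning the m_e values
    c_j(x) <= 0; p_bounds: list of alpha-cut intervals [p_i1, p_i2]."""
    def unpack(xbar):
        return xbar[:n], xbar[n:n + m], xbar[n + m:]   # x, p, w

    def f(xbar):                      # weighted objective of (2.4)
        x, p, w = unpack(xbar)
        return sum(w[i] * f_list[i](x, p[i]) for i in range(m))

    def h(xbar):                      # equality constraint: sum(w) - 1 = 0
        return np.array([np.sum(unpack(xbar)[2]) - 1.0])

    def g(xbar):                      # inequalities g(xbar) <= 0, as in (3.1)
        x, p, w = unpack(xbar)
        box = [v for i, (lo, hi) in enumerate(p_bounds)
               for v in (lo - p[i], p[i] - hi)]
        return np.concatenate([c(x), box, -w])   # length m_e + 2m + m = 3m + m_e

    return f, h, g
```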

The Lagrangian function associated with problem (3.1) is

$$l(\bar{x}, \lambda, \nu) = f(\bar{x}) + \lambda^T h(\bar{x}) + \nu^T g(\bar{x}), \tag{3.2}$$

where $\lambda \in R$ and $\nu \in R^{3m+m_e}$ are the Lagrange multiplier vectors associated with the equality and inequality constraints, respectively. By using an active-set method, we transform problem (3.1) into the following problem:

$$\begin{aligned} \text{minimize} \quad & f(\bar{x}) + \nu^T g(\bar{x}) + \frac{\rho}{2} \|U(\bar{x}) g(\bar{x})\|_2^2 \\ \text{subject to} \quad & h(\bar{x}) = 0, \end{aligned} \tag{3.3}$$

where $U(\bar{x}) \in R^{(3m+m_e) \times (3m+m_e)}$ is a 0-1 diagonal indicator matrix whose diagonal entries are

$$u_i(\bar{x}) = \begin{cases} 1 & \text{if } g_i(\bar{x}) \ge 0, \\ 0 & \text{if } g_i(\bar{x}) < 0, \end{cases} \tag{3.4}$$

and $\rho$ is a positive parameter. This matrix is similar to the one used by Dennis, El-Alem, and Williamson [1]. The Lagrangian function associated with problem (3.3) is given by

$$L(\bar{x}, \lambda, \nu; \rho) = l(\bar{x}, \lambda, \nu) + \frac{\rho}{2} \|U(\bar{x}) g(\bar{x})\|_2^2, \tag{3.5}$$

and the augmented Lagrangian is the function

$$\Phi(\bar{x}, \lambda, \nu; \rho; r) = l(\bar{x}, \lambda, \nu) + \frac{\rho}{2} \|U(\bar{x}) g(\bar{x})\|_2^2 + r \|h(\bar{x})\|_2^2, \tag{3.6}$$

where $r > 0$ is a penalty parameter. In the following subsection, we present an outline of the trust-region algorithm for solving problem (3.1).

3.1 Algorithm Outline

This section is devoted to a detailed description of the trust-region algorithm for solving problem (3.1). A global convergence theory for this algorithm is proved in El-Sobky [4].
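The indicator matrix (3.4) and the merit function (3.6) transcribe directly into code; the following sketch uses our own names and the reconstructed form of (3.6), with $f$, $h$, $g$ as built above.

```python
import numpy as np

def active_diag(g_val):
    """Diagonal entries (3.4) of the 0-1 indicator matrix U(xbar)."""
    return (np.asarray(g_val) >= 0).astype(float)

def merit(xbar, lam, nu, rho, r, f, h, g):
    """Augmented Lagrangian merit function Phi in (3.6); lam and nu are the
    multiplier arrays for h and g."""
    hv, gv = h(xbar), g(xbar)
    u = active_diag(gv)
    l = f(xbar) + lam @ hv + nu @ gv                       # Lagrangian (3.2)
    return l + 0.5 * rho * np.sum((u * gv) ** 2) + r * np.sum(hv ** 2)
```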

In this algorithm, a reduced Hessian approach is used to compute the trial step $s_k$. The trial step is decomposed into two orthogonal components, the normal component $s^n_k$ and the tangential component $s^t_k$, so that $s_k = s^n_k + Z_k \bar{s}^t_k$, where $Z_k$ is a matrix whose columns form an orthonormal basis for the null space of $\nabla h_k^T$. We obtain the normal component $s^n_k$ by solving the following trust-region subproblem:

$$\begin{aligned} \text{minimize} \quad & \frac{1}{2} \|\nabla h_k^T s^n + h_k\|^2 \\ \text{subject to} \quad & \|s^n\| \le \zeta \delta_k, \end{aligned} \tag{3.7}$$

for some $\zeta \in (0, 1)$, where $\delta_k$ is the trust-region radius. Let the quadratic model of the Lagrangian function (3.5) be

$$q_k(s) = l_k + \nabla l_k^T s + \frac{1}{2} s^T H_k s + \frac{\rho_k}{2} \|U_k (g_k + \nabla g_k^T s)\|^2, \tag{3.8}$$

where $l_k$ is the Lagrangian function (3.2) and $H_k$ is the Hessian of the Lagrangian function $l_k$ or an approximation to it. Given the normal component $s^n_k$, we compute the tangential component $s^t_k = Z_k \bar{s}^t_k$ by solving the following trust-region subproblem:

$$\begin{aligned} \text{minimize} \quad & [Z_k^T (\nabla l_k + H_k s^n_k + \rho_k \nabla g_k U_k g_k)]^T \bar{s}^t + \frac{1}{2} \bar{s}^{tT} Z_k^T B_k Z_k \bar{s}^t \\ \text{subject to} \quad & \|Z_k \bar{s}^t\| \le \Delta_k, \end{aligned} \tag{3.9}$$

where $\Delta_k = \sqrt{\delta_k^2 - \|s^n_k\|^2}$ and $B_k = H_k + \rho_k \nabla g_k U_k \nabla g_k^T$.

Once the trial step is computed, it must be tested to determine whether it is accepted; for that, a merit function is needed, and we use the augmented Lagrangian (3.6). To test the step, estimates of the two Lagrange multipliers $\lambda_{k+1}$ and $\nu_{k+1}$ are needed. We compare the actual reduction in the merit function in moving from $(\bar{x}_k, \lambda_k, \nu_k)$ to $(\bar{x}_k + s_k, \lambda_{k+1}, \nu_{k+1})$ against the predicted reduction. We define the actual reduction as

$$Ared_k = \Phi(\bar{x}_k, \lambda_k, \nu_k; \rho_k; r_k) - \Phi(\bar{x}_k + s_k, \lambda_{k+1}, \nu_{k+1}; \rho_k; r_k).$$

The predicted reduction in the merit function is defined to be

$$\begin{aligned} Pred_k = {} & q_k(0) - q_k(s_k) - \Delta\lambda_k^T (h_k + \nabla h_k^T s_k) - \Delta\nu_k^T (g_k + \nabla g_k^T s_k) \\ & + r_k \left[ \|h_k\|^2 - \|h_k + \nabla h_k^T s_k\|^2 \right], \end{aligned} \tag{3.10}$$

where $\Delta\lambda_k = \lambda_{k+1} - \lambda_k$ and $\Delta\nu_k = \nu_{k+1} - \nu_k$.
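Subproblem (3.7) is a linear least-squares model inside a ball, exactly the setting of the dogleg method mentioned above. A minimal dense sketch follows (illustrative, not the authors' implementation; a truncated CG would replace the dense solves at scale).

```python
import numpy as np

def dogleg_normal_step(A, b, radius):
    """Dogleg step for: minimize 0.5 * ||A s + b||^2 subject to ||s|| <= radius,
    i.e. subproblem (3.7) with A = grad(h_k)^T and b = h_k."""
    g = A.T @ b                                   # model gradient at s = 0
    if np.linalg.norm(g) == 0:
        return np.zeros_like(g)
    # Cauchy (steepest-descent) minimizer along -g; assumes A @ g != 0
    t = (g @ g) / np.linalg.norm(A @ g) ** 2
    s_c = -t * g
    s_gn = -np.linalg.lstsq(A, b, rcond=None)[0]  # Gauss-Newton step
    if np.linalg.norm(s_gn) <= radius:
        return s_gn
    if np.linalg.norm(s_c) >= radius:
        return -(radius / np.linalg.norm(g)) * g  # boundary point along -g
    # otherwise walk from s_c toward s_gn until the trust-region boundary
    d = s_gn - s_c
    a, bq, c = d @ d, 2 * (s_c @ d), s_c @ s_c - radius ** 2
    tau = (-bq + np.sqrt(bq ** 2 - 4 * a * c)) / (2 * a)
    return s_c + tau * d
```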

We define the tangential predicted decrease $Tpred_k$ to be the decrease at the $k$th iteration in the quadratic model of the Lagrangian function (3.5) produced by the step $s^t_k = Z_k \bar{s}^t_k$:

$$\begin{aligned} Tpred_k = {} & -(Z_k^T (\nabla l_k + H_k s^n_k))^T \bar{s}^t_k - \frac{1}{2} \bar{s}^{tT}_k Z_k^T H_k Z_k \bar{s}^t_k \\ & + \frac{\rho_k}{2} \left[ \|U_k g_k\|^2 - \|U_k (g_k + \nabla g_k^T Z_k \bar{s}^t_k)\|^2 \right]. \end{aligned} \tag{3.11}$$

After computing a trial step and updating the Lagrange multipliers, the penalty parameter is updated to ensure that $Pred_k \ge 0$. To update $r_k$, we use a scheme that has the flavor of the scheme proposed by El-Alem [2]; it is described in Step 6 of Algorithm 3.1 below. After that, the step is tested for acceptance by comparing $Ared_k$ against $Pred_k$. Our way of evaluating the trial steps and updating the trust-region radius is presented in Step 7 of Algorithm 3.1 below.

After accepting the step, we update the parameter $\rho_k$ and the Hessian matrix $H_k$. To update $\rho_k$, we use a scheme suggested by Yuan [19], in which another parameter $\sigma_k$ is updated along with $\rho_k$; this scheme is described in Step 8 of Algorithm 3.1 below. Finally, the algorithm is terminated when either $\|Z_k^T (\nabla l_k + \nabla g_k U_k g_k)\| + \|h_k\| \le \epsilon_1$ or $\|s_k\| \le \epsilon_2$ for some $\epsilon_1 > 0$ and $\epsilon_2 > 0$.

A formal description of our trust-region algorithm for solving the (SCOP) problem is presented in the following algorithm.

Algorithm 3.1 (A trust-region algorithm for solving the (SCOP) problem)

Step 0. (Initialization) Given $\bar{x}_1 \in R^{n+2m}$, compute $U_1$ and evaluate $\nu_1$ and $\lambda_1$ (see Step 5 with $k = 0$ and $\lambda_0 = (0, 0, \ldots, 0)^T$). Set $\rho_1 = 1$, $r_0 = 1$, $\sigma_1 = 1$, and $\beta_0 = 0.1$. Compute $\delta_1$ and choose $\epsilon_1 = \epsilon_2 = 10^{-8}$, $\alpha_1 = 0.05$, $\alpha_2 = 2$, $\eta_1 = 10^{-4}$, and $\eta_2 = 0.5$, so that $0 < \alpha_1 < 1 < \alpha_2$ and $0 < \eta_1 < \eta_2 < 1$. Set $\delta_{\min} = 10^{-3}$ and $\delta_{\max} = 10^5 \delta_1$, so that $\delta_{\min} \le \delta_1 \le \delta_{\max}$. Set $k = 1$.

Step 1. (Test for convergence) If $\|Z_k^T (\nabla l_k + \nabla g_k U_k g_k)\| + \|h_k\| \le \epsilon_1$, then terminate the algorithm.

Step 2. (Compute a trial step) If $\|h_k\| = 0$, then
a) Set $s^n_k = 0$.
b) Compute the step $\bar{s}^t_k$ by solving problem (3.9) with $s^n_k = 0$.
c) Set $s_k = Z_k \bar{s}^t_k$.
Else
a) Compute $s^n_k$ by solving problem (3.7).
b) If $\|Z_k^T (\nabla l_k + \rho_k \nabla g_k U_k g_k + B_k s^n_k)\| = 0$, then set $\bar{s}^t_k = 0$; else, compute $\bar{s}^t_k$ by solving problem (3.9).
c) Set $s_k = s^n_k + Z_k \bar{s}^t_k$ and $\bar{x}_{k+1} = \bar{x}_k + s_k$.
End if.

Step 3. (Test for termination) If $\|s_k\| \le \epsilon_2$, then terminate the algorithm.

Step 4. (Update the active set) Compute $U_{k+1}$.

Step 5. (Compute the Lagrange multipliers $\nu_{k+1}$ and $\lambda_{k+1}$)
a) Compute $\nu_{k+1}$ by solving the subproblem (a least-squares sketch is given after the algorithm listing)

$$\begin{aligned} \text{minimize} \quad & \|Z_{k+1}^T (\nabla f_{k+1} + \nabla g_{k+1} U_{k+1} \nu)\|^2 \\ \text{subject to} \quad & U_{k+1} \nu \ge 0. \end{aligned} \tag{3.12}$$

b) If $\|\nabla f_{k+1} + \nabla h_{k+1} \lambda_k + \nabla g_{k+1} U_{k+1} \nu_{k+1}\| \le \epsilon_1$, then set $\lambda_{k+1} = \lambda_k$; else, compute $\lambda_{k+1}$ by solving

$$\text{minimize} \quad \|\nabla f_{k+1} + \nabla g_{k+1} \nu_{k+1} + \nabla h_{k+1} \lambda\|^2.$$

End if.

Step 6. (Update the penalty parameter $r_k$)
a) Set $r_k = r_{k-1}$.
b) If $Pred_k < \frac{r_k}{2} \left[ \|h_k\|^2 - \|h_k + \nabla h_k^T s_k\|^2 \right]$, then set

$$r_k = \frac{2 \left[ q_k(s_k) - q_k(0) + \Delta\lambda_k^T (h_k + \nabla h_k^T s_k) + \Delta\nu_k^T (g_k + \nabla g_k^T s_k) \right]}{\|h_k\|^2 - \|h_k + \nabla h_k^T s_k\|^2} + \beta_0.$$

End if.

Step 7. (Test the step and update the trust-region radius)
If $\frac{Ared_k}{Pred_k} < \eta_1$, reduce the trust-region radius by setting $\delta_k = \alpha_1 \|s_k\|$ and go to Step 2.
Else if $\eta_1 \le \frac{Ared_k}{Pred_k} < \eta_2$, accept the step, $\bar{x}_{k+1} = \bar{x}_k + s_k$, and set the trust-region radius $\delta_{k+1} = \max(\delta_k, \delta_{\min})$.
Else, accept the step, $\bar{x}_{k+1} = \bar{x}_k + s_k$, and set the trust-region radius $\delta_{k+1} = \min\{\delta_{\max}, \max\{\delta_{\min}, \alpha_2 \delta_k\}\}$.
End if.

Step 8. (Update the parameters $\rho_k$ and $\sigma_k$)
a) Set $\rho_{k+1} = \rho_k$ and $\sigma_{k+1} = \sigma_k$.
b) If $\frac{1}{2} Tpred_k - \Delta\nu_k^T (g_k + \nabla g_k^T s_k) \le \sigma_k \|\nabla g_k U_k g_k\| \min\{\|\nabla g_k U_k g_k\|, \Delta_k\}$, then set $\rho_{k+1} = 2\rho_k$ and $\sigma_{k+1} = \frac{1}{2}\sigma_k$.
End if.

Step 9. Set $k = k + 1$ and go to Step 1.

Notice that we use the above algorithm for solving the (SCOP) problem to obtain an $\alpha$-pareto optimal solution $(x^*, p^*)$ of the (FMCOP) problem, together with the weighting vector $w^*$ and the Lagrange multiplier vectors $\lambda^*$ and $\nu^*$.
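Problem (3.12) in Step 5 is a nonnegativity-constrained linear least-squares problem in the active multipliers. Assuming the inactive multipliers are simply set to zero (our reading, since their columns are annihilated by $U_{k+1}$), SciPy's NNLS routine gives one way to solve it; the names are ours, and a null-space basis $Z$ can be obtained from scipy.linalg.null_space.

```python
import numpy as np
from scipy.optimize import nnls

def multiplier_estimate(Z, grad_f, grad_g, u):
    """Sketch of Step 5(a): minimize ||Z^T (grad_f + grad_g U nu)|| subject to
    U nu >= 0, with inactive components of nu set to zero (our assumption)."""
    nu = np.zeros(grad_g.shape[1])
    active = np.flatnonzero(u > 0)
    if active.size:
        A = Z.T @ grad_g[:, active]   # columns belonging to active constraints
        b = -(Z.T @ grad_f)
        nu[active], _ = nnls(A, b)    # min ||A nu_act - b||  s.t.  nu_act >= 0
    return nu
```

The acceptance test and radius update of Step 7 translate almost verbatim into code; here ared and pred stand for the quantities in (3.10), and the constants are those of Step 0.

```python
alpha1, alpha2, eta1, eta2 = 0.05, 2.0, 1e-4, 0.5   # Step 0 constants
delta_min, delta_max = 1e-3, 1e5

def accept_and_update(ared, pred, s_norm, delta):
    """Step 7: return (step_accepted, new_trust_region_radius)."""
    ratio = ared / pred
    if ratio < eta1:
        return False, alpha1 * s_norm                # reject, shrink, recompute
    if ratio < eta2:
        return True, max(delta, delta_min)           # accept, keep modest radius
    return True, min(delta_max, max(delta_min, alpha2 * delta))
```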

4 An $\varepsilon$-$\alpha$ stability set of the (FMCOP) problem

This section deals with the stability set for the $\alpha$-pareto optimal solutions of the (FMCOP) problem, so we start with the definition of the solvability set of problem (2.4).

Definition 4.1 (Solvability set) The solvability set is the set of $\alpha$-pareto optimal solutions of the (FMCOP) problem.

Now assume that $(x^*, p^*)$ is an $\alpha$-pareto optimal solution of the (FMCOP) problem and $p^*$ is the $\alpha$-level optimal parameter of problem (2.4). The stability set of the first kind of the (FMCOP) problem corresponding to $(x^*, p^*)$ is given by the following definition.

Definition 4.2 (Stability set) The stability set of the first kind of the (FMCOP) problem corresponding to $(x^*, p^*)$ is the set of all bounds $[p_{i1}, p_{i2}] \in R^{2m}$ such that $(x^*, p^*)$ is an $\alpha$-pareto optimal solution of the (FMCOP) problem.

Now our aim is to find other $\alpha$-pareto optimal solutions $(\hat{x}^*, \hat{p}^*)$ such that the norm of the difference between them and $(x^*, p^*)$ is bounded by a small value $\varepsilon$. So we give the following definition.

Definition 4.3 ($\varepsilon$-$\alpha$ stability set) Let $(x^*, p^*)$ be an $\alpha$-pareto optimal solution of the (FMCOP) problem. The $\varepsilon$-$\alpha$ stability set of the (FMCOP) problem, denoted by $G_{\varepsilon\alpha}$, is defined as

$$G_{\varepsilon\alpha} = \{(\hat{x}^*, \hat{p}^*) \in R^{n+m} : \|(\hat{x}^*, \hat{p}^*) - (x^*, p^*)\| \le \varepsilon\}.$$

To obtain the set $G_{\varepsilon\alpha}$, we add the $\varepsilon$-$\alpha$ stability condition $\|(\hat{x}, \hat{p}) - (x^*, p^*)\| \le \varepsilon$ to the constraints of problem (2.4). Then we have the following single-objective constrained optimization problem:

$$\begin{aligned} \text{minimize} \quad & f(\hat{x}, \hat{p}, \hat{w}) = \sum_{i=1}^m \hat{w}_i f_i(\hat{x}, \hat{p}_i), \\ \text{subject to} \quad & \textstyle\sum_{i=1}^m \hat{w}_i - 1 = 0, \\ & c_j(\hat{x}) \le 0, \quad j = 1, \ldots, m_e, \\ & p_{i1} \le \hat{p}_i \le p_{i2}, \quad i = 1, \ldots, m, \\ & \|(\hat{x}, \hat{p}) - (x^*, p^*)\| \le \varepsilon, \\ & \hat{w}_i \ge 0, \quad i = 1, \ldots, m. \end{aligned} \tag{4.1}$$

We can obtain $\alpha$-pareto optimal solutions $(\hat{x}^*, \hat{p}^*)$ of the (FMCOP) problem that lie in the range of the $\alpha$-pareto optimal solution $(x^*, p^*)$ by solving problem (4.1) with the trust-region algorithm (3.1) above; in this way we obtain the set $G_{\varepsilon\alpha}$.
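Building problem (4.1) from problem (2.4) only requires appending the ball constraint. A minimal sketch (our names), written in squared form so the added constraint stays smooth:

```python
import numpy as np

def stability_constraint(x_star, p_star, eps):
    """The epsilon-alpha condition appended to (2.4) in problem (4.1):
    ||(x, p) - (x*, p*)||^2 - eps^2 <= 0 (squared form of the norm bound)."""
    ref = np.concatenate([x_star, p_star])
    def g_eps(x, p):
        return np.sum((np.concatenate([x, p]) - ref) ** 2) - eps ** 2
    return g_eps
```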

5 Test Problem

In this section, we introduce a fuzzy multi-objective constrained optimization test problem. Our programs are written in MATLAB and run under MATLAB Version 7.0.

The fuzzy multi-objective constrained optimization test problem is

$$\begin{aligned} \text{minimize} \quad & (x_1 + \tilde{p}_1,\ x_2 + \tilde{p}_2), \\ \text{subject to} \quad & (x_1 - 3)^2 + (x_2 - 2)^2 \le 4, \\ & -2x_1 + x_2 \le 0, \end{aligned} \tag{5.1}$$

with the membership functions (2.1) of the fuzzy numbers given by $\tilde{p}_1 = (1, 2, 4, 5)$ and $\tilde{p}_2 = (3, 5, 9, 10)$. At $\alpha = 0.36$ we get $1.36 \le p_1 \le 4.64$ and $3.72 \le p_2 \le 9.64$. Then the $\alpha$-multi-objective constrained optimization problem is

$$\begin{aligned} \text{minimize} \quad & (x_1 + p_1,\ x_2 + p_2), \\ \text{subject to} \quad & (x_1 - 3)^2 + (x_2 - 2)^2 \le 4, \\ & -2x_1 + x_2 \le 0, \\ & 1.36 \le p_1 \le 4.64, \\ & 3.72 \le p_2 \le 9.64. \end{aligned} \tag{5.2}$$

By the weighting approach, the $\alpha$-multi-objective constrained optimization problem (5.2) is converted to the following single-objective constrained optimization problem:

$$\begin{aligned} \text{minimize} \quad & w_1 (x_1 + p_1) + w_2 (x_2 + p_2), \\ \text{subject to} \quad & w_1 + w_2 = 1, \\ & (x_1 - 3)^2 + (x_2 - 2)^2 \le 4, \\ & -2x_1 + x_2 \le 0, \\ & 1.36 \le p_1 \le 4.64, \\ & 3.72 \le p_2 \le 9.64, \\ & w_1 \ge 0, \quad w_2 \ge 0. \end{aligned} \tag{5.3}$$

To solve the above problem we use the trust-region algorithm (3.1). The resulting $\alpha$-pareto optimal solution of problem (5.1) is $(x_1^*, x_2^*, p_1^*, p_2^*) = (2.44, 0.08, 1.36, 3.72)^T$, and the corresponding weighting vector is $w^* = (0.2258, 0.7742)^T$. To obtain the $\varepsilon$-$\alpha$ stability set at $\alpha = 0.36$ and $\varepsilon = 0.1$, we add the $\varepsilon$-$\alpha$ stability condition $\|(\hat{x}, \hat{p}) - (x^*, p^*)\| \le \varepsilon$ to the constraints of problem (5.3) and solve the resulting problem again using the trust-region algorithm (3.1).
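As a cross-check independent of Algorithm 3.1, problem (5.3) with the weights held fixed at $w^*$ can be handed to SciPy's generic SLSQP solver. This is a sketch under two assumptions flagged in the comments: the sign of the linear constraint is our reading of (5.1), and $w^*$ is the weight vector recovered above.

```python
import numpy as np
from scipy.optimize import minimize

# Variables z = (x1, x2, p1, p2); weights fixed at the recovered w*.
w = np.array([0.2258, 0.7742])
obj = lambda z: w[0] * (z[0] + z[2]) + w[1] * (z[1] + z[3])
cons = [
    {"type": "ineq", "fun": lambda z: 4.0 - (z[0] - 3) ** 2 - (z[1] - 2) ** 2},
    {"type": "ineq", "fun": lambda z: 2 * z[0] - z[1]},  # -2*x1 + x2 <= 0 (assumed sign)
]
bounds = [(None, None), (None, None), (1.36, 4.64), (3.72, 9.64)]
res = minimize(obj, np.array([3.0, 0.5, 2.0, 5.0]),
               bounds=bounds, constraints=cons, method="SLSQP")
print(res.x)   # should land close to (2.44, 0.08, 1.36, 3.72)
```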

This problem has the form

$$\begin{aligned} \text{minimize} \quad & \hat{w}_1 (\hat{x}_1 + \hat{p}_1) + \hat{w}_2 (\hat{x}_2 + \hat{p}_2), \\ \text{subject to} \quad & \hat{w}_1 + \hat{w}_2 = 1, \\ & (\hat{x}_1 - 3)^2 + (\hat{x}_2 - 2)^2 \le 4, \\ & -2\hat{x}_1 + \hat{x}_2 \le 0, \\ & 1.36 \le \hat{p}_1 \le 4.64, \\ & 3.72 \le \hat{p}_2 \le 9.64, \\ & (\hat{x}_1 - 2.44)^2 + (\hat{x}_2 - 0.08)^2 + (\hat{p}_1 - 1.36)^2 + (\hat{p}_2 - 3.72)^2 \le 0.1, \\ & \hat{w}_1 \ge 0, \quad \hat{w}_2 \ge 0. \end{aligned} \tag{5.4}$$

Using the trust-region algorithm (3.1) to solve the above problem, we obtain the $\varepsilon$-$\alpha$ stability set of the (FMCOP) problem reported in the following table.

$\hat{x}_1$ | $\hat{x}_2$ | $\hat{p}_1$ | $\hat{p}_2$ | $\hat{w}_1$ | $\hat{w}_2$

Table 1: $\varepsilon$-$\alpha$ stability set of the (FMCOP) problem.

From these results we obtain enclosing intervals for $p_1$, $p_2$, $x_1$, and $x_2$, so the $\varepsilon$-$\alpha$ stability set at $\alpha = 0.36$ and $\varepsilon = 0.1$ is

$$G_{\varepsilon\alpha} = \{(x, p) : x_1, x_2, p_1, p_2 \text{ lie in the intervals determined by Table 1}\}.$$
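Rows of Table 1 can in principle be regenerated by re-solving (5.4) for different weight vectors. A sketch follows, again with SLSQP standing in for Algorithm 3.1, the same assumed constraint sign, and the weights scanned externally rather than treated as unknowns; the chosen weight values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

xstar = np.array([2.44, 0.08, 1.36, 3.72])   # alpha-pareto point from (5.3)

def solve_54(w1):
    """Solve (5.4) over z = (x1, x2, p1, p2) for a fixed weight pair (w1, 1 - w1)."""
    w = np.array([w1, 1.0 - w1])
    obj = lambda z: w[0] * (z[0] + z[2]) + w[1] * (z[1] + z[3])
    cons = [
        {"type": "ineq", "fun": lambda z: 4.0 - (z[0] - 3) ** 2 - (z[1] - 2) ** 2},
        {"type": "ineq", "fun": lambda z: 2 * z[0] - z[1]},          # assumed sign
        {"type": "ineq", "fun": lambda z: 0.1 - np.sum((z - xstar) ** 2)},
    ]
    bounds = [(None, None), (None, None), (1.36, 4.64), (3.72, 9.64)]
    return minimize(obj, xstar, bounds=bounds, constraints=cons,
                    method="SLSQP").x

for w1 in (0.1, 0.2258, 0.4):
    print(w1, solve_54(w1))
```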

6 Concluding Remarks

A trust-region algorithm for solving a single-objective constrained optimization problem has been used to obtain the stability set of the third kind for the fuzzy multi-objective constrained optimization problem, which represents the set of all fuzzy parameters for which a set of $\alpha$-pareto optimal solutions remains $\alpha$-pareto optimal. The significant contributions of this paper are the following:

1. The stability set is determined for a set of $\alpha$-pareto optimal solutions, not for a single $\alpha$-pareto optimal point.
2. The stability set is determined numerically by the trust-region algorithm.
3. The approach is interactive: the decision maker controls the resolution of the pareto set by choosing the value of $\varepsilon$ according to his needs.

References

[1] J. Dennis, M. El-Alem, and K. Williamson, A trust-region approach to nonlinear systems of equalities and inequalities, SIAM J. Optimization, 9 (1999).

[2] M. El-Alem, A global convergence theory for a class of trust-region algorithms for constrained optimization, PhD thesis, Department of Mathematical Sciences, Rice University, Houston, Texas.

[3] M. El-Alem, M. Abdel-Aziz, and A. El-Bakry, A projected Hessian Gauss-Newton algorithm for solving systems of nonlinear equations and inequalities, International Journal of Mathematics and Mathematical Sciences, 25(6) (2001), 397-409.

[4] B. El-Sobky, A robust trust-region algorithm for general nonlinear constrained optimization problems, PhD thesis, Department of Mathematics, Alexandria University, Alexandria, Egypt, 1998.

[5] B. El-Sobky, A global convergence theory for an active-set trust-region algorithm for solving the general nonlinear programming problem, Applied Mathematics and Computation, 144(1) (2003).

[6] B. El-Sobky, A global convergence theory for a trust-region algorithm for the general nonlinear programming problem, in: Fourth Saudi Science Conference "Contribution of Science Faculties in the Development Process of the Kingdom of Saudi Arabia", Taibah University, Al-Madinah Al-Munawarah, Kingdom of Saudi Arabia, March 21-24.

[7] B. El-Sobky and Y. Abo-Elnaga, An active-set trust-region algorithm for solving constrained multi-objective optimization problem, Applied Mathematical Sciences, 6(33) (2012).

[8] J. Moré, Recent developments in algorithms and software for trust-region methods, in: A. Bachem, M. Grotschel, and B. Korte (eds.), Mathematical Programming, Springer-Verlag, New York.

[9] E. Omojokun, Trust-region strategies for optimization with nonlinear equality and inequality constraints, PhD thesis, Department of Computer Science, University of Colorado, Boulder, Colorado.

[10] M. Osman and A. El-Banna, Stability of multi-objective non-linear programming problems with fuzzy parameters, Mathematics and Computers in Simulation, 35 (1993).

[11] M. Osman, M. Abo-Sinna, and Y. Abo-Elnaga, On recent approaches in solving multi-objective programming problems, PhD thesis, Department of Basic Science, Faculty of Engineering, Ain Shams University, Cairo, Egypt.

[12] S. Orlovski, Multi-objective programming problems with fuzzy parameters, International Institute for Applied Systems Analysis, Austria.

[13] M. Powell, Convergence properties of a class of minimization algorithms, in: O. L. Mangasarian, R. R. Meyer, and S. M. Robinson (eds.), Nonlinear Programming 2, Academic Press, New York, 1975.

[14] M. Sakawa, An interactive fuzzy satisficing method for multi-objective non-linear programming problems with fuzzy parameters, Electron. Commun. Japan.

[15] M. Sakawa, Interactive fuzzy decision-making in multi-objective linear programming problems and its application, I.E.C.E. Japan (1982).

[16] M. Sakawa, H. Yano, and J. Takahashi, Pareto optimality for multi-objective linear fractional programming problems with fuzzy parameters, Information Sciences, 63 (1992).

[17] H. Tanaka and K. Asai, Formulation of linear programming problems by fuzzy functions, Systems and Control, 6 (1981).

[18] H. Tanaka, H. Ichihashi, and K. Asai, Formulation of fuzzy linear programming problems by fuzzy objective functions, J. Oper. Res. Soc. Japan, 2 (1984).

[19] Y. Yuan, On the convergence of a new trust region algorithm, Numer. Math., 70 (1995).

Received: December, 2011
