Rachid Benouahboun 1 and Abdelatif Mansouri 1
RAIRO Operations Research
DOI: 10.1051/ro:2005002

AN INTERIOR POINT ALGORITHM FOR CONVEX QUADRATIC PROGRAMMING WITH STRICT EQUILIBRIUM CONSTRAINTS

Rachid Benouahboun and Abdelatif Mansouri

Abstract. We describe an interior point algorithm for a convex quadratic problem with strict complementarity constraints. We show that under some assumptions the approach requires a total of O(√n L) iterations, where L is the input size of the problem. The algorithm generates a sequence of problems, each of which is approximately solved by Newton's method.

Keywords. Convex quadratic programming with strict equilibrium constraints, interior point algorithm, Newton's method.

1. Introduction

A mathematical program with equilibrium constraints (MPEC) is a constrained optimization problem in which the essential constraints are defined by a complementarity system or variational inequality. This field of mathematical programming has attracted much interest in recent years, mainly because of its practical use in engineering design [3, 10], economic equilibrium [6], and multi-level game-theoretic and machine learning problems [4, 9]. The monograph [12] presents a comprehensive study of this important class of mathematical programming problems.

Definition 1. Let M be an n × n real matrix. We say that M is copositive if x^T M x ≥ 0 for all x ≥ 0.

Received February 27, 2002. Accepted November 9, 2004.
Département de Mathématiques, Faculté des Sciences Semlalia, Marrakech, Maroc; mansouri@ucam.ac.ma
© EDP Sciences 2005
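Deciding copositivity exactly is hard in general, but the definition can be probed numerically. The following sketch (the helper name, sample count and tolerance are our own choices, not part of the paper) randomly samples nonnegative vectors; a violation disproves copositivity, while the absence of violations is only evidence for it.

```python
import numpy as np

def seems_copositive(M, trials=2000, seed=0):
    """Randomized necessary check of copositivity: sample x >= 0
    and look for a violation of x'Mx >= 0. A False answer is a
    disproof; True is only supporting evidence."""
    rng = np.random.default_rng(seed)
    n = M.shape[0]
    for _ in range(trials):
        x = rng.random(n)            # componentwise nonnegative sample
        if x @ M @ x < -1e-12:
            return False
    return True

# A symmetric positive semidefinite matrix is copositive ...
psd = np.array([[2.0, -1.0], [-1.0, 2.0]])
# ... and so is any symmetric matrix with nonnegative entries.
nonneg = np.array([[0.0, 1.0], [1.0, 0.0]])
# A negative definite matrix is not copositive.
neg = -np.eye(2)
```

Both sufficient conditions above (positive semidefiniteness, entrywise nonnegativity) are standard sources of copositive matrices; the Q matrices arising in Section 2 are of the second kind.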
It is easily seen that if M is copositive, then for x ≥ 0 we have ‖XMX‖ ≤ x^T M x and ‖XMx‖ ≤ x^T M x, where X = diag(x_1, ..., x_n).

In this paper, we consider the following convex quadratic program with a strict linear complementarity constraint:

(CQPEC)  Minimize (1/2) x^T G x + d^T x
         s.t.  Ax = b,  x^T Q x = 0,  x ≥ 0,

where x and d are n-vectors, b is an m-vector, A is an m × n matrix with rank(A) = m < n, G is a symmetric positive semidefinite n × n matrix, Q is a symmetric copositive n × n matrix, and the superscript T denotes transposition.

Our purpose is to construct an interior point algorithm to solve the problem CQPEC. Using Newton's direction, the algorithm generates a sequence of interior points which, under some conditions, converges to a stationary point of CQPEC in polynomial time. The algorithm has a complexity of O(n^3.5 L), where L is the input length of the problem.

The paper is organized as follows. In Section 2, we give some applications of the problem CQPEC in mathematical programming. In Section 3, we present some theoretical background. In Section 4, we present the algorithm, and in Section 5 we prove some results related to its convergence properties. In Section 6 we discuss the initialization of the algorithm. To illustrate our approach, we conclude the paper with a numerical example in Section 7.

2. Some applications of CQPEC in mathematical programming

The problem CQPEC has a wide range of applications, for example in economic equilibrium, multi-level game and machine learning problems. In this section, we show some applications in mathematical programming. The problems considered here are generally NP-complete. Nevertheless, the approach presented in this paper solves particular cases of these problems in polynomial time.

2.1. Optimization over the efficient set

Consider the multiobjective linear program

(1)  Minimize C̃ x̃  s.t.  x̃ ∈ χ = { x̃ : Ã x̃ = b̃, x̃ ≥ 0 },

where C̃ is a k × n matrix.
Recall that a point x̃⁰ ∈ χ is an efficient solution of (1) if and only if there exists no x̃ ∈ χ such that C̃x̃ ≤ C̃x̃⁰ and C̃x̃ ≠ C̃x̃⁰. Let E
denote the set of efficient solutions, and consider the problem

(2)  Minimize (1/2) x̃^T G̃ x̃ + d̃^T x̃  s.t.  x̃ ∈ E,

where G̃ is a symmetric positive semidefinite n × n matrix and d̃ is an n-vector. In the linear case this problem has several applications in multiobjective programming (see [1, 2]). Problem (2) can be written in the following form:

(3)  Minimize (1/2) x̃^T G̃ x̃ + d̃^T x̃
     s.t.  Ã x̃ = b̃,  e^T λ = 1,
           Ã^T ỹ + z̃ − C̃^T λ = 0,
           x̃^T z̃ = 0,  x̃ ≥ 0,  z̃ ≥ 0,  λ ≥ 0.

By taking ỹ = ỹ₁ − ỹ₂ with ỹ₁ ≥ 0 and ỹ₂ ≥ 0, and x = (x̃, z̃, ỹ₁, ỹ₂, λ), problem (3) is equivalent to CQPEC with

A = [ Ã   0    0      0      0
      0   I_n  Ã^T  −Ã^T  −C̃^T
      0   0    0      0     e^T ],    b = (b̃, 0, 1)^T,

and Q the symmetric matrix whose only nonzero blocks are the identity blocks coupling x̃ and z̃, so that x^T Q x = 2 x̃^T z̃.

2.2. Goal programming

Consider the multiobjective problem (1). Several approaches have been developed to solve it. One widely used class is that of distance-based methods. These methods assume that, at each iteration, the decision-maker can select a point which can be considered as an ideally best point from his viewpoint. If the ideal point is not feasible, the process returns an efficient solution as close to the ideal point as possible. The search for such a solution can be summarized by the mathematical programming problem:

(4)  Minimize d(f, C̃ x̃)  s.t.  x̃ ∈ χ,
where f is the ideal point and d(·,·) is a distance on R^k. In the case of the L₁-distance, and by introducing the notation

d⁺_i = f_i − c̃_i^T x̃ if f_i ≥ c̃_i^T x̃, and 0 otherwise,
d⁻_i = c̃_i^T x̃ − f_i if f_i < c̃_i^T x̃, and 0 otherwise,

problem (4) can be written as follows:

(5)  Minimize Σ_{i=1}^{k} w_i (d⁺_i + d⁻_i)
     s.t.  Ã x̃ = b̃,
           C̃ x̃ + d⁺ − d⁻ = f,
           (d⁺)^T d⁻ = 0,  d⁺ ≥ 0,  d⁻ ≥ 0,  x̃ ≥ 0,

where w is a vector of weights. This problem is often called goal programming. In many applications, a mixture of the above process and the method of sequential optimization is referred to as a goal programming approach or model. Obviously, problem (5) is equivalent to CQPEC.

3. Preliminaries

The algorithm considered in this paper is motivated by applying a mixed penalty technique to problem CQPEC. The mixed penalty consists of examining the family of problems

(P_μ)  Minimize f_μ(x) = (1/2) x^T G x + d^T x + (x^T Q x)² / (4μ) − μ Σ_{i=1}^{n} log x_i   s.t.  x ∈ S,

where S = { x : Ax = b, x > 0 }; every x ∈ S is called an interior point of the problem CQPEC, and μ > 0 is the penalty parameter. This technique is well known in the context of general constrained optimization problems. One solves the penalized problem for several values of the parameter μ, with μ decreasing to zero, and the result is a sequence of interior points, called centers, converging to a stationary solution of the original problem [5]. Our aim is to construct a polynomial algorithm. Since it is generally impossible to solve P_μ exactly in finite time, we must give up the exact determination of the centers and work in their neighborhood. In other words, we generate a sequence of interior points {x^k} close to the path of centers such that (x^k)^T Q x^k → 0 and {x^k} converges to a stationary solution of the problem CQPEC.
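The penalized objective and its gradient can be sketched as follows. The exact coefficient 1/(4μ) on the complementarity penalty is our reading of the damaged formula, chosen so that the gradient and Hessian match the expressions used later; the function names are ours.

```python
import numpy as np

def f_mu(x, G, d, Q, mu):
    """Mixed penalty objective: quadratic cost, penalised
    complementarity term, and logarithmic barrier."""
    g = x @ Q @ x
    return (0.5 * x @ G @ x + d @ x
            + g * g / (4.0 * mu)          # penalty on x'Qx = 0
            - mu * np.sum(np.log(x)))     # log barrier keeping x > 0

def grad_f_mu(x, G, d, Q, mu):
    """Gradient: Gx + d + (x'Qx / mu) Qx - mu X^{-1} e."""
    g = x @ Q @ x
    return G @ x + d + (g / mu) * (Q @ x) - mu / x
```

The analytic gradient can be validated against central finite differences on a random instance with symmetric G and Q.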
The KKT conditions applied to CQPEC imply that if x is a stationary point, then there exist y ∈ R^m, β ∈ R and z ∈ R^n such that:

A^T y − Gx − βQx + z = d,
Ax = b,  x^T Q x = 0,  x^T z = 0,  x ≥ 0,  z ≥ 0.

Let T = { (x, y, z, β) : Ax = b, A^T y − Gx − βQx + z = d, x ≥ 0, z ≥ 0, β ≥ 0 }, and consider the merit function defined for all (x, y, z, β) ∈ T by

φ(x, y, z, β) = x^T Q x + x^T z.

With this merit function, the above stationarity system becomes

(x, y, z, β) ∈ T,  φ(x, y, z, β) = 0.

For the time being, our objective is to construct a sequence {(x^k, y^k, z^k, β^k)} ⊂ T such that x^k is an interior point for all k, and such that an upper bound for the merit function φ(x^k, y^k, z^k, β^k) = (x^k)^T Q x^k + (x^k)^T z^k is driven to zero at the fixed rate (1 − σ/√n), where σ is a given constant. For this, we make the following assumptions.

Assumptions.
(a) There exists w ∈ R^m such that A^T w > 0 and b^T w > 0.
(b) If x is feasible for CQPEC, then for all i ∈ {1, ..., n}, x_i > 0 if and only if [Qx]_i = 0.
(c) For all (x, y, z, β) ∈ T, we have β ≤ ρ, where ρ is a given constant.

Remark 1.
(i) Assumption (a) is used, in Section 6, to transform the problem CQPEC into an equivalent problem for which an initial interior point x⁰ ∈ S is known in advance. As in [7], we show that there exists y⁰ such that X⁰ A^T y⁰ = μ⁰ e.
(ii) Assumption (b) implies that a constraint qualification condition holds for the problem CQPEC (see [12]). Note also that this assumption is satisfied for the problems described in Section 2.
(iii) We give an upper bound for ρ in the next section.

Throughout this paper we use the following notation. When a lower case letter denotes a vector x = (x_1, ..., x_n)^T, the corresponding capital letter denotes the diagonal matrix with the components of the vector on the diagonal, i.e., X = diag(x_1, ..., x_n), and ‖·‖ denotes the Euclidean norm.
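Membership in T and the value of the merit function reduce to a few linear residual checks; a minimal sketch (function name and tolerances ours):

```python
import numpy as np

def in_T(x, y, z, beta, A, b, G, Q, d, tol=1e-8):
    """Check (x, y, z, beta) in T and return the merit phi = x'Qx + x'z."""
    ok = (np.allclose(A @ x, b, atol=tol)
          and np.allclose(A.T @ y - G @ x - beta * (Q @ x) + z, d, atol=tol)
          and np.all(x >= -tol) and np.all(z >= -tol) and beta >= -tol)
    phi = x @ Q @ x + x @ z
    return ok, phi
```

A point of T with phi = 0 is exactly a stationary point of CQPEC in the sense above.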
4. The algorithm

Given a CQPEC problem in standard form, we discuss in Section 6 how to transform it into a form for which an initial interior point x⁰ ∈ S is given, such that there exists y⁰ ∈ R^m satisfying

(6)  X⁰ A^T y⁰ = μ⁰ e.

For the moment we may suppose that such an initial point is known in advance. For a current iterate x ∈ S, let x̂ denote the next iterate. The direction d_x chosen to generate x̂ is defined as the Newton direction associated with the penalized problem P_μ. The Newton direction at the point x is the optimal solution of the quadratic problem:

(QP)  Minimize (1/2) d_x^T ∇²f_μ(x) d_x + ∇f_μ(x)^T d_x   s.t.  A d_x = 0.

Let M denote the Hessian of the penalty function f_μ at the point x. The following lemma shows that if x^T Q x < μ, then the matrix M is positive definite, and consequently the problem QP has a unique solution.

Lemma 1. If x^T Q x < μ, then the matrix M = G + (x^T Q x/μ) Q + (2/μ) Q x x^T Q + μ X^{-2} is positive definite.

Proof. We have M = X^{-1} [ X G X + (x^T Q x/μ) X Q X + (2/μ) X Q x x^T Q X + μ I ] X^{-1}. Since ‖XQX‖ ≤ x^T Q x and x^T Q x < μ, we have ‖(x^T Q x/μ) X Q X‖ ≤ (x^T Q x)²/μ < μ, and so μ I + (x^T Q x/μ) X Q X is positive definite. On the other hand, X G X and X Q x x^T Q X are positive semidefinite. Thus M is positive definite.

By the KKT conditions, d_x is determined by the following system of linear equations:

(7)  [ G + (x^T Q x/μ) Q + (2/μ) Q x x^T Q + μ X^{-2} ] d_x + Gx + d + (x^T Q x/μ) Q x − μ X^{-1} e = A^T ŷ,
     A d_x = 0.
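Under the reconstruction above, the Hessian M and the Newton system (7) can be assembled and solved directly; here is a sketch (function names ours) that solves the equality-constrained step through the block KKT matrix:

```python
import numpy as np

def newton_step(x, G, d, Q, A, mu):
    """Assemble M = G + (x'Qx/mu) Q + (2/mu) Qx (Qx)' + mu X^{-2}
    and solve M d_x - A' yhat = theta, A d_x = 0, where
    theta = mu X^{-1} e - Gx - d - (x'Qx/mu) Qx."""
    n, m = x.size, A.shape[0]
    g = x @ Q @ x
    Qx = Q @ x
    M = (G + (g / mu) * Q + (2.0 / mu) * np.outer(Qx, Qx)
         + mu * np.diag(1.0 / x**2))
    theta = mu / x - G @ x - d - (g / mu) * Qx
    K = np.block([[M, -A.T], [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([theta, np.zeros(m)]))
    return sol[:n], sol[n:], M          # d_x, yhat, M
```

Lemma 1 guarantees that M is positive definite whenever x^T Q x < μ, so for full-rank A the block system is nonsingular, and d_x equals the explicit projected form M^{-1}[I − A^T(AM^{-1}A^T)^{-1}AM^{-1}]θ of (8) below.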
Via a simple calculation, we obtain for d_x, x̂, ŷ, ẑ and β̂:

(8)  d_x = M^{-1} [ I − A^T (A M^{-1} A^T)^{-1} A M^{-1} ] ϑ,
     x̂ = x + d_x,
     ŷ = −(A M^{-1} A^T)^{-1} A M^{-1} ϑ,
     β̂ = x^T Q x / μ,
     ẑ = G x̂ + d + β̂ Q x̂ − A^T ŷ,

where ϑ = μ X^{-1} e − Gx − d − (x^T Q x/μ) Q x.

We are now ready to describe the algorithm. Let δ, η, ρ and σ be positive constants satisfying two technical conditions (9) and (10); in particular 0 < η < δ < 1, ρ² ≤ η/δ, and σ/√n is small enough that ρθ₁θ₂ < 1 − σ/√n, where θ₁ = 1 + δ and θ₂ = δ + √n. The following lemma collects the consequences of these constants that will be used to prove Lemma 6.

Lemma 2. Let δ, η, ρ and σ be constants satisfying (9) and (10). Then:
(i) ρθ₁θ₂ < 1 − σ/√n;
(ii) α = 1 − σ/√n − ρθ₁θ₂ > 0;
(iii) ηθ₁ + δ² + ρ²θ₁²θ₂²/(1 − σ/√n) + σ ≤ δα;
(iv) (ρθ₁θ₂)² δ ≤ η (1 − σ/√n)².

Proof. Items (i) and (ii) follow directly from (10).
Items (iii) and (iv) are verified in the same way, by direct manipulation of (9) and (10), using θ₁ = 1 + δ and θ₂ = δ + √n.

We now state the algorithm.

ALGORITHM
Step 0: Let x⁰ ∈ S be a given interior point satisfying (6) for an initial penalty parameter μ⁰ > 0, and let ρ, δ, η and σ be given positive constants satisfying (9) and (10). Let ε > 0 be a tolerance for the merit function. Set k := 0.
Step 1.1: Compute d_x^k, y^{k+1}, β^{k+1} and z^{k+1} using (8).
Step 1.2: Set x^{k+1} := x^k + d_x^k and μ^{k+1} := μ^k (1 − σ/√n). Set k := k + 1.
Step 1.3: Compute the merit function φ(x^k, y^k, z^k, β^k) := (x^k)^T Q x^k + (x^k)^T z^k. If φ(x^k, y^k, z^k, β^k) ≤ ε, stop; otherwise return to Step 1.1.

In the next section, we prove that all points generated by the algorithm lie in the set T and remain close to the central path. We also show that the algorithm terminates in at most O(√n L) iterations. This fact will enable us to show that the algorithm performs no more than O(n^3.5 L) arithmetic operations until its termination.
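Steps 1.1-1.3 can be sketched as a short loop. The sketch below (function name ours) is run on a degenerate toy instance with Q = 0, in which the complementarity constraint is vacuous and CQPEC reduces to a linearly constrained QP; the analytic center of the simplex is then a fixed point of the iteration.

```python
import numpy as np

def run_algorithm(x, G, d, Q, A, mu, sigma, eps=1e-8, max_iter=500):
    """Sketch of Steps 1.1-1.3. x must be interior feasible: Ax = b, x > 0."""
    n, m = x.size, A.shape[0]
    for _ in range(max_iter):
        g = x @ Q @ x
        Qx = Q @ x
        M = (G + (g / mu) * Q + (2.0 / mu) * np.outer(Qx, Qx)
             + mu * np.diag(1.0 / x**2))
        theta = mu / x - G @ x - d - (g / mu) * Qx
        K = np.block([[M, -A.T], [A, np.zeros((m, m))]])
        sol = np.linalg.solve(K, np.concatenate([theta, np.zeros(m)]))
        dx, y = sol[:n], sol[n:]
        x = x + dx                              # Step 1.2
        mu_old, mu = mu, mu * (1.0 - sigma / np.sqrt(n))
        beta = (x @ Q @ x) / mu_old
        z = G @ x + d + beta * (Q @ x) - A.T @ y
        merit = x @ Q @ x + x @ z               # Step 1.3
        if abs(merit) <= eps:
            break
    return x, mu

# Toy instance: minimize (1/2)||x||^2 over {x >= 0, x1 + x2 = 1}, Q = 0.
A = np.ones((1, 2))
x0 = np.array([0.5, 0.5])
x, mu = run_algorithm(x0, np.eye(2), np.zeros(2), np.zeros((2, 2)), A,
                      mu=0.5, sigma=0.1)
```

By symmetry the Newton step vanishes at x = (0.5, 0.5), so the iterates stay at the solution while μ (and hence the merit) is driven to zero geometrically.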
5. Convergence

We begin this section by stating the following main results.

Theorem 3. Let ρ, δ, η and σ be positive constants satisfying relations (9) and (10). Assume that x and d_x satisfy

‖X^{-1} d_x‖ ≤ δ  and  ‖X Q x (x^T Q d_x)‖ ≤ η μ².

Let μ̂ = μ(1 − σ/√n) and consider the point (x̂, ŷ, ẑ, β̂) ∈ R^n × R^m × R^n × R given by (8). Then we have:
(a) (x̂, ŷ, ẑ, β̂) ∈ T;
(b) x̂^T ẑ ≤ μ(θ₂δ + η + √n) and x̂^T Q x̂/μ̂ ≤ ρθ₂²/(1 − σ/√n);
(c) ‖X̂^{-1} d_x̂‖ ≤ δ and ‖X̂ Q x̂ (x̂^T Q d_x̂)‖ ≤ η μ̂².

This theorem shows that if the current iterate is close to the central path, then so is the next iterate.

Proof. To prove this theorem, we use the following Lemmas 4, 5 and 6 (see the annex for the proofs of these lemmas).

Lemma 4. Under the assumptions of the above theorem, we have:
(i) (x̂, ŷ, ẑ, β̂) ∈ T and x^T Q x ≤ ρμ;
(ii) x̂^T ẑ ≤ μ(θ₂δ + η + √n) and x̂^T Q x̂/μ̂ ≤ ρθ₂²/(1 − σ/√n).

The following lemma will be used in the proof of Lemma 6.

Lemma 5. Let x, d_x, μ and μ̂ be as above. If ‖X Q x (x^T Q d_x)‖ ≤ η μ² and ‖X^{-1} d_x‖ ≤ δ, then we have:
(i) ‖X̂ X^{-1} − I‖ ≤ δ and ‖X̂ Q x̂‖ ≤ ρμθ₁θ₂;
(ii) x̂^T Q x̂ ≤ (x^T Q x) θ₁² ≤ ρμθ₁², so that x̂^T Q x̂/μ̂ ≤ ρθ₁²/(1 − σ/√n);
(iii) (x̂^T Q x̂/μ̂) |d^T Q d| ≤ ρθ₁θ₂ μ̂ ‖X̂^{-1} d‖² for every vector d;
(iv) ‖X̂ Q x (x^T Q d_x)‖ ≤ ηθ₁ μ².

Now we show that if ‖X^{-1} d_x‖ and ‖X Q x (x^T Q d_x)‖ are small, then they remain small at the next iteration.

Lemma 6. Let δ, η and σ be as above. If ‖X^{-1} d_x‖ ≤ δ and ‖X Q x (x^T Q d_x)‖ ≤ η μ², then ‖X̂^{-1} d_x̂‖ ≤ δ and ‖X̂ Q x̂ (x̂^T Q d_x̂)‖ ≤ η μ̂².
Let x⁰ ∈ S be an initial interior point satisfying (6). The following lemma shows how to choose μ⁰ > 0 such that the Newton step d_x at x⁰ satisfies ‖X⁰ Q x⁰ ((x⁰)^T Q d_x)‖ ≤ η (μ⁰)² and ‖(X⁰)^{-1} d_x‖ ≤ δ.

Lemma 7. Let δ, η and ρ satisfy (9) and (10), and let μ⁰ > 0 be such that

(13)  ‖X⁰ (G x⁰ + d)‖ ≤ μ⁰ [δ − ρ²(1 + δ)]  and  (x⁰)^T Q x⁰ ≤ ρ μ⁰.

Then ‖(X⁰)^{-1} d_x‖ ≤ δ and ‖X⁰ Q x⁰ ((x⁰)^T Q d_x)‖ ≤ η (μ⁰)².

Proof. Multiplying the first equation of (7) by d_x^T and using A d_x = 0, d_x^T G d_x ≥ 0 and ((x⁰)^T Q d_x)² ≥ 0, we obtain

(14)  μ⁰ ‖(X⁰)^{-1} d_x‖² ≤ −d_x^T (G x⁰ + d) − ((x⁰)^T Q x⁰/μ⁰) d_x^T Q x⁰ − ((x⁰)^T Q x⁰/μ⁰) d_x^T Q d_x + μ⁰ d_x^T (X⁰)^{-1} e.

On the other hand, we have

((x⁰)^T Q x⁰/μ⁰) d_x^T Q d_x ≥ −((x⁰)^T Q x⁰/μ⁰) ‖X⁰ Q X⁰‖ ‖(X⁰)^{-1} d_x‖² ≥ −ρ² μ⁰ ‖(X⁰)^{-1} d_x‖².

Then (14) becomes

(1 − ρ²) μ⁰ ‖(X⁰)^{-1} d_x‖² ≤ −d_x^T (G x⁰ + d) − ((x⁰)^T Q x⁰/μ⁰) d_x^T Q x⁰ + μ⁰ d_x^T (X⁰)^{-1} e.

From A d_x = 0 we have d_x^T A^T y⁰ = 0, so

(1 − ρ²) μ⁰ ‖(X⁰)^{-1} d_x‖² ≤ −d_x^T (G x⁰ + d) − ((x⁰)^T Q x⁰/μ⁰) d_x^T Q x⁰ + d_x^T (X⁰)^{-1} [ μ⁰ e − X⁰ A^T y⁰ ].
Using (6), we conclude that

(1 − ρ²) μ⁰ ‖(X⁰)^{-1} d_x‖² ≤ −d_x^T (G x⁰ + d) − ((x⁰)^T Q x⁰/μ⁰) d_x^T Q x⁰
 ≤ ‖(X⁰)^{-1} d_x‖ [ ‖X⁰ (G x⁰ + d)‖ + ((x⁰)^T Q x⁰/μ⁰) ‖X⁰ Q x⁰‖ ]
 ≤ μ⁰ ‖(X⁰)^{-1} d_x‖ [ (δ − ρ²(1 + δ)) + ρ² ]
 = μ⁰ (1 − ρ²) δ ‖(X⁰)^{-1} d_x‖.

Hence ‖(X⁰)^{-1} d_x‖ ≤ δ. For the last inequality of the lemma, we have

‖X⁰ Q x⁰ ((x⁰)^T Q d_x)‖ ≤ ‖X⁰ Q x⁰‖² ‖(X⁰)^{-1} d_x‖ ≤ ρ² δ (μ⁰)².

By (9) we have ρ² ≤ η/δ, and hence ‖X⁰ Q x⁰ ((x⁰)^T Q d_x)‖ ≤ (η/δ) δ (μ⁰)² = η (μ⁰)².

As a consequence of Theorem 3, we have the following result:

Corollary 8. Let x⁰ ∈ S be an initial point satisfying (6), μ⁰ an initial penalty parameter, and δ, η, σ and ρ constants satisfying the conditions of Lemmas 2 and 7. Then the sequence {(x^k, y^k, z^k, β^k)} generated by the algorithm satisfies, for all k ≥ 0:
(i) ‖X_k Q x^k ((x^k)^T Q d_x^k)‖ ≤ η μ_k² and ‖X_k^{-1} d_x^k‖ ≤ δ;
(ii) (x^k, y^k, z^k, β^k) ∈ T with (x^k)^T z^k ≤ μ_k (θ₂δ + η + √n);
(iii) (x^k)^T Q x^k ≤ μ_k ρ;
where μ_k = μ⁰ (1 − σ/√n)^k.

Proof. This result follows from Theorem 3 by induction on k.

We now derive an upper bound for the total number of iterations performed by the algorithm.
Proposition 9. The total number of iterations performed by the algorithm is no greater than

k* = (√n/σ) log [ (μ⁰/ε) ( θ₂δ + η + √n + (1 − σ/√n) ρ ) ],

where ε > 0 denotes the tolerance for the merit function and μ⁰ is the initial penalty parameter.

Proof. By Corollary 8, the merit function at iteration k is at most μ_k [ θ₂δ + η + √n + (1 − σ/√n) ρ ], so the algorithm terminates once this quantity falls below ε. By the definition of μ_k, and since log(1 − t) ≤ −t for all t < 1,

log μ_k = log μ⁰ + k log(1 − σ/√n) ≤ log μ⁰ − kσ/√n.

Hence for k ≥ k* we obtain μ_k [ θ₂δ + η + √n + (1 − σ/√n) ρ ] ≤ ε.

If we define L to be the number of bits necessary to encode the data of problem CQPEC, then the following corollary clearly holds.

Corollary 10. If log₂ μ⁰ = O(L), log₂ ε^{-1} = O(L) and the hypotheses of the preceding results hold, then the algorithm terminates in at most O(√n L) iterations and performs O(n^3.5 L) arithmetic operations.

Proof. From log₂ μ⁰ = O(L) and log₂ ε^{-1} = O(L), it is clear that k* = O(√n L). Since each iteration requires the solution of one linear system, which costs O(n³) arithmetic operations, the algorithm converges in at most O(n^3.5 L) operations.

6. Initialization of the algorithm

Consider the convex quadratic problem with a strict equilibrium constraint

(C̃QPEC)  Minimize (1/2) x̃^T G̃ x̃ + d̃^T x̃
          s.t.  Ã x̃ = b̃,  x̃^T Q̃ x̃ = 0,  x̃ ≥ 0,
where x̃ and d̃ are vectors of length ñ, b̃ is a vector of length m̃, Ã is an m̃ × ñ matrix, G̃ is a symmetric positive semidefinite ñ × ñ matrix, and Q̃ is a symmetric copositive ñ × ñ matrix. We assume that:

Assumptions.
(A₁) There exists w ∈ R^m̃ such that Ã^T w > 0 and b̃^T w > 0.
(A₂) For every feasible point x̃, we have x̃_i > 0 if and only if [Q̃ x̃]_i = 0.
(A₃) For all (x̃, w, v, β) ∈ Ϝ we have β ≤ ρ, where

Ϝ = { (x̃, w, v, β) : Ã^T w − G̃ x̃ − β Q̃ x̃ + v = d̃, x̃ ≥ 0, v ≥ 0, β ≥ 0 },

with ρ satisfying (9).

The aim of this section is to transform this problem into a convex quadratic problem with equilibrium constraints that satisfies assumptions (a), (b) and (c) and has an interior point x⁰ satisfying (6). The approach we propose has been suggested by numerous authors for transforming linear and convex quadratic programs into a form suitable for interior point algorithms.

Let N be a large positive constant. For all i we take

(15)  x̃⁰_i = μ⁰ / [Ã^T w]_i  and  λ = (x̃⁰)^T Ã^T w / b̃^T w = ñ μ⁰ / b̃^T w.

As in [8, 11], it is clear that the following convex quadratic problem with equilibrium constraint

(CQPEC)  Minimize (1/(2λ)) x̃^T G̃ x̃ + d̃^T x̃ + N α₁
         s.t.  Ã x̃ + (λ b̃ − Ã x̃⁰) α₁ = λ b̃,
               α₁ + α₂ = 2,
               (1/λ) x̃^T Q̃ x̃ = 0,  x̃ ≥ 0, α₁ ≥ 0, α₂ ≥ 0,

can be written in the form of CQPEC with

x = (x̃, α₁, α₂),   A = [ Ã   λb̃ − Ãx̃⁰   0
                          0       1        1 ],
G = [ (1/λ)G̃  0  0 ; 0  0  0 ; 0  0  0 ],   Q = [ (1/λ)Q̃  0  0 ; 0  0  0 ; 0  0  0 ],
d = (d̃, N, 0)^T,   b = (λb̃, 2)^T.
Let x⁰ = (x̃⁰, 1, 1)^T and y⁰ = (w, μ⁰)^T. It is clear that x⁰ is a feasible point for the problem CQPEC. Using (15) we obtain

X⁰ A^T y⁰ = ( X̃⁰ Ã^T w,  λ b̃^T w − (x̃⁰)^T Ã^T w + μ⁰,  μ⁰ )^T = ( μ⁰ ẽ, μ⁰, μ⁰ )^T = μ⁰ e.

Then x⁰ is an initial interior point for the problem CQPEC satisfying (6). On the other hand, we have A^T y⁰ > 0 and b^T y⁰ > 0. Thus assumption (a) is verified for CQPEC. The augmented set T is of the form

T = { (x̃, α₁, α₂, y, u, z̃, v₁, v₂, β, τ) :
      Ã x̃ + (λ b̃ − Ã x̃⁰) α₁ = λ b̃,  α₁ + α₂ = 2,
      Ã^T y − (1/λ) G̃ x̃ − (β/λ) Q̃ x̃ + z̃ = d̃,
      (λ b̃ − Ã x̃⁰)^T y + u − τ α₂ + v₁ = N,
      u − τ α₁ + v₂ = 0,
      x̃ ≥ 0, α₁ ≥ 0, α₂ ≥ 0, z̃ ≥ 0, v₁ ≥ 0, v₂ ≥ 0, β ≥ 0, τ ≥ 0 }.

It is easily seen that if (x̃, α₁, α₂, y, u, z̃, v₁, v₂, β, τ) ∈ T, then (x̃/λ, y, z̃, β) ∈ Ϝ, and by assumption (A₃) we obtain β ≤ ρ. Thus the augmented problem CQPEC satisfies assumption (c), and by (A₂) it also satisfies assumption (b). We can therefore apply the algorithm to CQPEC for a large enough N. If x* = (x̃*, α₁*, α₂*) is an optimal solution of CQPEC, then: if α₁* = 0, then x̃*/λ is an optimal solution of the original problem; otherwise, the original problem has no solution.
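The penalty schedule μ_{k+1} = μ_k(1 − σ/√n) and the iteration bound of Proposition 9 can be checked numerically. A small sketch (the function name and the sample constants in the test are ours; the constant C follows the merit bound of Corollary 8 as reconstructed above):

```python
import math

def iteration_bound(n, sigma, mu0, eps, rho, delta, eta):
    """k* = ceil( sqrt(n)/sigma * log(mu0 * C / eps) ), where
    C = theta2*delta + eta + sqrt(n) + (1 - sigma/sqrt(n))*rho
    bounds merit/mu_k along the iterates."""
    rn = math.sqrt(n)
    theta2 = delta + rn
    C = theta2 * delta + eta + rn + (1.0 - sigma / rn) * rho
    return max(0, math.ceil(rn / sigma * math.log(mu0 * C / eps)))
```

Because −log(1 − s) ≥ s for 0 < s < 1, this k* guarantees μ⁰ (1 − σ/√n)^{k*} C ≤ ε, and it grows like √n, matching Corollary 10.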
7. Numerical example

To illustrate the results of the paper, the progress of the algorithm is presented on a numerical example. The example is a 0-1 convex quadratic problem:

Minimize x₁² + 2x₂² − 3x₁x₂
s.t. .25x +.25x x +.375x x +.5x x +.25x x +.25x x.25x x +.75x x +.25x x
     x_i ∈ {0, 1}.

Adding the necessary surplus variables, this problem becomes

Minimize (1/2) x^T G x + d^T x   s.t.  Ax + z = b,  x_i ∈ {0, 1},  z ≥ 0.

Converting this system of constraints to the form required by our proposed approach results in the augmented system

Minimize (1/2) x̃^T G̃ x̃ + d̃^T x̃   s.t.  Ã x̃ = b̃,  x̃^T Q̃ x̃ = 0,  x̃ ≥ 0,

where x̃ collects x, z and the auxiliary variables, and Ã, G̃, Q̃ and b̃ are block matrices built from A, G, I and e following the construction of Section 6. The augmented system shown above has a known initial interior point x⁰.
Table 1. Iteration, x₁, x₂, merit function, ‖d_x‖.

By starting the algorithm with this initial solution, with σ = 0.1 and μ⁰ = 0.5, the process terminates at the optimal solution x*. A summary of the first iterations is given in Table 1. The trace of the interior solution trajectory is shown in Figure 1.

Figure 1. Trace of the interior solution trajectory.

8. Conclusion

We have established a polynomial method for convex quadratic programming with strict equilibrium constraints. To our knowledge, this is the first time an interior
point algorithm solves the MPEC in the convex quadratic case in polynomial time. The complexity obtained here coincides with the bound obtained for linear programming by most interior point methods.

Annex

Proof of Lemma 4.
(i) By the definition of x̂ and ẑ we have

(16)  x̂ = x + d_x = X (e + X^{-1} d_x) > 0

and

(17)  ẑ = μ X^{-1} e − (2/μ) Q x (x^T Q d_x) − μ X^{-2} d_x = μ X^{-1} [ e − (2/μ²) X Q x (x^T Q d_x) − X^{-1} d_x ] > 0.

This means that (x̂, ŷ, ẑ, β̂) ∈ T. On the other hand, β̂ = x^T Q x/μ, and from assumption (c) we obtain x^T Q x ≤ ρμ.
(ii) From (16) and (17) we have

(18)  x̂^T ẑ = μ (e + X^{-1} d_x)^T [ e − (2/μ²) X Q x (x^T Q d_x) − X^{-1} d_x ] ≤ μ (θ₂δ + η + √n).

Thus, using the fact that ‖XQX‖ ≤ x^T Q x ≤ ρμ, we obtain

x̂^T Q x̂ ≤ ‖X^{-1} x̂‖² ‖XQX‖ ≤ (x^T Q x) θ₂² ≤ ρμ θ₂².

So we have x̂^T Q x̂/μ̂ ≤ ρθ₂²/(1 − σ/√n).

Proof of Lemma 5.
(i) We have ‖X̂ X^{-1} − I‖ = max_i |d_{x,i}/x_i| ≤ ‖X^{-1} d_x‖ ≤ δ. To prove the last inequality of (i), we have

‖X̂ Q x̂‖ ≤ ‖X̂ X^{-1}‖ ‖X Q X‖ ‖X^{-1} x̂‖ ≤ ρμ (1 + δ)(δ + √n) = ρμ θ₁ θ₂.

The second inequality follows from the fact that ‖XQX‖ ≤ x^T Q x ≤ ρμ and ‖X^{-1} x̂‖ = ‖e + X^{-1} d_x‖ ≤ δ + √n.
(ii) From (i) of the last lemma, and since x^T Q d_x ≤ ‖XQx‖ ‖X^{-1} d_x‖ ≤ ρμδ and d_x^T Q d_x ≤ ‖XQX‖ ‖X^{-1} d_x‖² ≤ ρμδ², it follows that

x̂^T Q x̂ = x^T Q x + 2 x^T Q d_x + d_x^T Q d_x ≤ ρμ (1 + 2δ + δ²) = ρμ θ₁².

Thus

(19)  x̂^T Q x̂/μ̂ ≤ ρ θ₁² / (1 − σ/√n).

(iii) From (ii) above, for every vector d we have |d^T Q d| ≤ ‖X̂ Q X̂‖ ‖X̂^{-1} d‖² ≤ (x̂^T Q x̂) ‖X̂^{-1} d‖², and therefore, by (19) and (10),

(x̂^T Q x̂/μ̂) |d^T Q d| ≤ ρ θ₁ θ₂ μ̂ ‖X̂^{-1} d‖².

(iv) The last inequality is obtained from (i):

‖X̂ Q x (x^T Q d_x)‖ ≤ ‖X̂ X^{-1}‖ ‖X Q x (x^T Q d_x)‖ ≤ (1 + δ) η μ² = η θ₁ μ².

Proof of Lemma 6. From (7), d_x and d_x̂ satisfy:

(20)  [ G + (x^T Q x/μ) Q + (2/μ) Q x x^T Q + μ X^{-2} ] d_x + Gx + d + (x^T Q x/μ) Q x − μ X^{-1} e = A^T ŷ

and

(21)  [ G + (x̂^T Q x̂/μ̂) Q + (2/μ̂) Q x̂ x̂^T Q + μ̂ X̂^{-2} ] d_x̂ + Gx̂ + d + (x̂^T Q x̂/μ̂) Q x̂ − μ̂ X̂^{-1} e = A^T ỹ.
Using the fact that A d_x̂ = 0 and x̂ = x + d_x, and multiplying (20) and (21) by d_x̂^T, we conclude that

d_x̂^T [ (2/μ) Q x x^T Q + μ X^{-2} ] d_x + d_x̂^T (G x̂ + d) + (x^T Q x/μ) d_x̂^T Q x − μ d_x̂^T X^{-1} e = 0

and

(22)  d_x̂^T [ G + (x̂^T Q x̂/μ̂) Q + (2/μ̂) Q x̂ x̂^T Q + μ̂ X̂^{-2} ] d_x̂ + d_x̂^T (G x̂ + d) + (x̂^T Q x̂/μ̂) d_x̂^T Q x̂ − μ̂ d_x̂^T X̂^{-1} e = 0.

Subtracting, and using d_x̂^T Q x̂ (x̂^T Q d_x̂) = (x̂^T Q d_x̂)² ≥ 0 and d_x̂^T G d_x̂ ≥ 0, we obtain

μ̂ ‖X̂^{-1} d_x̂‖² + (x̂^T Q x̂/μ̂) d_x̂^T Q d_x̂ ≤ d_x̂^T X̂^{-1} [ (2/μ) X̂ Q x (x^T Q d_x) + μ (X̂ X^{-1} − I) X^{-1} d_x + ( (x^T Q x/μ) − (x̂^T Q x̂/μ̂) ) X̂ Q x − (σμ/√n) e ],

using μ̂ e = μ e − (σμ/√n) e and the identity e − X X̂^{-1} e = X̂^{-1} d_x.

Now, by (iii) of Lemma 5, we have

(x̂^T Q x̂/μ̂) d_x̂^T Q d_x̂ ≥ −ρ θ₁ θ₂ μ̂ ‖X̂^{-1} d_x̂‖².
Thus, dividing by μ and using μ̂/μ = 1 − σ/√n,

α ‖X̂^{-1} d_x̂‖ ≤ (2/μ²) ‖X̂ Q x (x^T Q d_x)‖ + ‖X̂ X^{-1} − I‖ ‖X^{-1} d_x‖ + | x^T Q x/μ − x̂^T Q x̂/μ̂ | ‖X̂ Q x‖/μ + σ,

with α = 1 − σ/√n − ρθ₁θ₂, which is positive by (i) and (ii) of Lemma 2. By using (19) and (i), (ii) and (iv) of the above lemma, we obtain

α ‖X̂^{-1} d_x̂‖ ≤ ηθ₁ + δ² + ρ²θ₁²θ₂²/(1 − σ/√n) + σ.

From (iii) of Lemma 2, it follows that the right-hand side is at most δα. This implies

‖X̂^{-1} d_x̂‖ ≤ δ.

Now we prove that ‖X̂ Q x̂ (x̂^T Q d_x̂)‖ ≤ η μ̂². We have

‖X̂ Q x̂ (x̂^T Q d_x̂)‖ ≤ ‖X̂ Q x̂‖² ‖X̂^{-1} d_x̂‖.
Using (i) of Lemma 5 and ‖X̂^{-1} d_x̂‖ ≤ δ, we obtain

‖X̂ Q x̂ (x̂^T Q d_x̂)‖ ≤ (ρμθ₁θ₂)² δ.

By (iv) of Lemma 2, we have (ρθ₁θ₂)² δ ≤ η (1 − σ/√n)². Therefore

‖X̂ Q x̂ (x̂^T Q d_x̂)‖ ≤ η μ² (1 − σ/√n)² = η μ̂².

References

[1] H.P. Benson, Optimization over the efficient set. J. Math. Anal. Appl.
[2] H.P. Benson, An algorithm for optimizing over the weakly-efficient set. European J. Oper. Res.
[3] Y. Chen and M. Florian, O-D demand adjustment problem with congestion: Part I. Model analysis and optimality conditions. Publication CRT.
[4] C. Cinquini and R. Contro, Optimal design of beams discretized by elastic plastic finite elements. Comput. Structures.
[5] A.V. Fiacco and G.P. McCormick, Nonlinear Programming: Sequential Unconstrained Minimization Techniques. John Wiley, New York (1968).
[6] D. Gale, The Theory of Linear Economic Models. McGraw-Hill, New York (1960).
[7] C. Gonzaga, Path-following methods for linear programming. SIAM Rev.
[8] D. Goldfarb and S. Liu, An O(n³L) primal interior point algorithm for convex quadratic programming. Math. Prog.
[9] M. Kočvara and J.V. Outrata, On the solution of optimum design problems with variational inequalities, in Recent Advances in Nonsmooth Optimization, edited by D.Z. Du, L. Qi and R. Womersly, World Scientific (November 1995).
[10] G. Maier, A quadratic programming approach for certain classes of nonlinear structural problems. Meccanica.
[11] R.D.C. Monteiro and I. Adler, Interior path following primal-dual algorithms. Part I: Linear programming. Math. Prog.
[12] Z.Q. Luo, J.S. Pang and D. Ralph, Mathematical Programs with Equilibrium Constraints. Cambridge University Press (1996).
1 / 42 Algorithms for Constrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University April 19, 2015 2 / 42 Outline 1. Convergence 2. Sequential quadratic
More informationAN ABADIE-TYPE CONSTRAINT QUALIFICATION FOR MATHEMATICAL PROGRAMS WITH EQUILIBRIUM CONSTRAINTS. Michael L. Flegel and Christian Kanzow
AN ABADIE-TYPE CONSTRAINT QUALIFICATION FOR MATHEMATICAL PROGRAMS WITH EQUILIBRIUM CONSTRAINTS Michael L. Flegel and Christian Kanzow University of Würzburg Institute of Applied Mathematics and Statistics
More informationOn the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method
Optimization Methods and Software Vol. 00, No. 00, Month 200x, 1 11 On the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method ROMAN A. POLYAK Department of SEOR and Mathematical
More informationNonlinear Programming
Nonlinear Programming Kees Roos e-mail: C.Roos@ewi.tudelft.nl URL: http://www.isa.ewi.tudelft.nl/ roos LNMB Course De Uithof, Utrecht February 6 - May 8, A.D. 2006 Optimization Group 1 Outline for week
More informationA SIMPLY CONSTRAINED OPTIMIZATION REFORMULATION OF KKT SYSTEMS ARISING FROM VARIATIONAL INEQUALITIES
A SIMPLY CONSTRAINED OPTIMIZATION REFORMULATION OF KKT SYSTEMS ARISING FROM VARIATIONAL INEQUALITIES Francisco Facchinei 1, Andreas Fischer 2, Christian Kanzow 3, and Ji-Ming Peng 4 1 Università di Roma
More informationA class of Smoothing Method for Linear Second-Order Cone Programming
Columbia International Publishing Journal of Advanced Computing (13) 1: 9-4 doi:1776/jac1313 Research Article A class of Smoothing Method for Linear Second-Order Cone Programming Zhuqing Gui *, Zhibin
More informationOn well definedness of the Central Path
On well definedness of the Central Path L.M.Graña Drummond B. F. Svaiter IMPA-Instituto de Matemática Pura e Aplicada Estrada Dona Castorina 110, Jardim Botânico, Rio de Janeiro-RJ CEP 22460-320 Brasil
More informationAbsolute value equations
Linear Algebra and its Applications 419 (2006) 359 367 www.elsevier.com/locate/laa Absolute value equations O.L. Mangasarian, R.R. Meyer Computer Sciences Department, University of Wisconsin, 1210 West
More informationDifferentiable exact penalty functions for nonlinear optimization with easy constraints. Takuma NISHIMURA
Master s Thesis Differentiable exact penalty functions for nonlinear optimization with easy constraints Guidance Assistant Professor Ellen Hidemi FUKUDA Takuma NISHIMURA Department of Applied Mathematics
More informationOptimality, Duality, Complementarity for Constrained Optimization
Optimality, Duality, Complementarity for Constrained Optimization Stephen Wright University of Wisconsin-Madison May 2014 Wright (UW-Madison) Optimality, Duality, Complementarity May 2014 1 / 41 Linear
More informationSecond Order Optimization Algorithms I
Second Order Optimization Algorithms I Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye Chapters 7, 8, 9 and 10 1 The
More informationminimize x subject to (x 2)(x 4) u,
Math 6366/6367: Optimization and Variational Methods Sample Preliminary Exam Questions 1. Suppose that f : [, L] R is a C 2 -function with f () on (, L) and that you have explicit formulae for
More informationA Constraint-Reduced MPC Algorithm for Convex Quadratic Programming, with a Modified Active-Set Identification Scheme
A Constraint-Reduced MPC Algorithm for Convex Quadratic Programming, with a Modified Active-Set Identification Scheme M. Paul Laiu 1 and (presenter) André L. Tits 2 1 Oak Ridge National Laboratory laiump@ornl.gov
More informationWritten Examination
Division of Scientific Computing Department of Information Technology Uppsala University Optimization Written Examination 202-2-20 Time: 4:00-9:00 Allowed Tools: Pocket Calculator, one A4 paper with notes
More informationAN AUGMENTED LAGRANGIAN AFFINE SCALING METHOD FOR NONLINEAR PROGRAMMING
AN AUGMENTED LAGRANGIAN AFFINE SCALING METHOD FOR NONLINEAR PROGRAMMING XIAO WANG AND HONGCHAO ZHANG Abstract. In this paper, we propose an Augmented Lagrangian Affine Scaling (ALAS) algorithm for general
More information2.3 Linear Programming
2.3 Linear Programming Linear Programming (LP) is the term used to define a wide range of optimization problems in which the objective function is linear in the unknown variables and the constraints are
More informationLecture 3. Optimization Problems and Iterative Algorithms
Lecture 3 Optimization Problems and Iterative Algorithms January 13, 2016 This material was jointly developed with Angelia Nedić at UIUC for IE 598ns Outline Special Functions: Linear, Quadratic, Convex
More informationSelf-Concordant Barrier Functions for Convex Optimization
Appendix F Self-Concordant Barrier Functions for Convex Optimization F.1 Introduction In this Appendix we present a framework for developing polynomial-time algorithms for the solution of convex optimization
More information4TE3/6TE3. Algorithms for. Continuous Optimization
4TE3/6TE3 Algorithms for Continuous Optimization (Algorithms for Constrained Nonlinear Optimization Problems) Tamás TERLAKY Computing and Software McMaster University Hamilton, November 2005 terlaky@mcmaster.ca
More informationINTERIOR-POINT METHODS FOR NONCONVEX NONLINEAR PROGRAMMING: CONVERGENCE ANALYSIS AND COMPUTATIONAL PERFORMANCE
INTERIOR-POINT METHODS FOR NONCONVEX NONLINEAR PROGRAMMING: CONVERGENCE ANALYSIS AND COMPUTATIONAL PERFORMANCE HANDE Y. BENSON, ARUN SEN, AND DAVID F. SHANNO Abstract. In this paper, we present global
More informationCONSTRAINED NONLINEAR PROGRAMMING
149 CONSTRAINED NONLINEAR PROGRAMMING We now turn to methods for general constrained nonlinear programming. These may be broadly classified into two categories: 1. TRANSFORMATION METHODS: In this approach
More informationInfeasibility Detection and an Inexact Active-Set Method for Large-Scale Nonlinear Optimization
Infeasibility Detection and an Inexact Active-Set Method for Large-Scale Nonlinear Optimization Frank E. Curtis, Lehigh University involving joint work with James V. Burke, University of Washington Daniel
More informationCONVERGENCE ANALYSIS OF AN INTERIOR-POINT METHOD FOR NONCONVEX NONLINEAR PROGRAMMING
CONVERGENCE ANALYSIS OF AN INTERIOR-POINT METHOD FOR NONCONVEX NONLINEAR PROGRAMMING HANDE Y. BENSON, ARUN SEN, AND DAVID F. SHANNO Abstract. In this paper, we present global and local convergence results
More informationSolution Methods. Richard Lusby. Department of Management Engineering Technical University of Denmark
Solution Methods Richard Lusby Department of Management Engineering Technical University of Denmark Lecture Overview (jg Unconstrained Several Variables Quadratic Programming Separable Programming SUMT
More informationA Primal-Dual Augmented Lagrangian Penalty-Interior-Point Filter Line Search Algorithm
Journal name manuscript No. (will be inserted by the editor) A Primal-Dual Augmented Lagrangian Penalty-Interior-Point Filter Line Search Algorithm Rene Kuhlmann Christof Büsens Received: date / Accepted:
More informationOn Generalized Primal-Dual Interior-Point Methods with Non-uniform Complementarity Perturbations for Quadratic Programming
On Generalized Primal-Dual Interior-Point Methods with Non-uniform Complementarity Perturbations for Quadratic Programming Altuğ Bitlislioğlu and Colin N. Jones Abstract This technical note discusses convergence
More informationLecture 13 Newton-type Methods A Newton Method for VIs. October 20, 2008
Lecture 13 Newton-type Methods A Newton Method for VIs October 20, 2008 Outline Quick recap of Newton methods for composite functions Josephy-Newton methods for VIs A special case: mixed complementarity
More informationEnlarging neighborhoods of interior-point algorithms for linear programming via least values of proximity measure functions
Enlarging neighborhoods of interior-point algorithms for linear programming via least values of proximity measure functions Y B Zhao Abstract It is well known that a wide-neighborhood interior-point algorithm
More informationJournal of Convex Analysis Vol. 14, No. 2, March 2007 AN EXPLICIT DESCENT METHOD FOR BILEVEL CONVEX OPTIMIZATION. Mikhail Solodov. September 12, 2005
Journal of Convex Analysis Vol. 14, No. 2, March 2007 AN EXPLICIT DESCENT METHOD FOR BILEVEL CONVEX OPTIMIZATION Mikhail Solodov September 12, 2005 ABSTRACT We consider the problem of minimizing a smooth
More informationLecture 13: Constrained optimization
2010-12-03 Basic ideas A nonlinearly constrained problem must somehow be converted relaxed into a problem which we can solve (a linear/quadratic or unconstrained problem) We solve a sequence of such problems
More informationNumerical Optimization
Linear Programming - Interior Point Methods Computer Science and Automation Indian Institute of Science Bangalore 560 012, India. NPTEL Course on Example 1 Computational Complexity of Simplex Algorithm
More informationPenalty and Barrier Methods. So we again build on our unconstrained algorithms, but in a different way.
AMSC 607 / CMSC 878o Advanced Numerical Optimization Fall 2008 UNIT 3: Constrained Optimization PART 3: Penalty and Barrier Methods Dianne P. O Leary c 2008 Reference: N&S Chapter 16 Penalty and Barrier
More informationImproved Full-Newton Step O(nL) Infeasible Interior-Point Method for Linear Optimization
J Optim Theory Appl 2010) 145: 271 288 DOI 10.1007/s10957-009-9634-0 Improved Full-Newton Step OnL) Infeasible Interior-Point Method for Linear Optimization G. Gu H. Mansouri M. Zangiabadi Y.Q. Bai C.
More informationOptimization. Escuela de Ingeniería Informática de Oviedo. (Dpto. de Matemáticas-UniOvi) Numerical Computation Optimization 1 / 30
Optimization Escuela de Ingeniería Informática de Oviedo (Dpto. de Matemáticas-UniOvi) Numerical Computation Optimization 1 / 30 Unconstrained optimization Outline 1 Unconstrained optimization 2 Constrained
More informationISM206 Lecture Optimization of Nonlinear Objective with Linear Constraints
ISM206 Lecture Optimization of Nonlinear Objective with Linear Constraints Instructor: Prof. Kevin Ross Scribe: Nitish John October 18, 2011 1 The Basic Goal The main idea is to transform a given constrained
More informationInterior-Point Methods
Interior-Point Methods Stephen Wright University of Wisconsin-Madison Simons, Berkeley, August, 2017 Wright (UW-Madison) Interior-Point Methods August 2017 1 / 48 Outline Introduction: Problems and Fundamentals
More informationNew Infeasible Interior Point Algorithm Based on Monomial Method
New Infeasible Interior Point Algorithm Based on Monomial Method Yi-Chih Hsieh and Dennis L. Bricer Department of Industrial Engineering The University of Iowa, Iowa City, IA 52242 USA (January, 1995)
More informationCE 191: Civil and Environmental Engineering Systems Analysis. LEC 05 : Optimality Conditions
CE 191: Civil and Environmental Engineering Systems Analysis LEC : Optimality Conditions Professor Scott Moura Civil & Environmental Engineering University of California, Berkeley Fall 214 Prof. Moura
More informationA Smoothing Newton Method for Solving Absolute Value Equations
A Smoothing Newton Method for Solving Absolute Value Equations Xiaoqin Jiang Department of public basic, Wuhan Yangtze Business University, Wuhan 430065, P.R. China 392875220@qq.com Abstract: In this paper,
More informationAM 205: lecture 19. Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods
AM 205: lecture 19 Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods Quasi-Newton Methods General form of quasi-newton methods: x k+1 = x k α
More informationLecture 5. Theorems of Alternatives and Self-Dual Embedding
IE 8534 1 Lecture 5. Theorems of Alternatives and Self-Dual Embedding IE 8534 2 A system of linear equations may not have a solution. It is well known that either Ax = c has a solution, or A T y = 0, c
More informationDerivative-Based Numerical Method for Penalty-Barrier Nonlinear Programming
Derivative-Based Numerical Method for Penalty-Barrier Nonlinear Programming Martin Peter Neuenhofen October 26, 2018 Abstract We present an NLP solver for nonlinear optimization with quadratic penalty
More informationGlobal Quadratic Minimization over Bivalent Constraints: Necessary and Sufficient Global Optimality Condition
Global Quadratic Minimization over Bivalent Constraints: Necessary and Sufficient Global Optimality Condition Guoyin Li Communicated by X.Q. Yang Abstract In this paper, we establish global optimality
More informationA Second Full-Newton Step O(n) Infeasible Interior-Point Algorithm for Linear Optimization
A Second Full-Newton Step On Infeasible Interior-Point Algorithm for Linear Optimization H. Mansouri C. Roos August 1, 005 July 1, 005 Department of Electrical Engineering, Mathematics and Computer Science,
More informationREGULARIZED SEQUENTIAL QUADRATIC PROGRAMMING METHODS
REGULARIZED SEQUENTIAL QUADRATIC PROGRAMMING METHODS Philip E. Gill Daniel P. Robinson UCSD Department of Mathematics Technical Report NA-11-02 October 2011 Abstract We present the formulation and analysis
More informationarxiv: v1 [math.na] 8 Jun 2018
arxiv:1806.03347v1 [math.na] 8 Jun 2018 Interior Point Method with Modified Augmented Lagrangian for Penalty-Barrier Nonlinear Programming Martin Neuenhofen June 12, 2018 Abstract We present a numerical
More informationApplications of Linear Programming
Applications of Linear Programming lecturer: András London University of Szeged Institute of Informatics Department of Computational Optimization Lecture 9 Non-linear programming In case of LP, the goal
More informationDecision Science Letters
Decision Science Letters 8 (2019) *** *** Contents lists available at GrowingScience Decision Science Letters homepage: www.growingscience.com/dsl A new logarithmic penalty function approach for nonlinear
More informationSolving Obstacle Problems by Using a New Interior Point Algorithm. Abstract
Solving Obstacle Problems by Using a New Interior Point Algorithm Yi-Chih Hsieh Department of Industrial Engineering National Yunlin Polytechnic Institute Huwei, Yunlin 6308 Taiwan and Dennis L. Bricer
More informationMS&E 318 (CME 338) Large-Scale Numerical Optimization
Stanford University, Management Science & Engineering (and ICME) MS&E 318 (CME 338) Large-Scale Numerical Optimization 1 Origins Instructor: Michael Saunders Spring 2015 Notes 9: Augmented Lagrangian Methods
More informationAccelerated primal-dual methods for linearly constrained convex problems
Accelerated primal-dual methods for linearly constrained convex problems Yangyang Xu SIAM Conference on Optimization May 24, 2017 1 / 23 Accelerated proximal gradient For convex composite problem: minimize
More informationLecture 15 Newton Method and Self-Concordance. October 23, 2008
Newton Method and Self-Concordance October 23, 2008 Outline Lecture 15 Self-concordance Notion Self-concordant Functions Operations Preserving Self-concordance Properties of Self-concordant Functions Implications
More informationIE 5531: Engineering Optimization I
IE 5531: Engineering Optimization I Lecture 19: Midterm 2 Review Prof. John Gunnar Carlsson November 22, 2010 Prof. John Gunnar Carlsson IE 5531: Engineering Optimization I November 22, 2010 1 / 34 Administrivia
More informationA sensitivity result for quadratic semidefinite programs with an application to a sequential quadratic semidefinite programming algorithm
Volume 31, N. 1, pp. 205 218, 2012 Copyright 2012 SBMAC ISSN 0101-8205 / ISSN 1807-0302 (Online) www.scielo.br/cam A sensitivity result for quadratic semidefinite programs with an application to a sequential
More informationA null-space primal-dual interior-point algorithm for nonlinear optimization with nice convergence properties
A null-space primal-dual interior-point algorithm for nonlinear optimization with nice convergence properties Xinwei Liu and Yaxiang Yuan Abstract. We present a null-space primal-dual interior-point algorithm
More informationQuiz Discussion. IE417: Nonlinear Programming: Lecture 12. Motivation. Why do we care? Jeff Linderoth. 16th March 2006
Quiz Discussion IE417: Nonlinear Programming: Lecture 12 Jeff Linderoth Department of Industrial and Systems Engineering Lehigh University 16th March 2006 Motivation Why do we care? We are interested in
More informationOPTIMALITY OF RANDOMIZED TRUNK RESERVATION FOR A PROBLEM WITH MULTIPLE CONSTRAINTS
OPTIMALITY OF RANDOMIZED TRUNK RESERVATION FOR A PROBLEM WITH MULTIPLE CONSTRAINTS Xiaofei Fan-Orzechowski Department of Applied Mathematics and Statistics State University of New York at Stony Brook Stony
More informationAdvanced Mathematical Programming IE417. Lecture 24. Dr. Ted Ralphs
Advanced Mathematical Programming IE417 Lecture 24 Dr. Ted Ralphs IE417 Lecture 24 1 Reading for This Lecture Sections 11.2-11.2 IE417 Lecture 24 2 The Linear Complementarity Problem Given M R p p and
More informationA full-newton step infeasible interior-point algorithm for linear complementarity problems based on a kernel function
Algorithmic Operations Research Vol7 03) 03 0 A full-newton step infeasible interior-point algorithm for linear complementarity problems based on a kernel function B Kheirfam a a Department of Mathematics,
More informationOptimization. Yuh-Jye Lee. March 28, Data Science and Machine Intelligence Lab National Chiao Tung University 1 / 40
Optimization Yuh-Jye Lee Data Science and Machine Intelligence Lab National Chiao Tung University March 28, 2017 1 / 40 The Key Idea of Newton s Method Let f : R n R be a twice differentiable function
More information1. Introduction. We consider mathematical programs with equilibrium constraints in the form of complementarity constraints:
SOME PROPERTIES OF REGULARIZATION AND PENALIZATION SCHEMES FOR MPECS DANIEL RALPH AND STEPHEN J. WRIGHT Abstract. Some properties of regularized and penalized nonlinear programming formulations of mathematical
More informationExamination paper for TMA4180 Optimization I
Department of Mathematical Sciences Examination paper for TMA4180 Optimization I Academic contact during examination: Phone: Examination date: 26th May 2016 Examination time (from to): 09:00 13:00 Permitted
More informationAn O(nL) Infeasible-Interior-Point Algorithm for Linear Programming arxiv: v2 [math.oc] 29 Jun 2015
An O(nL) Infeasible-Interior-Point Algorithm for Linear Programming arxiv:1506.06365v [math.oc] 9 Jun 015 Yuagang Yang and Makoto Yamashita September 8, 018 Abstract In this paper, we propose an arc-search
More informationHIGHER ORDER OPTIMALITY AND DUALITY IN FRACTIONAL VECTOR OPTIMIZATION OVER CONES
- TAMKANG JOURNAL OF MATHEMATICS Volume 48, Number 3, 273-287, September 2017 doi:10.5556/j.tkjm.48.2017.2311 - - - + + This paper is available online at http://journals.math.tku.edu.tw/index.php/tkjm/pages/view/onlinefirst
More informationDetermination of Feasible Directions by Successive Quadratic Programming and Zoutendijk Algorithms: A Comparative Study
International Journal of Mathematics And Its Applications Vol.2 No.4 (2014), pp.47-56. ISSN: 2347-1557(online) Determination of Feasible Directions by Successive Quadratic Programming and Zoutendijk Algorithms:
More informationCOMPARATIVE STUDY BETWEEN LEMKE S METHOD AND THE INTERIOR POINT METHOD FOR THE MONOTONE LINEAR COMPLEMENTARY PROBLEM
STUDIA UNIV. BABEŞ BOLYAI, MATHEMATICA, Volume LIII, Number 3, September 2008 COMPARATIVE STUDY BETWEEN LEMKE S METHOD AND THE INTERIOR POINT METHOD FOR THE MONOTONE LINEAR COMPLEMENTARY PROBLEM ADNAN
More informationA GLOBALLY CONVERGENT STABILIZED SQP METHOD
A GLOBALLY CONVERGENT STABILIZED SQP METHOD Philip E. Gill Daniel P. Robinson July 6, 2013 Abstract Sequential quadratic programming SQP methods are a popular class of methods for nonlinearly constrained
More informationIterative Reweighted Minimization Methods for l p Regularized Unconstrained Nonlinear Programming
Iterative Reweighted Minimization Methods for l p Regularized Unconstrained Nonlinear Programming Zhaosong Lu October 5, 2012 (Revised: June 3, 2013; September 17, 2013) Abstract In this paper we study
More information