Rapid infeasibility detection in a mixed logarithmic barrier-augmented Lagrangian method for nonlinear optimization


Rapid infeasibility detection in a mixed logarithmic barrier-augmented Lagrangian method for nonlinear optimization

Paul Armand, Ngoc Nguyen Tran

To cite this version: Paul Armand, Ngoc Nguyen Tran. Rapid infeasibility detection in a mixed logarithmic barrier-augmented Lagrangian method for nonlinear optimization. [Research Report] Université de Limoges, France; XLIM. <hal >

HAL Id: hal. Submitted on 4 May 2018.

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Rapid infeasibility detection in a mixed logarithmic barrier-augmented Lagrangian method for nonlinear optimization

Paul Armand and Ngoc Nguyen Tran, Laboratoire XLIM, Université de Limoges, 123 av. A. Thomas, 87060 Limoges, France

ARTICLE HISTORY: Compiled May 4, 2018

ABSTRACT: We present a modification of a primal-dual algorithm [7] based on a mixed augmented Lagrangian and log-barrier penalty function. The goal of this new feature is to quickly detect infeasibility. An additional parameter is introduced to balance the minimization of the objective function and the realization of the constraints. The rules for updating the parameters are based on [8]. The global convergence of the modified algorithm is analysed under mild assumptions. We also show that, under a suitable choice of the parameters along the iterations, the rate of convergence of the algorithm to an infeasible stationary point is superlinear. This is the first local convergence result for the class of interior point methods in the infeasible case. We finally report some numerical experiments showing that the new algorithm is quite efficient at detecting infeasibility and does not deteriorate the overall behavior in the general case.

KEYWORDS: infeasibility detection, nonlinear optimization, primal-dual methods, interior-point method, augmented Lagrangian method

1. Introduction

Nonlinear optimization algorithms focus on two tasks: minimizing the objective function and satisfying the constraints. When the set of optimal solutions is nonempty, many efficient algorithms in the literature are designed to find a local optimal solution of the optimization problem. On the other hand, infeasible instances also arise quite often. They can appear, for example, in mathematical modeling, by varying the parameters of a model to study the system response. Infeasibility may also appear when solving a sequence of subproblems in an algorithm like a branch-and-bound method. Even if the problem is feasible, a numerical optimization algorithm may encounter difficulties in finding a feasible point. In that case it would be desirable to quickly return an infeasible stationary point, to avoid a long sequence of iterations or convergence to a spurious solution. In this context, rapid infeasibility detection becomes an important issue in nonlinear optimization.

In the literature, a variety of algorithms using different approaches have been proposed to handle infeasibility. Martínez and Prudente [21], Birgin et al. [9, 10] and Gonçalves et al. [19] investigated infeasibility detection for augmented Lagrangian algorithms. Byrd et al. [12] and Burke et al. [11] have also proposed infeasibility detection

CONTACT P. Armand. paul.armand@unilim.fr

strategies for sequential quadratic programming methods. In particular, local convergence analyses in the infeasible case have been carried out in these papers. In the framework of interior point methods, the solver IPOPT [25], which implements a filter line search algorithm, uses a feasibility restoration phase. Besides finding a new iterate acceptable to the filter, a feature of this phase is to detect (local) infeasibility. However, the local convergence of IPOPT near an infeasible stationary point was not analysed. The algorithm of Curtis [14] is a combination of a penalty method and an interior point method. The numerical results showed that this approach is efficient at detecting infeasibility. Nocedal et al. [22] presented a trust region interior point method with two phases: a main phase (solution of the barrier problem with a decrease of the barrier parameter) and a feasibility phase (minimization of a feasibility violation measure). Numerical experiments showed that the new method is better than its predecessor KNITRO/CG [13] at detecting infeasible problems, with no loss of robustness in solving feasible problems. Despite the good numerical results obtained, there is no complete global or local convergence analysis for the two latter algorithms ([14, 22]).

Recently, Armand and Omheni [7] proposed a nonlinear optimization algorithm, called SPDOPT, which is a mix of an interior point method and of an augmented Lagrangian method. The capability of this algorithm to detect infeasibility is closely related to the behaviors of the penalty parameter and of the dual variables. In particular, an infeasible stationary point can be detected when the sequence of dual variables tends to infinity and the penalty parameter converges to zero. Nevertheless, fast infeasibility detection has neither been observed in practice nor been theoretically proved for this algorithm. This fact motivated us to modify SPDOPT in order to accelerate the infeasibility detection without losing its good performance in the feasible case. More specifically, a new parameter, called the feasibility parameter, is introduced to balance the minimization of the barrier function and the realization of the equality constraints. If a nearly feasible point is detected, the feasibility parameter remains constant and our algorithm behaves like the original algorithm. When the algorithm tends to an infeasible stationary point, the feasibility parameter acts as a barrier parameter. In this case, the exact solution of the perturbed system parametrized by the feasibility parameter defines a smooth trajectory. With a suitable rule for updating the feasibility parameter, the iterates tangentially follow this trajectory. Consequently, when the sequence of iterates converges to an infeasible stationary point, the algorithm can achieve a superlinear rate of convergence. To the best of our knowledge, this is the first local convergence analysis in the infeasible case related to interior point methods. The idea of introducing a new parameter is inspired by [8], which is a modification of SPDOPT-AL [6]. Besides analyzing the local convergence near an infeasible stationary point, numerical experiments demonstrated a better performance of the modified algorithm compared with its predecessor in detecting infeasibility.

The paper is organized as follows. The current section is an introduction with some notations which will be used throughout the paper. The algorithm is described in the next section. Sections 3 and 4 are devoted to the global and the local convergence analyses. The implementation of our algorithm and some numerical experiments are reported in the last section.

Notation. Vector inequalities are understood componentwise. For a vector $x$, the corresponding capital letter stands for the diagonal matrix $X = \mathrm{diag}(x)$ and the $i$th component of this vector is

denoted by $[x]_i$, or simply $x_i$ when there is no ambiguity. The letter $e$ is used for the vector of all ones. For two real vectors $x$ and $y$ of same length in $\mathbb{R}^n$, $x^\top y$ is their Euclidean scalar product and $\|x\| = (x^\top x)^{1/2}$ is the associated norm. The Hadamard product of two vectors $x$ and $y$ is defined by $[x \circ y]_i = [x]_i [y]_i$ for all indices $i$. The notation $x \perp y$ means that the vectors $x$ and $y$ are orthogonal. For a real matrix $M$, the induced matrix norm is $\|M\| = \max\{\|Md\| : \|d\| \le 1\}$. The inertia of a real symmetric matrix $M$, denoted $\mathrm{In}(M) := (\iota_+, \iota_-, \iota_0)$, is the triple formed by the numbers of positive, negative and null eigenvalues. For a function $f$ and an iterate $x_k$, to simplify the notation we write $f_k = f(x_k)$. Likewise, $f_*$ stands for $f(x_*)$, and so on. The positive part of a real number $r$ is defined by $r^+ = \max\{r, 0\}$. Let $\{a_k\}$ and $\{b_k\}$ be nonnegative scalar sequences. We write $a_k = O(b_k)$, or equivalently $b_k = \Omega(a_k)$, if there exists a constant $c > 0$ such that $a_k \le c\, b_k$ for all $k \in \mathbb{N}$. We also write $a_k = \Theta(b_k)$ if $a_k = O(b_k)$ and $a_k = \Omega(b_k)$. The notation $a_k = o(b_k)$ means that there exists a positive sequence $\{\epsilon_k\}$ converging to zero such that $a_k = \epsilon_k b_k$ for all $k \in \mathbb{N}$.

2. Algorithm

Consider the following nonlinear optimization problem:
$$\underset{x \in \mathbb{R}^n}{\text{minimize}} \ f(x) \quad \text{subject to} \quad c(x) = 0 \ \text{ and } \ x \ge 0, \qquad (1)$$
where $f : \mathbb{R}^n \to \mathbb{R}$ and $c : \mathbb{R}^n \to \mathbb{R}^m$ are twice continuously differentiable functions. Any optimization problem with equality and inequality constraints can be formulated in this form, possibly by adding slack variables and splitting free variables.

Let us recall some basic definitions about stationarity. A point $x \in \mathbb{R}^n$ is called a Fritz-John (FJ) point of problem (1) if there exists a nonzero vector $(u, y, z) \in \mathbb{R} \times \mathbb{R}^m \times \mathbb{R}^n$ such that
$$u\, g(x) + A(x) y - z = 0, \quad u \ge 0, \quad c(x) = 0 \quad \text{and} \quad 0 \le x \perp z \ge 0,$$
where we denote by $g(x) = \nabla f(x)$ the gradient of $f$ at $x$ and by $A(x) = \nabla c(x)$ the transpose of the Jacobian matrix of $c$ at $x$. When $u > 0$, a FJ point $x$ is called a Karush-Kuhn-Tucker (KKT) point. By dividing the first equation by $u$, we see that $y/u$ and $z/u$ are the vectors of Lagrange multipliers associated with the equality and the bound constraints. Whenever $u = 0$, a FJ point $x$ is called a singular stationary point. Equivalently, a singular stationary point is a feasible solution of (1) at which the Mangasarian-Fromovitz constraint qualification (MFCQ) does not hold. The point $x$ is called an infeasible stationary point if there exists $z \in \mathbb{R}^n$ such that
$$c(x) \neq 0, \quad A(x) c(x) - z = 0 \quad \text{and} \quad 0 \le x \perp z \ge 0.$$
In other words, an infeasible stationary point is not feasible for (1), but is a KKT point of the feasibility problem
$$\underset{x \in \mathbb{R}^n}{\text{minimize}} \ \tfrac{1}{2}\|c(x)\|^2 \quad \text{subject to} \quad x \ge 0. \qquad (2)$$
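To make the last definition concrete, here is a minimal sketch (not taken from the paper; the function names and the one-dimensional test problem are ours) that checks numerically whether a pair $(x, z)$ satisfies the infeasible stationary point conditions, i.e. whether $x$ is a KKT point of the feasibility problem (2) while remaining infeasible for (1).

```python
import numpy as np

def is_infeasible_stationary(c, A, x, z, tol=1e-8):
    """Check c(x) != 0, A(x) c(x) - z = 0 and 0 <= x perp z >= 0, up to a tolerance.

    c(x) returns the constraint values (m,); A(x) returns the n-by-m matrix whose
    columns are the constraint gradients (the transpose of the Jacobian, as in the text).
    """
    cx, Ax = c(x), A(x)
    infeasible = np.linalg.norm(cx) > tol              # x is not feasible for (1)
    stationary = np.linalg.norm(Ax @ cx - z) <= tol    # gradient of 0.5*||c||^2 minus bound multiplier
    complementarity = (x >= -tol).all() and (z >= -tol).all() and abs(x @ z) <= tol
    return bool(infeasible and stationary and complementarity)

# Hypothetical example: minimize x subject to exp(x) + 1 = 0 and x >= 0 (clearly infeasible).
c = lambda x: np.array([np.exp(x[0]) + 1.0])
A = lambda x: np.array([[np.exp(x[0])]])               # single column: derivative of the constraint
x0, z0 = np.array([0.0]), np.array([2.0])              # A(x0) c(x0) = 1 * 2 = 2 = z0, and x0 * z0 = 0
print(is_infeasible_stationary(c, A, x0, z0))          # True
```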

To the problem (1), we associate the mixed penalty function
$$\varphi_{\rho,\lambda,\sigma,\mu}(x) = \rho f(x) + \lambda^\top c(x) + \tfrac{1}{2\sigma}\|c(x)\|^2 - \rho\mu \sum_i \log x_i, \qquad (3)$$
where $\rho > 0$ is the feasibility parameter, $\lambda \in \mathbb{R}^m$ is an estimate of the vector of Lagrange multipliers associated with the equality constraints, $\sigma > 0$ is the quadratic penalty parameter and $\mu > 0$ is the barrier parameter. This penalty function is a mix of the augmented Lagrangian and of the logarithmic barrier function. It can be interpreted as the augmented Lagrangian associated with the log-barrier problem
$$\underset{x \in \mathbb{R}^n}{\text{minimize}} \ \rho\Big(f(x) - \mu \sum_i \log x_i\Big) \quad \text{subject to} \quad c(x) = 0. \qquad (4)$$
The first order optimality conditions for (3) are
$$\rho g(x) + A(x)\big(\lambda + \tfrac{1}{\sigma}c(x)\big) - \rho\mu X^{-1} e = 0. \qquad (5)$$
By introducing the dual variables $y = \lambda + \tfrac{1}{\sigma}c(x)$ and $z = \rho\mu X^{-1}e$, the equation (5) is equivalently formulated as
$$\Phi(w, \lambda, \rho, \sigma, \mu) := \begin{pmatrix} \rho g(x) + A(x) y - z \\ c(x) + \sigma(\lambda - y) \\ XZe - \rho\mu e \end{pmatrix} = 0, \qquad (6)$$
where $w := (x, y, z) \in \mathbb{R}^N$, with $N = n + m + n$. This primal-dual system of equations can be seen as a perturbation of the FJ optimality conditions of problem (1). The equality constraints are perturbed thanks to the term $\sigma(\lambda - y)$. Note that when $\lambda = y$, this system is the primal-dual optimality system associated with the problem (4). Each complementarity product is perturbed thanks to the term $\rho\mu$. An important feature of our approach is that each time the parameters are updated, either $\rho$ is reduced, or $\mu$ is reduced, but not both. We will see that, under some usual regularity assumptions, this allows us to guarantee superlinear convergence in case of convergence to a local minimum or to an infeasible stationary point.
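The following short sketch (ours, not part of the paper; dense NumPy data is assumed and the helper names are hypothetical) evaluates the primal-dual residual $\Phi$ of (6) under the sign conventions written above; it is only meant to make the role of each block explicit.

```python
import numpy as np

def residual_phi(x, y, z, lam, rho, sigma, mu, g, A, c):
    """Residual of the perturbed optimality system (6).

    g(x): gradient of f (n,), A(x): n-by-m matrix of constraint gradients, c(x): constraints (m,).
    """
    r_dual = rho * g(x) + A(x) @ y - z       # scaled dual feasibility
    r_prim = c(x) + sigma * (lam - y)        # relaxed equality constraints (augmented Lagrangian part)
    r_comp = x * z - rho * mu                # perturbed complementarity, componentwise
    return np.concatenate([r_dual, r_prim, r_comp])
```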

The algorithm involves applying a Newton-type method for the solution of the system $\Phi = 0$, while updating the parameters along the iterations to control the convergence to a FJ point or to an infeasible stationary point. There are two kinds of iterations: outer and inner. At an outer iteration, the parameters are updated, while at an inner iteration the parameters are kept constant. At each iteration, outer or inner, a candidate iterate $w^+$ is computed as the solution of the following linear system:
$$J_{\rho,\theta,\delta}(w)(w^+ - w) = -\Phi(w, \lambda, \rho, \sigma, \mu),$$
where the matrix is defined as
$$J_{\rho,\theta,\delta}(w) := \begin{pmatrix} H_{\rho,\theta}(x, y) & A(x) & -I \\ A(x)^\top & -\delta I & 0 \\ Z & 0 & X \end{pmatrix},
\quad \text{with} \quad
H_{\rho,\theta}(x, y) := \rho \nabla^2 f(x) + \sum_{i=1}^m y_i \nabla^2 c_i(x) + \theta I.$$
The regularization parameters $\theta \ge 0$ and $\delta > 0$ are chosen such that the matrix
$$K_{\rho,\theta,\delta}(w) := H_{\rho,\theta}(x, y) + X^{-1} Z + \tfrac{1}{\delta} A(x) A(x)^\top$$
is positive definite. This property implies not only that the matrix $J_{\rho,\theta,\delta}(w)$ is nonsingular, but also that the solution of the linear system solved during the inner iterations is a descent direction for some merit function.

We now describe in detail the outer iteration algorithm. At the beginning, a starting point $w_0 = (x_0, y_0, z_0) \in \mathbb{R}^{2n+m}$ satisfying $v_0 = (x_0, z_0) > 0$ is chosen, then we set $\lambda_0 = y_0$. Besides, we choose $\sigma_0 > 0$, $\mu_0 > 0$, $\rho_0 = 1$ and four constants $\kappa$, $d$, $\bar\tau \in (0, 1)$ and $l \in \mathbb{N}$. The outer iteration counter is set to $k = 0$. A feasibility tolerance $\epsilon > 0$ must be chosen. A flag is used to indicate if the algorithm is in the feasibility detection phase ($F = 1$) or not ($F = 0$). Initially $F = 1$, and this value is kept until a feasible, or nearly feasible, point has been detected. The index $i_k$ stores the last iteration number prior to $k$ at which inequality (7) is satisfied. It is initialized to $i_0 = 0$.

The algorithm can be seen as an extension of [7, Algorithm 1]. The first four steps are related to the updating of the parameters and are drawn from the feasibility detection algorithm for the equality constrained case proposed in [8]. The first step aims to detect a feasible or nearly feasible point. Whenever this is the case, the flag is set to $F = 0$ and this value is kept for all further iterations. In this case, the algorithm is exactly the same as [7, Algorithm 1]. In particular, the sequence $\{\rho_k\}$ is eventually constant. It is worth noting that this switching mechanism is necessary to avoid the undesirable situation in which the condition (7) is alternately satisfied and not satisfied an infinite number of times, for example when the feasibility measure decreases very slowly. In this case, it would be difficult to distinguish between convergence to a KKT point and convergence to a singular stationary point. Moreover, in practice, a point is deemed to be feasible if the norm of the constraints is smaller than some predefined tolerance. Suppose, for example, that we want to minimize $x$ in $\mathbb{R}$, subject to the constraint $e^{x} \le 0$. Most well established software packages for nonlinear numerical optimization return a message like "optimal solution found" when solving this problem. Hence, it seems natural to state that the problem is feasible whenever the feasibility tolerance is small enough.

Depending on the reduction of the feasibility measure, the trial values $\hat\rho_k$ and $\hat\mu_k$ for the feasibility and barrier parameters, as well as the new values $\sigma_k$ and $\lambda_k$ for the augmented Lagrangian parameters, are chosen. If the inequality (7) is satisfied, the feasibility parameter is kept constant, new values of the barrier and quadratic penalty parameters are chosen, and the Lagrange multiplier estimate is set to the current value of the dual variable. As shown in [6], from a theoretical point of view and in practice, it is better to force the convergence of $\{\sigma_k\}$ to zero to get a rapid rate of convergence. On the other hand, if the condition (7) is not satisfied, then there are two situations. The first situation is when the algorithm is still in the feasibility detection phase ($F = 1$). In that case, the feasibility parameter is sufficiently reduced, while the barrier and quadratic penalty parameters are kept constant, and then the Lagrange multiplier estimate is rescaled. When these updates are always done from some iteration on, this scaling of the Lagrange multiplier estimate is useful for the convergence of the iterates to an infeasible stationary point (see Theorem 3.1-(ii)).
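As an illustration of how one Newton iteration could be organized, here is a dense, unoptimized sketch (ours; the paper's implementation relies on the MA57 linear solver and on a rule for increasing $\theta$ until $K_{\rho,\theta,\delta}(w)$ is positive definite, neither of which is reproduced here, and all helper names are hypothetical).

```python
import numpy as np

def newton_step(x, y, z, lam, rho, sigma, mu, delta, theta, g, A, c, hess_lag):
    """One Newton step: solve J_{rho,theta,delta}(w) (w+ - w) = -Phi(w, lam, rho, sigma, mu).

    hess_lag(x, y, rho) is assumed to return rho * Hess f(x) + sum_i y_i * Hess c_i(x).
    """
    n, m = x.size, y.size
    Ax = A(x)
    H = hess_lag(x, y, rho) + theta * np.eye(n)
    J = np.block([
        [H,           Ax,                  -np.eye(n)],
        [Ax.T,        -delta * np.eye(m),  np.zeros((m, n))],
        [np.diag(z),  np.zeros((n, m)),    np.diag(x)],
    ])
    r_dual = rho * g(x) + Ax @ y - z
    r_prim = c(x) + sigma * (lam - y)
    r_comp = x * z - rho * mu
    dw = np.linalg.solve(J, -np.concatenate([r_dual, r_prim, r_comp]))
    return x + dw[:n], y + dw[n:n + m], z + dw[n + m:]
```

In an actual implementation, $\theta$ would be increased until a factorization (or an inertia test) certifies that $K_{\rho,\theta,\delta}(w)$ is positive definite; the sketch simply takes $\theta$ as given.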

Algorithm 1 (kth outer iteration)

(1) If $\|c_k\| \le \epsilon$, then set $F = 0$.

(2) Choose $\zeta_k > 0$ such that $\{\zeta_k\} \to 0$. If $k = 0$ or
$$\|c_k\| \le \kappa \max\{\|c_{i_j}\| : (k - l)^+ \le j \le k - 1\} + \zeta_k \qquad (7)$$
then set $i_k = k$ and go to Step 4, otherwise set $i_k = i_{k-1}$.

(3) If $F = 1$, then choose $0 < \hat\rho_k \le d\rho_{k-1}$ and set $\sigma_k = \sigma_{k-1}$, $\hat\mu_k = \mu_{k-1}$, $\lambda_k = \hat\rho_k \lambda_{k-1}$; else set $\hat\rho_k = \rho_{k-1}$, choose $0 < \sigma_k \le d\sigma_{k-1}$, $0 < \hat\mu_k \le d\mu_{k-1}$ and set $\lambda_k = \lambda_{k-1}$. Go to Step 5.

(4) Set $\hat\rho_k = \rho_{k-1}$, choose $0 < \sigma_k \le \sigma_{k-1}$ and $0 < \hat\mu_k \le d\mu_{k-1}$. Set $\lambda_k = y_k$.

(5) If $\hat\rho_k = \rho_{k-1}$, then choose $\delta_k = \Omega(\hat\mu_k)$, else set $\delta_k = \sigma_k$. Choose a regularization parameter $\theta_k \ge 0$ such that $K_{\hat\rho_k,\theta_k,\delta_k}(w_k) \succ 0$. Set $J_k = J_{\hat\rho_k,\theta_k,\delta_k}(w_k)$. Compute $w_k^+$ by solving the linear system
$$J_k (w_k^+ - w_k) = -\Phi(w_k, \lambda_k, \hat\rho_k, \sigma_k, \hat\mu_k). \qquad (8)$$

(6) Choose $\tau_k \in [\bar\tau, 1)$. Compute $\alpha_k$ as the largest $\alpha \in (0, 1]$ such that
$$v_k + \alpha(v_k^+ - v_k) \ge (1 - \tau_k) v_k, \qquad (9)$$
where $v_k = (x_k, z_k)$. Choose a vector $a_k = (a_k^x, a_k^y, a_k^z) \in [\alpha_k, 1]^N$ such that $v_k + a_k^v \circ (v_k^+ - v_k) > 0$, where $a_k^v = (a_k^x, a_k^z)$.

(7) Set $\hat w_k = w_k + a_k \circ (w_k^+ - w_k)$ and $(\rho_k, \mu_k) = (\rho_{k-1}, \mu_{k-1}) + \alpha_k (\hat\rho_k - \rho_{k-1}, \hat\mu_k - \mu_{k-1})$.

(8) Choose $\varepsilon_k > 0$ such that $\{\varepsilon_k\} \to 0$. If $\|\Phi(\hat w_k, \lambda_k, \rho_k, \sigma_k, \mu_k)\| \le \varepsilon_k$, then set $w_{k+1} = \hat w_k$. Otherwise, apply the inner iteration algorithm to find $w_{k+1}$ such that
$$\|\Phi(w_{k+1}, \lambda_k, \rho_k, \sigma_k, \mu_k)\| \le \varepsilon_k. \qquad (10)$$

The second situation is when the algorithm has left the feasibility detection phase ($F = 0$). In that case, the feasibility parameter is kept constant, the barrier parameter is reduced as in a standard interior point method, the quadratic penalty parameter is also reduced to penalize the constraint violation and the Lagrange multiplier estimate is kept constant as in a classical augmented Lagrangian algorithm.

At Step 5, the choice of the regularization parameter $\delta_k$ of the regularized Jacobian matrix is done as follows. When the feasibility parameter is unchanged, because (7) is satisfied or $F = 0$, we set $\delta_k = \Omega(\hat\mu_k)$. This choice is imposed by the global convergence theory of the algorithm in the feasible case, see [5, Theorem 3.3] and [7, Theorem 4.2]. In case the feasibility parameter is reduced, we set $\delta_k = \sigma_k$. This choice is motivated by the goal of getting a rapid convergence when the sequence of iterates converges to an infeasible stationary point, see Lemma 4.3 below.

Once the Newton iterate $w_k^+$ is computed at Step 5, the fraction to the boundary rule is applied to ensure the positivity of the primal and dual variables. As in [5], the step length can be selected componentwise to calculate the trial iterate $\hat w_k$.
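Steps 6 and 7 are easy to express in code. The sketch below (ours, with a single scalar step length in place of the componentwise vector $a_k$, and with hypothetical names) computes the fraction-to-the-boundary step length of (9) and the update of Step 7.

```python
import numpy as np

def fraction_to_boundary(v, v_plus, tau):
    """Largest alpha in (0, 1] such that v + alpha*(v_plus - v) >= (1 - tau) * v, with v = (x, z) > 0."""
    d = v_plus - v
    with np.errstate(divide="ignore"):
        limits = np.where(d < 0.0, -tau * v / d, np.inf)   # only components that shrink can restrict alpha
    return float(min(1.0, limits.min()))

def step7_update(w, w_plus, rho_prev, mu_prev, rho_hat, mu_hat, alpha):
    """Trial iterate and new (rho, mu), as in Step 7 (scalar step in place of the vector a_k)."""
    w_hat = w + alpha * (w_plus - w)
    rho_new = rho_prev + alpha * (rho_hat - rho_prev)
    mu_new = mu_prev + alpha * (mu_hat - mu_prev)
    return w_hat, rho_new, mu_new
```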

The values of the barrier and feasibility parameters are then updated according to the formulae at Step 7. These formulae avoid too large discrepancies between the magnitudes of these parameters and that of $\Phi$, and increase robustness [4]. Finally, at Step 8, if the trial iterate $\hat w_k$ achieves a sufficient reduction of the residual norm of the perturbed optimality conditions, then $w_{k+1} = \hat w_k$. If this is not the case, a sequence of inner iterations, with all the parameters fixed to their current values, is carried out to find the new iterate $w_{k+1}$.

The inner iteration algorithm is a backtracking line search applied to the primal-dual merit function
$$M_{\rho,\lambda,\sigma,\mu}(w) = \varphi_{\rho,\lambda,\sigma,\mu}(x) + \nu_1 \psi_{\sigma,\lambda}(x, y) + \nu_2 \pi_{\rho,\mu}(x, z),$$
where $\varphi_{\rho,\lambda,\sigma,\mu}(x)$ is defined by (3),
$$\psi_{\sigma,\lambda}(x, y) = \tfrac{1}{2\sigma}\|c(x) + \sigma(\lambda - y)\|^2 \quad \text{and} \quad \pi_{\rho,\mu}(x, z) = x^\top z - \rho\mu \sum_{i=1}^n \log(x_i z_i),$$
and where $\nu_1 > 0$, $\nu_2 > 0$ are scaling parameters. This choice is motivated by the fact that the first order optimality conditions for minimizing this merit function correspond to (6). We refer the reader to [7, Algorithm 2] for a complete description of this algorithm. To simplify the presentation, we consider that the quadratic penalty parameter $\sigma$ remains constant all along the inner iterations, while in [7] it can be increased. This choice has no impact from the theoretical point of view, and in our numerical experiments the value of this parameter is allowed to increase during the inner iterations. It has been shown that if the sequence of primal inner iterates remains in a compact set, then the inner iteration algorithm succeeds in finding an iterate that satisfies (10) after a finite number of iterations [7, Theorem 4.1].

We end this section by showing some properties related to the behavior of the parameters with respect to the feasibility detection by means of the criterion (7).

Lemma 2.1. Assume that Algorithm 1 generates an infinite sequence of iterates $\{w_k\}$. Let $\mathcal{K} \subset \mathbb{N}$ be the set of iteration indices at which the inequality (7) is satisfied.
(i) If $\mathcal{K}$ is infinite, then $\{c_k\}_{\mathcal{K}}$ converges to zero and $\{\rho_k\}$ is eventually constant.
(ii) If $\mathcal{K}$ is finite, then $\liminf \|c_k\| > 0$.
In addition, suppose that the sequence $\{(H_{\hat\rho_k,\theta_k}(x_k, y_k), g_k, A_k)\}$ is bounded and that the matrices $K_{\hat\rho_k,\theta_k,\delta_k}(w_k)$ are uniformly positive definite for $k \in \mathbb{N}$.
(iii) If $\mathcal{K}$ is infinite, then the sequence $\{\mu_k\}$ converges to zero.
(iv) If $\mathcal{K}$ is finite, then the sequences $\{\sigma_k \rho_k\}$ and $\{\sigma_k \lambda_k\}$ converge to zero.

Proof. The outcomes (i) and (ii) are proved in [8, Lemma 1], while (iii) is a direct consequence of [7, Theorem 4.2 (iv)]. To prove (iv), suppose that $\mathcal{K}$ is finite and let us define $k_0 := \max \mathcal{K}$. Two cases are considered. The first case is when $\|c_k\| \le \epsilon$ at some iteration. We then have $F = 0$ for all further iterations. The update of the parameters at Step 3 implies that both sequences $\{\rho_k\}$ and $\{\lambda_k\}$ are eventually constant and that $\{\sigma_k\}$ tends to zero. It follows that the two sequences $\{\sigma_k \rho_k\}$ and $\{\sigma_k \lambda_k\}$ converge to zero. The second case is when $\|c_k\| > \epsilon$ for all $k \in \mathbb{N}$, which implies that $F = 1$ at each iteration. In that case, for all $k > k_0$, $\hat\rho_k \le d\rho_{k-1}$, $\sigma_k = \sigma_{k_0}$, $\mu_k = \mu_{k_0}$ and $\|\lambda_k\| \le d\rho_{k-1}\|\lambda_{k-1}\| \le d\rho_{k-1}\|y_{k_0}\|$.

9 By using similar arguments as in the proof of [7, Theorem 4.2 (iv)] and [5, Theorem 3.3], we will show that {ρ } converges to zero, which will imply that the two sequences {σ ρ } and {σ λ } converge to zero. The proof is by contradiction by supposing that {ρ } is bounded away from zero by some constant ρ > 0. For all 0, we have ρ dρ with d (0, ). From Step 7 of Algorithm, for all 0 we have ρ = ρ α (ρ ρ ) ( ( d)α )ρ. Because {ρ } is supposed to be bounded away from zero, this inequality implies that {α } converges to zero. We will get a contradiction by showing that {α } is bounded away from zero. The inequality (33) in [7, Theorem 4.2] shows that for all large enough, we have τ w w / ρ α. () Recall that the inequality (0) is satisfied at each iteration with a sequence {ε } going to zero. Therefore, the sequence {Φ(w, λ, ρ, σ, µ )} converges to zero. In particular, {X Z e ρ µ e} tends to zero. Consequently, for large enough [x ] i [z ] i ρµ 0, for all i =,..., n. 2 Keeping in mind all the assumptions and the previous lower bound on {x z }, [2, Theorem ] shows that there exists a constant K > 0, such that for all 0 From the definition of Φ( ), for all 0 we have Φ(w, λ, ρ, σ, µ ) = Φ(w, λ, ρ, σ, µ ) J ρ,θ,δ(w ) K. (2) ((ρ ρ )g, σ (λ λ ), (ρ ρ )µ e ). By using (8), (0), (2) and noting that σ = σ = σ 0, µ = µ = µ 0, λ = ρ λ and ρ ρ, for large enough we then get w w = J ρ,θ,δ(w ) Φ(w, λ, ρ, σ, µ ) K ( Φ(w, λ, ρ, σ, µ ) ρ ρ ( g µ 0 e ) σ 0 ρ λ ) K (ε g µ 0 e σ 0 λ ). Because {ε } tends to zero, {g } is bounded and λ d y 0, we deduce that w w is bounded from above, which contradicts inequality () and the fact that {α } is supposed to converge to zero. We then deduce that ρ 0, which completes the proof of (iv). 8

10 3. Global convergence analysis For the global convergence of the inner iterations see [7, Theorem 4.]. Roughly speaing, this theorem shows that under some standard assumptions, the inner iteration algorithm is able to find a new iterate w satisfying the stopping test (0) after a finite number of iterations. Hence, we can assume that the inner iteration algorithm successfully terminates each time it is called at Step 8. We then have the following result about the global convergence of Algorithm. Theorem 3.. Assume that all the assumptions of Lemma 2. are satisfied and that Algorithm generates an infinite sequence {w }. Let K N be the set of iteration indices for which the condition (7) is satisfied. (i) If K is infinite, then ρ = ρ > 0 for large enough and the iterates approach stationary of the problem (), i.e., the sequences { ρg A y z }, {c } K and {X z } converge to zero. (ii) If K is finite, then { c } is bounded away from zero and the sequence of primal iterates approaches stationarity of the feasibility problem, i.e., there exists a sequence {u } R n such that lim (A c u, X u ) = 0. Proof. Let us denote Φ := Φ(w, λ, ρ, σ, µ ) for N. Step 8 of Algorithm implies that {Φ } converges to zero. Suppose that K is infinite. Lemma 2.-(i) shows that {c } K tends to zero and that ρ = ρ for sufficiently large. The first bloc of Φ is ρ g A y z, therefore lim ρg A y z = 0. The third bloc of Φ is X Z e ρ µ e. Lemma 2.-(iii) implies that {µ } tends to zero. Therefore {X z } tends to zero, which concludes the proof of outcome (i). Suppose now that K is finite. Lemma 2.-(ii) implies that the sequence { c } is bounded away from zero. Let us define u := σ z > 0 for N. For all N, we have and A c u = σ (ρ g A y z ) σ ρ g A (c σ (λ y )) σ A λ X u = σ (X Z e ρ µ e) σ ρ µ e. By taing the norm on both sides, for all we then get A c u X u (2σ g 2 A µ e ) max{ Φ, σ ρ, σ λ }. By assumptions, the sequences {g } and {A } are bounded. The first assertion of this proof and Lemma 2.-(iv) imply that the second term of this inequality tends to zero. We then deduce that {(A c u, X u )} converges to zero, which ends the proof. The next result summarizes the behavior of the Algorithm under a mild and usual assumption about the boundedness of the sequence of primal iterates. Theorem 3.2. Assume that Algorithm generates an infinite sequence of iterates {w } such that {x } lies in a compact set. Assume also that the regularization parameters chosen at Step 5 are such that the matrices K ρ,θ,δ(w ) are uniformly positive definite for N. 9

11 (i) Any feasible limit point of {x } is a Fritz-John point of problem (). (ii) If {x } has no feasible limit point, then any limit point is an infeasible stationary point of problem (). Proof. The compactness assumption implies that the sequence {(g, A )} is bounded. Assume that {x } has a feasible limit point x, i.e., c = 0. Lemma 2.-(ii) implies that the updating condition (7) is satisfied an infinite number of times. Let K N such that lim K x = x. From Lemma 2.-(i), we have ρ = ρ for large enough. We consider two situations regarding the boundedness of the sequence {y } K. Suppose that {y } K is bounded. With the compactness of {x } we have that the sequence {H ρ,θ(x, y )} is bounded and so Theorem 3. applies. The sequence {z } is also bounded. Indeed, for all N, we have z ρ g A y z ρ g A y. The first term on the right hand side tends to zero and the second one is bounded. Consequently, by virtue of Theorem (3.)-(i), any limit point of the sequence {x, y, z } K satisfies the FJ conditions. Because ρ > 0, x is a KKT point of problem (). The second situation is when {y } K is unbounded. For all K, let (a, b ) := (y,z) (y. Note,z ) that b > 0, because of Step 6 of Algorithm. By taing a subsequence if necessary, we can assume that lim K (a, b ) = (ā, b) 0, with b 0. For all N we have A a b (y, z ) ( ρ g A y z ρ g ) Because {ρ g A y z } converges to zero, {g } is bounded, {ρ } is eventually constant and {y } K is unbounded, we get Āā b = 0. For all N we have X b = (y, z ) ( X Z e ρ µ e ρ µ e ) Because {X Z e ρ µ e} tends to zero and {(y, z )} K is unbounded, we also get X b = 0. We can conclude that x is a singular stationary point of problem (). Let us now consider the second outcome for which any limit point of {x } is infeasible. It follows from Lemma 2.-(i) that the Step 3 of Algorithm is executed at all iteration 0 for some 0 N. There are two cases depending on the boundedness of {y }. If {y } is bounded, then the sequence {H ρ,θ(x, y )} is bounded. Therefore, Theorem 3.-(ii) shows that any limit point x of {x } is an infeasible stationary point of (). In the second case, {y } is unbounded. From the convergence to zero of the sequence {c σ (λ y )} and the boundedness of {λ } (since λ λ 0 for all 0 ), we deduce that the sequence {σ } tends to zero. As a consequence, the sequences {σ ρ } and {σ λ } converge to zero. We then obtain the same conclusion as Lemma 2.-(iv). It suffices to repeat the proof of Theorem (3.)-(ii) to show that any limit point {x } is an infeasible stationary point of (). 4. Asymptotic behavior There are two cases to analyse. The first one is when {w } converges to a primaldual solution of problem (). Because the flag F is switched to zero at some iteration, 0

12 the feasibility parameter becomes constant after a finite number of iterations. Consequently, the analysis in [23, Section.4] can be directly applied to demonstrate that under some classical assumptions and a suitable choice of the parameters, the rate of convergence of the sequence {w } is superlinear. The second case is when Algorithm generates a convergent sequence {x } to an infeasible stationary point x R n of the problem (). This section concentrates to this case. 4.. Assumptions and basic results The first assumption is about the regularity of the problem data. This assumption is standard in our framewor. Assumption. The function f and c are twice continuously differentiable and their second derivatives are Lipschitz continuous over an open neighborhood of x. To analyse the rate of convergence of the sequence of iterates to an infeasible stationary point, a natural assumption is that the whole sequence of iterates converges to such a point. Assumption 2. Algorithm generates an infinite sequence {w } which converges to w = (x, y, z ) R 2nm, where x is an infeasible stationary point of problem (). This assumption implies that the algorithm always stays in feasibility detection phase, i.e., the feasibility flag eeps the value F = all along the iterations. More precisely, some direct consequences are the following. Lemma 4.. Under Assumption 2, F = at each iteration, there exists 0 N such that for all 0, σ = σ := σ 0 > 0 and µ = µ := µ 0 > 0, {ρ } converges to zero and λ = o(ρ ). Proof. Assumption 2 implies that {c } is bounded away from zero. By virtue of Lemma 2.-(i), the inequality (7) is satisfied only a finite number of times. It follows that Step 3 of Algorithm is always executed for sufficiently large. For all N, we have c c σ (λ y ) σ λ σ y. Inequality (0) implies that the first term of the right-hand side tends to zero. The update of λ at Step 3 implies that {λ } remains bounded. Since {y } is supposed to be convergent, the sequence {σ } cannot go to zero, which implies that F = at each iteration. Hence, there exists 0 N such that for all 0, σ = σ 0, µ = µ 0 and λ /ρ = ρ...ρ 0 λ 0. By using the same arguments as in the proof of Lemma 2.- (iv), we deduce that the sequence {ρ } converge to zero. Than to the fact that ρ ρ, we get λ = o(ρ ). For w = (x, y, z) R 2nm and ρ > 0, let us define ρg(x) A(x)y z F (w, ρ) = c(x) σy, XZe ρµe where σ and µ are the values defined by Lemma 4.. From the fact that

13 lim Φ(w, λ, ρ, σ, µ ) = Φ(w, 0, 0, σ, µ) = F (w, 0), one has y = σ c, z = σ A c and 0 x z 0. Let us denote the Hessian matrix of the function 2 c 2 at x R n by S(x) := m c i (x) 2 c i (x) A(x)A(x). i= The set of active bounds is denoted by A := {i : [x ] i = 0}. Assumption 3. The second order sufficient optimality conditions (SOSC) for the feasibility problem (2) hold at x, i.e., u S u > 0 for all u 0 satisfying [u] i = 0 for all i A. Assumption 4. Strict complementarity holds at w, that is a := min{[x ] i [z ] i : i =,..., n} > 0. The next lemma summarizes some basic results which are direct consequences of these assumptions. Lemma 4.2. Under Assumptions -4, there exist positive constants r, ρ, K, C, L, L, L 2 and a continuously differentiable function w : ( ρ, ρ ) R N, such that for all w, w B(w, r ) and for all ρ, ρ ( ρ, ρ ), we have (i) F w(w, ρ) K, (ii) F (w, ρ) = 0 if and only if w(ρ) = w, (iii) w(ρ) w(ρ ) C ρ ρ, (iv) F w(w, ρ) F w(w, ρ) L( w w ), (v) L w w F (w, ρ) F (w, ρ) L 2 w w. Proof. In order to prove (i), we only need to show that the matrix F w(w, 0) is nonsingular. Let u R N such that F w(w, 0)u = 0. By writing u := (u, u 2, u 3 ) and by using the fact that y = σ c, we have σ i c i 2 c i A I A σi 0 u u 2 = 0 0. Z 0 X 0 The third equation of this linear system and Assumption 4 imply that [u ] i = 0 for all i A and [u 3 ] i = 0 for all i / A. Consequently, one has u u 3 = 0. The second equation of the linear system gives us u 2 = σ A u. Substituting this equality into the first equation and premultiplying by u, we get σ u S u = 0. By Assumption 3 we deduce that u = 0, from which we deduce that u 2 = 0 and u 3 = 0. The properties (ii) and (iii) are direct consequences of the implicit function theorem. The Lipschitz continuity of F w, property (iv), follows from Assumption. The last assertion (v) is a consequence of [6, Lemma 5]. Our asymptotic analysis also requires some specifications on the choice of the feasibility parameter. For all N, the trial value of feasibility parameter ρ is chosen u 3 2

14 such that θ ρ t ρ θ 2ρ (3) for some constants θ > 0, θ 2 (0, ) and t (0, ). From the fact that {ρ } goes to zero, the left inequality of (3) implies that ρ 2 = o(ρ ). (4) At last, the parameter τ used in (9) must satisfy the following condition: lim ( τ ) ρ ρ = 0. (5) The conditions (3) and (5) are standard in the asymptotic analysis of this family of primal-dual methods, see e.g., [, 5]. The left inequality of (3) means that the parameter of the path w must converge to zero with a subquadratic rate of convergence in order that the Newton iterates, which converge naturally with a quadratic rate, catch the path prior the parameter is nearly zero. For large enough, we have ρ < ρ and σ = σ. Therefore, at Step 5 of Algorithm, the regularization parameter of the matrix J is set to δ = σ. The next lemma shows that the matrix J coincides with the Jacobian of F (, ρ ) at w when the feasibility parameter goes to zero. Lemma 4.3. Under Assumptions -4, for all N large enough, one has J = F w(w, ρ where K is defined by Lemma 4.2. ) and J K, Proof. For the first claim, it suffices to show that θ = 0 for large enough. This is true whenever K ρ,0,σ(w ) is positive definite. For all N we have K ρ,0,σ(w ) = H ρ,0(x, y ) σ A A X = σ (S S ) H ρ,0(x, y σ c ) σ S X Z. The first two matrices tend to zero when tends to infinity because {(x, y )} and {ρ } respectively converge to (x, σ c ) and zero. It remains to show that the matrices σ S X Z are uniformly positive definite for large enough. Let us define the n n diagonal matrix E, whose ith diagonal element is equal to one if i A and zero otherwise. Assumption 3 means that S is positive definite on the null space of E. Therefore, from Debreu s Lemma [5], there exists γ > 0 such that the matrix σ S γe is positive definite. For N, let us define the diagonal matrix Ξ whose ith diagonal element is [z ] i /[x ] i if i A and zero otherwise. Because of the strict complementarity assumption, the matrices X Z Ξ tend to zero and each nonzero component of Ξ goes to infinity. It follows that Ξ γe is positive semidefinite for all large enough. By writing σ S X Z = σ S γe Ξ γe X Z Ξ, 3 Z

15 we deduce that the matrices σ S X Z are uniformly positive definite for all large enough. The second assertion follows directly from the first claim and Lemma 4.2-(i). Throughout this section, we assume that Assumption -4 are satisfied. The following result gives an estimate of the distance of the Newton iterate to the trajectory w. Lemma 4.4. There exists M > 0, such that for all sufficiently large w w(ρ ) M( w w(ρ ) 2 ρ 2 λ ). Proof. Let N be large enough such that ρ ρ and w B(w, r ). Define e := w(ρ ) w. From the linear system (8), then using Φ(w, λ, ρ, σ, µ) = F (w, ρ) (0, σλ, 0) and F (w(ρ ), ρ ) = 0, we have w w(ρ ) = J Φ(w, λ, ρ, σ, µ) e ) = J (F (w(ρ ), ρ ) F (w, ρ ) (0, σλ, 0) J e = J 0 J (F w(w te, ρ ) F w(w, ρ ))e dt (F w(w, ρ ) J )e J (0, σλ, 0). By taing norm on both sides and next applying Lemma 4.2-(iv) and Lemma 4.3, we then have w w(ρ ) 2 KL e 2 Kσ λ. By applying Lemma 4.2-(iii) and by using the inequality ρ ρ, we also get e w w(ρ ) w(ρ ) w(ρ ) w w(ρ ) Cρ. Finally, by using the inequality (a b) 2 2(a 2 b 2 ) for two real numbers a and b, we deduce that w w(ρ ) KL w w(ρ ) 2 KLC 2 ρ 2 Kσ λ. Set M = K max{l, LC 2, σ} to complete the proof Asymptotic result According to Assumptions -4 and Lemma 4.4, the convergence to w of w implies that there exists r (0, min{r, a}), where a is defined in Assumption 4, such that for all large enough, w, w, w(ρ ), w(ρ ), and w(ρ ) belong to B(w, r) and ρ (0, ρ ). Without loss of generality, we can assume that these properties are true for all N. The following lemma gives an evaluation of the distance between the next iterate w and the path w. 4

16 Lemma 4.5. There exists a constant C > 0 such that for all N, w w(ρ ) C ( ŵ w(ρ ) λ ). (6) Proof. If there is no inner iteration, we then have w = ŵ, therefore the inequality (6) holds trivially with C =. Suppose now that w is obtained by applying a sequence of inner iterations. By virtue of Lemma 4.2-(v), Lemma 4.2-(ii) and Step 8 of Algorithm, there exists an index 0 N such that for all 0, L w w(ρ ) F (w, ρ ) F (w(ρ ), ρ ) = Φ(w, λ, ρ, σ, µ) (0, σλ, 0) ε σ λ < Φ(ŵ, λ, ρ, σ, µ) σ λ F (ŵ, ρ ) F (w(ρ ), ρ ) 2σ λ L 2 ŵ w(ρ ) 2σ λ. By defining C := L max{l 2, 2σ}, the inequality (6) is true for all 0. This inequality also holds for all < 0 by increasing the constant C if necessary. The next lemma gives a lower bound on the step length computed by applying the fraction to the boundary rule (9). The proof is given in [, Corollary ]. Lemma 4.6. For all N, the step length α computed by (9) satisfies where b = a r > 0. α τ b w w, (7) The first step of the asymptotic analysis is to show that the distance of the iterates to the trajectory w is upper bounded by a constant times the feasibility parameter. Lemma 4.7. The iterates w generated by Algorithm satisfy w w(ρ ) = O(ρ ) Proof. First of all, let us prove that there exist constants D (0, ), D 2 > 0 and D 3 > 0 such that for all N, d d 2 D ρ d D 2 ρ, (8) ρ where d := D 3 w w(ρ ). Indeed, if a sequence {d } satisfies this inequality, it has been proved in [, Lemma 7] that d = O(ρ ) and thus the conclusion of the lemma will follow. Let us choose 0. Invoing inequality (6) and Lemma 4.2-(iii), we get C w w(ρ ) ŵ w(ρ ) w(ρ ) w(ρ ) λ ŵ w(ρ ) C ρ ρ λ. (9) 5

17 By using the definition of ρ at Step 7 of Algorithm, one has ρ ρ = ( α )(ρ ρ ) ( α )ρ ( τ b w w )ρ, (20) where the last inequality is due to (7). In the same manner, we also have ŵ w(ρ ) = a (w w(ρ )) (e a ) (w w(ρ )). Taing the norm on both sides and noting that a and e a α, we obtain ŵ w(ρ ) a w w(ρ ) e a w w(ρ ) w w(ρ ) ( α ) w w(ρ ). Applying Lemma 4.2-(iii), Lemma 4.4 and inequality (7) to the previous inequality, we deduce ŵ w(ρ ) w w(ρ ) ( α )( w w(ρ ) Cρ ) M( w w(ρ ) 2 ρ 2 λ ) ( τ b w w )( w w(ρ ) Cρ ). (2) By using Lemma 4.2-(iii), Lemma 4.4, w w(ρ ) 2 r and ρ ρ, we obtain w w w w(ρ ) w(ρ ) w(ρ ) w(ρ ) w w w(ρ ) Cρ M( w w(ρ ) 2 ρ 2 λ ) K w w(ρ ) K 2 ρ M λ, (22) where K = 2M r and K 2 = C Mρ. Combining (20)-(22) into (9), using again w w(ρ ) 2 r and also the inequality ab 2 (a2 b 2 ) for two real numbers a and b, we obtain C w(ρ ) C 2 w w(ρ ) 2 ( τ ) w w(ρ ) C 3 ρ 2 2C( τ )ρ C 4 λ, where C 2 = M bk bck 2 bk 2, C 3 = M 2bCK 2 bck 2 bk 2 and C 4 = M 2bCM 2bM r. By using the properties (4) and (5), we get ( ρ C w w(ρ ) C 2 w w(ρ ) 2 ) o w w(ρ ) o(ρ ρ )C 4 λ. Now, we use λ = o(ρ ) given in Lemma 4. and ρ ρ to get w w(ρ ) C C 2 w w(ρ ) 2 o ( ρ ρ ) w w(ρ ) o(ρ ). 6

18 Multiplying both sides by D 3 := C C 2, for large, we get ( ) d d 2 o ρ d o(ρ ). ρ By increasing the constant D 2 N. > 0 if necessary, inequality (8) is satisfied for all Lemma 4.8. The Newton-lie iterate w generated at Step 5 of Algorithm satisfies w w = O(ρ ). Proof. Let N. In view of Lemmas 4.4, 4.7, 4. and the relation (4), we get w w(ρ ) M( w w(ρ ) 2 ρ 2 λ ) = o(ρ ) (23) By virtue of Lemmas 4.7, 4.2-(iii), (23) and (3), we then have w w w w(ρ ) w(ρ ) w(ρ ) w(ρ ) w O(ρ ) C ρ ρ o(ρ ) = O(ρ ). The following lemma shows that iterate ŵ is asymptotically tangent to the trajectory w. Lemma 4.9. The candidate iterate ŵ computed at Step 7 of Algorithm satisfies ŵ = w(ρ ) o(ρ ). Proof. Let N. A first order Taylor expansion of w at ρ = 0 and the definition of ρ at Step 7 of Algorithm yield w(ρ ) = w w (0)ρ o(ρ ) = α (w w (0)ρ ) ( α )(w w (0)ρ ) o(ρ ) = α (w(ρ ) o(ρ )) ( α )(w(ρ ) o(ρ )) o(ρ ) = α w(ρ ) ( α )w(ρ ) o(ρ ) By using the definition of ŵ at Step 7 of Algorithm, we get ŵ w(ρ ) = a w (e a ) w w(ρ ) = a (w w(ρ )) (e a ) (w w(ρ )) (a α e) (w(ρ ) w(ρ )) o(ρ ). Taing the norm on both sides, using a, e a α and a α e α and then invoing Lemma 4.2-(iii), Lemma 4.7 and inequality (23), we deduce 7

19 that ŵ w(ρ ) a w w(ρ ) e a w w(ρ ) The bound (7) on the step length gives a α e w(ρ ) w(ρ ) o(ρ ) w w(ρ ) ( α ) w w(ρ ) C( α )ρ o(ρ ) = o(ρ ) O(( α )ρ ) o(ρ ) (24) ( α )ρ ( τ b w w )ρ. Using Lemma 4.8, properties (5) and (4), we deduce that ( α )ρ = o(ρ ) O(ρ2 ) = o(ρ ) (25) We conclude by substituting (25) into (24) and by using the fact that ρ ρ. We now state the main result of this section, which states that the sequence of iterates becomes asymptotically tangent to the trajectory w. Theorem 4.0. Under Assumptions -4, if the feasibility parameter satisfies (3), if the fraction to the boundary parameter satisfies (5), if the tolerance is such that ε = Ω(ρ ), then we have the following results (i) Algorithm asymptotically need no inner iterations, i.e., w = ŵ for large enough. (ii) The iterates generated by Algorithm satisfy w = w(ρ ) o(ρ ). (iii) The unit step is asymptotically accepted by the fraction to the boundary rule, i.e., α = for large enough. In particular, {w } and {ρ } have the same rate of convergence, i.e., w w = Θ(ρ ). (26) Proof. To prove the result (i), it suffices to show that the stopping condition in Step 8 of Algorithm is satisfied for large enough. According to Lemmas 4.2-(ii), 4.2-(v), 4.9 and 4. with noting that ρ ρ, we have Φ(ŵ, λ, ρ, σ, µ) F (ŵ, ρ ) F (w(ρ ), ρ ) (0, σλ, 0) L 2 ŵ w(ρ ) σ λ = o(ρ ) < ε. The second result (ii) follows directly from (i) and Lemma 4.9. The proof of the acceptation of the unit step obtained by the fraction to the boundary rule (9) is similar to the one in [5, Lemma 4.20]. 8

From (ii) we have $w_k - w_* = w'(0)\rho_k + o(\rho_k)$. The strict complementarity assumption implies that $w'(0) \neq 0$, from which we deduce (26).

5. Numerical experiments

We refer to our algorithm as SPDOPT-ID (Strongly Primal-Dual Optimization with Infeasibility Detection); it has been implemented in C. We compared it with SPDOPT [7] on two sets of problems. The standard set consists of 86 problems from the Hock and Schittkowski collection [20] with at least one inequality constraint. The infeasible set is created from the standard set by adding the constraint $c_1(x)^2 + 1 = 0$ or $(x_1 - u_1)^2 + 1 = 0$, where $c_1$ is the first component of $c$ and $u_1$ is a bound on the first variable ($x_1 \le u_1$ or $x_1 \ge u_1$). It is clear that all problems of the latter set are infeasible.

With a default starting point $x_0$ and $z_0 = (1, \ldots, 1)$, the initial dual variable $y_0$ is defined as the least squares solution of $g_0 + A_0 y - z_0 = 0$. The barrier parameter is initialized by $\mu_0 = 0.1$. If this parameter is updated with a trial value $\hat\mu_k < \mu_{k-1}$, we adopt the rule described in [5, Algorithm 2]. The feasibility parameter is initially set to $\rho_0 = 1$. When $F = 1$, a trial value of the feasibility parameter in Step 3 is computed as $\hat\rho_k = \min\{0.2\rho_{k-1}, \rho_{k-1}^{1.4}\}$. This choice of $\hat\rho_k$ satisfies the requirement (13) with $t = 1.4$, $\theta_1 = 1$ and $\theta_2 = 0.2$. A lower bound of $10^{-6}$ is also imposed on this parameter. For the fraction to the boundary rule, the choice $\tau_k = \max\{0.99, 1 - \rho_k\mu_k\}$ verifies condition (15). The regularization parameter $\delta_k$ is updated by the following rule: if $F = 1$ and the condition (7) is not satisfied, then $\delta_k = \sigma_k$; otherwise, $\delta_k = \max\{10^{-2}\hat\mu_k, 10^{-8}\}$. It is easy to verify that all the assumptions on $\delta_k$ in the global and the local analyses are fulfilled. The parameters $\sigma_k$ and $\theta_k$ are updated as in [7, Algorithm 1].

If $\|(g_k + A_k y_k/\rho_k - z_k/\rho_k,\; c_k,\; x_k \circ z_k/\rho_k)\| \le \varepsilon_{\mathrm{tol}}$ with $\varepsilon_{\mathrm{tol}} = 10^{-8}$, the algorithm is terminated and an optimal solution is declared to be found. Otherwise, if $\|c_k\| > \varepsilon_{\mathrm{tol}}$, $\|\Phi(w_k, 0, 0, \sigma_k, \mu_k)\| \le \varepsilon_{\mathrm{tol}}$ and $\rho_k \le \varepsilon_{\mathrm{tol}}$, the algorithm is stopped at an infeasible stationary point. For SPDOPT, the stopping conditions $\|c_k\| > \varepsilon_{\mathrm{tol}}$, $\|X_k A_k c_k\| \le \varepsilon_{\mathrm{tol}}$ and $\sigma_k \le \varepsilon_{\mathrm{tol}}$ are added to terminate this algorithm at an infeasible stationary point. With the aim of getting a fast local convergence when the algorithm converges to an infeasible stationary point, the feasibility tolerance at Step 1 is set to $\epsilon = \varepsilon_{\mathrm{tol}}$. At Step 2 of Algorithm 1, we choose $\kappa = 0.9$, $l = 2$ and $\zeta_k = 10\,\sigma_{k-1}\rho_{k-1}$ for all iterations. The sequence of tolerances $\{\varepsilon_k\}$ in Step 8 is defined by the formula
$$\varepsilon_k = 0.9 \max\{\|\Phi(w_i, \lambda_i, \rho_i, \sigma_i, \mu_i)\| : (k-4)^+ \le i \le k\} + 10 \min\{\alpha_k^x, \alpha_k^z\}^{0.2} \mu_k \rho_k.$$
By applying [3, Proposition 1], it is easy to see that $\{\varepsilon_k\}$ converges to zero. This choice meets the requirements for fast convergence both in the feasible case, i.e., $\varepsilon_k = \Omega(\mu_k)$, and in the infeasible case, i.e., $\varepsilon_k = \Omega(\rho_k)$. The linear solver MA57 [18] is used for both algorithms. The maximum number of iterations, counting both the inner and the outer iterations, is limited to a prescribed maximum.

For the standard problems, only the 82 problems solved by at least one of the two algorithms are selected for the comparison (problems hs099, hs02, hs03 and s332 have not been solved).
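For reference, here is a compact sketch of the parameter choices just described (our reading of the text; the decrease of the barrier parameter is only a placeholder for the rule of [5, Algorithm 2], and all names are ours).

```python
def trial_parameters(rho_prev, mu_prev, in_detection_phase, condition7_holds,
                     rho_min=1e-6):
    """Trial feasibility and barrier parameters, mimicking Steps 3-4 with the Section 5 settings."""
    if condition7_holds:
        # Step 4: keep the feasibility parameter, decrease the barrier parameter.
        rho_hat = rho_prev
        mu_hat = 0.2 * mu_prev          # placeholder for the update rule of [5, Algorithm 2]
    elif in_detection_phase:
        # Step 3 with F = 1: rho_hat = min{0.2*rho, rho^1.4}, bounded below by 1e-6.
        rho_hat = max(min(0.2 * rho_prev, rho_prev ** 1.4), rho_min)
        mu_hat = mu_prev
    else:
        # Step 3 with F = 0: rho stays fixed, barrier parameter decreases.
        rho_hat = rho_prev
        mu_hat = 0.2 * mu_prev          # placeholder as above
    tau = max(0.99, 1.0 - rho_hat * mu_hat)   # fraction-to-the-boundary parameter, cf. condition (15)
    return rho_hat, mu_hat, tau
```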

Figure 1 gives the performance profiles of Dolan and Moré [17] for the numbers of function and gradient evaluations. These profiles show that SPDOPT-ID and its predecessor SPDOPT have their own strengths in terms of efficiency and robustness. In particular, SPDOPT is slightly more efficient than SPDOPT-ID (4%). In terms of robustness, SPDOPT-ID succeeds on 80 problems, which is 2 problems more than SPDOPT. We can conclude that the infeasibility detection does not change too much the performance of the original algorithm (SPDOPT) for solving standard problems.

[Figure 1: two performance profile panels, "Function evaluations" and "Gradient evaluations"; curves: SPDOPT-ID, SPDOPT.]

Figure 1. Performance profiles comparing the two algorithms on the set of standard problems.

Figure 2 shows the performances of these algorithms in terms of numbers of function and gradient evaluations on a set of 69 infeasible problems (the problems hs033, hs044, hs057, hs064, hs083, hs084, hs085, hs089, hs099, hs05, s220, s33, s332, s356, s357, s376 and s383 have been eliminated since the two algorithms cannot detect the infeasibility). We observe that SPDOPT-ID is the most efficient algorithm for detecting infeasible problems, and it is also more robust than SPDOPT, detecting the infeasibility of a substantially larger fraction of the problems.

[Figure 2: two performance profile panels, "Function evaluations" and "Gradient evaluations"; curves: SPDOPT-ID, SPDOPT.]

Figure 2. Performance profiles comparing the two algorithms on the set of infeasible problems.

Figure 3 gives a general overview of the performances of these algorithms on the set of 369 problems including both standard and infeasible problems. We are able to see that SPDOPT-ID is more efficient and more robust than SPDOPT.

[Figure 3: two performance profile panels, "Function evaluations" and "Gradient evaluations"; curves: SPDOPT-ID, SPDOPT.]

Figure 3. Performance profiles comparing the two algorithms on the set of standard and infeasible problems.

6. Conclusion

Both in theory and in practice, the capability of the mixed logarithmic barrier-augmented Lagrangian algorithm [7] to quickly detect infeasibility has been improved. But when the feasibility measure becomes smaller than the feasibility tolerance ($\|c_k\| \le \epsilon$), the new algorithm exhibits the same behavior as the original one. More precisely, in that case, the quadratic penalty parameter goes to zero and the multipliers associated with the equality constraints become unbounded. An open question would be to find an interior point algorithm with a superlinear rate of convergence in any case, even if the sequence stays infeasible but becomes nearly feasible (think about the realization of the constraint $e^{x} \le 0$). In practice we can always choose a feasibility tolerance small enough, but from a conceptual point of view this is not entirely satisfactory.

This paper completes the local convergence analysis of an interior point method for nonlinear optimization in the difficult case of infeasible problems. Note especially that no assumption on the linear independence of the gradients of the active constraints is used in our analysis, contrary to the analyses of Byrd et al. [12] and Burke et al. [11]. This comes from the fact that, in the infeasible case, the quadratic penalty parameter provides a natural regularization of the matrix of the linear system to solve at each iteration. As a consequence, an open question is whether we can analyse the local behavior of the mixed penalty method [7] in the feasible case without any constraint qualification. We note that the local convergence analyses of interior point methods were carried out under MFCQ (see, e.g., Wright and Orban [26], Vicente and Wright [24]) or under the full rank of the Jacobian of the equality constraints (see, e.g., Yamashita and Yabe [27]). On the other hand, when solving large scale problems, the algorithm [7] has good performances on degenerate problems, even if MFCQ does not hold.

References

[1] P. Armand and J. Benoist. A local convergence property of primal-dual methods for nonlinear programming. Math. Program., Ser. A, 115(2):199-222, 2008.
[2] P. Armand and J. Benoist. Uniform boundedness of the inverse of a Jacobian matrix arising in regularized interior-point methods. Mathematical Programming, 137(1-2):587-592, 2013.
[3] P. Armand, J. Benoist, R. Omheni, and V. Pateloup. Study of a primal-dual algorithm for equality constrained minimization. Comput. Optim. Appl., 59(3):405-433, 2014.
[4] P. Armand, J. Benoist, and D. Orban. Dynamic updates of the barrier parameter in primal-dual methods for nonlinear programming. Computational Optimization and Applications, 41(1):1-25, 2008.
[5] P. Armand, J. Benoist, and D. Orban. From global to local convergence of interior methods for nonlinear optimization. Optimization Methods and Software, 28(5):1051-1080, 2013.


More information

Exact Comparison of Quadratic Irrationals

Exact Comparison of Quadratic Irrationals Exact Comparison of Quadratic Irrationals Phuc Ngo To cite this version: Phuc Ngo. Exact Comparison of Quadratic Irrationals. [Research Report] LIGM. 20. HAL Id: hal-0069762 https://hal.archives-ouvertes.fr/hal-0069762

More information

Some new facts about sequential quadratic programming methods employing second derivatives

Some new facts about sequential quadratic programming methods employing second derivatives To appear in Optimization Methods and Software Vol. 00, No. 00, Month 20XX, 1 24 Some new facts about sequential quadratic programming methods employing second derivatives A.F. Izmailov a and M.V. Solodov

More information

A Penalty-Interior-Point Algorithm for Nonlinear Constrained Optimization

A Penalty-Interior-Point Algorithm for Nonlinear Constrained Optimization Noname manuscript No. (will be inserted by the editor) A Penalty-Interior-Point Algorithm for Nonlinear Constrained Optimization Fran E. Curtis February, 22 Abstract Penalty and interior-point methods

More information

Part 5: Penalty and augmented Lagrangian methods for equality constrained optimization. Nick Gould (RAL)

Part 5: Penalty and augmented Lagrangian methods for equality constrained optimization. Nick Gould (RAL) Part 5: Penalty and augmented Lagrangian methods for equality constrained optimization Nick Gould (RAL) x IR n f(x) subject to c(x) = Part C course on continuoue optimization CONSTRAINED MINIMIZATION x

More information

An Inexact Newton Method for Nonlinear Constrained Optimization

An Inexact Newton Method for Nonlinear Constrained Optimization An Inexact Newton Method for Nonlinear Constrained Optimization Frank E. Curtis Numerical Analysis Seminar, January 23, 2009 Outline Motivation and background Algorithm development and theoretical results

More information

A SHIFTED PRIMAL-DUAL INTERIOR METHOD FOR NONLINEAR OPTIMIZATION

A SHIFTED PRIMAL-DUAL INTERIOR METHOD FOR NONLINEAR OPTIMIZATION A SHIFTED RIMAL-DUAL INTERIOR METHOD FOR NONLINEAR OTIMIZATION hilip E. Gill Vyacheslav Kungurtsev Daniel. Robinson UCSD Center for Computational Mathematics Technical Report CCoM-18-1 February 1, 2018

More information

On Augmented Lagrangian Methods with General Lower-Level Constraints

On Augmented Lagrangian Methods with General Lower-Level Constraints On Augmented Lagrangian Methods with General Lower-Level Constraints R Andreani, Ernesto Birgin, J. Martinez, M. L. Schuverdt To cite this version: R Andreani, Ernesto Birgin, J. Martinez, M. L. Schuverdt.

More information

The Squared Slacks Transformation in Nonlinear Programming

The Squared Slacks Transformation in Nonlinear Programming Technical Report No. n + P. Armand D. Orban The Squared Slacks Transformation in Nonlinear Programming August 29, 2007 Abstract. We recall the use of squared slacks used to transform inequality constraints

More information

1. Introduction. We consider nonlinear optimization problems of the form. f(x) ce (x) = 0 c I (x) 0,

1. Introduction. We consider nonlinear optimization problems of the form. f(x) ce (x) = 0 c I (x) 0, AN INTERIOR-POINT ALGORITHM FOR LARGE-SCALE NONLINEAR OPTIMIZATION WITH INEXACT STEP COMPUTATIONS FRANK E. CURTIS, OLAF SCHENK, AND ANDREAS WÄCHTER Abstract. We present a line-search algorithm for large-scale

More information

A GLOBALLY CONVERGENT STABILIZED SQP METHOD

A GLOBALLY CONVERGENT STABILIZED SQP METHOD A GLOBALLY CONVERGENT STABILIZED SQP METHOD Philip E. Gill Daniel P. Robinson July 6, 2013 Abstract Sequential quadratic programming SQP methods are a popular class of methods for nonlinearly constrained

More information

Some explanations about the IWLS algorithm to fit generalized linear models

Some explanations about the IWLS algorithm to fit generalized linear models Some explanations about the IWLS algorithm to fit generalized linear models Christophe Dutang To cite this version: Christophe Dutang. Some explanations about the IWLS algorithm to fit generalized linear

More information

This manuscript is for review purposes only.

This manuscript is for review purposes only. 1 2 3 4 5 6 7 8 9 10 11 12 THE USE OF QUADRATIC REGULARIZATION WITH A CUBIC DESCENT CONDITION FOR UNCONSTRAINED OPTIMIZATION E. G. BIRGIN AND J. M. MARTíNEZ Abstract. Cubic-regularization and trust-region

More information

Sequential equality-constrained optimization for nonlinear programming

Sequential equality-constrained optimization for nonlinear programming Sequential equality-constrained optimization for nonlinear programming E. G. Birgin L. F. Bueno J. M. Martínez September 9, 2015 Abstract A novel idea is proposed for solving optimization problems with

More information

MODIFYING SQP FOR DEGENERATE PROBLEMS

MODIFYING SQP FOR DEGENERATE PROBLEMS PREPRINT ANL/MCS-P699-1097, OCTOBER, 1997, (REVISED JUNE, 2000; MARCH, 2002), MATHEMATICS AND COMPUTER SCIENCE DIVISION, ARGONNE NATIONAL LABORATORY MODIFYING SQP FOR DEGENERATE PROBLEMS STEPHEN J. WRIGHT

More information

Pacific Journal of Optimization (Vol. 2, No. 3, September 2006) ABSTRACT

Pacific Journal of Optimization (Vol. 2, No. 3, September 2006) ABSTRACT Pacific Journal of Optimization Vol., No. 3, September 006) PRIMAL ERROR BOUNDS BASED ON THE AUGMENTED LAGRANGIAN AND LAGRANGIAN RELAXATION ALGORITHMS A. F. Izmailov and M. V. Solodov ABSTRACT For a given

More information

A null-space primal-dual interior-point algorithm for nonlinear optimization with nice convergence properties

A null-space primal-dual interior-point algorithm for nonlinear optimization with nice convergence properties A null-space primal-dual interior-point algorithm for nonlinear optimization with nice convergence properties Xinwei Liu and Yaxiang Yuan Abstract. We present a null-space primal-dual interior-point algorithm

More information

A SHIFTED PRIMAL-DUAL PENALTY-BARRIER METHOD FOR NONLINEAR OPTIMIZATION

A SHIFTED PRIMAL-DUAL PENALTY-BARRIER METHOD FOR NONLINEAR OPTIMIZATION A SHIFTED PRIMAL-DUAL PENALTY-BARRIER METHOD FOR NONLINEAR OPTIMIZATION Philip E. Gill Vyacheslav Kungurtsev Daniel P. Robinson UCSD Center for Computational Mathematics Technical Report CCoM-19-3 March

More information

Unfolding the Skorohod reflection of a semimartingale

Unfolding the Skorohod reflection of a semimartingale Unfolding the Skorohod reflection of a semimartingale Vilmos Prokaj To cite this version: Vilmos Prokaj. Unfolding the Skorohod reflection of a semimartingale. Statistics and Probability Letters, Elsevier,

More information

Dissipative Systems Analysis and Control, Theory and Applications: Addendum/Erratum

Dissipative Systems Analysis and Control, Theory and Applications: Addendum/Erratum Dissipative Systems Analysis and Control, Theory and Applications: Addendum/Erratum Bernard Brogliato To cite this version: Bernard Brogliato. Dissipative Systems Analysis and Control, Theory and Applications:

More information

MS&E 318 (CME 338) Large-Scale Numerical Optimization

MS&E 318 (CME 338) Large-Scale Numerical Optimization Stanford University, Management Science & Engineering (and ICME) MS&E 318 (CME 338) Large-Scale Numerical Optimization 1 Origins Instructor: Michael Saunders Spring 2015 Notes 9: Augmented Lagrangian Methods

More information

Algorithms for Constrained Optimization

Algorithms for Constrained Optimization 1 / 42 Algorithms for Constrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University April 19, 2015 2 / 42 Outline 1. Convergence 2. Sequential quadratic

More information

AN INTERIOR-POINT METHOD FOR NONLINEAR OPTIMIZATION PROBLEMS WITH LOCATABLE AND SEPARABLE NONSMOOTHNESS

AN INTERIOR-POINT METHOD FOR NONLINEAR OPTIMIZATION PROBLEMS WITH LOCATABLE AND SEPARABLE NONSMOOTHNESS AN INTERIOR-POINT METHOD FOR NONLINEAR OPTIMIZATION PROBLEMS WITH LOCATABLE AND SEPARABLE NONSMOOTHNESS MARTIN SCHMIDT Abstract. Many real-world optimization models comse nonconvex and nonlinear as well

More information

Evaluation complexity for nonlinear constrained optimization using unscaled KKT conditions and high-order models by E. G. Birgin, J. L. Gardenghi, J. M. Martínez, S. A. Santos and Ph. L. Toint Report NAXYS-08-2015

More information

Solution to Sylvester equation associated to linear descriptor systems

Solution to Sylvester equation associated to linear descriptor systems Solution to Sylvester equation associated to linear descriptor systems Mohamed Darouach To cite this version: Mohamed Darouach. Solution to Sylvester equation associated to linear descriptor systems. Systems

More information

Interior Methods for Mathematical Programs with Complementarity Constraints

Interior Methods for Mathematical Programs with Complementarity Constraints Interior Methods for Mathematical Programs with Complementarity Constraints Sven Leyffer, Gabriel López-Calva and Jorge Nocedal July 14, 25 Abstract This paper studies theoretical and practical properties

More information

A Simple Primal-Dual Feasible Interior-Point Method for Nonlinear Programming with Monotone Descent

A Simple Primal-Dual Feasible Interior-Point Method for Nonlinear Programming with Monotone Descent A Simple Primal-Dual Feasible Interior-Point Method for Nonlinear Programming with Monotone Descent Sasan Bahtiari André L. Tits Department of Electrical and Computer Engineering and Institute for Systems

More information

Optimality Conditions for Constrained Optimization

Optimality Conditions for Constrained Optimization 72 CHAPTER 7 Optimality Conditions for Constrained Optimization 1. First Order Conditions In this section we consider first order optimality conditions for the constrained problem P : minimize f 0 (x)

More information

Derivative-free methods for nonlinear programming with general lower-level constraints*

Derivative-free methods for nonlinear programming with general lower-level constraints* Volume 30, N. 1, pp. 19 52, 2011 Copyright 2011 SBMAC ISSN 0101-8205 www.scielo.br/cam Derivative-free methods for nonlinear programming with general lower-level constraints* M.A. DINIZ-EHRHARDT 1, J.M.

More information

Low frequency resolvent estimates for long range perturbations of the Euclidean Laplacian

Low frequency resolvent estimates for long range perturbations of the Euclidean Laplacian Low frequency resolvent estimates for long range perturbations of the Euclidean Laplacian Jean-Francois Bony, Dietrich Häfner To cite this version: Jean-Francois Bony, Dietrich Häfner. Low frequency resolvent

More information

An Inexact Newton Method for Optimization

An Inexact Newton Method for Optimization New York University Brown Applied Mathematics Seminar, February 10, 2009 Brief biography New York State College of William and Mary (B.S.) Northwestern University (M.S. & Ph.D.) Courant Institute (Postdoc)

More information

Numerical Optimization

Numerical Optimization Constrained Optimization Computer Science and Automation Indian Institute of Science Bangalore 560 012, India. NPTEL Course on Constrained Optimization Constrained Optimization Problem: min h j (x) 0,

More information

Symmetric Norm Inequalities And Positive Semi-Definite Block-Matrices

Symmetric Norm Inequalities And Positive Semi-Definite Block-Matrices Symmetric Norm Inequalities And Positive Semi-Definite lock-matrices Antoine Mhanna To cite this version: Antoine Mhanna Symmetric Norm Inequalities And Positive Semi-Definite lock-matrices 15

More information

A new simple recursive algorithm for finding prime numbers using Rosser s theorem

A new simple recursive algorithm for finding prime numbers using Rosser s theorem A new simple recursive algorithm for finding prime numbers using Rosser s theorem Rédoane Daoudi To cite this version: Rédoane Daoudi. A new simple recursive algorithm for finding prime numbers using Rosser

More information

Constrained Optimization Theory

Constrained Optimization Theory Constrained Optimization Theory Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. IMA, August 2016 Stephen Wright (UW-Madison) Constrained Optimization Theory IMA, August

More information

A stabilized SQP method: superlinear convergence

A stabilized SQP method: superlinear convergence Math. Program., Ser. A (2017) 163:369 410 DOI 10.1007/s10107-016-1066-7 FULL LENGTH PAPER A stabilized SQP method: superlinear convergence Philip E. Gill 1 Vyacheslav Kungurtsev 2 Daniel P. Robinson 3

More information

A trust region method based on interior point techniques for nonlinear programming

A trust region method based on interior point techniques for nonlinear programming Math. Program., Ser. A 89: 149 185 2000 Digital Object Identifier DOI 10.1007/s101070000189 Richard H. Byrd Jean Charles Gilbert Jorge Nocedal A trust region method based on interior point techniques for

More information

Fast Computation of Moore-Penrose Inverse Matrices

Fast Computation of Moore-Penrose Inverse Matrices Fast Computation of Moore-Penrose Inverse Matrices Pierre Courrieu To cite this version: Pierre Courrieu. Fast Computation of Moore-Penrose Inverse Matrices. Neural Information Processing - Letters and

More information

Interior-Point Methods for Linear Optimization

Interior-Point Methods for Linear Optimization Interior-Point Methods for Linear Optimization Robert M. Freund and Jorge Vera March, 204 c 204 Robert M. Freund and Jorge Vera. All rights reserved. Linear Optimization with a Logarithmic Barrier Function

More information

A Penalty-Interior-Point Algorithm for Nonlinear Constrained Optimization

A Penalty-Interior-Point Algorithm for Nonlinear Constrained Optimization Noname manuscript No. (will be inserted by the editor) A Penalty-Interior-Point Algorithm for Nonlinear Constrained Optimization Frank E. Curtis April 26, 2011 Abstract Penalty and interior-point methods

More information

Implications of the Constant Rank Constraint Qualification

Implications of the Constant Rank Constraint Qualification Mathematical Programming manuscript No. (will be inserted by the editor) Implications of the Constant Rank Constraint Qualification Shu Lu Received: date / Accepted: date Abstract This paper investigates

More information

IBM Research Report. Line Search Filter Methods for Nonlinear Programming: Motivation and Global Convergence

IBM Research Report. Line Search Filter Methods for Nonlinear Programming: Motivation and Global Convergence RC23036 (W0304-181) April 21, 2003 Computer Science IBM Research Report Line Search Filter Methods for Nonlinear Programming: Motivation and Global Convergence Andreas Wächter, Lorenz T. Biegler IBM Research

More information

2.3 Linear Programming

2.3 Linear Programming 2.3 Linear Programming Linear Programming (LP) is the term used to define a wide range of optimization problems in which the objective function is linear in the unknown variables and the constraints are

More information

Confluence Algebras and Acyclicity of the Koszul Complex

Confluence Algebras and Acyclicity of the Koszul Complex Confluence Algebras and Acyclicity of the Koszul Complex Cyrille Chenavier To cite this version: Cyrille Chenavier. Confluence Algebras and Acyclicity of the Koszul Complex. Algebras and Representation

More information

A New Penalty-SQP Method

A New Penalty-SQP Method Background and Motivation Illustration of Numerical Results Final Remarks Frank E. Curtis Informs Annual Meeting, October 2008 Background and Motivation Illustration of Numerical Results Final Remarks

More information

Constraint Identification and Algorithm Stabilization for Degenerate Nonlinear Programs

Constraint Identification and Algorithm Stabilization for Degenerate Nonlinear Programs Preprint ANL/MCS-P865-1200, Dec. 2000 (Revised Nov. 2001) Mathematics and Computer Science Division Argonne National Laboratory Stephen J. Wright Constraint Identification and Algorithm Stabilization for

More information

Lecture 11 and 12: Penalty methods and augmented Lagrangian methods for nonlinear programming

Lecture 11 and 12: Penalty methods and augmented Lagrangian methods for nonlinear programming Lecture 11 and 12: Penalty methods and augmented Lagrangian methods for nonlinear programming Coralia Cartis, Mathematical Institute, University of Oxford C6.2/B2: Continuous Optimization Lecture 11 and

More information

On the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method

On the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method Optimization Methods and Software Vol. 00, No. 00, Month 200x, 1 11 On the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method ROMAN A. POLYAK Department of SEOR and Mathematical

More information

A penalty-interior-point algorithm for nonlinear constrained optimization

A penalty-interior-point algorithm for nonlinear constrained optimization Math. Prog. Comp. (2012) 4:181 209 DOI 10.1007/s12532-012-0041-4 FULL LENGTH PAPER A penalty-interior-point algorithm for nonlinear constrained optimization Fran E. Curtis Received: 26 April 2011 / Accepted:

More information

Full-order observers for linear systems with unknown inputs

Full-order observers for linear systems with unknown inputs Full-order observers for linear systems with unknown inputs Mohamed Darouach, Michel Zasadzinski, Shi Jie Xu To cite this version: Mohamed Darouach, Michel Zasadzinski, Shi Jie Xu. Full-order observers

More information

4TE3/6TE3. Algorithms for. Continuous Optimization

4TE3/6TE3. Algorithms for. Continuous Optimization 4TE3/6TE3 Algorithms for Continuous Optimization (Algorithms for Constrained Nonlinear Optimization Problems) Tamás TERLAKY Computing and Software McMaster University Hamilton, November 2005 terlaky@mcmaster.ca

More information

Finite volume method for nonlinear transmission problems

Finite volume method for nonlinear transmission problems Finite volume method for nonlinear transmission problems Franck Boyer, Florence Hubert To cite this version: Franck Boyer, Florence Hubert. Finite volume method for nonlinear transmission problems. Proceedings

More information

Case report on the article Water nanoelectrolysis: A simple model, Journal of Applied Physics (2017) 122,

Case report on the article Water nanoelectrolysis: A simple model, Journal of Applied Physics (2017) 122, Case report on the article Water nanoelectrolysis: A simple model, Journal of Applied Physics (2017) 122, 244902 Juan Olives, Zoubida Hammadi, Roger Morin, Laurent Lapena To cite this version: Juan Olives,

More information

arxiv: v2 [math.oc] 11 Jan 2018

arxiv: v2 [math.oc] 11 Jan 2018 A one-phase interior point method for nonconvex optimization Oliver Hinder, Yinyu Ye January 12, 2018 arxiv:1801.03072v2 [math.oc] 11 Jan 2018 Abstract The work of Wächter and Biegler [40] suggests that

More information

ON THE UNIQUENESS IN THE 3D NAVIER-STOKES EQUATIONS

ON THE UNIQUENESS IN THE 3D NAVIER-STOKES EQUATIONS ON THE UNIQUENESS IN THE 3D NAVIER-STOKES EQUATIONS Abdelhafid Younsi To cite this version: Abdelhafid Younsi. ON THE UNIQUENESS IN THE 3D NAVIER-STOKES EQUATIONS. 4 pages. 212. HAL Id:

More information

REVERSIBILITY AND OSCILLATIONS IN ZERO-SUM DISCOUNTED STOCHASTIC GAMES

REVERSIBILITY AND OSCILLATIONS IN ZERO-SUM DISCOUNTED STOCHASTIC GAMES REVERSIBILITY AND OSCILLATIONS IN ZERO-SUM DISCOUNTED STOCHASTIC GAMES Sylvain Sorin, Guillaume Vigeral To cite this version: Sylvain Sorin, Guillaume Vigeral. REVERSIBILITY AND OSCILLATIONS IN ZERO-SUM

More information

Tropical Graph Signal Processing

Tropical Graph Signal Processing Tropical Graph Signal Processing Vincent Gripon To cite this version: Vincent Gripon. Tropical Graph Signal Processing. 2017. HAL Id: hal-01527695 https://hal.archives-ouvertes.fr/hal-01527695v2

More information

ON AUGMENTED LAGRANGIAN METHODS WITH GENERAL LOWER-LEVEL CONSTRAINTS. 1. Introduction. Many practical optimization problems have the form (1.

ON AUGMENTED LAGRANGIAN METHODS WITH GENERAL LOWER-LEVEL CONSTRAINTS. 1. Introduction. Many practical optimization problems have the form (1. ON AUGMENTED LAGRANGIAN METHODS WITH GENERAL LOWER-LEVEL CONSTRAINTS R. ANDREANI, E. G. BIRGIN, J. M. MARTíNEZ, AND M. L. SCHUVERDT Abstract. Augmented Lagrangian methods with general lower-level constraints

More information

Operations Research Lecture 4: Linear Programming Interior Point Method

Operations Research Lecture 4: Linear Programming Interior Point Method Operations Research Lecture 4: Linear Programg Interior Point Method Notes taen by Kaiquan Xu@Business School, Nanjing University April 14th 2016 1 The affine scaling algorithm one of the most efficient

More information

Self-Concordant Barrier Functions for Convex Optimization

Self-Concordant Barrier Functions for Convex Optimization Appendix F Self-Concordant Barrier Functions for Convex Optimization F.1 Introduction In this Appendix we present a framework for developing polynomial-time algorithms for the solution of convex optimization

More information

Priority Programme 1962

Priority Programme 1962 Priority Programme 1962 An Example Comparing the Standard and Modified Augmented Lagrangian Methods Christian Kanzow, Daniel Steck Non-smooth and Complementarity-based Distributed Parameter Systems: Simulation

More information

Part 4: Active-set methods for linearly constrained optimization. Nick Gould (RAL)

Part 4: Active-set methods for linearly constrained optimization. Nick Gould (RAL) Part 4: Active-set methods for linearly constrained optimization Nick Gould RAL fx subject to Ax b Part C course on continuoue optimization LINEARLY CONSTRAINED MINIMIZATION fx subject to Ax { } b where

More information

Linear Quadratic Zero-Sum Two-Person Differential Games

Linear Quadratic Zero-Sum Two-Person Differential Games Linear Quadratic Zero-Sum Two-Person Differential Games Pierre Bernhard To cite this version: Pierre Bernhard. Linear Quadratic Zero-Sum Two-Person Differential Games. Encyclopaedia of Systems and Control,

More information

12. Interior-point methods

12. Interior-point methods 12. Interior-point methods Convex Optimization Boyd & Vandenberghe inequality constrained minimization logarithmic barrier function and central path barrier method feasibility and phase I methods complexity

More information

Multiple sensor fault detection in heat exchanger system

Multiple sensor fault detection in heat exchanger system Multiple sensor fault detection in heat exchanger system Abdel Aïtouche, Didier Maquin, Frédéric Busson To cite this version: Abdel Aïtouche, Didier Maquin, Frédéric Busson. Multiple sensor fault detection

More information

REGULARIZED SEQUENTIAL QUADRATIC PROGRAMMING METHODS

REGULARIZED SEQUENTIAL QUADRATIC PROGRAMMING METHODS REGULARIZED SEQUENTIAL QUADRATIC PROGRAMMING METHODS Philip E. Gill Daniel P. Robinson UCSD Department of Mathematics Technical Report NA-11-02 October 2011 Abstract We present the formulation and analysis

More information

A PROJECTED HESSIAN GAUSS-NEWTON ALGORITHM FOR SOLVING SYSTEMS OF NONLINEAR EQUATIONS AND INEQUALITIES

A PROJECTED HESSIAN GAUSS-NEWTON ALGORITHM FOR SOLVING SYSTEMS OF NONLINEAR EQUATIONS AND INEQUALITIES IJMMS 25:6 2001) 397 409 PII. S0161171201002290 http://ijmms.hindawi.com Hindawi Publishing Corp. A PROJECTED HESSIAN GAUSS-NEWTON ALGORITHM FOR SOLVING SYSTEMS OF NONLINEAR EQUATIONS AND INEQUALITIES

More information

Some tight polynomial-exponential lower bounds for an exponential function

Some tight polynomial-exponential lower bounds for an exponential function Some tight polynomial-exponential lower bounds for an exponential function Christophe Chesneau To cite this version: Christophe Chesneau. Some tight polynomial-exponential lower bounds for an exponential

More information

Modeling and control of impact in mechanical systems: theory and experimental results

Modeling and control of impact in mechanical systems: theory and experimental results Modeling control of impact in mechanical systems: theory experimental results Antonio Tornambè To cite this version: Antonio Tornambè. Modeling control of impact in mechanical systems: theory experimental

More information

Extended-Kalman-Filter-like observers for continuous time systems with discrete time measurements

Extended-Kalman-Filter-like observers for continuous time systems with discrete time measurements Extended-Kalman-Filter-lie observers for continuous time systems with discrete time measurements Vincent Andrieu To cite this version: Vincent Andrieu. Extended-Kalman-Filter-lie observers for continuous

More information

Interior Point Methods. We ll discuss linear programming first, followed by three nonlinear problems. Algorithms for Linear Programming Problems

Interior Point Methods. We ll discuss linear programming first, followed by three nonlinear problems. Algorithms for Linear Programming Problems AMSC 607 / CMSC 764 Advanced Numerical Optimization Fall 2008 UNIT 3: Constrained Optimization PART 4: Introduction to Interior Point Methods Dianne P. O Leary c 2008 Interior Point Methods We ll discuss

More information

On path partitions of the divisor graph

On path partitions of the divisor graph On path partitions of the divisor graph Paul Melotti, Eric Saias To cite this version: Paul Melotti, Eric Saias On path partitions of the divisor graph 018 HAL Id: hal-0184801 https://halarchives-ouvertesfr/hal-0184801

More information

Completeness of the Tree System for Propositional Classical Logic

Completeness of the Tree System for Propositional Classical Logic Completeness of the Tree System for Propositional Classical Logic Shahid Rahman To cite this version: Shahid Rahman. Completeness of the Tree System for Propositional Classical Logic. Licence. France.

More information

On Symmetric Norm Inequalities And Hermitian Block-Matrices

On Symmetric Norm Inequalities And Hermitian Block-Matrices On Symmetric Norm Inequalities And Hermitian lock-matrices Antoine Mhanna To cite this version: Antoine Mhanna On Symmetric Norm Inequalities And Hermitian lock-matrices 015 HAL Id: hal-0131860

More information

Analysis in weighted spaces : preliminary version

Analysis in weighted spaces : preliminary version Analysis in weighted spaces : preliminary version Frank Pacard To cite this version: Frank Pacard. Analysis in weighted spaces : preliminary version. 3rd cycle. Téhéran (Iran, 2006, pp.75.

More information

A Piecewise Line-Search Technique for Maintaining the Positive Definiteness of the Matrices in the SQP Method

A Piecewise Line-Search Technique for Maintaining the Positive Definiteness of the Matrices in the SQP Method Computational Optimization and Applications, 16, 121 158, 2000 c 2000 Kluwer Academic Publishers. Manufactured in The Netherlands. A Piecewise Line-Search Technique for Maintaining the Positive Definiteness

More information

A full-newton step infeasible interior-point algorithm for linear programming based on a kernel function

A full-newton step infeasible interior-point algorithm for linear programming based on a kernel function A full-newton step infeasible interior-point algorithm for linear programming based on a kernel function Zhongyi Liu, Wenyu Sun Abstract This paper proposes an infeasible interior-point algorithm with

More information

Technische Universität Dresden Herausgeber: Der Rektor

Technische Universität Dresden Herausgeber: Der Rektor Als Manuskript gedruckt Technische Universität Dresden Herausgeber: Der Rektor The Gradient of the Squared Residual as Error Bound an Application to Karush-Kuhn-Tucker Systems Andreas Fischer MATH-NM-13-2002

More information