Affine scaling interior Levenberg-Marquardt method for KKT systems
Operations Research Transactions, Vol. 17, No. 2, June 2013

Affine scaling interior Levenberg-Marquardt method for KKT systems

WANG Yunjuan 1, ZHU Detong 2

Abstract We develop and analyze a new affine scaling Levenberg-Marquardt method with a nonmonotonic interior backtracking line search technique for solving the Karush-Kuhn-Tucker (KKT) system. By transforming the KKT system into an equivalent minimization problem with nonnegativity constraints on some of the variables, we establish the Levenberg-Marquardt equation based on this reformulation. A theoretical analysis proves that the proposed algorithm is globally convergent and has a locally superlinear convergence rate under reasonable conditions. Results of numerical experiments are reported to show the effectiveness of the proposed algorithm.

Keywords KKT systems, Levenberg-Marquardt method, affine scaling, interior point, convergence

Mathematics Subject Classification 90C30, 65K05

Received March 24, 2011. This work is supported by the National Natural Science Foundation of China (No. ).
1. School of Mathematics and Information, Shanghai Lixin University of Commerce, Shanghai, China
2. Business School, Shanghai Normal University, Shanghai, China. Corresponding author: yunjuanwang@163.com
0 Introduction

Let F : R^n → R^n be once and h : R^n → R^p, c : R^n → R^m be twice continuously differentiable functions. Define the Lagrangian function by

L(x, y, z) := F(x) + ∇h(x)y − ∇c(x)z,

and consider the following Karush-Kuhn-Tucker (KKT) system:

L(x, y, z) = 0,  h(x) = 0,  c(x) ≥ 0,  z ≥ 0,  z^T c(x) = 0.  (0.1)

The main purpose of this paper is to find a KKT point ω* = (x*, y*, z*) ∈ R^{n+p+m} satisfying the KKT system (0.1). Due to its close relationship with variational inequality problems and nonlinear programming problems, many efficient algorithms for (0.1) have been constructed. The method we describe in this paper is similar to the idea proposed in [1], where system (0.1) is transformed into a differentiable unconstrained minimization problem. It is based on the Fischer-Burmeister function ϕ : R^2 → R of [2], defined by

ϕ(a, b) := √(a^2 + b^2) − (a + b),

which has the very interesting property that

ϕ(a, b) = 0  ⟺  a ≥ 0, b ≥ 0, ab = 0.

Hence we can reformulate system (0.1) as a nonlinear system of equations Φ(ω) = 0, where the nonsmooth mapping Φ : R^{n+p+m} → R^{n+p+m} is defined by

Φ(ω) := Φ(x, y, z) := ( L(x, y, z);  h(x);  φ(c(x), z) )

and φ(c(x), z) := (ϕ(c_1(x), z_1), …, ϕ(c_m(x), z_m))^T ∈ R^m. It is easy to see that solving (0.1) is equivalent to finding a global solution of the problem

min Ψ(ω) := ½ Φ(ω)^T Φ(ω) = ½ ‖Φ(ω)‖^2,  (0.2)

where Ψ(ω) denotes the natural merit function of the equation operator Φ. This unconstrained optimization approach has been used in [1, 3-4] to develop some Newton-type methods for the solution of (0.1). Despite their strong theoretical and numerical properties, these methods may fail to find the unique solution of (0.1) arising from
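As a concrete illustration of the reformulation (a minimal sketch in NumPy, not from the paper; the callables L, h, c stand in for the problem functions):

```python
import numpy as np

def fischer_burmeister(a, b):
    # phi(a, b) = sqrt(a^2 + b^2) - (a + b); zero iff a >= 0, b >= 0, a*b = 0
    return np.sqrt(a**2 + b**2) - (a + b)

def Phi(L, h, c, x, y, z):
    # Stack the three blocks of the nonsmooth system Phi(omega) = 0:
    # the Lagrangian residual, the equality constraints, and the
    # componentwise Fischer-Burmeister function applied to (c(x), z).
    return np.concatenate([L(x, y, z), h(x), fischer_burmeister(c(x), z)])

def Psi(L, h, c, x, y, z):
    # Natural merit function Psi = 0.5 * ||Phi||^2 from (0.2)
    r = Phi(L, h, c, x, y, z)
    return 0.5 * float(r @ r)
```

The characteristic property of ϕ is easy to check numerically: ϕ(a, b) vanishes exactly on the complementarity set a ≥ 0, b ≥ 0, ab = 0.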
strongly monotone variational inequalities, because the variable z is not forced to be nonnegative in [1, 3-4]. This, together with the fact that the variable z has to be nonnegative at a solution of (0.1), naturally leads us to consider the following variant of the problem:

min Ψ(ω)  s.t.  z ≥ 0.  (0.3)

Therefore, this paper focuses on the study of this reformulation of the KKT system (0.1). It is well known that it is rather difficult to solve the constrained optimization problem (0.3) directly. In order to avoid handling the constraints explicitly, we can use affine scaling strategies, which combine an appropriate quadratic function and a scaling matrix to form a quadratic model similar to that of Coleman and Li in [5]. Finally, we apply an affine scaling Levenberg-Marquardt method to this quadratic model.

This paper is organized as follows. In the next section, we propose the nonmonotone affine scaling Levenberg-Marquardt algorithm with a backtracking interior point technique for solving (0.1). In Section 2, we prove the global convergence of the proposed algorithm. In Section 3, we discuss the local convergence property. Finally, the results of numerical experiments with the proposed algorithm are reported in Section 4.

1 Algorithm

This section describes the affine scaling Levenberg-Marquardt method in association with the nonmonotonic interior backtracking technique for solving the bound-constrained minimization problem reformulated from the KKT system (0.1). By the differentiability assumptions made on the functions F, c and h, and by the convexity of ϕ, the mapping Φ is locally Lipschitzian and thus almost everywhere differentiable by Rademacher's theorem. Let us denote by D_Φ the set of points ω ∈ R^{n+p+m} at which Φ is differentiable.
Then we can consider the B-subdifferential of Φ at ω,

∂_B Φ(ω) := { H | H = lim_{ω^j → ω, ω^j ∈ D_Φ} ∇Φ(ω^j)^T },

which is a nonempty and compact set whose convex hull ∂Φ(ω) := conv(∂_B Φ(ω)) is Clarke's [6] generalized Jacobian of Φ at ω.

Proposition 1.1 [1] Let ω = (x, y, z) ∈ R^{n+p+m}. Then each element H ∈ ∂Φ(ω) can be represented as follows:

H = [ ∇_x L(ω)        ∇h(x)   −∇c(x) ;
      ∇h(x)^T         0        0     ;
      D_a(ω)∇c(x)^T   0        D_b(ω) ],

where D_a(ω) = diag(a_1(ω), …, a_m(ω)), D_b(ω) = diag(b_1(ω), …, b_m(ω)) ∈ R^{m×m} are diagonal matrices whose jth diagonal elements are given by

a_j(ω) = c_j(x)/√(c_j(x)^2 + z_j^2) − 1,   b_j(ω) = z_j/√(c_j(x)^2 + z_j^2) − 1,
if (c_j(x), z_j) ≠ (0, 0), and by a_j(ω) = ξ_j − 1, b_j(ω) = ζ_j − 1 for any (ξ_j, ζ_j) with ‖(ξ_j, ζ_j)‖ ≤ 1 if (c_j(x), z_j) = (0, 0).

Proposition 1.2 [1] Ψ is continuously differentiable, and ∇Ψ(ω) = H^T Φ(ω) for every H ∈ ∂Φ(ω).

Since solving (0.1) is equivalent to finding a global solution of problem (0.3), a classical algorithm for solving (0.1) will be based on the reformulated problem (0.3), i.e., Φ(ω) = 0, z ≥ 0. As is well known, the idea of the nonsmooth Levenberg-Marquardt method is to make a Newton-like method globally convergent while maintaining its excellent local convergence behavior. We now begin the description of the affine scaling interior Levenberg-Marquardt method with its core, the underlying Newton-like iteration. Ignoring primal and dual feasibility of problem (0.3), the first-order necessary conditions for ω* to be a local minimizer are

(g*)_i = 0, if i ∈ I ∪ P,
(g*)_i = 0, if i ∈ J and (ω*_J)_i > 0,
(g*)_i > 0, if i ∈ J and (ω*_J)_i = 0,  (1.1)

where g(ω) := ∇Ψ(ω), (g*)_i is the ith component of g* = g(ω*), I := {1, …, n}, P := {n+1, …, n+p} and J := {n+p+1, …, n+p+m}.

In order to handle the constraints appearing in problem (0.3), we introduce an affine scaling matrix similar to the idea in [5]. The scaling matrix D_k = D(ω^k) arises naturally from examining the first-order necessary conditions for the nonlinear minimization problem obtained from the KKT system (0.1), where

D(ω) := diag{ γ_1(ω)^{−1/2}, γ_2(ω)^{−1/2}, …, γ_{n+p+m}(ω)^{−1/2} },  (1.2)

and the ith component of the vector function γ(ω) is defined as follows:

γ_i(ω) := 1, if (g(ω))_i = 0 or i ∈ I ∪ P;   γ_i(ω) := ω_i, if (g(ω))_i ≠ 0 and i ∈ J.  (1.3)

Definition 1.1 [5] A point ω is nondegenerate if, for each index i ∈ J,

g_i(ω_J) = 0  ⟹  (ω_J)_i > 0,  (1.4)

where g_i(ω_J) is the corresponding component of the vector g(ω_J). The reformulated problem (0.3) is nondegenerate if (1.4) holds for every ω_J.
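To make the structure of Proposition 1.1 concrete, the following sketch (our illustration, not the authors' code; dense NumPy matrices, with grad_L, nabla_h, nabla_c standing in for the derivative blocks, and the convention ξ_j = ζ_j = 0 chosen at the kink) assembles one element H of the generalized Jacobian:

```python
import numpy as np

def jacobian_element(grad_L, nabla_h, nabla_c, c_x, z):
    # One H in the generalized Jacobian of Phi at omega = (x, y, z), following
    # the block form of Proposition 1.1 taken from [1]. grad_L is the n x n
    # block grad_x L(omega), nabla_h is n x p, nabla_c is n x m, c_x = c(x).
    n, p = nabla_h.shape
    m = nabla_c.shape[1]
    r = np.sqrt(c_x**2 + z**2)
    # At (c_j(x), z_j) = (0, 0) any (xi_j, zeta_j) with norm <= 1 is admissible;
    # here we pick xi_j = zeta_j = 0.
    safe = np.where(r > 0.0, r, 1.0)
    a = np.where(r > 0.0, c_x / safe, 0.0) - 1.0   # diagonal of D_a(omega)
    b = np.where(r > 0.0, z / safe, 0.0) - 1.0     # diagonal of D_b(omega)
    N = n + p + m
    H = np.zeros((N, N))
    H[:n, :n] = grad_L
    H[:n, n:n + p] = nabla_h
    H[:n, n + p:] = -nabla_c
    H[n:n + p, :n] = nabla_h.T
    H[n + p:, :n] = a[:, None] * nabla_c.T         # D_a(omega) * nabla_c(x)^T
    H[n + p:, n + p:] = np.diag(b)
    return H
```

Note that when (c_j(x), z_j) lies on the positive axes, a_j or b_j vanishes to −1 or 0 accordingly, reproducing the smooth-case Jacobian of ϕ.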
The Levenberg-Marquardt equation arises naturally from examining the Kuhn-Tucker conditions for the reformulated problem (0.3), i.e.,

D(ω)^{−2} g(ω) = 0.  (1.5)
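A small sketch of the scaling (1.2)-(1.3) and the stationarity test (1.5) (our illustration in NumPy; γ is taken as 1 on the free components and ω_i on the bound components with a nonzero gradient entry, and D = diag(γ^{−1/2}) so that D^{−2}g = diag(γ)g):

```python
import numpy as np

def gamma_vec(omega, g, n, p, m):
    # (1.3): gamma_i = 1 on the free components i in I ∪ P (and where g_i = 0);
    # gamma_i = omega_i on bound components i in J with nonzero gradient entry.
    gamma = np.ones(n + p + m)
    J = np.arange(n + p, n + p + m)
    mask = g[J] != 0.0
    gamma[J[mask]] = omega[J[mask]]
    return gamma

def scaled_stationarity(omega, g, n, p, m):
    # (1.5): D(omega)^(-2) g(omega) = diag(gamma) g(omega); at a KKT point of
    # (0.3) this vector vanishes (complementarity on the z-components).
    return gamma_vec(omega, g, n, p, m) * g
```

For example, a bound component sitting exactly on its bound (ω_i = 0) zeroes out the corresponding entry of D^{−2}g even when g_i ≠ 0, which is exactly the complementarity encoded in (1.1).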
For each k, given a positive parameter μ_k and the identity matrix I, we define ψ_k : R^{n+p+m} → R by

ψ_k(d̂) := ½ ‖Φ(ω^k) + H_k D_k^{-1} d̂‖^2 + ½ μ_k ‖d̂‖^2
        = ½ ‖Φ(ω^k)‖^2 + (D_k^{-1} g^k)^T d̂ + ½ d̂^T D_k^{-1} (H_k)^T H_k D_k^{-1} d̂ + ½ μ_k d̂^T d̂,  (1.6)

and consider the minimization problem

min ψ_k(d̂).  (1.7)

We now state an affine scaling Levenberg-Marquardt method applied to the solution of the semismooth problem (0.1). Let d̂^k be the solution of subproblem (1.7). Since ψ_k(d̂) is a strictly convex function, d̂^k is the global minimizer of subproblem (1.7), and computing it is in fact equivalent to solving the following affine scaling Levenberg-Marquardt equation:

[D_k^{-1} (H_k)^T H_k D_k^{-1} + μ_k I] d̂ = −D_k^{-1} g^k.  (1.8)

Zhu in [7] pointed out that the relevance of the affine scaling matrix D_k^{-1} and of the matrix μ_k I rests on the fact that the affine scaled Levenberg-Marquardt trial step d^k = D_k^{-1} d̂^k is angled away from the approaching bound; consequently, the bounds do not prevent a relatively large stepsize along d^k from being taken. In order to maintain strict interior feasibility, a step-back tracking along the solution d^k of equation (1.8) may be required by the strict interior feasibility and the nonmonotonic line search technique. Next we describe an affine scaling Levenberg-Marquardt algorithm which combines these ideas with the nonmonotonic interior backtracking technique for solving the KKT system (0.1).

Algorithm 1.1

Initialization step Choose parameters β ∈ (0, ½), τ ∈ (0, 1), ε > 0, 0 < θ_l < 1, μ ≥ 1, 0 < q ≤ 1 and a positive integer M as the nonmonotone parameter. Let m(0) = 0. Give a starting point ω^0 = (x^0, y^0, z^0) with z^0 > 0 and calculate H_0 ∈ ∂Φ(ω^0). Set k = 0 and go to the main step.

Main step

1. Evaluate Ψ_k = Ψ(ω^k) = ½‖Φ(ω^k)‖^2 and H_k ∈ ∂Φ(ω^k). Calculate D_k, g^k = ∇Ψ(ω^k) = (H_k)^T Φ(ω^k) and μ_k = μ‖g^k‖^{2q}.

2. If ‖g^k‖ ≤ ε, stop with the approximate solution ω^k.

3. Solve the affine scaling Levenberg-Marquardt equation (1.8) to obtain a step d̂^k. Set d^k = D_k^{-1} d̂^k.

4.
Choose α_k = 1, τ, τ^2, …, until the following inequalities hold:

Ψ(ω^k + α_k d^k) ≤ Ψ(ω^{l(k)}) + α_k β (g^k)^T d^k,  (1.9)

where Ψ(ω^{l(k)}) = max_{0 ≤ j ≤ m(k)} {Ψ(ω^{k−j})};

ω^k_J + α_k d^k_J ≥ 0.  (1.10)
5. Set

s^k = α_k d^k, if ω^k_J + α_k d^k_J > 0;   s^k = θ_k α_k d^k, otherwise,

where θ_k ∈ (θ_l, 1) with θ_k − 1 = o(‖d^k‖), and then set ω^{k+1} = ω^k + s^k.

6. Take the nonmonotone control parameter m(k+1) = min{m(k)+1, M} and update H_k to obtain H_{k+1} ∈ ∂Φ(ω^{k+1}). Then set k := k + 1 and go to step 1.

2 Global convergence

Throughout this section we assume that Φ is semismooth. Given ω^0 = (x^0, y^0, z^0) ∈ R^{n+p+m} with z^0 > 0, the algorithm generates a sequence {ω^k}. In our analysis, we denote the level set of Ψ by

L(ω^0) := { ω ∈ R^{n+p+m} | Ψ(ω) ≤ Ψ(ω^0), z ≥ 0 }.

The following assumptions are commonly used in the convergence analysis of most methods for constrained systems.

Assumption A1 The sequence {ω^k} generated by the algorithm is contained in a compact set L(ω^0) of R^{n+p+m}.

Assumption A2 There exist positive constants χ_D, χ_Φ and χ_H such that

‖D(ω)^{-1}‖ ≤ χ_D,  ‖Φ(ω)‖ ≤ χ_Φ,  ‖H‖ ≤ χ_H,  ∀ H ∈ ∂Φ(ω), ω ∈ L(ω^0).  (2.1)

The next lemma shows that the algorithm is well defined.

Lemma 2.1 At the kth iteration, let d̂^k be a solution of the Levenberg-Marquardt equation. If g^k ≠ 0, then we have

(g^k)^T d^k < 0.  (2.2)

Moreover, the proposed algorithm produces an iterate ω^{k+1} = ω^k + α_k d^k in a finite number of backtracking steps in (1.9)-(1.10), i.e., a positive α_k can always be found in step 4.

Proof Suppose that (g^k)^T d^k = 0. Since d̂^k is a solution of the Levenberg-Marquardt equation, we have

(g^k)^T d^k = (D_k^{-1} g^k)^T d̂^k = −(d̂^k)^T [D_k^{-1} (H_k)^T H_k D_k^{-1} + μ_k I] d̂^k.  (2.3)

Noting that the matrix D_k^{-1}(H_k)^T H_k D_k^{-1} + μ_k I is positive definite since μ_k > 0, (g^k)^T d^k = 0 can only imply that d̂^k = 0. Using the Levenberg-Marquardt equation again, we obtain

‖D_k^{-1} g^k‖ = ‖[D_k^{-1} (H_k)^T H_k D_k^{-1} + μ_k I] d̂^k‖ ≤ ‖D_k^{-1} (H_k)^T H_k D_k^{-1} + μ_k I‖ ‖d̂^k‖ = 0,  (2.4)
which contradicts g^k ≠ 0. So we have (g^k)^T d^k < 0, i.e., (2.2) holds.

Now let us prove the latter part of the lemma. On the one hand, it is clear that, in a finite number of backtracking reductions, α_k will satisfy

α_k ≥ min{1, Γ_k},  where  Γ_k := min{ −(ω^k_J)_i / (d^k_J)_i : (d^k_J)_i < 0, i = 1, 2, …, m },

with Γ_k := +∞ if (d^k_J)_i ≥ 0 for all i ∈ {1, 2, …, m}. On the other hand, assume that ω^k satisfies

Ψ(ω^k + τ^n d^k) > Ψ(ω^{l(k)}) + βτ^n (g^k)^T d^k ≥ Ψ(ω^k) + βτ^n (g^k)^T d^k  (2.5)

for all n ≥ 0. Then

[Ψ(ω^k + τ^n d^k) − Ψ(ω^k)] / τ^n > β (g^k)^T d^k  (2.6)

follows. Hence, letting n → ∞, we have (1 − β)(g^k)^T d^k ≥ 0, i.e., (g^k)^T d^k ≥ 0, which contradicts (g^k)^T d^k < 0. Therefore it is always possible to find a step length α_k > 0 satisfying (1.9)-(1.10), so the latter part of the lemma is also true. The whole conclusion of the lemma holds.

The main aim of the following two theorems is to prove global convergence results for the proposed algorithm. The former indicates that at least one limit point of {ω^k} is a stationary point. The latter extends this theorem to a stronger global convergence result.

Theorem 2.1 Let {ω^k} be a sequence generated by the proposed algorithm. Assume that Assumptions A1-A2 hold and that the nondegeneracy condition of the reformulated problem (0.3) holds. Then

lim inf_{k→∞} ‖D_k^{-1} g^k‖ = 0.

Proof According to the acceptance rule (1.9) in step 4, we have

Ψ(ω^{l(k)}) − Ψ(ω^k + α_k d^k) ≥ −β α_k (g^k)^T d^k.  (2.7)

Taking into account that m(k+1) ≤ m(k) + 1 and Ψ(ω^{k+1}) ≤ Ψ(ω^{l(k)}), we have

Ψ(ω^{l(k+1)}) ≤ max_{0 ≤ j ≤ m(k)+1} {Ψ(ω^{k+1−j})} = Ψ(ω^{l(k)}).

This means that the sequence {Ψ(ω^{l(k)})} is nonincreasing for all k, and therefore {Ψ(ω^{l(k)})} is convergent. From (2.3), we have

(g^k)^T d^k = −(d̂^k)^T [D_k^{-1} (H_k)^T H_k D_k^{-1} + μ_k I] d̂^k ≤ −μ_k ‖d̂^k‖^2 = −μ ‖g^k‖^{2q} ‖d̂^k‖^2 ≤ −(μ/χ_D^2) ‖g^k‖^{2q} ‖d^k‖^2.  (2.8)
Combining (2.7) and (2.8), for all k > M, we obtain

Ψ(ω^{l(k)}) ≤ max_{0 ≤ j ≤ m(l(k)−1)} {Ψ(ω^{l(k)−j−1})} + β α_{l(k)−1} (g^{l(k)−1})^T d^{l(k)−1}
          ≤ max_{0 ≤ j ≤ m(l(k)−1)} {Ψ(ω^{l(k)−j−1})} − (βμ/χ_D^2) α_{l(k)−1} ‖g^{l(k)−1}‖^{2q} ‖d^{l(k)−1}‖^2.  (2.9)

If the conclusion of the theorem is not true, there exists some ε > 0 such that

‖D_k^{-1} g^k‖ ≥ ε,  k = 1, 2, ….  (2.10)

As {Ψ(ω^{l(k)})} is convergent, we obtain from (2.9) and (2.10) that

lim_{k→∞} α_{l(k)−1} ‖d^{l(k)−1}‖^2 = 0.  (2.11)

Further, following the induction argument used in [8], it can be derived that lim_{k→∞} α_k ‖d^k‖^2 = 0, which implies that either

lim inf_{k→∞} α_k = 0  (2.12)

or

lim_{k→∞} ‖d^k‖ = 0.  (2.13)

First, let us consider the case that (2.13) holds. Since μ_k = μ‖g^k‖^{2q} ≤ μχ_D^{2q}χ_H^{2q}χ_Φ^{2q}, we can obtain from the Levenberg-Marquardt equation that

−(g^k)^T d^k = (D_k^{-1} g^k)^T [D_k^{-1} (H_k)^T H_k D_k^{-1} + μ_k I]^{-1} (D_k^{-1} g^k)
            ≥ ‖D_k^{-1} g^k‖^2 / (‖(H_k)^T H_k‖ ‖D_k^{-1}‖^2 + μ_k)
            ≥ ε^2 / (χ_D^2 χ_H^2 + μχ_D^{2q} χ_H^{2q} χ_Φ^{2q}) > 0.  (2.14)

So d^k does not tend to 0, which means that (2.13) cannot hold, and hence (2.12) holds. Next, we consider the case that (2.12) holds. (2.12) means that there exists a subset κ ⊂ {k} such that

lim_{k→∞, k∈κ} α_k = 0.  (2.15)

Assume that α_k given in step 4 is the stepsize to the boundary of the box constraints along d^k. From Definition 1.1, there must exist some i such that (ω*_J)_i = 0, where ω* is any accumulation point of the sequence {ω^k}; without loss of generality, assume that {ω^k}_κ is a subsequence convergent to ω*. Recalling the Levenberg-Marquardt equation, we can rewrite it as

μ_k d^k = −D_k^{−2} [g^k + (H_k)^T H_k d^k].  (2.16)
Since ω* is nondegenerate, γ_i(ω*) = 0 for some i ∈ J implies (ω*_J)_i = 0 and (g*_J)_i ≠ 0. Hence, if α_k is defined by some γ_i(ω^k_J) → 0 with (g^k)_i ≠ 0, then α_k = γ_i(ω^k_J)/|(d^k_J)_i| for sufficiently large k. Using (2.16) again, we have

α_k = μ_k / |(g^k)_i + [(H_k)^T H_k d^k]_i| ≥ μ_k / ‖g^k + (H_k)^T H_k d^k‖.  (2.17)

Combining the above inequality with μ_k = μ‖g^k‖^{2q} ≥ με^{2q} > 0, if α_k given in step 4 is the stepsize to the boundary of the box constraints along d^k, we have that

lim inf_{k→∞} α_k ≥ lim inf_{k→∞} μ_k / ‖g^k_J + (H^k_J)^T H^k_J d^k_J‖ > 0.  (2.18)

Furthermore, if (2.12) holds, the acceptance rule (1.9) means that, for large k,

Ψ(ω^k + (α_k/τ) d^k) − Ψ(ω^k) ≥ Ψ(ω^k + (α_k/τ) d^k) − Ψ(ω^{l(k)}) > β (α_k/τ) (g^k)^T d^k.

Noting that α_k/τ > 0, we can get from the above inequality that

[Ψ(ω^k + (α_k/τ) d^k) − Ψ(ω^k)] / (α_k/τ) > β (g^k)^T d^k.  (2.19)

Taking limits in (2.19), we obtain

lim_{k→∞, k∈κ} (g^k)^T d^k ≥ β lim_{k→∞, k∈κ} (g^k)^T d^k,

that is,

(1 − β) lim_{k→∞, k∈κ} (g^k)^T d^k ≥ 0.  (2.20)

Taking into account that 1 − β > 0, we obtain from (2.20) that

lim_{k→∞, k∈κ} (g^k)^T d^k ≥ 0.  (2.21)

Noting that (g^k)^T d^k ≤ 0, we have

lim_{k→∞, k∈κ} (g^k)^T d^k = 0.  (2.22)

But from (2.8), we can see that (2.22) is not true. (2.18) and (2.22) mean that (2.12) also does not hold. Hence (2.10) is not true and the conclusion of the theorem holds.

Theorem 2.2 Let {ω^k} be a sequence generated by the proposed algorithm. Assume that Assumptions A1-A2 hold and that the nondegeneracy condition of the reformulated problem (0.3) holds. Then

lim_{k→∞} ‖D_k^{-1} g^k‖ = 0.
Proof The proof is again by contradiction. Let ε_1 ∈ (0, 1) be given and assume that there is a subsequence {m_i} such that

‖D_{m_i}^{-1} g^{m_i}‖ ≥ ε_1.  (2.23)

Theorem 2.1 guarantees that for any ε_2 ∈ (0, ε_1) there is a subsequence of {m_i} (without loss of generality we assume that it is still the full sequence) and a sequence {l_i} such that

‖D_k^{-1} g^k‖ ≥ ε_2  for m_i ≤ k < l_i,  (2.24)

and

‖D_{l_i}^{-1} g^{l_i}‖ < ε_2.  (2.25)

From the affine scaling Levenberg-Marquardt equation, we have

(g^k)^T d^k = −(d̂^k)^T [D_k^{-1} (H_k)^T H_k D_k^{-1} + μ_k I] d̂^k ≤ −μ_k ‖d̂^k‖^2 = −μ‖g^k‖^{2q} ‖d̂^k‖^2 ≤ −(με_2^{2q}/χ_D^2) ‖d^k‖^2.  (2.26)

Since the matrix D_k^{-1}(H_k)^T H_k D_k^{-1} + μ_k I is nonsingular in the affine scaling Levenberg-Marquardt equation, we also get

−(g^k)^T d^k = (D_k^{-1} g^k)^T [D_k^{-1} (H_k)^T H_k D_k^{-1} + μ_k I]^{-1} (D_k^{-1} g^k) ≥ ‖D_k^{-1} g^k‖^2 / (χ_D^2 χ_H^2 + μχ_D^{2q} χ_H^{2q} χ_Φ^{2q}) ≥ ε_2^2 / (χ_D^2 χ_H^2 + μχ_D^{2q} χ_H^{2q} χ_Φ^{2q}).  (2.27)

So, combining (2.26) and (2.27), we obtain

[(g^k)^T d^k]^2 ≥ [με_2^{2q+2} / (χ_D^2 (χ_D^2 χ_H^2 + μχ_D^{2q} χ_H^{2q} χ_Φ^{2q}))] ‖d^k‖^2.

Noting that (g^k)^T d^k ≤ 0, we can get

−(g^k)^T d^k ≥ [√μ ε_2^{q+1} / (χ_D √(χ_D^2 χ_H^2 + μχ_D^{2q} χ_H^{2q} χ_Φ^{2q}))] ‖d^k‖ = σ ‖d^k‖,  (2.28)

where σ := √μ ε_2^{q+1} / (χ_D √(χ_D^2 χ_H^2 + μχ_D^{2q} χ_H^{2q} χ_Φ^{2q})). Then (1.9) and (2.28) mean that

Ψ(ω^{l(k)}) ≤ Ψ(ω^{l(l(k)−1)}) + β α_{l(k)−1} (g^{l(k)−1})^T d^{l(k)−1} ≤ Ψ(ω^{l(l(k)−1)}) − β σ α_{l(k)−1} ‖d^{l(k)−1}‖.  (2.29)
Similar to the proof of Theorem 2.1, the sequence {Ψ(ω^{l(k)})} is nonincreasing for m_i ≤ k < l_i, and hence {Ψ(ω^{l(k)})} is convergent. So

lim_{k→+∞} Ψ(ω^{l(k)}) = lim_{k→+∞} Ψ(ω^k).  (2.30)

Similar to the proof in [8], we have that lim_{k→∞} α_{l(k)−1} ‖d^{l(k)−1}‖ = 0, so we can also obtain

lim_{k→∞} α_k ‖d^k‖ = 0.  (2.31)

Similar to the proof of (2.18), we can obtain that there exists a subset κ ⊂ {k} such that

lim_{k→∞, k∈κ} α_k ≠ 0,  (2.32)

where α_k is the stepsize to the boundary of the box constraints along d^k; that is, the stepsizes {α_k} cannot converge to zero. Since ∇Ψ(ω) is continuous and (2.31) holds, we have

|[∇Ψ(ω^k + ξ_k θ_k α_k d^k) − ∇Ψ(ω^k)]^T d^k| ≤ ½ (1 − β) σ ‖d^k‖,  (2.33)

where σ is given in (2.28). Noting that Ψ is continuously differentiable and using the mean value theorem, we have the following result, with some 0 ≤ ξ_k ≤ 1:

Ψ(ω^k + α_k θ_k d^k) = Ψ(ω^k) + β α_k θ_k ∇Ψ(ω^k)^T d^k + (1 − β) α_k θ_k ∇Ψ(ω^k)^T d^k + α_k θ_k [∇Ψ(ω^k + ξ_k α_k θ_k d^k) − ∇Ψ(ω^k)]^T d^k
≤ Ψ(ω^k) + β α_k θ_k ∇Ψ(ω^k)^T d^k,  (2.34)

where the last inequality is deduced since the sum of the last two terms on the right-hand side of the equality in (2.34) becomes negative when α_k θ_k ‖d^k‖ is small enough; hence the corresponding θ_k → 1 as ‖d^k‖ → 0. We then deduce, for i sufficiently large, from

Ψ(ω^k) − Ψ(ω^k + α_k d^k) ≥ −β α_k (g^k)^T d^k ≥ β σ α_k ‖d^k‖

that

‖ω^{m_i} − ω^{l_i}‖ ≤ Σ_{k=m_i}^{l_i−1} ‖ω^{k+1} − ω^k‖ ≤ Σ_{k=m_i}^{l_i−1} α_k ‖d^k‖ ≤ (1/(βσ)) Σ_{k=m_i}^{l_i−1} [Ψ(ω^k) − Ψ(ω^{k+1})] = (1/(βσ)) [Ψ(ω^{m_i}) − Ψ(ω^{l_i})].  (2.35)
(2.30) and (2.35) mean that, for large i, we have ‖ω^{m_i} − ω^{l_i}‖ ≤ ε_2. Then (1.3) implies that

|(γ_{m_i})_j − (γ_{l_i})_j| ≤ |(ω^{m_i})_j − (ω^{l_i})_j| → 0,  as i tends to infinity.

Finally, from (2.25), (2.27), the triangle inequality, and

‖g^{m_i} − g^{l_i}‖ = ‖(H_{m_i})^T Φ^{m_i} − (H_{l_i})^T Φ^{l_i}‖ ≤ χ_H ‖ω^{m_i} − ω^{l_i}‖ ≤ χ_H ε_2,

we get

ε_1 ≤ ‖D_{m_i}^{-1} g^{m_i}‖ ≤ ‖D_{m_i}^{-1} g^{m_i} − D_{m_i}^{-1} g^{l_i}‖ + ‖D_{m_i}^{-1} g^{l_i} − D_{l_i}^{-1} g^{l_i}‖ + ‖D_{l_i}^{-1} g^{l_i}‖
    ≤ ‖D_{m_i}^{-1}‖ ‖g^{m_i} − g^{l_i}‖ + ‖(D_{m_i}^{-1} − D_{l_i}^{-1}) g^{l_i}‖ + ‖D_{l_i}^{-1} g^{l_i}‖
    ≤ χ_D χ_H ε_2 + χ_H χ_Φ ε_2 + ε_2,

which contradicts the fact that ε_2 ∈ (0, ε_1) can be taken arbitrarily small. This implies that (2.23) is not true, and hence the conclusion of the theorem holds.

3 Local convergence

In this section we show that the algorithm is locally fast convergent. The standing assumption in this section is that a KKT point ω* of (0.1) is a BD-regular solution of the system Φ(ω) = 0.

Definition 3.1 The vector ω* is called BD-regular for Φ if all elements H ∈ ∂_B Φ(ω*) are nonsingular.

The next result follows from the fact that Φ is a (strongly) semismooth operator under certain smoothness assumptions on F, h and c (see [9-12]).

Proposition 3.1 The following statement holds:

‖Φ(ω + h) − Φ(ω) − Hh‖ ≤ c_0 ‖h‖^{1+q}  for h → 0 and H ∈ ∂Φ(ω + h).  (3.1)

The following proposition refers to Robinson's strong regularity condition. Here we will not restate its definition; the interested reader may consult Robinson [13] and Liu [14] for several characterizations of a strongly regular KKT point.

Proposition 3.2 A solution ω* = (x*, y*, z*) ∈ R^n × R^p × R^m of system (0.1) is strongly regular if and only if all matrices in Clarke's generalized Jacobian ∂Φ(ω*) are nonsingular. In particular, strong regularity of ω* is sufficient for ω* to be a BD-regular solution of the system Φ(ω) = 0.

The next property follows from the upper semicontinuity of the generalized Jacobian (see [15]) and the assumed BD-regularity. For precise proofs, the interested reader may refer to Lemma 2.6 in [9] and Proposition 3 in [11].

Proposition 3.3 Let ω* be a BD-regular solution of Φ(ω) = 0.
Then the following statements hold:
1. There exist constants c_1 > 0 and δ_1 > 0 such that the matrices H ∈ ∂_B Φ(ω) are nonsingular and satisfy

‖H^{-1}‖ ≤ c_1  (3.2)

for all ω with ‖ω − ω*‖ ≤ δ_1.

2. There exist constants c_2 > 0 and δ_2 > 0 such that

‖Φ(ω)‖ ≥ c_2 ‖ω − ω*‖  (3.3)

for all ω with ‖ω − ω*‖ ≤ δ_2.

In order to prove the local convergence theorem, we will need the following lemma.

Lemma 3.1 Assume that ω* = (x*, y*, z*) is a BD-regular solution of Φ(ω) = 0. Then we have

‖d^k‖ ≤ c_1 ‖Φ(ω^k)‖  (3.4)

for all ω^k = (x^k, y^k, z^k) sufficiently close to ω*, where c_1 is given in Proposition 3.3, d^k = D_k^{-1} d̂^k, and d̂^k denotes a solution of the Levenberg-Marquardt equation.

Proof Since ω* is a BD-regular KKT point, the matrices H_k ∈ ∂_B Φ(ω^k) are uniformly nonsingular for all ω^k sufficiently close to ω* by Proposition 3.3, i.e., there exists a constant c_1 > 0 such that

‖d^k‖ = ‖(H_k)^{-1} H_k d^k‖ ≤ c_1 ‖H_k d^k‖.  (3.5)

On the other hand, using the Levenberg-Marquardt equation, we have

0 = (D_k^{-1} g^k)^T d̂^k + (d̂^k)^T [D_k^{-1} (H_k)^T H_k D_k^{-1} + μ_k I] d̂^k ≥ (g^k)^T d^k + (d^k)^T (H_k)^T H_k d^k = Φ(ω^k)^T H_k d^k + ‖H_k d^k‖^2 ≥ ‖H_k d^k‖^2 − ‖Φ(ω^k)‖ ‖H_k d^k‖,

so

‖H_k d^k‖ ≤ ‖Φ(ω^k)‖.  (3.6)

Combining (3.5) with (3.6), we can easily obtain

‖d^k‖ ≤ c_1 ‖Φ(ω^k)‖.  (3.7)

Theorem 3.1 Assume that Assumptions A1-A2 hold. Let ω* be any accumulation point of the sequence {ω^k} generated by the proposed algorithm and let ω* be a BD-regular point of Φ. Then ω* is a BD-regular zero of Φ, ω^k → ω*, and the stepsize α_k ≡ 1 for k large enough.

Proof If Φ(ω^k) = 0 for some k large enough, then from ‖d^k‖ ≤ c_1‖Φ(ω^k)‖ we have that d^k = 0 for all k large enough and the stepsize α_k ≡ 1. Therefore, without loss of generality, we may assume that D_k^{-1}(H_k)^T Φ(ω^k) ≠ 0.
From the Levenberg-Marquardt equation, we have

−(g^k)^T d^k = (d̂^k)^T [D_k^{-1} (H_k)^T H_k D_k^{-1} + μ_k I] d̂^k ≥ (d^k)^T (H_k)^T H_k d^k ≥ ‖d^k‖^2 / ‖(H_k)^{-1}‖^2 ≥ ‖d^k‖^2 / c_1^2.  (3.8)

Using the acceptance rule (1.9) and (3.8), we have

Ψ(ω^k + α_k d^k) ≤ Ψ(ω^{l(k)}) + β α_k (g^k)^T d^k ≤ Ψ(ω^{l(k)}) − (β/c_1^2) α_k ‖d^k‖^2.  (3.9)

Similar to the proof of the theorem in [8], since {Ψ(ω^k)} is convergent, we have

lim_{k→∞} α_k ‖d^k‖^2 = 0.  (3.10)

Assume that there exists a subsequence κ ⊂ {k} such that

lim_{k→∞, k∈κ} ‖d^k‖ > 0.  (3.11)

Then assumption (3.11) implies that

lim_{k→∞, k∈κ} α_k = 0.  (3.12)

Similar to the proof of (2.18), we have that α_k does not tend to 0 if α_k is given by (1.10). At the same time, the acceptance rule (1.9) means that, for k large enough,

Ψ(ω^k + (α_k/τ) d^k) − Ψ(ω^k) > β (α_k/τ) (g^k)^T d^k.  (3.13)

Since

Ψ(ω^k + (α_k/τ) d^k) − Ψ(ω^k) = (α_k/τ)(g^k)^T d^k + o((α_k/τ)‖d^k‖),

we have that

(1 − β)(α_k/τ)(g^k)^T d^k + o((α_k/τ)‖d^k‖) ≥ 0.  (3.14)

Dividing (3.14) by (α_k/τ)‖d^k‖ and noting that 1 − β > 0, we have from (3.8) that

lim_{k→∞, k∈κ} (g^k)^T d^k / ‖d^k‖ ≥ 0,  while  (g^k)^T d^k / ‖d^k‖ ≤ −‖d^k‖/c_1^2.  (3.15)

From (3.15), we can get

lim_{k→∞, k∈κ} (g^k)^T d^k / ‖d^k‖ = 0  and  lim_{k→∞, k∈κ} ‖d^k‖ = 0.  (3.16)
We now prove that, if (3.16) holds, then α_k = 1 must satisfy the acceptance condition (1.9) in step 4. For k large enough, we have

Ψ(ω^k + d^k) − Ψ(ω^k) − (g^k)^T d^k = ½‖Φ(ω^k) + H_k d^k‖^2 − ½‖Φ(ω^k)‖^2 − (g^k)^T d^k + o(‖d^k‖^2) = ½‖H_k d^k‖^2 + o(‖d^k‖^2).  (3.17)

This gives

Ψ(ω^k + d^k) = Ψ(ω^k) + (g^k)^T d^k + ½‖H_k d^k‖^2 + o(‖d^k‖^2)
= Ψ(ω^k) + β(g^k)^T d^k + (½ − β)(g^k)^T d^k + ½[(g^k)^T d^k + ‖H_k d^k‖^2] + o(‖d^k‖^2)
≤ Ψ(ω^{l(k)}) + β(g^k)^T d^k − (½ − β)μ_k‖d̂^k‖^2 − ½μ_k‖d̂^k‖^2 + o(‖d^k‖^2)
≤ Ψ(ω^{l(k)}) + β(g^k)^T d^k.  (3.18)

Therefore, the acceptance condition (1.9) holds with α_k = 1.

Now we prove that, if (3.16) holds, then for α_k = 1 the condition (1.10) given in step 4 also holds at the stepsize to the boundary of the box constraints along d^k. (3.16) means that (d^k)_i → 0 for all i; in particular, (d^k_J)_i → 0 for all i = 1, 2, …, m. If (g^k_J)_i = 0 for the relevant i, the nondegeneracy condition means that (ω^k_J)_i > 0; assuming α_k given in step 4 is the stepsize to the boundary of the box constraints along d^k, then

α_k = min{1, min{ −(ω^k_J)_i/(d^k_J)_i : (d^k_J)_i < 0, i = 1, 2, …, m }} = min{1, +∞} = 1.

If (g^k_J)_i ≠ 0 for some i, we have (ω*_J)_i = 0. Since (H_k)^T H_k d^k converges to zero and μ_k I is a positive definite diagonal matrix in (1.8), the nondegeneracy condition of the reformulated problem (0.3) at the limit point implies that (d^k_J)_i and (g^k_J)_i keep the corresponding signs for k sufficiently large. Hence, if α_k is defined by some γ_i(ω^k_J) → 0 with (g^k_J)_i ≠ 0, then, using (2.17) again and noting μ ≥ 1,

α_k ≥ min{ 1, μ_k / ‖g^k_J + (H^k_J)^T H^k_J d^k_J‖ } → 1  as  ‖d^k‖ → 0.

Further, by the condition on the strictly feasible stepsize θ_k ∈ (θ_l, 1) with θ_k − 1 = o(‖d^k‖), we have θ_k → 1 as ‖d^k‖ → 0. So α_k → 1, i.e., s^k = d^k, and hence ω^{k+1} = ω^k + d^k.
Similar to the proof of (2.28), we can get

−(g^k)^T d^k ≥ [μ / (χ_D (χ_D^2 χ_H^2 + μχ_D^{2q} χ_H^{2q} χ_Φ^{2q}))] ‖g^k‖^{1+q} ‖d^k‖ = ϱ ‖g^k‖^{1+q} ‖d^k‖,  (3.19)

where ϱ := μ / (χ_D (χ_D^2 χ_H^2 + μχ_D^{2q} χ_H^{2q} χ_Φ^{2q})). So from (3.19) and (3.16), we have

0 = lim_{k→∞, k∈κ} [−(g^k)^T d^k / ‖d^k‖] ≥ lim_{k→∞, k∈κ} ϱ ‖g^k‖^{1+q} ≥ lim_{k→∞, k∈κ} ϱ ‖D_k^{-1} g^k‖^{1+q} / χ_D^{1+q},  (3.20)

which implies that lim ‖D_k^{-1}(H_k)^T Φ(ω^k)‖ = lim ‖D_k^{-1} g^k‖ = 0. Since ω* is a BD-regular point of Φ, i.e., D and H are nonsingular, this gives Φ(ω*) = 0, which means that ω* is a BD-regular zero of Φ. Further, since ω* is a BD-regular zero of Φ, there exist δ_2 > 0 and c_2 > 0 such that ‖Φ(ω)‖ ≥ c_2‖ω − ω*‖ for all ω with ‖ω − ω*‖ ≤ δ_2. All of the above gives ω^k → ω*.

Theorem 3.2 Suppose that ω* is a BD-regular solution of Φ(ω) = 0. Let {ω^k} denote any sequence that converges to ω*. For each ω^k, let d̂^k denote a solution of the Levenberg-Marquardt equation. Then

‖ω^k + d^k − ω*‖ = O(‖ω^k − ω*‖^{1+q}).

Proof By the BD-regularity of ω* we have, for ω^k sufficiently close to ω* and H_k ∈ ∂_B Φ(ω^k), that

‖ω^k + d^k − ω*‖ ≤ ‖(H_k)^{-1}‖ ‖H_k (ω^k + d^k − ω*)‖
≤ c_1 ( ‖Φ(ω^k) + H_k d^k‖ + ‖Φ(ω^k) − Φ(ω*) − H_k (ω^k − ω*)‖ )
≤ c_1 ( ‖Φ(ω^k) + H_k d^k‖ + c_0 ‖ω^k − ω*‖^{1+q} ).  (3.21)

Since d̂^k is a solution of the Levenberg-Marquardt equation, i.e., a global minimizer of (1.7), and d̄^k := D_k (ω* − ω^k) is feasible for (1.7), we obtain

‖Φ(ω^k) + H_k d^k‖^2 = ‖Φ(ω^k) + H_k D_k^{-1} d̂^k‖^2 ≤ ‖Φ(ω^k) + H_k D_k^{-1} d̂^k‖^2 + μ_k ‖d̂^k‖^2
≤ ‖Φ(ω^k) + H_k D_k^{-1} d̄^k‖^2 + μ_k ‖d̄^k‖^2
= ‖Φ(ω^k) − Φ(ω*) − H_k (ω^k − ω*)‖^2 + μ_k ‖D_k (ω^k − ω*)‖^2
≤ ‖Φ(ω^k) − Φ(ω*) − H_k (ω^k − ω*)‖^2 + μ_k ‖D_k‖^2 ‖ω^k − ω*‖^2.  (3.22)

For ω* with ω*_J > 0, there exists a sufficiently small δ ∈ (0, 2] such that the open ball B(ω*, δ) := {ω : ‖ω − ω*‖ < δ} satisfies ω_J > 0 for all ω ∈ B(ω*, δ).
Let {ω^j} be a subsequence such that ω^j → ω* and let j_0 be an index such that, for j > j_0, the sequence {ω^j} belongs to B(ω*, δ/2). Assume j > j_0. Then (ω^j_J)_i > δ/2 for i = 1, 2, …, m. Hence ‖D_j‖ ≤ √((n+p+m)(2/δ)), where 2/δ ≥ 1. Taking into account that

μ_k = μ‖g^k‖^{2q} ≤ μχ_D^{2q} χ_H^{2q} L^{2q} ‖ω^k − ω*‖^{2q},

where L denotes a Lipschitz constant of Φ, we can obtain from (3.22) that

‖Φ(ω^k) + H_k d^k‖^2 ≤ [c_0^2 + μχ_D^{2q} χ_H^{2q} L^{2q} (n+p+m)(2/δ)] ‖ω^k − ω*‖^{2+2q},  (3.23)

that is,

‖Φ(ω^k) + H_k d^k‖ ≤ c_3 ‖ω^k − ω*‖^{1+q},  (3.24)

where c_3 := √(c_0^2 + μχ_D^{2q} χ_H^{2q} L^{2q} (n+p+m)(2/δ)). Together with (3.21), this gives

‖ω^k + d^k − ω*‖ = O(‖ω^k − ω*‖^{1+q}).  (3.25)

Theorem 3.2 shows that, under the assumption that a KKT point ω* of (0.1) is a BD-regular solution of the system Φ(ω) = 0, the proposed algorithm converges locally Q-superlinearly with order 1 + q.

4 Numerical experiments

Numerical experiments with the new affine scaling Levenberg-Marquardt method with the nonmonotonic interior backtracking line search technique given in this paper have been performed on a computer. The experiments are carried out on 4 test problems quoted from [16]. We first transform them into the Karush-Kuhn-Tucker system, and then start the search from the initial points given in [16] to obtain the numerical results. The computation terminates when one of the following stopping criteria is satisfied: either ‖g^k‖ ≤ 10^{-8} or |Ψ_{k+1} − Ψ_k| ≤ 10^{-8}. The selected parameter values are: ε = 10^{-8}, β = 0.25, μ = 1, q = 0.5, τ = 0.5, θ_l = 0.95 and M = 5. NI and NF stand for the numbers of iterations and of function evaluations of Ψ, respectively. The numerical results of our Levenberg-Marquardt algorithm and of the modified BFGS method given in [16] are presented in Table 4.1.

Table 4.1 Experimental results

Problem name                         Levenberg-Marquardt algorithm (NI/NF)   Modified BFGS in [16] (NI/NF)
Powell's function of four variables  24/24                                   45/51
Wood's function                      35/35                                   54/66
A quartic function                   18/18                                   55/61
A sine-valley function               35/36                                   39/54
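For readers who want to experiment, the main loop of Algorithm 1.1 can be sketched compactly as follows. This is our illustration, not the authors' code: Phi and H_of are user-supplied callables (the residual Φ and one element of ∂_B Φ), the scaling follows (1.2)-(1.3), the linear system is the Levenberg-Marquardt equation (1.8), the backtracking implements (1.9)-(1.10), and the interior safeguard θ_k is fixed rather than adaptive.

```python
import numpy as np

def lm_interior_solve(Phi, H_of, omega0, n, p, m,
                      beta=0.25, tau=0.5, eps=1e-8,
                      theta_l=0.95, mu=1.0, q=0.5, M=5, max_iter=200):
    # Sketch of Algorithm 1.1: affine scaling interior LM iteration.
    # Phi(omega) -> residual in R^(n+p+m); H_of(omega) -> B-subdifferential element.
    # The last m components of omega (the z-part) must stay nonnegative.
    omega = np.asarray(omega0, dtype=float).copy()
    N = n + p + m
    J = slice(n + p, N)                            # bounded z-components
    hist = []                                      # recent Psi values (nonmonotone reference)
    for _ in range(max_iter):
        r = Phi(omega)
        hist = (hist + [0.5 * r @ r])[-(M + 1):]   # m(k) = min(m(k-1)+1, M)
        Hk = H_of(omega)
        g = Hk.T @ r                               # g_k = H_k^T Phi(omega_k)
        if np.linalg.norm(g) <= eps:               # step 2: stopping test
            break
        mu_k = mu * np.linalg.norm(g) ** (2 * q)
        # affine scaling (1.2)-(1.3): gamma_i = omega_i on bounded components
        # with a nonzero gradient entry, gamma_i = 1 otherwise
        gamma = np.ones(N)
        idxJ = np.arange(n + p, N)
        active = idxJ[g[idxJ] != 0.0]
        gamma[active] = omega[active]
        Dinv = np.sqrt(gamma)                      # D = diag(gamma^(-1/2))
        # step 3: solve the LM equation (1.8) and unscale the step
        A = (Dinv[:, None] * Hk.T) @ (Hk * Dinv[None, :]) + mu_k * np.eye(N)
        d_hat = np.linalg.solve(A, -Dinv * g)
        d = Dinv * d_hat
        # step 4: nonmonotone Armijo test (1.9) plus interior condition (1.10)
        alpha = 1.0
        for _ in range(64):                        # Lemma 2.1: finitely many reductions
            trial = omega + alpha * d
            rt = Phi(trial)
            if np.all(trial[J] >= 0.0) and \
               0.5 * rt @ rt <= max(hist) + alpha * beta * (g @ d):
                break
            alpha *= tau
        # step 5: pull strictly inside the bound if the step lands on it
        # (the paper requires theta_k in (theta_l, 1) with theta_k -> 1;
        # a fixed value is a simplification in this sketch)
        theta = 1.0 if np.all(omega[J] + alpha * d[J] > 0.0) else 0.5 * (1.0 + theta_l)
        omega = omega + theta * alpha * d
    return omega
```

On a trivial linear residual the iteration reduces to a damped Newton step in the scaled variables, and the z-part remains nonnegative throughout the run.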
References

[1] Facchinei F, Fischer A, Kanzow C. Regularity properties of a semismooth reformulation of variational inequalities [J]. SIAM Journal on Optimization, 1998, 8(3).
[2] Fischer A. A special Newton-type optimization method [J]. Optimization, 1992, 24(3-4).
[3] Facchinei F, Fischer A, Kanzow C. A semismooth Newton method for variational inequalities: The case of box constraints [C]// Complementarity and Variational Problems: State of the Art. Philadelphia: [s.n.], 1997.
[4] Qi L Q, Jiang H Y. Semismooth Karush-Kuhn-Tucker equations and convergence analysis of Newton and quasi-Newton methods for solving these equations [J]. Mathematics of Operations Research, 1997, 22(2).
[5] Coleman T F, Li Y Y. An interior trust region approach for nonlinear minimization subject to bounds [J]. SIAM Journal on Optimization, 1996, 6(2).
[6] Clarke F H. Optimization and Nonsmooth Analysis [M]. New York: John Wiley and Sons.
[7] Zhu D T. Affine scaling interior Levenberg-Marquardt method for bound-constrained semismooth equations under local error bound conditions [J]. Journal of Computational and Applied Mathematics, 2008, 219(1).
[8] Grippo L, Lampariello F, Lucidi S. A nonmonotone line search technique for Newton's method [J]. SIAM Journal on Numerical Analysis, 1986, 23(4).
[9] Qi L Q. Convergence analysis of some algorithms for solving nonsmooth equations [J]. Mathematics of Operations Research, 1993, 18(1).
[10] Qi L Q, Sun J. A nonsmooth version of Newton's method [J]. Mathematical Programming, 1993, 58(1-3).
[11] Pang J S, Qi L Q. Nonsmooth equations: motivation and algorithms [J]. SIAM Journal on Optimization, 1993, 3(3).
[12] Fischer A. Solution of monotone complementarity problems with locally Lipschitzian functions [J]. Mathematical Programming, 1997, 76(3).
[13] Robinson S M. Strongly regular generalized equations [J]. Mathematics of Operations Research, 1980, 5(1).
[14] Liu J M. Strong stability in variational inequalities [J]. SIAM Journal on Control and Optimization, 1995, 33(3).
[15] Bertsekas D P. Constrained Optimization and Lagrange Multiplier Methods [M]. New York: Academic Press.
[16] Yuan Y X. A modified BFGS algorithm for unconstrained optimization [J]. IMA Journal of Numerical Analysis, 1991, 11(3).
More informationA Continuation Method for the Solution of Monotone Variational Inequality Problems
A Continuation Method for the Solution of Monotone Variational Inequality Problems Christian Kanzow Institute of Applied Mathematics University of Hamburg Bundesstrasse 55 D 20146 Hamburg Germany e-mail:
More informationNewton-type Methods for Solving the Nonsmooth Equations with Finitely Many Maximum Functions
260 Journal of Advances in Applied Mathematics, Vol. 1, No. 4, October 2016 https://dx.doi.org/10.22606/jaam.2016.14006 Newton-type Methods for Solving the Nonsmooth Equations with Finitely Many Maximum
More informationA Trust Region Algorithm Model With Radius Bounded Below for Minimization of Locally Lipschitzian Functions
The First International Symposium on Optimization and Systems Biology (OSB 07) Beijing, China, August 8 10, 2007 Copyright 2007 ORSC & APORC pp. 405 411 A Trust Region Algorithm Model With Radius Bounded
More informationAN AUGMENTED LAGRANGIAN AFFINE SCALING METHOD FOR NONLINEAR PROGRAMMING
AN AUGMENTED LAGRANGIAN AFFINE SCALING METHOD FOR NONLINEAR PROGRAMMING XIAO WANG AND HONGCHAO ZHANG Abstract. In this paper, we propose an Augmented Lagrangian Affine Scaling (ALAS) algorithm for general
More informationA Smoothing Newton Method for Solving Absolute Value Equations
A Smoothing Newton Method for Solving Absolute Value Equations Xiaoqin Jiang Department of public basic, Wuhan Yangtze Business University, Wuhan 430065, P.R. China 392875220@qq.com Abstract: In this paper,
More informationA derivative-free nonmonotone line search and its application to the spectral residual method
IMA Journal of Numerical Analysis (2009) 29, 814 825 doi:10.1093/imanum/drn019 Advance Access publication on November 14, 2008 A derivative-free nonmonotone line search and its application to the spectral
More informationON REGULARITY CONDITIONS FOR COMPLEMENTARITY PROBLEMS
ON REGULARITY CONDITIONS FOR COMPLEMENTARITY PROBLEMS A. F. Izmailov and A. S. Kurennoy December 011 ABSTRACT In the context of mixed complementarity problems various concepts of solution regularity are
More information20 J.-S. CHEN, C.-H. KO AND X.-R. WU. : R 2 R is given by. Recently, the generalized Fischer-Burmeister function ϕ p : R2 R, which includes
016 0 J.-S. CHEN, C.-H. KO AND X.-R. WU whereas the natural residual function ϕ : R R is given by ϕ (a, b) = a (a b) + = min{a, b}. Recently, the generalized Fischer-Burmeister function ϕ p : R R, which
More informationPacific Journal of Optimization (Vol. 2, No. 3, September 2006) ABSTRACT
Pacific Journal of Optimization Vol., No. 3, September 006) PRIMAL ERROR BOUNDS BASED ON THE AUGMENTED LAGRANGIAN AND LAGRANGIAN RELAXATION ALGORITHMS A. F. Izmailov and M. V. Solodov ABSTRACT For a given
More informationsystem of equations. In particular, we give a complete characterization of the Q-superlinear
INEXACT NEWTON METHODS FOR SEMISMOOTH EQUATIONS WITH APPLICATIONS TO VARIATIONAL INEQUALITY PROBLEMS Francisco Facchinei 1, Andreas Fischer 2 and Christian Kanzow 3 1 Dipartimento di Informatica e Sistemistica
More informationTechnische Universität Dresden Herausgeber: Der Rektor
Als Manuskript gedruckt Technische Universität Dresden Herausgeber: Der Rektor The Gradient of the Squared Residual as Error Bound an Application to Karush-Kuhn-Tucker Systems Andreas Fischer MATH-NM-13-2002
More informationAN INTERIOR-POINT METHOD FOR NONLINEAR OPTIMIZATION PROBLEMS WITH LOCATABLE AND SEPARABLE NONSMOOTHNESS
AN INTERIOR-POINT METHOD FOR NONLINEAR OPTIMIZATION PROBLEMS WITH LOCATABLE AND SEPARABLE NONSMOOTHNESS MARTIN SCHMIDT Abstract. Many real-world optimization models comse nonconvex and nonlinear as well
More informationA globally and R-linearly convergent hybrid HS and PRP method and its inexact version with applications
A globally and R-linearly convergent hybrid HS and PRP method and its inexact version with applications Weijun Zhou 28 October 20 Abstract A hybrid HS and PRP type conjugate gradient method for smooth
More informationA Regularized Directional Derivative-Based Newton Method for Inverse Singular Value Problems
A Regularized Directional Derivative-Based Newton Method for Inverse Singular Value Problems Wei Ma Zheng-Jian Bai September 18, 2012 Abstract In this paper, we give a regularized directional derivative-based
More informationSchool of Mathematics, the University of New South Wales, Sydney 2052, Australia
Computational Optimization and Applications, 13, 201 220 (1999) c 1999 Kluwer Academic Publishers, Boston. Manufactured in The Netherlands. On NCP-Functions * DEFENG SUN LIQUN QI School of Mathematics,
More informationCONVERGENCE ANALYSIS OF AN INTERIOR-POINT METHOD FOR NONCONVEX NONLINEAR PROGRAMMING
CONVERGENCE ANALYSIS OF AN INTERIOR-POINT METHOD FOR NONCONVEX NONLINEAR PROGRAMMING HANDE Y. BENSON, ARUN SEN, AND DAVID F. SHANNO Abstract. In this paper, we present global and local convergence results
More informationA Primal-Dual Interior-Point Method for Nonlinear Programming with Strong Global and Local Convergence Properties
A Primal-Dual Interior-Point Method for Nonlinear Programming with Strong Global and Local Convergence Properties André L. Tits Andreas Wächter Sasan Bahtiari Thomas J. Urban Craig T. Lawrence ISR Technical
More informationGENERALIZED second-order cone complementarity
Stochastic Generalized Complementarity Problems in Second-Order Cone: Box-Constrained Minimization Reformulation and Solving Methods Mei-Ju Luo and Yan Zhang Abstract In this paper, we reformulate the
More informationOn the Coerciveness of Merit Functions for the Second-Order Cone Complementarity Problem
On the Coerciveness of Merit Functions for the Second-Order Cone Complementarity Problem Guidance Professor Assistant Professor Masao Fukushima Nobuo Yamashita Shunsuke Hayashi 000 Graduate Course in Department
More information1. Introduction. We consider the general smooth constrained optimization problem:
OPTIMIZATION TECHNICAL REPORT 02-05, AUGUST 2002, COMPUTER SCIENCES DEPT, UNIV. OF WISCONSIN TEXAS-WISCONSIN MODELING AND CONTROL CONSORTIUM REPORT TWMCC-2002-01 REVISED SEPTEMBER 2003. A FEASIBLE TRUST-REGION
More informationLecture 19 Algorithms for VIs KKT Conditions-based Ideas. November 16, 2008
Lecture 19 Algorithms for VIs KKT Conditions-based Ideas November 16, 2008 Outline for solution of VIs Algorithms for general VIs Two basic approaches: First approach reformulates (and solves) the KKT
More informationOn the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method
Optimization Methods and Software Vol. 00, No. 00, Month 200x, 1 11 On the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method ROMAN A. POLYAK Department of SEOR and Mathematical
More informationPrimal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization
Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization Roger Behling a, Clovis Gonzaga b and Gabriel Haeser c March 21, 2013 a Department
More information5 Handling Constraints
5 Handling Constraints Engineering design optimization problems are very rarely unconstrained. Moreover, the constraints that appear in these problems are typically nonlinear. This motivates our interest
More informationINTERIOR-POINT METHODS FOR NONCONVEX NONLINEAR PROGRAMMING: CONVERGENCE ANALYSIS AND COMPUTATIONAL PERFORMANCE
INTERIOR-POINT METHODS FOR NONCONVEX NONLINEAR PROGRAMMING: CONVERGENCE ANALYSIS AND COMPUTATIONAL PERFORMANCE HANDE Y. BENSON, ARUN SEN, AND DAVID F. SHANNO Abstract. In this paper, we present global
More informationA STABILIZED SQP METHOD: SUPERLINEAR CONVERGENCE
A STABILIZED SQP METHOD: SUPERLINEAR CONVERGENCE Philip E. Gill Vyacheslav Kungurtsev Daniel P. Robinson UCSD Center for Computational Mathematics Technical Report CCoM-14-1 June 30, 2014 Abstract Regularized
More informationSequential Quadratic Programming Method for Nonlinear Second-Order Cone Programming Problems. Hirokazu KATO
Sequential Quadratic Programming Method for Nonlinear Second-Order Cone Programming Problems Guidance Professor Masao FUKUSHIMA Hirokazu KATO 2004 Graduate Course in Department of Applied Mathematics and
More informationSemismooth Newton methods for the cone spectrum of linear transformations relative to Lorentz cones
to appear in Linear and Nonlinear Analysis, 2014 Semismooth Newton methods for the cone spectrum of linear transformations relative to Lorentz cones Jein-Shan Chen 1 Department of Mathematics National
More informationA Smoothing SQP Method for Mathematical Programs with Linear Second-Order Cone Complementarity Constraints
A Smoothing SQP Method for Mathematical Programs with Linear Second-Order Cone Complementarity Constraints Hiroshi Yamamura, Taayui Ouno, Shunsue Hayashi and Masao Fuushima June 14, 212 Abstract In this
More informationStationary Points of Bound Constrained Minimization Reformulations of Complementarity Problems1,2
JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS: Vol. 94, No. 2, pp. 449-467, AUGUST 1997 Stationary Points of Bound Constrained Minimization Reformulations of Complementarity Problems1,2 M. V. SOLODOV3
More informationSOLUTION OF NONLINEAR COMPLEMENTARITY PROBLEMS
A SEMISMOOTH EQUATION APPROACH TO THE SOLUTION OF NONLINEAR COMPLEMENTARITY PROBLEMS Tecla De Luca 1, Francisco Facchinei 1 and Christian Kanzow 2 1 Universita di Roma \La Sapienza" Dipartimento di Informatica
More informationA Primal-Dual Augmented Lagrangian Penalty-Interior-Point Filter Line Search Algorithm
Journal name manuscript No. (will be inserted by the editor) A Primal-Dual Augmented Lagrangian Penalty-Interior-Point Filter Line Search Algorithm Rene Kuhlmann Christof Büsens Received: date / Accepted:
More informationCourse Notes for EE227C (Spring 2018): Convex Optimization and Approximation
Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Instructor: Moritz Hardt Email: hardt+ee227c@berkeley.edu Graduate Instructor: Max Simchowitz Email: msimchow+ee227c@berkeley.edu
More informationA Novel Inexact Smoothing Method for Second-Order Cone Complementarity Problems
A Novel Inexact Smoothing Method for Second-Order Cone Complementarity Problems Xiaoni Chi Guilin University of Electronic Technology School of Math & Comput Science Guilin Guangxi 541004 CHINA chixiaoni@126.com
More informationA Simple Primal-Dual Feasible Interior-Point Method for Nonlinear Programming with Monotone Descent
A Simple Primal-Dual Feasible Interior-Point Method for Nonlinear Programming with Monotone Descent Sasan Bahtiari André L. Tits Department of Electrical and Computer Engineering and Institute for Systems
More informationA GLOBALLY CONVERGENT STABILIZED SQP METHOD: SUPERLINEAR CONVERGENCE
A GLOBALLY CONVERGENT STABILIZED SQP METHOD: SUPERLINEAR CONVERGENCE Philip E. Gill Vyacheslav Kungurtsev Daniel P. Robinson UCSD Center for Computational Mathematics Technical Report CCoM-14-1 June 30,
More informationLecture 3. Optimization Problems and Iterative Algorithms
Lecture 3 Optimization Problems and Iterative Algorithms January 13, 2016 This material was jointly developed with Angelia Nedić at UIUC for IE 598ns Outline Special Functions: Linear, Quadratic, Convex
More informationConstrained Optimization
1 / 22 Constrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University March 30, 2015 2 / 22 1. Equality constraints only 1.1 Reduced gradient 1.2 Lagrange
More information1. Introduction. We analyze a trust region version of Newton s method for the optimization problem
SIAM J. OPTIM. Vol. 9, No. 4, pp. 1100 1127 c 1999 Society for Industrial and Applied Mathematics NEWTON S METHOD FOR LARGE BOUND-CONSTRAINED OPTIMIZATION PROBLEMS CHIH-JEN LIN AND JORGE J. MORÉ To John
More informationStructural and Multidisciplinary Optimization. P. Duysinx and P. Tossings
Structural and Multidisciplinary Optimization P. Duysinx and P. Tossings 2018-2019 CONTACTS Pierre Duysinx Institut de Mécanique et du Génie Civil (B52/3) Phone number: 04/366.91.94 Email: P.Duysinx@uliege.be
More informationTechnische Universität Dresden Institut für Numerische Mathematik. An LP-Newton Method: Nonsmooth Equations, KKT Systems, and Nonisolated Solutions
Als Manuskript gedruckt Technische Universität Dresden Institut für Numerische Mathematik An LP-Newton Method: Nonsmooth Equations, KKT Systems, and Nonisolated Solutions F. Facchinei, A. Fischer, and
More informationON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS
MATHEMATICS OF OPERATIONS RESEARCH Vol. 28, No. 4, November 2003, pp. 677 692 Printed in U.S.A. ON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS ALEXANDER SHAPIRO We discuss in this paper a class of nonsmooth
More information1. Introduction The nonlinear complementarity problem (NCP) is to nd a point x 2 IR n such that hx; F (x)i = ; x 2 IR n + ; F (x) 2 IRn + ; where F is
New NCP-Functions and Their Properties 3 by Christian Kanzow y, Nobuo Yamashita z and Masao Fukushima z y University of Hamburg, Institute of Applied Mathematics, Bundesstrasse 55, D-2146 Hamburg, Germany,
More informationCONSTRAINED NONLINEAR PROGRAMMING
149 CONSTRAINED NONLINEAR PROGRAMMING We now turn to methods for general constrained nonlinear programming. These may be broadly classified into two categories: 1. TRANSFORMATION METHODS: In this approach
More informationA FRITZ JOHN APPROACH TO FIRST ORDER OPTIMALITY CONDITIONS FOR MATHEMATICAL PROGRAMS WITH EQUILIBRIUM CONSTRAINTS
A FRITZ JOHN APPROACH TO FIRST ORDER OPTIMALITY CONDITIONS FOR MATHEMATICAL PROGRAMS WITH EQUILIBRIUM CONSTRAINTS Michael L. Flegel and Christian Kanzow University of Würzburg Institute of Applied Mathematics
More informationSEMISMOOTH LEAST SQUARES METHODS FOR COMPLEMENTARITY PROBLEMS
SEMISMOOTH LEAST SQUARES METHODS FOR COMPLEMENTARITY PROBLEMS Dissertation zur Erlangung des naturwissenschaftlichen Doktorgrades der Bayerischen Julius Maximilians Universität Würzburg vorgelegt von STEFANIA
More informationSemismooth Support Vector Machines
Semismooth Support Vector Machines Michael C. Ferris Todd S. Munson November 29, 2000 Abstract The linear support vector machine can be posed as a quadratic program in a variety of ways. In this paper,
More informationError bounds for symmetric cone complementarity problems
to appear in Numerical Algebra, Control and Optimization, 014 Error bounds for symmetric cone complementarity problems Xin-He Miao 1 Department of Mathematics School of Science Tianjin University Tianjin
More informationPenalty and Barrier Methods General classical constrained minimization problem minimize f(x) subject to g(x) 0 h(x) =0 Penalty methods are motivated by the desire to use unconstrained optimization techniques
More informationOn the acceleration of augmented Lagrangian method for linearly constrained optimization
On the acceleration of augmented Lagrangian method for linearly constrained optimization Bingsheng He and Xiaoming Yuan October, 2 Abstract. The classical augmented Lagrangian method (ALM plays a fundamental
More informationKaisa Joki Adil M. Bagirov Napsu Karmitsa Marko M. Mäkelä. New Proximal Bundle Method for Nonsmooth DC Optimization
Kaisa Joki Adil M. Bagirov Napsu Karmitsa Marko M. Mäkelä New Proximal Bundle Method for Nonsmooth DC Optimization TUCS Technical Report No 1130, February 2015 New Proximal Bundle Method for Nonsmooth
More informationA New Sequential Optimality Condition for Constrained Nonsmooth Optimization
A New Sequential Optimality Condition for Constrained Nonsmooth Optimization Elias Salomão Helou Sandra A. Santos Lucas E. A. Simões November 23, 2018 Abstract We introduce a sequential optimality condition
More informationA convergence result for an Outer Approximation Scheme
A convergence result for an Outer Approximation Scheme R. S. Burachik Engenharia de Sistemas e Computação, COPPE-UFRJ, CP 68511, Rio de Janeiro, RJ, CEP 21941-972, Brazil regi@cos.ufrj.br J. O. Lopes Departamento
More informationKey words. linear complementarity problem, non-interior-point algorithm, Tikhonov regularization, P 0 matrix, regularized central path
A GLOBALLY AND LOCALLY SUPERLINEARLY CONVERGENT NON-INTERIOR-POINT ALGORITHM FOR P 0 LCPS YUN-BIN ZHAO AND DUAN LI Abstract Based on the concept of the regularized central path, a new non-interior-point
More informationAffine covariant Semi-smooth Newton in function space
Affine covariant Semi-smooth Newton in function space Anton Schiela March 14, 2018 These are lecture notes of my talks given for the Winter School Modern Methods in Nonsmooth Optimization that was held
More informationOptimality Conditions for Constrained Optimization
72 CHAPTER 7 Optimality Conditions for Constrained Optimization 1. First Order Conditions In this section we consider first order optimality conditions for the constrained problem P : minimize f 0 (x)
More informationA CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE
Journal of Applied Analysis Vol. 6, No. 1 (2000), pp. 139 148 A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE A. W. A. TAHA Received
More informationA Constraint-Reduced MPC Algorithm for Convex Quadratic Programming, with a Modified Active-Set Identification Scheme
A Constraint-Reduced MPC Algorithm for Convex Quadratic Programming, with a Modified Active-Set Identification Scheme M. Paul Laiu 1 and (presenter) André L. Tits 2 1 Oak Ridge National Laboratory laiump@ornl.gov
More informationSmoothed Fischer-Burmeister Equation Methods for the. Houyuan Jiang. CSIRO Mathematical and Information Sciences
Smoothed Fischer-Burmeister Equation Methods for the Complementarity Problem 1 Houyuan Jiang CSIRO Mathematical and Information Sciences GPO Box 664, Canberra, ACT 2601, Australia Email: Houyuan.Jiang@cmis.csiro.au
More informationA SHIFTED PRIMAL-DUAL INTERIOR METHOD FOR NONLINEAR OPTIMIZATION
A SHIFTED RIMAL-DUAL INTERIOR METHOD FOR NONLINEAR OTIMIZATION hilip E. Gill Vyacheslav Kungurtsev Daniel. Robinson UCSD Center for Computational Mathematics Technical Report CCoM-18-1 February 1, 2018
More informationExact Augmented Lagrangian Functions for Nonlinear Semidefinite Programming
Exact Augmented Lagrangian Functions for Nonlinear Semidefinite Programming Ellen H. Fukuda Bruno F. Lourenço June 0, 018 Abstract In this paper, we study augmented Lagrangian functions for nonlinear semidefinite
More informationSome Properties of the Augmented Lagrangian in Cone Constrained Optimization
MATHEMATICS OF OPERATIONS RESEARCH Vol. 29, No. 3, August 2004, pp. 479 491 issn 0364-765X eissn 1526-5471 04 2903 0479 informs doi 10.1287/moor.1040.0103 2004 INFORMS Some Properties of the Augmented
More informationMachine Learning. Support Vector Machines. Manfred Huber
Machine Learning Support Vector Machines Manfred Huber 2015 1 Support Vector Machines Both logistic regression and linear discriminant analysis learn a linear discriminant function to separate the data
More informationSEMI-SMOOTH SECOND-ORDER TYPE METHODS FOR COMPOSITE CONVEX PROGRAMS
SEMI-SMOOTH SECOND-ORDER TYPE METHODS FOR COMPOSITE CONVEX PROGRAMS XIANTAO XIAO, YONGFENG LI, ZAIWEN WEN, AND LIWEI ZHANG Abstract. The goal of this paper is to study approaches to bridge the gap between
More informationExamples of dual behaviour of Newton-type methods on optimization problems with degenerate constraints
Comput Optim Appl (2009) 42: 231 264 DOI 10.1007/s10589-007-9074-4 Examples of dual behaviour of Newton-type methods on optimization problems with degenerate constraints A.F. Izmailov M.V. Solodov Received:
More informationPattern Classification, and Quadratic Problems
Pattern Classification, and Quadratic Problems (Robert M. Freund) March 3, 24 c 24 Massachusetts Institute of Technology. 1 1 Overview Pattern Classification, Linear Classifiers, and Quadratic Optimization
More informationA perturbed damped Newton method for large scale constrained optimization
Quaderni del Dipartimento di Matematica, Università di Modena e Reggio Emilia, n. 46, July 00. A perturbed damped Newton method for large scale constrained optimization Emanuele Galligani Dipartimento
More informationGlobally convergent three-term conjugate gradient projection methods for solving nonlinear monotone equations
Arab. J. Math. (2018) 7:289 301 https://doi.org/10.1007/s40065-018-0206-8 Arabian Journal of Mathematics Mompati S. Koorapetse P. Kaelo Globally convergent three-term conjugate gradient projection methods
More informationOptimal Newton-type methods for nonconvex smooth optimization problems
Optimal Newton-type methods for nonconvex smooth optimization problems Coralia Cartis, Nicholas I. M. Gould and Philippe L. Toint June 9, 20 Abstract We consider a general class of second-order iterations
More informationA SHIFTED PRIMAL-DUAL PENALTY-BARRIER METHOD FOR NONLINEAR OPTIMIZATION
A SHIFTED PRIMAL-DUAL PENALTY-BARRIER METHOD FOR NONLINEAR OPTIMIZATION Philip E. Gill Vyacheslav Kungurtsev Daniel P. Robinson UCSD Center for Computational Mathematics Technical Report CCoM-19-3 March
More informationAM 205: lecture 19. Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods
AM 205: lecture 19 Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods Quasi-Newton Methods General form of quasi-newton methods: x k+1 = x k α
More informationKeywords: Nonlinear least-squares problems, regularized models, error bound condition, local convergence.
STRONG LOCAL CONVERGENCE PROPERTIES OF ADAPTIVE REGULARIZED METHODS FOR NONLINEAR LEAST-SQUARES S. BELLAVIA AND B. MORINI Abstract. This paper studies adaptive regularized methods for nonlinear least-squares
More informationA globally convergent Levenberg Marquardt method for equality-constrained optimization
Computational Optimization and Applications manuscript No. (will be inserted by the editor) A globally convergent Levenberg Marquardt method for equality-constrained optimization A. F. Izmailov M. V. Solodov
More informationChap 2. Optimality conditions
Chap 2. Optimality conditions Version: 29-09-2012 2.1 Optimality conditions in unconstrained optimization Recall the definitions of global, local minimizer. Geometry of minimization Consider for f C 1
More informationA nonmonotone semismooth inexact Newton method
A nonmonotone semismooth inexact Newton method SILVIA BONETTINI, FEDERICA TINTI Dipartimento di Matematica, Università di Modena e Reggio Emilia, Italy Abstract In this work we propose a variant of the
More informationGlobal Convergence of Perry-Shanno Memoryless Quasi-Newton-type Method. 1 Introduction
ISSN 1749-3889 (print), 1749-3897 (online) International Journal of Nonlinear Science Vol.11(2011) No.2,pp.153-158 Global Convergence of Perry-Shanno Memoryless Quasi-Newton-type Method Yigui Ou, Jun Zhang
More informationAlgorithms for Constrained Optimization
1 / 42 Algorithms for Constrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University April 19, 2015 2 / 42 Outline 1. Convergence 2. Sequential quadratic
More informationFirst-order optimality conditions for mathematical programs with second-order cone complementarity constraints
First-order optimality conditions for mathematical programs with second-order cone complementarity constraints Jane J. Ye Jinchuan Zhou Abstract In this paper we consider a mathematical program with second-order
More informationSurvey of NLP Algorithms. L. T. Biegler Chemical Engineering Department Carnegie Mellon University Pittsburgh, PA
Survey of NLP Algorithms L. T. Biegler Chemical Engineering Department Carnegie Mellon University Pittsburgh, PA NLP Algorithms - Outline Problem and Goals KKT Conditions and Variable Classification Handling
More informationDifferentiable exact penalty functions for nonlinear optimization with easy constraints. Takuma NISHIMURA
Master s Thesis Differentiable exact penalty functions for nonlinear optimization with easy constraints Guidance Assistant Professor Ellen Hidemi FUKUDA Takuma NISHIMURA Department of Applied Mathematics
More informationA Piecewise Line-Search Technique for Maintaining the Positive Definiteness of the Matrices in the SQP Method
Computational Optimization and Applications, 16, 121 158, 2000 c 2000 Kluwer Academic Publishers. Manufactured in The Netherlands. A Piecewise Line-Search Technique for Maintaining the Positive Definiteness
More informationA SECOND DERIVATIVE SQP METHOD : LOCAL CONVERGENCE
A SECOND DERIVATIVE SQP METHOD : LOCAL CONVERGENCE Nic I. M. Gould Daniel P. Robinson Oxford University Computing Laboratory Numerical Analysis Group Technical Report 08-21 December 2008 Abstract In [19],
More informationA proximal-like algorithm for a class of nonconvex programming
Pacific Journal of Optimization, vol. 4, pp. 319-333, 2008 A proximal-like algorithm for a class of nonconvex programming Jein-Shan Chen 1 Department of Mathematics National Taiwan Normal University Taipei,
More informationA Local Convergence Analysis of Bilevel Decomposition Algorithms
A Local Convergence Analysis of Bilevel Decomposition Algorithms Victor DeMiguel Decision Sciences London Business School avmiguel@london.edu Walter Murray Management Science and Engineering Stanford University
More informationAN EQUIVALENCY CONDITION OF NONSINGULARITY IN NONLINEAR SEMIDEFINITE PROGRAMMING
J Syst Sci Complex (2010) 23: 822 829 AN EQUVALENCY CONDTON OF NONSNGULARTY N NONLNEAR SEMDEFNTE PROGRAMMNG Chengjin L Wenyu SUN Raimundo J. B. de SAMPAO DO: 10.1007/s11424-010-8057-1 Received: 2 February
More informationOn proximal-like methods for equilibrium programming
On proximal-lie methods for equilibrium programming Nils Langenberg Department of Mathematics, University of Trier 54286 Trier, Germany, langenberg@uni-trier.de Abstract In [?] Flam and Antipin discussed
More informationScientific Computing: An Introductory Survey
Scientific Computing: An Introductory Survey Chapter 6 Optimization Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction permitted
More informationScientific Computing: An Introductory Survey
Scientific Computing: An Introductory Survey Chapter 6 Optimization Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction permitted
More informationApplying a type of SOC-functions to solve a system of equalities and inequalities under the order induced by second-order cone
Applying a type of SOC-functions to solve a system of equalities and inequalities under the order induced by second-order cone Xin-He Miao 1, Nuo Qi 2, B. Saheya 3 and Jein-Shan Chen 4 Abstract: In this
More informationOn Generalized Primal-Dual Interior-Point Methods with Non-uniform Complementarity Perturbations for Quadratic Programming
On Generalized Primal-Dual Interior-Point Methods with Non-uniform Complementarity Perturbations for Quadratic Programming Altuğ Bitlislioğlu and Colin N. Jones Abstract This technical note discusses convergence
More informationMODIFYING SQP FOR DEGENERATE PROBLEMS
PREPRINT ANL/MCS-P699-1097, OCTOBER, 1997, (REVISED JUNE, 2000; MARCH, 2002), MATHEMATICS AND COMPUTER SCIENCE DIVISION, ARGONNE NATIONAL LABORATORY MODIFYING SQP FOR DEGENERATE PROBLEMS STEPHEN J. WRIGHT
More information