
On the convergence of an inexact Gauss-Newton trust-region method for nonlinear least-squares problems with simple bounds

by M. Porcelli

Report naxys, January 2011

Namur Center for Complex Systems
University of Namur
8, rempart de la vierge, B5000 Namur (Belgium)

On the convergence of an inexact Gauss-Newton trust-region method for nonlinear least-squares problems with simple bounds

M. Porcelli

3 January 2011

Abstract

We introduce an inexact Gauss-Newton trust-region method for solving bound-constrained nonlinear least-squares problems where, at each iteration, a trust-region subproblem is approximately solved by the Conjugate Gradient method. Provided a suitable control on the accuracy to which we attempt to solve the subproblems, we prove that the method has global and fast asymptotic convergence properties.

Keywords: bound-constrained nonlinear least-squares, simple bounds, trust-region methods, convergence theory, affine scaling.

1 Introduction

We address the solution of the nonlinear least-squares problem with simple bounds

    min_{x ∈ Ω} θ(x) = (1/2) ‖Θ(x)‖²,   (BCLS)

where θ : ℝⁿ → ℝ, Θ : ℝⁿ → ℝᵐ is a given continuously differentiable mapping and Ω is the n-dimensional box Ω = {x ∈ ℝⁿ : l ≤ x ≤ u}, l ∈ (ℝ ∪ {−∞})ⁿ, u ∈ (ℝ ∪ {+∞})ⁿ, l < u. Remarkably, we do not make any assumption on the relationship between the dimensions m and n. If m > n we say the (BCLS) problem is overdetermined; if m = n we have a square problem; if m < n the problem is underdetermined.

In the last few years, a number of different globally convergent methods for the solution of (BCLS) have been proposed. Many of them, e.g. [2, 3, 4, 9, 14, 15, 16, 17, 19, 22], are based on the so-called Affine Scaling approach introduced in [7] by Coleman and Li. This approach relies on the observation that the first-order optimality conditions (KKT conditions) for (BCLS) can be written as the nonlinear system of equations

    D(x) J(x)ᵀ Θ(x) = 0,  for x ∈ Ω,

where J denotes the Jacobian matrix of Θ and D is the diagonal scaling matrix

    D(x) = diag(|v₁(x)|, …, |vₙ(x)|),   (1)

with

    v_i(x) = x_i − u_i  if (∇θ(x))_i < 0 and u_i < +∞,
    v_i(x) = x_i − l_i  if (∇θ(x))_i ≥ 0 and l_i > −∞,
    v_i(x) = 1          if (∇θ(x))_i ≥ 0 and l_i = −∞, or (∇θ(x))_i < 0 and u_i = +∞,   (2)

for i = 1, …, n.

Trust-region methods for square (BCLS) problems were studied both theoretically and computationally in [2, 3, 4, 14]. Francisco et al. [9] designed a trust-region Gauss-Newton method for underdetermined (BCLS) problems. Under full rank assumptions on J, local quadratic convergence is proved to interior zero-residual points, i.e. points x* such that l < x* < u and Θ(x*) = 0. Kanzow and Petra [15] and Zhu [22] proposed global Levenberg-Marquardt methods for semismooth problems of the form (BCLS). In particular, the paper [15] is focused on overdetermined problems arising from a suitable reformulation

of mixed complementarity problems; global convergence is shown and local convergence to zero-residual solutions is proved under an error bound assumption. In [22], a nonmonotone interior backtracking line search technique is proposed to ensure global convergence and, under a local error bound condition, fast local convergence to nondegenerate zero-residual solutions is achieved.

In [16, 17, 19], trust-region methods have been proposed for problem (BCLS) regardless of its dimensions. These procedures differ in the quadratic model used during the iterations: one method employs a Gauss-Newton model while the other is based on a regularized Gauss-Newton model and is a Levenberg-Marquardt method. They offer global convergence properties combined with potentially fast local convergence to both degenerate and nondegenerate zero-residual solutions. In particular, the local convergence analysis is carried out under full rank assumptions on J for the Gauss-Newton method and under an error bound assumption in the Levenberg-Marquardt case.

It is important to note that all the methods mentioned (except [4]) rely on direct methods for linear systems and linear least-squares problems. These direct methods may be preferable to iterative methods when the cost of a matrix factorization is not excessive, e.g. if the dimension of the problem is sufficiently small or the Jacobian matrix is structured. Otherwise, it becomes necessary to use iterative methods for the solution of the linear systems and linear least-squares problems arising at each iteration.

The purpose of this work is twofold. Firstly, we design an inexact version of the Gauss-Newton trust-region method proposed in [16, 17, 19] based on the use of iterative methods for the linear algebra phase. Secondly, we analyze the impact of the iterative linear algebra on the convergence properties of the inexact method. Our study is motivated by the good practical performance of the Gauss-Newton method shown in [19] and by the promising use of the method in different contexts, see [1, 5, 13]. Since the current implementation [19] is limited to direct linear algebra techniques, it is suitable for medium-size problems; the inexact procedure presented in this paper therefore widens the applicability of our approach.

Throughout the paper, the subscript k will denote an iteration counter and, for any function h, h_k will be shorthand for h(x_k). The 2-norm is denoted by ‖·‖ and B_ρ(y) = {x : ‖x − y‖ < ρ} with ρ > 0. The symbol I_p represents the identity matrix of dimension p. Let A⁺ denote the Moore-Penrose pseudoinverse of the matrix A. Finally, we let P_Ω(x) be the projection of x onto Ω, i.e. (P_Ω(x))_i = max{l_i, min{x_i, u_i}}, i = 1, …, n.

2 Description of the method

In this section we describe the k-th iteration of an inexact Gauss-Newton trust-region method, called ITREBO (Inexact Trust-REgion method for BOund-constrained least-squares problems), for solving problem (BCLS). The sequence {x_k} generated by the method consists of feasible points for (BCLS), i.e. x_k ∈ Ω for all k ≥ 0. Given x_k ∈ Ω and Δ_k > 0, we consider the trust-region problem

    min_p { m_k(p) = (1/2) ‖J_k p + Θ_k‖² : ‖p‖ ≤ Δ_k }.   (3)

Correspondingly to problem (3), we consider a generalized Cauchy step p_k^C along the scaled steepest-descent direction d_k defined as

    d_k = −D_k ∇θ_k,

where D_k is the scaling matrix defined in (1) and (2), which has the favorable property of being well-angled with respect to the simple bounds [6].
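To fix ideas, the computation of d_k can be sketched in a few lines of numpy. This is only an illustration under our own naming, and it assumes the diagonal entries of D(x) are the absolute values |v_i(x)| from (1)-(2):

    import numpy as np

    def scaled_steepest_descent(x, g, l, u):
        """Scaled steepest-descent direction d = -D(x) g of Section 2, where
        g = J(x).T @ Theta(x) is the gradient of theta at x and
        D(x) = diag(|v_1(x)|, ..., |v_n(x)|) with v defined in (2).
        Infinite entries of l and u encode absent bounds."""
        v = np.where(g < 0.0,
                     np.where(np.isfinite(u), x - u, 1.0),  # (grad theta)_i < 0
                     np.where(np.isfinite(l), x - l, 1.0))  # (grad theta)_i >= 0
        return -np.abs(v) * g

For x in the interior of Ω this is a positive diagonal rescaling of −∇θ(x) that damps the components pointing towards a nearby active bound, which is the intuition behind the well-angled property.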
The step p_k^C minimizes m_k along the direction d_k within the feasible trust-region, i.e.

    p_k^C = argmin_{p ∈ span{d_k}} m_k(p)  subject to  ‖p‖ ≤ Δ_k, x_k + p ∈ Ω.   (4)

The k-th iteration of the ITREBO procedure is outlined in Algorithm 2.1 and its main steps are now sketched. In Step 1, the trust-region problem (3) is solved approximately ("inexactly") by the Conjugate Gradient (CG) method; the description of this step constitutes the main contribution of this section and details shall be given further on.

Algorithm 2.1: ITREBO, k-th iteration

Input: x_k ∈ Ω, Δ_k ≥ Δ_min > 0, η_k with 0 ≤ η_k ≤ η_max < 1, β₁, β₂, δ ∈ (0,1).

1. Compute the inexact trust-region step p_k^tr for (3) by the CG method.
2. Let p̄_k^tr = P_Ω(x_k + p_k^tr) − x_k.
3. Compute the generalized Cauchy step p_k^C given by (4).
4. If

       ρ_k^c(p̄_k^tr) = (m_k(0) − m_k(p̄_k^tr)) / (m_k(0) − m_k(p_k^C)) ≥ β₁,   (5)

   set p_k = p̄_k^tr; else find t ∈ (0,1] such that

       p_k = t p_k^C + (1 − t) p̄_k^tr   (6)

   satisfies ρ_k^c(p_k) ≥ β₁.
5. If

       ρ_k^θ(p_k) = (θ(x_k) − θ(x_k + p_k)) / (m_k(0) − m_k(p_k)) ≥ β₂,   (7)

   set x_{k+1} = x_k + p_k and choose Δ_{k+1} ≥ Δ_min and η_{k+1} ∈ [0, η_max]; else reduce Δ_k, setting Δ_k = δΔ_k, and go to Step 1.

To generate a new feasible iterate, in Step 2 we project x_k + p_k^tr onto the box Ω and define a possibly modified trust-region step p̄_k^tr such that x_k + p̄_k^tr is feasible. Steps 3-5 attempt to find a feasible iterate x_{k+1} = x_k + p_k which provides a sufficient decrease in the value of θ with respect to x_k. In Step 4, we impose a sufficient decrease of m_k in comparison to the generalized Cauchy step p_k^C; this is crucial to make the method globally convergent. We let p_k = p̄_k^tr if p̄_k^tr satisfies (5); otherwise, we look for a convex combination of the steps p_k^C and p̄_k^tr such that condition (5) is satisfied. In Step 5, we measure the quality of the quadratic model m_k as an approximation to θ around x_k. If the sufficient improvement condition (7) is satisfied, the new iterate is x_{k+1} = x_k + p_k; otherwise, p_k is rejected and the trust-region size is reduced. We refer the reader to [19] for further details on Steps 2-5 of the algorithm; a sketch of their logic is given below.
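For illustration only, the acceptance logic of Steps 2, 4 and 5 may be written as the following minimal numpy sketch (names are ours; the grid of trial values for t is an arbitrary choice, and t = 1 always succeeds since it reproduces p_k^C):

    import numpy as np

    def model_decrease(J, Theta, p):
        # predicted reduction m_k(0) - m_k(p), with m_k(p) = 0.5*||J p + Theta||^2
        return 0.5 * (Theta @ Theta - np.linalg.norm(J @ p + Theta) ** 2)

    def choose_step(x, J, Theta, p_tr, p_C, l, u, beta1):
        """Steps 2 and 4 of Algorithm 2.1: project the trust-region step onto
        the box and enforce the Cauchy-decrease condition (5)."""
        p_bar = np.clip(x + p_tr, l, u) - x               # Step 2: projected step
        dec_C = model_decrease(J, Theta, p_C)
        if model_decrease(J, Theta, p_bar) >= beta1 * dec_C:
            return p_bar                                  # condition (5) holds
        for t in (0.25, 0.5, 0.75, 1.0):                  # Step 4: scan convex
            p = t * p_C + (1.0 - t) * p_bar               # combinations (6)
            if model_decrease(J, Theta, p) >= beta1 * dec_C:
                return p
        return p_C                                        # t = 1 satisfies (5)

    def sufficient_decrease(theta_x, theta_xp, J, Theta, p, beta2):
        # Step 5, condition (7): actual versus predicted reduction of theta
        return theta_x - theta_xp >= beta2 * model_decrease(J, Theta, p)

When sufficient_decrease returns False, the radius is shrunk, Δ_k = δΔ_k, and Step 1 is repeated, mirroring Step 5 above.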

Now, we give an insight into Step 1 of Algorithm 2.1, i.e. the computation of an inexact trust-region step p_k^tr. The basic idea of the methods proposed in [16, 17, 19] was to use a dogleg strategy in order to take the, possibly projected, minimum norm step

    p_k^N = −J_k⁺ Θ_k,   (8)

as frequently as possible, taking advantage of its good properties in a neighbourhood of a zero-residual solution. The aim of the ITREBO method is to retain this property and at the same time avoid the high computational cost of forming the minimum norm step by direct factorization methods.

Given x_k ∈ Ω, we consider the trust-region problem (3). Let p^(0) = 0 and {p^(j)} be the sequence of iterates generated by the CG method applied to the normal equations

    J_kᵀ J_k p = −J_kᵀ Θ_k.   (9)

We know that for j ≥ 1

    p^(j) = argmin{ m_k(p) : p ∈ K_j },   (10)

where K_j is the j-th Krylov subspace K_j = span({(J_kᵀJ_k)^i J_kᵀΘ_k}_{i=0}^{j−1}). Let p_k^I be the first CG iterate producing a prescribed reduction of the gradient of m_k, i.e.

    p_k^I = argmin{ m_k(p) : p ∈ K_j̄ },  with  ‖∇m_k(p_k^I)‖ ≤ η_k ‖∇m_k(0)‖,   (11)

where η_k ∈ [0,1) is the so-called forcing term. We note that by ∇m_k(p) = J_kᵀ(J_k p + Θ_k) and (11) it follows

    J_kᵀ J_k p_k^I = −J_kᵀ Θ_k + r_k,  ‖r_k‖ ≤ η_k ‖J_kᵀ Θ_k‖,   (12)

i.e. p_k^I is an inexact Gauss-Newton step for the problem Θ(x) = 0 and the forcing term η_k is used to control the accuracy in the solution of the system (9) [8]. Since we initialize p^(0) to zero, each iterate p^(j) is larger in norm than its predecessor and CG terminates in a finite number of iterations computing the minimum norm step p_k^N given in (8), see [11]. This implies that

    ‖p_k^I‖ ≤ ‖p_k^N‖.   (13)

Therefore, we stop the CG iterations as soon as either the specified accuracy (12) is achieved or the trust-region boundary is reached. In the former case the approximate trust-region solution is the low-dimensional unconstrained minimizer p_k^I of m_k. In the latter case, no further iterates giving a lower value of m_k will lie inside the trust-region. If p^(j) is such that ‖p^(j−1)‖ < Δ_k ≤ ‖p^(j)‖, then we take the Steihaug-Toint point p_k^ST [20, 21], that is

    p_k^ST = p^(j−1) + τ (p^(j) − p^(j−1)),  τ ∈ (0,1] such that ‖p_k^ST‖ = Δ_k.

The Steihaug-Toint point has the favorable property that the optimal decrease of m_k at the exact solution of the trust-region problem (3) is no more than twice that achieved at p_k^ST [23]. This process is described in Algorithm 2.2.

Algorithm 2.2: Computing the inexact trust-region step p_k^tr by CG

Input: x_k, 0 ≤ η_k < 1, Δ_k > 0.

1. Set p^(0) = 0 and j = 1.
2. Compute the j-th CG iterate p^(j) given in (10).
3. If ‖p^(j)‖ ≤ Δ_k and p^(j) satisfies (12), then set p_k^I = p^(j) and return p_k^tr = p_k^I.
4. If ‖p^(j)‖ > Δ_k, find τ ∈ (0,1] such that p_k^ST = p^(j−1) + τ(p^(j) − p^(j−1)) satisfies ‖p_k^ST‖ = Δ_k, and return p_k^tr = p_k^ST.
5. Set j = j + 1 and go to Step 2.

We conclude this section by making some comments on Algorithm 2.1. We set an upper bound η_max < 1 on the forcing terms η_k so that the sequence {η_k} is uniformly less than 1; this is a basic requirement of the inexact Newton framework [8]. Moreover, we note that condition (5) is not necessary in the trust-region CG framework for unconstrained optimization, while it is needed here to handle the simple bounds.
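As a concrete illustration of Algorithm 2.2, the following sketch applies CG to the normal equations (9) from the null initial guess and stops at either the inexactness test (12) or the trust-region boundary. It is a minimal numpy transcription under our own naming, not the reference implementation:

    import numpy as np

    def steihaug_cg(J, Theta, Delta, eta, max_iter=200):
        """CG on J^T J p = -J^T Theta from p = 0, truncated at the
        trust-region boundary (Steihaug-Toint point) or at test (12)."""
        n = J.shape[1]
        p = np.zeros(n)
        g0 = J.T @ Theta                  # gradient of m_k at p = 0
        r = -g0                           # residual of (9) at p = 0
        d = r.copy()
        tol = eta * np.linalg.norm(g0)    # inexactness threshold of (12)
        for _ in range(max_iter):
            Jd = J @ d
            dAd = Jd @ Jd
            if dAd <= 1e-32:              # d lies in the null space of J
                return p
            alpha = (r @ r) / dAd
            if np.linalg.norm(p + alpha * d) >= Delta:
                # boundary crossed: return the point with ||p + sigma d|| = Delta
                a, b, c = d @ d, 2.0 * (p @ d), (p @ p) - Delta ** 2
                sigma = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
                return p + sigma * d      # Steihaug-Toint point, Step 4
            p = p + alpha * d
            r_new = r - alpha * (J.T @ Jd)
            if np.linalg.norm(r_new) <= tol:
                return p                  # p_k^I satisfying (12), Step 3
            d = r_new + ((r_new @ r_new) / (r @ r)) * d
            r = r_new
        return p

Since J_kᵀJ_k is positive semidefinite, no negative-curvature test is needed; the guard on ‖J_k d‖ only covers directions along which the model is flat.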

3 Convergence analysis

In this section we establish global and local convergence properties of the ITREBO method. Throughout the section we let {x_k} be the sequence generated by the ITREBO method and, without loss of generality, we assume that for all k, x_k is not a stationary point of the least-squares problem (BCLS), i.e. D_k∇θ_k ≠ 0. Moreover, we make the following basic assumptions on the function Θ in (BCLS).

Assumption 1 There exists an open, bounded and convex set L containing the whole sequence {x_k} such that L ⊇ {x ∈ ℝⁿ : ∃k s.t. ‖x − x_k‖ ≤ r}, for some r > 0, and the Jacobian matrix J is Lipschitz continuous in L with Lipschitz constant γ_D, i.e. for all x, z ∈ L

    ‖J(x) − J(z)‖ ≤ γ_D ‖x − z‖.   (14)

Assumption 2 ‖Θ‖ is bounded above on L and χ_L = sup_{x ∈ L} ‖J(x)‖.

Trivially, Assumption 1 implies that the sequence {x_k} is bounded. Moreover, the following lemma can be easily proved.

Lemma 3.1 Let Assumptions 1 and 2 hold. Then for all x, z ∈ L

    ‖Θ(x) − Θ(z)‖ ≤ χ_L ‖x − z‖,   (15)
    ‖Θ(x) − Θ(z) − J(z)(x − z)‖ ≤ γ_D ‖x − z‖².   (16)

Following the lines of the proof of [3, Lemma 3.2], it is easy to show that the ITREBO method is well-defined, i.e. the k-th iteration of the method terminates in a finite number of trials. The global convergence properties of the method are guaranteed by imposing condition (5) in Step 4 of Algorithm 2.1; they are summarized in the following theorem, whose proof follows the lines of that of [16, Theorem 4.1].

Theorem 3.1 Let Assumptions 1 and 2 hold and let {x_k} be the sequence generated by the ITREBO method.
i) Every limit point of the sequence {x_k} is a first-order stationary point for the problem (BCLS).
ii) If x* is a limit point of {x_k} and Θ(x*) = 0, then all the limit points of {x_k} are zero-residual solutions to problem (BCLS).
iii) If x* is a limit point of {x_k} such that x* ∈ int(Ω) and J(x*) has full row rank, then x* is such that Θ(x*) = 0.

Under further assumptions, we are able to carry out the local convergence analysis. Let us assume that the sequence {x_k} generated by the ITREBO method has a limit point x* such that Θ(x*) = 0 and that the Jacobian J(x*) is full rank. Then, under suitable choices of the sequence {η_k} of forcing terms, the use of the inexact step p_k^I as an approximation to the minimum norm step p_k^N yields fast asymptotic convergence of the sequence {x_k}.

Let S be the set of zero-residual solutions to problem (BCLS), d(x,S) denote the distance from the point x to the set S, and [x]_S ∈ S be such that ‖x − [x]_S‖ = d(x,S), i.e.

    S = {y ∈ Ω : Θ(y) = 0},  d(x,S) = inf{‖x − y‖ : y ∈ S},  [x]_S = argmin_{y ∈ S} ‖x − y‖.   (17)

We make the following assumption.

Assumption 3 The zero-residual solution set S of problem (BCLS) is nonempty. The sequence {x_k} generated by the ITREBO method has a limit point x* ∈ S and rank(J(x*)) = min{n, m}, i.e. J(x*) is full rank.

It is important to remark that, due to Assumption 3 and Theorem 3.1, we know that lim_{k→∞} ‖Θ_k‖ = 0.

The next lemma provides useful properties that are a direct consequence of Assumptions 1-3. In particular, local properties of the Jacobian J(x) and its pseudoinverse J(x)⁺ for x sufficiently close to x* are stated.

Lemma 3.2 Let Assumptions 1-3 hold. Then there exists a positive constant τ > 0 such that if x ∈ B_τ(x*) then

    x ∈ L  and  [x]_S ∈ L,   (18)
    ‖Θ(x)‖ ≤ χ_L d(x,S).   (19)

Moreover, there exists a constant ν > 0 such that if x ∈ B_τ(x*) then

    rank(J(x)) = min{n, m},   (20)
    ‖J(x)⁺‖ ≤ ν.   (21)

Proof. See [17, Lemma 3.1].

The following lemma shows an important feature of overdetermined (BCLS) problems: if J(x*) is full rank, then ‖Θ‖ is guaranteed to provide a local error bound on some neighbourhood of x* and x* is an isolated zero-residual solution to (BCLS).

Lemma 3.3 Let Assumptions 1-3 hold. If m ≥ n, then there exist positive constants α₀ and ω such that if x ∈ B_ω(x*) then

    (1/α₀) d(x,S) = (1/α₀) ‖x − x*‖ ≤ ‖Θ(x)‖.   (22)

Proof. Let x ∈ B_τ(x*), where τ is the scalar given in Lemma 3.2. Since J(x*) is full column rank, then J(x*)⁺ = (J(x*)ᵀJ(x*))⁻¹J(x*)ᵀ and J(x*)⁺J(x*) = I_n. Also, by (14) we get

    ‖I_n − J(x*)⁺J(x)‖ ≤ ‖J(x*)⁺‖ ‖J(x*) − J(x)‖ ≤ γ_D ‖J(x*)⁺‖ ‖x − x*‖.

Choosing ω < min{τ, 1/(4γ_D‖J(x*)⁺‖)}, we have ‖I_n − J(x*)⁺J(x)‖ ≤ 1/2 for x ∈ B_ω(x*). Then, by using the Mean Value Theorem we obtain

    ‖J(x*)⁺Θ(x)‖ = ‖(x − x*) − ∫₀¹ (I_n − J(x*)⁺J(x* + t(x − x*))) (x − x*) dt‖ ≥ (1 − 1/2) ‖x − x*‖.

Hence

    ‖Θ(x)‖ ≥ ‖J(x*)⁺Θ(x)‖ / ‖J(x*)⁺‖ ≥ (1/(2‖J(x*)⁺‖)) ‖x − x*‖.

This inequality implies that x* is an isolated zero-residual solution to (BCLS) and, reducing ω if necessary, we get d(x,S) = ‖x − x*‖ for x ∈ B_ω(x*). Thus (22) is obtained with α₀ = 2‖J(x*)⁺‖.

3.1 Properties of the inexact trust-region step

In this section we show that eventually there is a simple transition from the global method, with a direction p_k of the form (6), to the projected step p̄_k^I = P_Ω(x_k + p_k^I) − x_k. In fact, eventually the trust-region constraint becomes inactive and p̄_k^I satisfies (5) and (7). As a consequence, the sequence {x_k} converges to x* and, for appropriate choices of {η_k}, the rate of convergence is q-quadratic; see Theorems 3.2 and 3.3 below.

The steps p_k^I and p̄_k^I play a central role in the asymptotic behaviour of the sequence {x_k}. A result on the distance of a point from the zero-residual solution set S follows from the contractivity of the projection map P_Ω. From the definition of the steps p_k^I and p̄_k^I we have

    ‖p̄_k^I‖ ≤ ‖p_k^I‖,   (23)
    ‖x_k + p̄_k^I − z‖ ≤ ‖x_k + p_k^I − z‖,  ∀ z ∈ Ω,   (24)

and from (17) we have d(x_k + p̄_k^I, S) ≤ ‖x_k + p̄_k^I − [x_k + p_k^I]_S‖ ≤ ‖x_k + p_k^I − [x_k + p_k^I]_S‖, i.e.

    d(x_k + p̄_k^I, S) ≤ d(x_k + p_k^I, S).   (25)
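The contractivity invoked in (23)-(25) is simply the non-expansiveness of the projection onto the convex set Ω; a short numerical check (illustrative, with arbitrary data) reads:

    import numpy as np

    rng = np.random.default_rng(0)
    l, u = -np.ones(4), np.ones(4)
    P = lambda y: np.clip(y, l, u)        # the projection P_Omega of Section 1

    y = 3.0 * rng.standard_normal(4)      # plays the role of x_k + p_k^I
    z = P(3.0 * rng.standard_normal(4))   # any feasible point z in Omega
    # non-expansiveness behind (24): ||P(y) - z|| <= ||y - z|| for z in Omega
    assert np.linalg.norm(P(y) - z) <= np.linalg.norm(y - z)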

In the following study we will use the results proved in Lemmas 3.1, 3.2 and 3.3. In particular, for subsequent reference, we let α₁ and α₂ be the constants

    α₁ = ν χ_L,  α₂ = ν γ_D,   (26)

where χ_L and γ_D are given in Assumptions 1 and 2, and we reduce the constant τ of Lemma 3.2 if necessary so that

    α₁ α₂ τ ≤ 1/4.   (27)

Two technical lemmas follow.

Lemma 3.4 Let Assumptions 1-3 hold and let τ be the constant given in Lemma 3.2. If x_k ∈ B_τ(x*) then

    ‖p_k^N‖ ≤ ν ‖Θ_k‖ ≤ α₁ d(x_k, S).   (28)

Proof. From (8), (21), (19) and (26), the bound (28) easily follows.

Lemma 3.5 Let Assumptions 1-3 hold. Let τ₁ ≤ τ, where τ is given in Lemma 3.2, and τ₂ ≤ τ₁/(1 + α₁). Then if x_k ∈ B_{τ₂}(x*) we have

    x_k + p_k^I ∈ B_{τ₁}(x*),  x_k + p_k^I ∈ L,  [x_k + p_k^I]_S ∈ L,   (29)

and

    x_k + p̄_k^I ∈ B_{τ₁}(x*),  x_k + p̄_k^I ∈ L.   (30)

Proof. Note that (13) yields ‖x_k + p_k^I − x*‖ ≤ ‖x_k − x*‖ + ‖p_k^I‖ ≤ ‖x_k − x*‖ + ‖p_k^N‖. Then, by (28) we have ‖x_k + p_k^I − x*‖ ≤ (1 + α₁)τ₂ ≤ τ₁. Hence, by Lemma 3.2 the statements in (29) are proved. Similarly, using the contractivity (23), (30) holds.

The next lemma establishes that if x_k is sufficiently close to x*, then the trust-region is inactive and the inexact step p_k^I is taken as the approximate trust-region step.

Lemma 3.6 Let Assumptions 1-3 hold. Then there exists ς > 0 such that if x_k ∈ B_ς(x*) then the trust-region solution p_k^tr computed by Algorithm 2.2 is the step p_k^I given in (11).

Proof. Let τ > 0 be given in Lemma 3.2 and let x_k ∈ B_τ(x*). Since x* ∈ S and (28) holds, there exists a scalar ς ≤ τ sufficiently small so that if x_k ∈ B_ς(x*) then ‖p_k^N‖ ≤ Δ_min. Namely, the unconstrained minimum norm minimizer of the quadratic model m_k lies in the trust-region. Therefore, by property (13), Algorithm 2.2 returns the step p_k^I.

The analysis of the steps p_k^I and p̄_k^I is the subject of the next results: Lemma 3.7 concerns the case m ≥ n, while Lemma 3.8 refers to the case m ≤ n.

Lemma 3.7 Let m ≥ n and let Assumptions 1-3 hold. Let α₁ and α₂ be the constants defined in (26). Then, there exists a positive constant ρ^o such that if x_k ∈ B_{ρ^o}(x*) then

    ‖x_k + p̄_k^I − x*‖ ≤ ‖x_k + p_k^I − x*‖ ≤ φ_k^o ‖x_k − x*‖,   (31)

where

    φ_k^o = α₀ (γ_D (α₁² + 1) ‖x_k − x*‖ + α₁ χ_L η_k).   (32)

Proof. Let ω and τ be as in Lemma 3.3 and Lemma 3.2 respectively. Fix x_k ∈ B_{ρ^o}(x*), where ρ^o < min{ω, τ}/(1 + α₁). Then by Lemma 3.5, using (29) with τ₁ = min{ω, τ}, we obtain

    x_k + p_k^I ∈ B_τ(x*),  x_k + p_k^I ∈ B_ω(x*),  x_k + p_k^I ∈ L,  [x_k + p_k^I]_S ∈ L.   (33)

By condition (22) we get

    ‖x_k + p_k^I − x*‖ ≤ α₀ ‖Θ(x_k + p_k^I)‖,   (34)

so we need to estimate ‖Θ(x_k + p_k^I)‖ to prove (31). By (16) we get

    ‖Θ(x_k + p_k^I)‖ ≤ ‖Θ(x_k + p_k^I) − Θ_k − J_k p_k^I‖ + ‖Θ_k + J_k p_k^I‖ ≤ γ_D ‖p_k^I‖² + ‖Θ_k + J_k p_k^I‖.   (35)

Consider the SVD decomposition of J_k. By Lemma 3.2, J_k is full rank. Hence, let J_k = U_k Σ_k V_kᵀ = (U_{k,1}, U_{k,2}) Σ_k V_kᵀ, where U_{k,1} ∈ ℝ^{m×n}, U_{k,2} ∈ ℝ^{m×(m−n)}, V_k ∈ ℝ^{n×n}, Σ_k ∈ ℝ^{m×n}, Σ_k = diag(ς₁, …, ςₙ), ς_i > 0 for all i = 1, …, n. Then we have that U_{k,1}ᵀ = U_{k,1}ᵀ(J_kᵀ)⁺J_kᵀ, because (J_kᵀ)⁺J_kᵀ is the orthogonal projection onto the range of J_k [3]. As a consequence we may write

    U_{k,1}ᵀ(Θ_k + J_k p_k^I) = U_{k,1}ᵀ (J_kᵀ)⁺ J_kᵀ (Θ_k + J_k p_k^I).

Using (12), (21) and Assumption 2 we obtain

    ‖U_{k,1}ᵀ(Θ_k + J_k p_k^I)‖ ≤ ‖J_k⁺‖ ‖J_kᵀ(Θ_k + J_k p_k^I)‖ ≤ α₁ η_k ‖Θ_k‖.   (36)

Moreover, we easily verify that U_{k,2}ᵀ J_k = 0 and so

    U_{k,2}ᵀ(Θ_k + J_k p_k^I) = U_{k,2}ᵀ Θ_k,  ‖U_{k,2}ᵀ Θ_k‖ = ‖U_{k,2} U_{k,2}ᵀ Θ_k‖,

where the last equality follows from U_{k,2}ᵀU_{k,2} = I_{m−n}. Moreover, the equality I_m = U_k U_kᵀ yields

    U_{k,2} U_{k,2}ᵀ Θ_k = (I_m − U_{k,1} U_{k,1}ᵀ) Θ_k,

and by J_k p_k^N = −J_k J_k⁺ Θ_k = −U_{k,1} U_{k,1}ᵀ Θ_k we get

    ‖U_{k,2} U_{k,2}ᵀ Θ_k‖ = ‖Θ_k + J_k p_k^N‖.   (37)

Since p_k^N is the global minimizer of ‖Θ_k + J_k p‖, we obtain from (37) that

    ‖U_{k,2}ᵀ(Θ_k + J_k p_k^I)‖ = ‖Θ_k + J_k p_k^N‖ ≤ ‖Θ_k + J_k(x* − x_k)‖.

From (16) we get

    ‖U_{k,2}ᵀ(Θ_k + J_k p_k^I)‖ ≤ γ_D ‖x_k − x*‖².   (38)

Combining ‖Θ_k + J_k p_k^I‖ = ‖U_kᵀ(Θ_k + J_k p_k^I)‖ with (36) and (38) we find that

    ‖Θ_k + J_k p_k^I‖ ≤ ‖U_{k,1}ᵀ(Θ_k + J_k p_k^I)‖ + ‖U_{k,2}ᵀ(Θ_k + J_k p_k^I)‖ ≤ α₁ η_k ‖Θ_k‖ + γ_D ‖x_k − x*‖².   (39)

By (35), (13), (28), (39), (19) and (22) we obtain

    ‖Θ(x_k + p_k^I)‖ ≤ (γ_D (α₁² + 1) ‖x_k − x*‖ + α₁ χ_L η_k) ‖x_k − x*‖.

Hence (34) and (24) give (31). The same result holds if m ≤ n; the proof is essentially that of [17, Lemma 3.2].

Lemma 3.8 Let m ≤ n and let Assumptions 1-3 hold. Let α₁, α₂ and ν be the constants defined in (26) and (21) respectively. Then there exists a positive constant ρ^u such that if x_k ∈ B_{ρ^u}(x*) and η_k ≤ min{η_max, 1/(4α₁)}, then

    d(x_k + p_k^I, S) ≤ 2ν (ν α₂ ‖Θ_k‖ + α₁ η_k) ‖Θ_k‖,   (40)

and

    d(x_k + p̄_k^I, S) ≤ φ_k^u d(x_k, S),   (41)

where

    φ_k^u = 2α₁² (α₂ d(x_k, S) + η_k).   (42)

Proof. Let τ be as in Lemma 3.2 and such that (27) holds. Fix x_k ∈ B_{ρ^u}(x*), where

    ρ^u < τ/(1 + 2α₁).   (43)

Then by Lemma 3.5 we get

    x_k + p_k^I ∈ B_τ(x*),  x_k + p_k^I ∈ L,  [x_k + p_k^I]_S ∈ L.   (44)

To prove the thesis we need intermediate results. Consider the sequence {w_{k+l}}, l ≥ 0, of the form

    w_k = x_k,  w_{k+l+1} = w_{k+l} + s_{k+l}^I,  l ≥ 0,   (45)

where s_{k+l}^I is computed by applying the CG method to the linear system J(w_{k+l})ᵀJ(w_{k+l}) s = −J(w_{k+l})ᵀΘ(w_{k+l}). Specifically, starting from the null initial guess, the step s_{k+l}^I is the first CG iterate such that ‖r_{k+l}‖ ≤ η̄_{k+l} ‖J(w_{k+l})ᵀΘ(w_{k+l})‖, l ≥ 0, where r_{k+l} is given by

    r_{k+l} = J(w_{k+l})ᵀ J(w_{k+l}) s_{k+l}^I + J(w_{k+l})ᵀ Θ(w_{k+l}),  l ≥ 0,   (46)

and {η̄_{k+l}}_{l≥0} is a sequence of positive scalars such that η̄_k = η_k and sup_{j≥0} η̄_{k+j} ≤ 1/(4α₁). Note that for l = 0 we get s_k^I = p_k^I. Letting

    s_{k+l}^N = −J(w_{k+l})⁺ Θ(w_{k+l}),  l ≥ 0,   (47)

we have

    ‖s_{k+l}^I‖ ≤ ‖s_{k+l}^N‖,  l ≥ 0.   (48)

First, we show that {w_{k+l}}_{l≥0} ⊂ B_τ(x*); second, we prove that {w_{k+l}}_{l≥0} has a limit point in S.

We begin proving that {w_{k+l}} ⊂ B_τ(x*) by induction. The thesis trivially holds for w_k = x_k. Then, we suppose that w_{k+j} ∈ B_τ(x*) for j ≤ l and show that w_{k+l+1} ∈ B_τ(x*). By (47), (48), (21), (16) and Lemma 3.2 we get

    ‖s_{k+j}^I‖ ≤ ‖s_{k+j}^N‖ ≤ ν ‖Θ(w_{k+j})‖,   (49)
    ‖Θ(w_{k+j}) − Θ(w_{k+j−1}) − J(w_{k+j−1}) s_{k+j−1}^I‖ ≤ γ_D ‖s_{k+j−1}^I‖²,   (50)
    ‖r_{k+j}‖ ≤ χ_L η̄_{k+j} ‖Θ(w_{k+j})‖,   (51)

for j = 1, …, l. Moreover, from (46), (47), (21) and (J(w)ᵀ)⁺J(w)ᵀ = J(w)J(w)⁺ = I_m it follows

    ‖J(w_{k+j−1})(s_{k+j−1}^N − s_{k+j−1}^I)‖ ≤ ν ‖r_{k+j−1}‖,  1 ≤ j ≤ l.

Hence, from (51) and (19),

    ‖Θ(w_{k+j})‖ = ‖Θ(w_{k+j}) − Θ(w_{k+j−1}) − J(w_{k+j−1}) s_{k+j−1}^N ± J(w_{k+j−1}) s_{k+j−1}^I‖
        ≤ γ_D ‖s_{k+j−1}^I‖² + ν ‖r_{k+j−1}‖
        ≤ (ν α₂ ‖Θ(w_{k+j−1})‖ + α₁ η̄_{k+j−1}) ‖Θ(w_{k+j−1})‖   (52)
        ≤ (α₁ α₂ d(w_{k+j−1}, S) + α₁ η̄_{k+j−1}) ‖Θ(w_{k+j−1})‖
        ≤ (α₁ α₂ τ + α₁ η̄_{k+j−1}) ‖Θ(w_{k+j−1})‖ ≤ (1/2) ‖Θ(w_{k+j−1})‖,

for j = 1, …, l, since α₁α₂τ ≤ 1/4 and sup_{j≥0} η̄_{k+j} ≤ 1/(4α₁). Then it follows that

    ‖Θ(w_{k+j})‖ ≤ (1/2)^j ‖Θ_k‖,  1 ≤ j ≤ l,

and by (49)

    ‖s_{k+j}^I‖ ≤ ν (1/2)^j ‖Θ_k‖,  1 ≤ j ≤ l.   (53)

It then follows from (53) that

    ‖w_{k+l+1} − x*‖ ≤ Σ_{j=0}^{l} ‖w_{k+j+1} − w_{k+j}‖ + ‖x_k − x*‖ ≤ Σ_{j=0}^{l} ‖s_{k+j}^I‖ + ρ^u ≤ ν Σ_{j=0}^{∞} (1/2)^j ‖Θ_k‖ + ρ^u = 2ν ‖Θ_k‖ + ρ^u,

and (19) and (43) yield

    ‖w_{k+l+1} − x*‖ ≤ 2ν ‖Θ_k‖ + ρ^u ≤ 2α₁ d(x_k, S) + ρ^u ≤ (2α₁ + 1) ρ^u ≤ τ.

As a consequence, {w_{k+l}} ⊂ B_τ(x*) and w_{k+l} satisfies Lemma 3.2 for all l ≥ 0. Further, the conditions (49), (50) and (51) hold for all j ≥ 1.

Second, we prove that {w_{k+l}} is a Cauchy sequence with limit point x̄ ∈ S. In fact, letting p > q ≥ 0 and proceeding as above, we obtain

    ‖w_{k+p} − w_{k+q}‖ ≤ Σ_{j=q}^{p−1} ‖s_{k+j}^I‖ ≤ Σ_{j=q}^{p−1} ‖s_{k+j}^N‖ ≤ 2α₁ ρ^u.

Thus, {w_{k+l}} is a Cauchy sequence and its limit is denoted by x̄. To show that x̄ ∈ S, note that (46), (16), (15), (45), (51) and the property (J(w_{k+l})ᵀ)⁺J(w_{k+l})ᵀ = I_m yield, for l ≥ 0,

    ‖Θ(w_{k+l+1})‖ = ‖(J(w_{k+l})ᵀ)⁺ J(w_{k+l})ᵀ Θ(w_{k+l+1})‖
        ≤ ‖J(w_{k+l})⁺‖ ‖J(w_{k+l})ᵀ (Θ(w_{k+l+1}) − J(w_{k+l}) s_{k+l}^I − Θ(w_{k+l})) + r_{k+l}‖
        ≤ ν (χ_L γ_D ‖s_{k+l}^I‖² + ‖r_{k+l}‖)
        ≤ ν (χ_L γ_D ‖s_{k+l}^I‖² + χ_L η̄_{k+l} ‖Θ(w_{k+l})‖)
        ≤ α₁ γ_D ‖w_{k+l+1} − w_{k+l}‖² + α₁ η̄_{k+l} (‖Θ(w_{k+l+1}) − Θ(w_{k+l})‖ + ‖Θ(w_{k+l+1})‖)
        ≤ α₁ γ_D ‖w_{k+l+1} − w_{k+l}‖² + (1/4)(χ_L ‖w_{k+l+1} − w_{k+l}‖ + ‖Θ(w_{k+l+1})‖).

Hence

    ‖Θ(w_{k+l+1})‖ ≤ (4/3) (α₁ γ_D ‖w_{k+l+1} − w_{k+l}‖ + χ_L/4) ‖w_{k+l+1} − w_{k+l}‖,

for l ≥ 0. Since lim_{l→∞} ‖w_{k+l+1} − w_{k+l}‖ = 0, it follows that Θ(x̄) = lim_{l→∞} Θ(w_{k+l+1}) = 0.

Now we can prove the thesis of the lemma. Since w_{k+l} = w_{k+1} + Σ_{j=1}^{l−1} s_{k+j}^I and lim_{l→∞} w_{k+l} = x̄, we have

    ‖x_k + p_k^I − x̄‖ = ‖w_{k+1} − x̄‖ ≤ Σ_{j=1}^{∞} ‖s_{k+j}^I‖.

From (49) and (53) we get

    ‖x_k + p_k^I − x̄‖ ≤ Σ_{j=1}^{∞} ‖s_{k+j}^I‖ ≤ ν Σ_{j=1}^{∞} (1/2)^{j−1} ‖Θ(w_{k+1})‖ = 2ν ‖Θ(w_{k+1})‖.

Then, using equation (52) with j = 1 we obtain

    ‖x_k + p_k^I − x̄‖ ≤ 2ν (ν α₂ ‖Θ_k‖ + α₁ η_k) ‖Θ_k‖.   (54)

Since d(x_k + p_k^I, S) ≤ ‖x_k + p_k^I − x̄‖, (40) holds. Finally, applying (25), (40) and (19) we easily obtain condition (41).

The next lemma gives useful asymptotic bounds on quantities that are used in the proofs of Lemma 3.10 and Lemma 3.11.

Lemma 3.9 Let Assumptions 1-3 hold. Let α₁, α₂ and ν be the constants defined in (26) and (21) respectively. Then there exists a constant τ̂ > 0 such that if x_k ∈ B_τ̂(x*) then

    ‖J_k p̄_k^I + Θ_k‖ ≤ χ_L d(x_k + p_k^I, S) + ν α₂ ‖Θ_k‖²,   (55)
    ‖Θ(x_k + p̄_k^I)‖² − ‖J_k p̄_k^I + Θ_k‖² ≤ (ν² α₂² ‖Θ_k‖² + 2ν α₂ ‖J_k p̄_k^I + Θ_k‖) ‖Θ_k‖².   (56)

Proof. Let τ be as in Lemma 3.2. Fix x_k ∈ B_τ̂(x*), where τ̂ < τ/(1 + α₁). Then by Lemma 3.5, (29) and (30) hold with τ₁ = τ. Consider the equality

    J_k p̄_k^I + Θ_k = Θ(x_k + p̄_k^I) − Θ([x_k + p_k^I]_S) + J_k p̄_k^I − (Θ(x_k + p̄_k^I) − Θ_k).

Then by (15), (16), (24), (23), (13) and (28) we obtain

    ‖J_k p̄_k^I + Θ_k‖ ≤ χ_L ‖x_k + p̄_k^I − [x_k + p_k^I]_S‖ + γ_D ‖p̄_k^I‖²
        ≤ χ_L ‖x_k + p_k^I − [x_k + p_k^I]_S‖ + γ_D ‖p_k^N‖²
        ≤ χ_L d(x_k + p_k^I, S) + ν α₂ ‖Θ_k‖²,

and (55) is proved. To prove (56) we use the Mean Value Theorem to get

    Θ(x_k + p̄_k^I) = Θ_k + ∫₀¹ J(x_k + t p̄_k^I) p̄_k^I dt + J_k p̄_k^I − J_k p̄_k^I = J_k p̄_k^I + Θ_k + ∫₀¹ (J(x_k + t p̄_k^I) − J_k) p̄_k^I dt.

Hence,

    ‖Θ(x_k + p̄_k^I)‖² = ‖J_k p̄_k^I + Θ_k‖² + ‖∫₀¹ (J(x_k + t p̄_k^I) − J_k) p̄_k^I dt‖² + 2 (∫₀¹ (J(x_k + t p̄_k^I) − J_k) p̄_k^I dt)ᵀ (J_k p̄_k^I + Θ_k),

and consequently, by (28),

    ‖Θ(x_k + p̄_k^I)‖² − ‖J_k p̄_k^I + Θ_k‖² ≤ γ_D² ‖p̄_k^I‖⁴ + 2γ_D ‖J_k p̄_k^I + Θ_k‖ ‖p̄_k^I‖² ≤ ν² α₂² ‖Θ_k‖⁴ + 2ν α₂ ‖J_k p̄_k^I + Θ_k‖ ‖Θ_k‖².

3.2 Behaviour of the sequence {x_k} and convergence rate

In this section we first show that, under proper assumptions on the forcing sequence {η_k}, the step p̄_k^I satisfies conditions (5) and (7) whenever x_k is sufficiently close to x* and k is sufficiently large. Then, we prove that the whole sequence {x_k} generated by the ITREBO method converges to x* ∈ S and that the convergence rate is q-quadratic. Lemma 3.10 and Theorem 3.2 concern the overdetermined case; Lemma 3.11 and Theorem 3.3 refer to the underdetermined case.

Lemma 3.10 Let m ≥ n and let Assumptions 1-3 hold. Assume that lim_{k→∞} η_k = 0. Then p̄_k^I satisfies conditions (5) and (7) whenever x_k is sufficiently close to x* and k is sufficiently large.

Proof. Let ψ^o ≤ min{ρ^o, τ̂}, where ρ^o and τ̂ are given in Lemma 3.7 and Lemma 3.9 respectively, and fix x_k ∈ B_{ψ^o}(x*). Note that m_k(0) − m_k(p_k^C) ≤ m_k(0) and

    ρ_k^c(p̄_k^I) ≥ 1 − ‖J_k p̄_k^I + Θ_k‖² / ‖Θ_k‖².   (57)

Since d(x_k + p_k^I, S) ≤ ‖x_k + p_k^I − x*‖, (55), (31), (19) and (22) yield

    ‖J_k p̄_k^I + Θ_k‖ ≤ χ_L φ_k^o ‖x_k − x*‖ + ν α₂ ‖Θ_k‖² ≤ σ_k^o ‖x_k − x*‖,   (58)

where φ_k^o is given in (32) and σ_k^o is defined as

    σ_k^o = χ_L (φ_k^o + α₁ α₂ ‖x_k − x*‖).   (59)

Thus, (57), (58) and (22) give

    ρ_k^c(p̄_k^I) ≥ 1 − (α₀ σ_k^o)².

Since lim_{k→∞} η_k = 0, if x_k is sufficiently close to x* and k is sufficiently large we obtain that p̄_k^I satisfies condition (5).

Let p̄_k^I satisfy (5). To prove that p̄_k^I satisfies (7), observe that m_k(0) = ‖Θ_k‖²/2, m_k(p̄_k^I) < m_k(0) and

    ρ_k^θ(p̄_k^I) = 1 − (‖Θ(x_k + p̄_k^I)‖² − ‖J_k p̄_k^I + Θ_k‖²) / (‖Θ_k‖² − ‖J_k p̄_k^I + Θ_k‖²).   (60)

From (22) and (58) we have

    ‖Θ_k‖² − ‖J_k p̄_k^I + Θ_k‖² ≥ (1/α₀² − (σ_k^o)²) ‖x_k − x*‖².   (61)

Using (56), (61), (58), (19) and (22) we have

    ρ_k^θ(p̄_k^I) ≥ 1 − (ν² α₂² ‖Θ_k‖² + 2ν α₂ ‖J_k p̄_k^I + Θ_k‖) ‖Θ_k‖² / ((1/α₀² − (σ_k^o)²) ‖x_k − x*‖²)
        ≥ 1 − χ_L² (α₁² α₂² ‖x_k − x*‖ + 2ν α₂ σ_k^o) ‖x_k − x*‖ / (1/α₀² − (σ_k^o)²).   (62)

Then, if x_k is sufficiently close to the solution x* and k is sufficiently large, the second term in (62) can be made smaller than (1 − β₂), so that p̄_k^I satisfies condition (7).

The next theorem provides the main result on the convergence rate of the sequence generated by the ITREBO method in the case m ≥ n.

Theorem 3.2 Let m ≥ n and let Assumptions 1-3 hold. Then the sequence {x_k} generated by the ITREBO method converges to x* q-superlinearly if lim_{k→∞} η_k = 0. Moreover, the convergence rate is q-quadratic if η_k = O(‖Θ_k‖).

Proof. Let {x_{k_j}} be a subsequence of {x_k} converging to x*. By Lemmas 3.6 and 3.10, if x_{k_j} is sufficiently close to x* and k_j is sufficiently large, then the step taken is p̄_{k_j}^I and, by (23), (13) and (28), lim_{j→∞} ‖p̄_{k_j}^I‖ = 0. Then, since x* is an isolated limit point of {x_k}, using [18, Lemma 4.10] we conclude that lim_{k→∞} x_k = x*. To establish the convergence rate of {x_k}, let x_k be sufficiently near to x* and k sufficiently large so that x_{k+1} = x_k + p̄_k^I and Lemma 3.7 holds. Then from (31)

    ‖x_{k+1} − x*‖ / ‖x_k − x*‖ ≤ φ_k^o,

where φ_k^o is defined in (32). Since lim_{k→∞} φ_k^o = 0, {x_k} converges to x* q-superlinearly. Moreover, if η_k = O(‖Θ_k‖), then η_k = O(‖x_k − x*‖) and it follows that ‖x_{k+1} − x*‖ = O(‖x_k − x*‖²), i.e. the q-quadratic rate is guaranteed.
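In practice, the hypotheses of Theorem 3.2 suggest tying the forcing term to the residual. The following one-liner is an illustrative schedule only (the constants c and eta_max are placeholders, not values prescribed by the method):

    def forcing_term(res_norm, eta_max=0.5, c=1.0):
        """Illustrative choice eta_k = min(eta_max, c*||Theta_k||): it tends
        to zero as the residual vanishes and is O(||Theta_k||), so it meets
        both hypotheses of Theorem 3.2 (and of Theorem 3.3 below)."""
        return min(eta_max, c * res_norm)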

The main local convergence properties of the ITREBO method for the underdetermined case are stated below. Firstly, using the findings in Lemma 3.9 and proceeding as in Lemma 3.10, we obtain the following lemma.

Lemma 3.11 Let m ≤ n and let Assumptions 1-3 hold. Assume that lim_{k→∞} η_k = 0. Then p̄_k^I satisfies conditions (5) and (7) whenever x_k is sufficiently close to x* and k is sufficiently large.

Proof. Let ψ^u ≤ min{ρ^u, τ̂}, where ρ^u and τ̂ are given in Lemma 3.8 and Lemma 3.9 respectively, and let k be sufficiently large that η_k ≤ min{η_max, 1/(4α₁)}. Fix x_k ∈ B_{ψ^u}(x*). Note that, since x* is a limit point of {x_k} and lim_{k→∞} η_k = 0, there exist an iterate x_k and a forcing term η_k satisfying the above conditions.

First we prove that p̄_k^I satisfies (5). By (55), (40) and (19) we obtain

    ‖J_k p̄_k^I + Θ_k‖ ≤ σ_k^u ‖Θ_k‖,   (63)

where

    σ_k^u = α₁ α₂ (2α₁ + 1) d(x_k, S) + 2α₁² η_k.   (64)

Thus, (57), (63) and (19) give

    ρ_k^c(p̄_k^I) ≥ 1 − (σ_k^u)².

Then p̄_k^I satisfies condition (5) if x_k is sufficiently close to x* and k is sufficiently large, so that 1 − (σ_k^u)² > β₁.

Second we prove that p̄_k^I satisfies (7). Let p̄_k^I satisfy (5). From (63) and (19) we have

    ‖Θ_k‖² − ‖J_k p̄_k^I + Θ_k‖² ≥ (1 − (σ_k^u)²) ‖Θ_k‖².   (65)

Using (60), (56) and (65) we obtain

    ρ_k^θ(p̄_k^I) ≥ 1 − (ν² α₂² ‖Θ_k‖² + 2ν α₂ ‖J_k p̄_k^I + Θ_k‖) / (1 − (σ_k^u)²)
        ≥ 1 − (ν² α₂² ‖Θ_k‖² + 2ν α₂ σ_k^u ‖Θ_k‖) / (1 − (σ_k^u)²)
        ≥ 1 − (α₁² α₂² d(x_k, S)² + 2α₁ α₂ σ_k^u d(x_k, S)) / (1 − (σ_k^u)²).   (66)

Then, if x_k is sufficiently close to the solution x* and k is sufficiently large, the second term in (66) can be made smaller than (1 − β₂); hence p̄_k^I satisfies condition (7).

Secondly, the behaviour of the sequence {x_k} generated by the ITREBO method is given in the next theorem, whose proof parallels that of [17, Theorem 3.1].

Theorem 3.3 Let m ≤ n and let Assumptions 1-3 hold. Then the sequence {x_k} generated by the ITREBO method converges to x* q-superlinearly if lim_{k→∞} η_k = 0. Moreover, the convergence rate is q-quadratic if η_k = O(‖Θ_k‖).

Proof. Let k̂ be sufficiently large so that φ_k^u in (42) satisfies φ_k^u ≤ 1/2 and η_k ≤ min{η_max, 1/(4α₁)} for k ≥ k̂. Let ψ ≤ min{ς, ρ^u}, where ς and ρ^u are given in Lemma 3.6 and Lemma 3.8 respectively, and let k̂ be such that if x_k ∈ B_ψ(x*) and k ≥ k̂ then p̄_k^I satisfies (5) and (7). Finally, let ζ < ψ/(1 + 2α₁). Fix k ≥ k̂ and x_k ∈ B_ζ(x*).

We begin showing that if x_k ∈ B_ζ(x*) then x_l ∈ B_ψ(x*) for l > k. We proceed by induction. First, we show that x_{k+1} ∈ B_ψ(x*). In fact, by (23) we have ‖x_{k+1} − x*‖ = ‖x_k + p̄_k^I − x*‖ ≤ ζ + ‖p_k^I‖. Thus, by (13), (28) and the definition of ζ, we get ‖x_{k+1} − x*‖ ≤ (1 + α₁)ζ ≤ ψ. Second, we assume x_{k+1}, …, x_{k+m−1} ∈ B_ψ(x*) and show that x_{k+m} ∈ B_ψ(x*). From (41) it follows

    d(x_{k+l}, S) ≤ φ_{k+l−1}^u d(x_{k+l−1}, S) ≤ (1/2)^l d(x_k, S) ≤ (1/2)^l ζ,

for l = 1, …, m. Thus,

    ‖x_{k+m} − x*‖ ≤ Σ_{l=0}^{m−1} ‖p̄_{k+l}^I‖ + ζ ≤ α₁ Σ_{l=0}^{m−1} d(x_{k+l}, S) + ζ,

where the last inequality follows from (13) and (28), and

    ‖x_{k+m} − x*‖ ≤ (α₁ Σ_{l=0}^{m−1} (1/2)^l + 1) ζ ≤ (α₁ Σ_{l=0}^{∞} (1/2)^l + 1) ζ = (2α₁ + 1) ζ ≤ ψ,

where the last inequality is due to the choice of ζ. Note that we have x_{k+l} = x_{k+l−1} + p̄_{k+l−1}^I for l > 0. Moreover, letting p > q ≥ k̂, we have

    ‖x_p − x_q‖ ≤ Σ_{l=q}^{p−1} ‖p̄_l^I‖ ≤ α₁ Σ_{l=0}^{∞} (1/2)^l ζ = 2α₁ ζ.

This means that {x_k} is a Cauchy sequence and hence it converges. Since x* is a limit point, we conclude lim_{k→∞} x_k = x*.

To establish the convergence rate of {x_k}, let k be sufficiently large so that x_{k+j+1} ∈ B_ψ(x*) for j ≥ 0. By (28) and (41) we obtain

    ‖p_{k+j+1}^I‖ ≤ ‖p_{k+j+1}^N‖ ≤ α₁ d(x_{k+j+1}, S) ≤ α₁ φ_{k+j}^u d(x_{k+j}, S) ≤ (1/2) α₁ d(x_{k+j}, S).

Then, we proceed as above and, using ‖x_{k+1} − x*‖ ≤ Σ_{j≥0} ‖p̄_{k+j+1}^I‖ and (41), we get

    ‖x_{k+1} − x*‖ ≤ α₁ Σ_{j=0}^{∞} d(x_{k+j+1}, S) ≤ α₁ Σ_{j=0}^{∞} (1/2)^j d(x_{k+1}, S) ≤ 2α₁ φ_k^u d(x_k, S).

Hence

    ‖x_{k+1} − x*‖ / ‖x_k − x*‖ ≤ 2α₁ φ_k^u d(x_k, S) / ‖x_k − x*‖ ≤ 2α₁ φ_k^u.

Since lim_{k→∞} φ_k^u = 0, {x_k} converges to x* q-superlinearly. If moreover η_k = O(‖Θ_k‖), then η_k = O(d(x_k, S)) by (19) and hence, for k sufficiently large, there exists φ > 0 such that φ_k^u ≤ φ d(x_k, S), i.e.

    ‖x_{k+1} − x*‖ ≤ 2α₁ φ_k^u d(x_k, S) ≤ 2α₁ φ d(x_k, S)² ≤ 2α₁ φ ‖x_k − x*‖²,

and the q-quadratic rate is guaranteed.

Acknowledgements

The author is grateful to Benedetta Morini for several helpful discussions and many suggestions on the manuscript.

References

[1] N.M. Alsaifi, P. Englezos, Prediction of multiphase equilibrium using the PC-SAFT equation of state and simultaneous testing of phase stability, Fluid Phase Equilibr. (2010), doi: …/j.fluid… .
[2] S. Bellavia, M. Macconi, B. Morini, An affine scaling trust-region approach to bound-constrained nonlinear systems, Appl. Numer. Math., 44 (2003).
[3] S. Bellavia, B. Morini, An interior global method for nonlinear systems with simple bounds, Optim. Methods Softw., 20 (2005).
[4] S. Bellavia, B. Morini, Subspace trust-region methods for large bound-constrained nonlinear equations, SIAM J. Numer. Anal., 44 (2006).
[5] L. Bencini, R. Fantacci, L. Maccari, Analytical model for performance analysis of IEEE 802.11 DCF mechanism in multi-radio wireless networks, in Proc. of ICC 2010, pp. 1-5, Cape Town, South Africa, May 2010.
[6] M.A. Branch, T.F. Coleman, Y. Li, A subspace, interior and conjugate gradient method for large-scale bound-constrained minimization problems, SIAM J. Sci. Comput., 21 (1999).
[7] T.F. Coleman, Y. Li, An interior trust-region approach for nonlinear minimization subject to bounds, SIAM J. Optim., 6 (1996).
[8] R.S. Dembo, S.C. Eisenstat, T. Steihaug, Inexact Newton methods, SIAM J. Numer. Anal., 19 (1982).
[9] J.B. Francisco, N. Krejić, J.M. Martínez, An interior-point method for solving box-constrained underdetermined nonlinear systems, J. Comput. Appl. Math., 177 (2005).
[10] M.R. Hestenes, E. Stiefel, Methods of Conjugate Gradients for solving linear systems, J. Res. Natl. Bur. Stand., 49 (1952).
[11] M.R. Hestenes, Pseudoinverses and conjugate gradients, Communications of the ACM, 18 (1975).
[12] R.A. Horn, C.R. Johnson, Matrix Analysis, Cambridge University Press, 1985.
[13] M. Kaiser, K. Klamroth, A. Thekale and Ph.L. Toint, Solving structured nonlinear least-squares and nonlinear feasibility problems with expensive functions, Report NAXYS, Dept of Mathematics, FUNDP, Namur (B), 2010.
[14] C. Kanzow, A. Klug, An interior-point affine-scaling trust-region method for semismooth equations with box constraints, Comput. Optim. Appl., 37 (2007).

[15] C. Kanzow, S. Petra, Projected filter trust region methods for a semismooth least-squares formulation of mixed complementarity problems, Optim. Methods Softw., 22 (2007).
[16] M. Macconi, B. Morini, M. Porcelli, Trust-region quadratic methods for nonlinear systems of mixed equalities and inequalities, Appl. Numer. Math., 59 (2009).
[17] M. Macconi, B. Morini, M. Porcelli, A Gauss-Newton method for solving bound-constrained underdetermined nonlinear systems, Optim. Methods Softw., 24 (2009).
[18] J.J. Moré, D.C. Sorensen, Computing a trust region step, SIAM J. Sci. Stat. Comput., 4 (1983).
[19] B. Morini, M. Porcelli, TRESNEI, a Matlab trust-region solver for systems of nonlinear equalities and inequalities, Comput. Optim. Appl. (2010), doi: …/s… .
[20] T. Steihaug, The conjugate gradient method and trust regions in large scale optimization, SIAM J. Numer. Anal., 20 (1983).
[21] Ph.L. Toint, Towards an efficient sparsity exploiting Newton method for minimization, in Sparse Matrices and Their Uses (I.S. Duff, ed.), Academic Press, London, 1981.
[22] D. Zhu, Affine scaling interior Levenberg-Marquardt method for bound-constrained semismooth equations under local error bound conditions, J. Comput. Appl. Math., 219 (2008).
[23] Y. Yuan, On the truncated conjugate-gradient method, Math. Program., Ser. A, 87 (2000), pp. 561-573.


Inexact Newton Methods and Nonlinear Constrained Optimization Inexact Newton Methods and Nonlinear Constrained Optimization Frank E. Curtis EPSRC Symposium Capstone Conference Warwick Mathematics Institute July 2, 2009 Outline PDE-Constrained Optimization Newton

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 6 Optimization Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction permitted

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 6 Optimization Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction permitted

More information

Numerical Methods for Large-Scale Nonlinear Systems

Numerical Methods for Large-Scale Nonlinear Systems Numerical Methods for Large-Scale Nonlinear Systems Handouts by Ronald H.W. Hoppe following the monograph P. Deuflhard Newton Methods for Nonlinear Problems Springer, Berlin-Heidelberg-New York, 2004 Num.

More information

Convergence Analysis of Inexact Infeasible Interior Point Method. for Linear Optimization

Convergence Analysis of Inexact Infeasible Interior Point Method. for Linear Optimization Convergence Analysis of Inexact Infeasible Interior Point Method for Linear Optimization Ghussoun Al-Jeiroudi Jacek Gondzio School of Mathematics The University of Edinburgh Mayfield Road, Edinburgh EH9

More information

Dictionary Learning for L1-Exact Sparse Coding

Dictionary Learning for L1-Exact Sparse Coding Dictionary Learning for L1-Exact Sparse Coding Mar D. Plumbley Department of Electronic Engineering, Queen Mary University of London, Mile End Road, London E1 4NS, United Kingdom. Email: mar.plumbley@elec.qmul.ac.u

More information

Matrix Derivatives and Descent Optimization Methods

Matrix Derivatives and Descent Optimization Methods Matrix Derivatives and Descent Optimization Methods 1 Qiang Ning Department of Electrical and Computer Engineering Beckman Institute for Advanced Science and Techonology University of Illinois at Urbana-Champaign

More information

Global convergence of trust-region algorithms for constrained minimization without derivatives

Global convergence of trust-region algorithms for constrained minimization without derivatives Global convergence of trust-region algorithms for constrained minimization without derivatives P.D. Conejo E.W. Karas A.A. Ribeiro L.G. Pedroso M. Sachine September 27, 2012 Abstract In this work we propose

More information

On the convergence properties of the modified Polak Ribiére Polyak method with the standard Armijo line search

On the convergence properties of the modified Polak Ribiére Polyak method with the standard Armijo line search ANZIAM J. 55 (E) pp.e79 E89, 2014 E79 On the convergence properties of the modified Polak Ribiére Polyak method with the standard Armijo line search Lijun Li 1 Weijun Zhou 2 (Received 21 May 2013; revised

More information

Higher-Order Methods

Higher-Order Methods Higher-Order Methods Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. PCMI, July 2016 Stephen Wright (UW-Madison) Higher-Order Methods PCMI, July 2016 1 / 25 Smooth

More information

GLOBAL CONVERGENCE OF CONJUGATE GRADIENT METHODS WITHOUT LINE SEARCH

GLOBAL CONVERGENCE OF CONJUGATE GRADIENT METHODS WITHOUT LINE SEARCH GLOBAL CONVERGENCE OF CONJUGATE GRADIENT METHODS WITHOUT LINE SEARCH Jie Sun 1 Department of Decision Sciences National University of Singapore, Republic of Singapore Jiapu Zhang 2 Department of Mathematics

More information

Introduction to Nonlinear Optimization Paul J. Atzberger

Introduction to Nonlinear Optimization Paul J. Atzberger Introduction to Nonlinear Optimization Paul J. Atzberger Comments should be sent to: atzberg@math.ucsb.edu Introduction We shall discuss in these notes a brief introduction to nonlinear optimization concepts,

More information

A trust region method based on interior point techniques for nonlinear programming

A trust region method based on interior point techniques for nonlinear programming Math. Program., Ser. A 89: 149 185 2000 Digital Object Identifier DOI 10.1007/s101070000189 Richard H. Byrd Jean Charles Gilbert Jorge Nocedal A trust region method based on interior point techniques for

More information

A Trust-Funnel Algorithm for Nonlinear Programming

A Trust-Funnel Algorithm for Nonlinear Programming for Nonlinear Programming Daniel P. Robinson Johns Hopins University Department of Applied Mathematics and Statistics Collaborators: Fran E. Curtis (Lehigh University) Nic I. M. Gould (Rutherford Appleton

More information

OPER 627: Nonlinear Optimization Lecture 14: Mid-term Review

OPER 627: Nonlinear Optimization Lecture 14: Mid-term Review OPER 627: Nonlinear Optimization Lecture 14: Mid-term Review Department of Statistical Sciences and Operations Research Virginia Commonwealth University Oct 16, 2013 (Lecture 14) Nonlinear Optimization

More information

Affine covariant Semi-smooth Newton in function space

Affine covariant Semi-smooth Newton in function space Affine covariant Semi-smooth Newton in function space Anton Schiela March 14, 2018 These are lecture notes of my talks given for the Winter School Modern Methods in Nonsmooth Optimization that was held

More information

Infeasibility Detection and an Inexact Active-Set Method for Large-Scale Nonlinear Optimization

Infeasibility Detection and an Inexact Active-Set Method for Large-Scale Nonlinear Optimization Infeasibility Detection and an Inexact Active-Set Method for Large-Scale Nonlinear Optimization Frank E. Curtis, Lehigh University involving joint work with James V. Burke, University of Washington Daniel

More information

Nonmonotonic back-tracking trust region interior point algorithm for linear constrained optimization

Nonmonotonic back-tracking trust region interior point algorithm for linear constrained optimization Journal of Computational and Applied Mathematics 155 (2003) 285 305 www.elsevier.com/locate/cam Nonmonotonic bac-tracing trust region interior point algorithm for linear constrained optimization Detong

More information

Iteration-complexity of first-order penalty methods for convex programming

Iteration-complexity of first-order penalty methods for convex programming Iteration-complexity of first-order penalty methods for convex programming Guanghui Lan Renato D.C. Monteiro July 24, 2008 Abstract This paper considers a special but broad class of convex programing CP)

More information

Accelerated Gradient Methods for Constrained Image Deblurring

Accelerated Gradient Methods for Constrained Image Deblurring Accelerated Gradient Methods for Constrained Image Deblurring S Bonettini 1, R Zanella 2, L Zanni 2, M Bertero 3 1 Dipartimento di Matematica, Università di Ferrara, Via Saragat 1, Building B, I-44100

More information

RESEARCH ARTICLE. A strategy of finding an initial active set for inequality constrained quadratic programming problems

RESEARCH ARTICLE. A strategy of finding an initial active set for inequality constrained quadratic programming problems Optimization Methods and Software Vol. 00, No. 00, July 200, 8 RESEARCH ARTICLE A strategy of finding an initial active set for inequality constrained quadratic programming problems Jungho Lee Computer

More information

Bulletin of the. Iranian Mathematical Society

Bulletin of the. Iranian Mathematical Society ISSN: 1017-060X (Print) ISSN: 1735-8515 (Online) Bulletin of the Iranian Mathematical Society Vol. 43 (2017), No. 7, pp. 2437 2448. Title: Extensions of the Hestenes Stiefel and Pola Ribière Polya conjugate

More information

The effect of calmness on the solution set of systems of nonlinear equations

The effect of calmness on the solution set of systems of nonlinear equations Mathematical Programming manuscript No. (will be inserted by the editor) The effect of calmness on the solution set of systems of nonlinear equations Roger Behling Alfredo Iusem Received: date / Accepted:

More information

Research Article A Two-Step Matrix-Free Secant Method for Solving Large-Scale Systems of Nonlinear Equations

Research Article A Two-Step Matrix-Free Secant Method for Solving Large-Scale Systems of Nonlinear Equations Applied Mathematics Volume 2012, Article ID 348654, 9 pages doi:10.1155/2012/348654 Research Article A Two-Step Matrix-Free Secant Method for Solving Large-Scale Systems of Nonlinear Equations M. Y. Waziri,

More information

A line-search algorithm inspired by the adaptive cubic regularization framework, with a worst-case complexity O(ɛ 3/2 ).

A line-search algorithm inspired by the adaptive cubic regularization framework, with a worst-case complexity O(ɛ 3/2 ). A line-search algorithm inspired by the adaptive cubic regularization framewor, with a worst-case complexity Oɛ 3/. E. Bergou Y. Diouane S. Gratton June 16, 017 Abstract Adaptive regularized framewor using

More information

A GLOBALLY CONVERGENT STABILIZED SQP METHOD: SUPERLINEAR CONVERGENCE

A GLOBALLY CONVERGENT STABILIZED SQP METHOD: SUPERLINEAR CONVERGENCE A GLOBALLY CONVERGENT STABILIZED SQP METHOD: SUPERLINEAR CONVERGENCE Philip E. Gill Vyacheslav Kungurtsev Daniel P. Robinson UCSD Center for Computational Mathematics Technical Report CCoM-14-1 June 30,

More information

Iterative Methods for Solving A x = b

Iterative Methods for Solving A x = b Iterative Methods for Solving A x = b A good (free) online source for iterative methods for solving A x = b is given in the description of a set of iterative solvers called templates found at netlib: http

More information

DENSE INITIALIZATIONS FOR LIMITED-MEMORY QUASI-NEWTON METHODS

DENSE INITIALIZATIONS FOR LIMITED-MEMORY QUASI-NEWTON METHODS DENSE INITIALIZATIONS FOR LIMITED-MEMORY QUASI-NEWTON METHODS by Johannes Brust, Oleg Burdaov, Jennifer B. Erway, and Roummel F. Marcia Technical Report 07-, Department of Mathematics and Statistics, Wae

More information

DELFT UNIVERSITY OF TECHNOLOGY

DELFT UNIVERSITY OF TECHNOLOGY DELFT UNIVERSITY OF TECHNOLOGY REPORT 10-12 Large-Scale Eigenvalue Problems in Trust-Region Calculations Marielba Rojas, Bjørn H. Fotland, and Trond Steihaug ISSN 1389-6520 Reports of the Department of

More information

A NEW ITERATIVE METHOD FOR THE SPLIT COMMON FIXED POINT PROBLEM IN HILBERT SPACES. Fenghui Wang

A NEW ITERATIVE METHOD FOR THE SPLIT COMMON FIXED POINT PROBLEM IN HILBERT SPACES. Fenghui Wang A NEW ITERATIVE METHOD FOR THE SPLIT COMMON FIXED POINT PROBLEM IN HILBERT SPACES Fenghui Wang Department of Mathematics, Luoyang Normal University, Luoyang 470, P.R. China E-mail: wfenghui@63.com ABSTRACT.

More information

Residual iterative schemes for largescale linear systems

Residual iterative schemes for largescale linear systems Universidad Central de Venezuela Facultad de Ciencias Escuela de Computación Lecturas en Ciencias de la Computación ISSN 1316-6239 Residual iterative schemes for largescale linear systems William La Cruz

More information

Worst Case Complexity of Direct Search

Worst Case Complexity of Direct Search Worst Case Complexity of Direct Search L. N. Vicente May 3, 200 Abstract In this paper we prove that direct search of directional type shares the worst case complexity bound of steepest descent when sufficient

More information

1. Introduction. In this paper we derive an algorithm for solving the nonlinear system

1. Introduction. In this paper we derive an algorithm for solving the nonlinear system GLOBAL APPROXIMATE NEWTON METHODS RANDOLPH E. BANK AND DONALD J. ROSE Abstract. We derive a class of globally convergent and quadratically converging algorithms for a system of nonlinear equations g(u)

More information

An Alternative Three-Term Conjugate Gradient Algorithm for Systems of Nonlinear Equations

An Alternative Three-Term Conjugate Gradient Algorithm for Systems of Nonlinear Equations International Journal of Mathematical Modelling & Computations Vol. 07, No. 02, Spring 2017, 145-157 An Alternative Three-Term Conjugate Gradient Algorithm for Systems of Nonlinear Equations L. Muhammad

More information

A Study on Trust Region Update Rules in Newton Methods for Large-scale Linear Classification

A Study on Trust Region Update Rules in Newton Methods for Large-scale Linear Classification JMLR: Workshop and Conference Proceedings 1 16 A Study on Trust Region Update Rules in Newton Methods for Large-scale Linear Classification Chih-Yang Hsia r04922021@ntu.edu.tw Dept. of Computer Science,

More information

A Simple Primal-Dual Feasible Interior-Point Method for Nonlinear Programming with Monotone Descent

A Simple Primal-Dual Feasible Interior-Point Method for Nonlinear Programming with Monotone Descent A Simple Primal-Dual Feasible Interior-Point Method for Nonlinear Programming with Monotone Descent Sasan Bahtiari André L. Tits Department of Electrical and Computer Engineering and Institute for Systems

More information

Adaptive two-point stepsize gradient algorithm

Adaptive two-point stepsize gradient algorithm Numerical Algorithms 27: 377 385, 2001. 2001 Kluwer Academic Publishers. Printed in the Netherlands. Adaptive two-point stepsize gradient algorithm Yu-Hong Dai and Hongchao Zhang State Key Laboratory of

More information