Modified nonmonotone Armijo line search for descent method
Numer Algor :1–25
DOI /s

ORIGINAL PAPER

Modified nonmonotone Armijo line search for descent method

Zhenjun Shi · Shenquan Wang

Received: 23 February 2009 / Accepted: 20 June 2010 / Published online: 3 July 2010
© Springer Science+Business Media, LLC 2010

Abstract The nonmonotone line search approach is a new technique for solving optimization problems. It relaxes the line search range and finds a larger step-size at each iteration, so as to possibly avoid a local minimizer and escape from a narrow curved valley; it is therefore helpful in finding the global minimizer of an optimization problem. In this paper we develop a new modification of the matrix-free nonmonotone Armijo line search and analyze the global convergence and convergence rate of the resulting method. We also address several approaches to estimating the Lipschitz constant of the gradient of the objective function, to be used in line search algorithms. Numerical results show that this new modification of the Armijo line search is efficient for solving large scale unconstrained optimization problems.

Keywords Unconstrained optimization · Nonmonotone line search · Convergence

Mathematics Subject Classifications 90C30 · 65K05 · 49M37

The work was supported in part by a Rackham Faculty Research Grant of the University of Michigan, USA.

Z. Shi
Department of Mathematics and Computer Science, Central State University, Wilberforce, OH 45384, USA
e-mail: zshi@centralstate.edu

S. Wang
Department of Computer and Information Science, The University of Michigan, Dearborn, MI 48128, USA
e-mail: shqwang@umd.umich.edu
1 Introduction

Let R^n be an n-dimensional Euclidean space and f : R^n → R^1 a continuously differentiable function, where n is a positive integer. A line search method for the unconstrained minimization problem

min f(x), x ∈ R^n, (1)

is defined by

x_{k+1} = x_k + α_k d_k, k = 0, 1, 2, ..., (2)

where x_0 ∈ R^n is an initial point, d_k a descent direction of f(x) at x_k, and α_k a step-size. Throughout this paper, x* refers to the minimizer or a stationary point of (1). We denote f(x_k) by f_k, f(x*) by f*, and ∇f(x_k) by g_k, respectively. Choosing a search direction d_k and determining a step-size α_k along that direction at each iteration are the main tasks in line search methods [19, 22]. The search direction d_k is generally required to satisfy

g_k^T d_k < 0, (3)

which guarantees that d_k is a descent direction of f(x) at x_k [19, 35]. In order to guarantee global convergence, we sometimes require d_k to satisfy the sufficient descent condition

g_k^T d_k ≤ −c‖g_k‖², (4)

where c > 0 is a constant. Moreover, we often choose d_k to satisfy the angle property

cos⟨−g_k, d_k⟩ = −g_k^T d_k / (‖g_k‖ ‖d_k‖) ≥ τ_0, (5)

where τ_0 ∈ (0, 1] is a constant. Once a search direction is determined at each step, a traditional line search requires the function value to decrease monotonically, namely

f(x_{k+1}) < f(x_k), k = 0, 1, 2, ..., (6)

whenever x_k ≠ x*. Many line searches can guarantee the descent property (6): for example, the exact line search [31, 34–37], the Armijo line search [1], the Wolfe line search [38, 39], the Goldstein line search [14], etc. The commonly-used line searches are as follows.

(a) Minimization. At each iteration, α_k is selected so that

f(x_k + α_k d_k) = min_{α>0} f(x_k + α d_k). (7)
(b) Approximate minimization. At each iteration, α_k is selected so that

α_k = min{α | g(x_k + α d_k)^T d_k = 0, α > 0}. (8)

(c) Armijo [30]. Set scalars s_k, L > 0, ρ ∈ (0, 1), and σ ∈ (0, 1/2), with s_k = −g_k^T d_k / (L‖d_k‖²). Let α_k be the largest one in {s_k, s_k ρ, s_k ρ², ...} for which

f_k − f(x_k + α d_k) ≥ −σα g_k^T d_k. (9)

(d) Limited minimization. Set s_k = −g_k^T d_k / (L‖d_k‖²); α_k is defined by

f(x_k + α_k d_k) = min_{α ∈ (0, s_k]} f(x_k + α d_k), (10)

where L > 0 is a constant.

(e) Goldstein. A fixed scalar σ ∈ (0, 1/2) is selected and α_k is chosen to satisfy

σ ≤ [f(x_k + α_k d_k) − f_k] / (α_k g_k^T d_k) ≤ 1 − σ. (11)

It is possible to show that, if f is bounded from below, then there exists an interval of step-sizes α_k for which the relation above is satisfied, and there are fairly simple algorithms for finding such a step-size through a finite number of arithmetic operations.

(f) Strong Wolfe. α_k is chosen to satisfy simultaneously

f_k − f(x_k + α_k d_k) ≥ −σα_k g_k^T d_k (12)

and

|g(x_k + α_k d_k)^T d_k| ≤ −β g_k^T d_k, (13)

where σ and β are some scalars with σ ∈ (0, 1/2) and β ∈ (σ, 1).

(g) Wolfe. α_k is chosen to satisfy (12) and

g(x_k + α_k d_k)^T d_k ≥ β g_k^T d_k. (14)

Some important global convergence results for various methods using the above-mentioned specific line search procedures have been given [3, 7, 20, 21, 31, 38, 39]. In fact, the above-mentioned line searches make the related methods monotone descent methods for unconstrained optimization problems [34–37].
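To make the backtracking mechanics concrete, here is a minimal Python sketch of the Armijo rule (c); the function names and the test values below are our own illustration, not part of the method's specification:

```python
import numpy as np

def armijo_backtracking(f, x, d, g, L=1.0, sigma=1e-4, rho=0.5):
    """Armijo rule (c): start from the trial step
    s = -g^T d / (L * ||d||^2) and backtrack through
    s, s*rho, s*rho^2, ... until condition (9) holds:
    f(x) - f(x + alpha*d) >= -sigma * alpha * g^T d."""
    gd = float(g @ d)                 # g^T d < 0 for a descent direction
    s = -gd / (L * float(d @ d))
    alpha = s
    while f(x) - f(x + alpha * d) < -sigma * alpha * gd:
        alpha *= rho
    return alpha
```

For f(x) = ‖x‖² with d = −g and L taken as the true Lipschitz constant 2 of the gradient, the initial trial step s = 1/2 is accepted immediately, which is the exact minimizing step for this quadratic.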
Nonmonotone line search methods have also been investigated by many authors [5, 11, 27]. In fact, the Barzilai–Borwein method [2, 4, 25, 26] is a nonmonotone descent method. The nonmonotone line search does not need to satisfy condition (6) in many situations; as a result, it is helpful for overcoming the case where the sequence of iterates runs into the bottom of a curved narrow valley, a common occurrence in practical nonlinear problems. Grippo et al. [11–13] proposed a new line search technique, the nonmonotone line search, for Newton-type methods and the truncated Newton method. Liu et al. [17] applied the nonmonotone line search to the BFGS quasi-Newton method. Deng et al. [6], Toint [32, 33], and many other researchers [15–17, 40, 41] studied nonmonotone trust region methods. Moreover, the general nonmonotone line search technique has been investigated by many authors [5, 23, 27, 28]. In particular, Sun et al. [27] studied general nonmonotone line search methods by using an inverse continuous modulus, which may be described as follows.

(A) Nonmonotone Armijo line search. Let s > 0, σ ∈ (0, 1), ρ ∈ (0, 1), and let M be a nonnegative integer. For each k, let m(k) satisfy

m(0) = 0, 0 ≤ m(k) ≤ min[m(k−1) + 1, M], k ≥ 1. (15)

Let α_k be the largest one in {s, sρ, sρ², ...} such that

f(x_k + α d_k) ≤ max_{0≤j≤m(k)} [f(x_{k−j})] + σα g_k^T d_k. (16)

(B) Nonmonotone Goldstein line search. Let σ ∈ (0, 1/2); α_k is selected to satisfy

f(x_k + α_k d_k) ≤ max_{0≤j≤m(k)} [f(x_{k−j})] + σα_k g_k^T d_k (17)

and

f(x_k + α_k d_k) ≥ max_{0≤j≤m(k)} [f(x_{k−j})] + (1 − σ)α_k g_k^T d_k. (18)

(C) Nonmonotone Wolfe line search. Let σ ∈ (0, 1/2) and β ∈ (σ, 1); α_k is selected to satisfy

f(x_k + α_k d_k) ≤ max_{0≤j≤m(k)} [f(x_{k−j})] + σα_k g_k^T d_k (19)

and

g(x_k + α_k d_k)^T d_k ≥ β g_k^T d_k. (20)

Sun et al. [27] established generally convergent algorithms with the above three nonmonotone line searches. An Armijo-type line search is easier to implement on a computer than the other line searches. However, when we use an Armijo-type line search to find an acceptable step size at each iteration, two questions need to be answered. The first is how to determine an adequate initial step size s: if s is too large, then the number of backtracking steps will be large.
If s is too small, then the resulting method will converge slowly. Shi et al. developed a more general nonmonotone line search method [28] and gave a suitable estimation of s. The second question is how to determine the backtracking ratio ρ ∈ (0, 1). If ρ is too close to 1, then the number of backtracking steps increases greatly and the computational cost increases substantially. If ρ is too close to 0, then smaller step sizes appear at each iteration, possibly decreasing the convergence rate.

In this paper we develop a new modification of the matrix-free nonmonotone Armijo line search and analyze the global convergence of the resulting line search methods. This development enables us to choose a larger step-size at each iteration while maintaining global convergence. We also address several ways to estimate the Lipschitz constant of the gradient of the objective function, which is used to estimate the initial step size at each iteration. Numerical results show that the new modification of the nonmonotone Armijo line search is efficient for solving large scale unconstrained optimization problems and that the corresponding algorithm is superior to some existing similar line search methods.

The rest of this paper is organized as follows. In Section 2 we describe the new modification of the nonmonotone Armijo line search. In Section 3 we analyze the global convergence of the resulting method. Section 4 is devoted to the analysis of the convergence rate. In Section 5, we propose some ways to estimate the parameters used in the new version of the Armijo line search and report some numerical results.

2 Nonmonotone Armijo line search

Throughout this paper we assume:

(H1) The objective function f(x) has a lower bound on R^n.

(H2) The gradient g(x) = ∇f(x) of f(x) is uniformly continuous in an open convex set B that contains the level set L(x_0) = {x ∈ R^n | f(x) ≤ f(x_0)}, with x_0 given.
(H2′) The gradient g(x) = ∇f(x) of f(x) is Lipschitz continuous in an open convex set B that contains the level set L(x_0), i.e., there exists L > 0 such that

‖g(x) − g(y)‖ ≤ L‖x − y‖, ∀x, y ∈ B.

Obviously, if (H2′) holds, then (H2) holds; i.e., (H2) is weaker than (H2′). We describe the line search method as follows.

Algorithm A Given some parameters and the initial point x_0, k := 0.

Step 1. If ‖g_k‖ = 0, then stop; else go to Step 2;
Step 2. x_{k+1} = x_k + α_k d_k, where d_k is a descent direction of f(x) at x_k and α_k is selected by some line search;
Step 3. k := k + 1, go to Step 1.
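The loop structure of Algorithm A can be sketched in Python as follows; this driver is a hypothetical illustration in which the line search and the choice of descent direction are passed in as callables, and the exact-zero test on ‖g_k‖ is replaced by a small tolerance:

```python
import numpy as np

def algorithm_a(grad, direction, step, x0, tol=1e-8, max_iter=1000):
    """Generic descent loop of Algorithm A: stop when ||g_k|| is
    (numerically) zero, otherwise set x_{k+1} = x_k + alpha_k d_k."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:   # Step 1: stationary point reached
            break
        d = direction(x, g)            # Step 2: some descent direction
        alpha = step(x, d, g)          # Step 2: step-size from a line search
        x = x + alpha * d
    return x
```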
In this paper, we would like to investigate how to choose the step-size α_k rather than the choice of d_k. We develop a new modification of the nonmonotone Armijo line search approach and find that the step-size defined by the modified nonmonotone line search is larger than that defined by the original nonmonotone line searches.

(D) Modified nonmonotone Armijo line search. Set scalars s_k, ρ, μ, L_k > 0, and σ, with s_k = −g_k^T d_k / (L_k‖d_k‖²), σ ∈ (0, 1/2), ρ ∈ (0, 1), and μ ∈ [0, 2). M is a nonnegative integer and m(k) satisfies (15). Let α_k be the largest one in {s_k, s_k ρ, s_k ρ², ...} such that

f(x_k + α d_k) ≤ max_{0≤j≤m(k)} [f(x_{k−j})] + σα[g_k^T d_k + (1/2)αμL_k‖d_k‖²]. (21)

Remark 1 Suppose that α_k′ is defined by line search (A) and α_k is defined by line search (D); then α_k ≥ α_k′. In fact, if L_k ≥ L and there exists α satisfying (16), then this α certainly satisfies (21). The advantage of this modified nonmonotone line search is that it avoids the storage and computation of any matrix. Unlike the nonmonotone line search in [28], it is a matrix-free line search and can be used to solve large scale optimization problems. If M ≡ 0, then the modified nonmonotone line search reduces to the monotone line search in [29]. Moreover, if μ = 0, then line search (D) reduces to the nonmonotone Armijo line search (A). The variant of Algorithm A that uses line search (D) is denoted Algorithm D. In what follows, we will analyze the global convergence of Algorithm D.

3 Global convergence

In order to analyze the global convergence, we first state the general theorem.

Theorem 3.1 Assume that (H1) and (H2) hold, the search direction d_k satisfies (3), and α_k is determined by the modified nonmonotone Armijo line search (D). Algorithm D generates an infinite sequence {x_k} and {L_k} satisfies

u_0 ≤ L_k ≤ U_0, (22)

with 0 < u_0 ≤ U_0. Then, we have

lim inf_{k→∞} [−g_k^T d_k / ‖d_k‖] = 0. (23)
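Line search (D) itself can be sketched as follows (a minimal Python illustration under our own naming; fvals is the list of past function values f_0, ..., f_k and m_k is the memory m(k) from (15)):

```python
import numpy as np

def modified_nonmonotone_armijo(f, x, d, g, fvals, m_k, L=1.0,
                                sigma=1e-4, rho=0.5, mu=1.0):
    """Line search (D): trial step s_k = -g^T d / (L * ||d||^2),
    accepted by condition (21), whose extra quadratic term
    (1/2)*alpha*mu*L*||d||^2 inside the bracket relaxes the plain
    nonmonotone Armijo bound (16); mu = 0 recovers line search (A)."""
    gd = float(g @ d)                      # g_k^T d_k < 0
    dd = float(d @ d)                      # ||d_k||^2
    f_ref = max(fvals[-(m_k + 1):])        # max_{0<=j<=m_k} f(x_{k-j})
    alpha = -gd / (L * dd)                 # initial trial step s_k
    while f(x + alpha * d) > f_ref + sigma * alpha * (gd + 0.5 * alpha * mu * L * dd):
        alpha *= rho                       # next candidate s_k * rho^i
    return alpha
```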
In order to prove (23), we assume that the contrary holds, i.e., there exists ε > 0 such that

−g_k^T d_k / ‖d_k‖ ≥ ε, ∀k. (24)

Since α_k ≤ s_k = −g_k^T d_k / (L_k‖d_k‖²), we have (1/2)α_k μL_k‖d_k‖² ≤ −(μ/2) g_k^T d_k, and it therefore follows from (21), (22), and (24) that

f(x_k + α_k d_k) ≤ max_{0≤j≤m(k)} [f(x_{k−j})] + σα_k[g_k^T d_k + (1/2)α_k μL_k‖d_k‖²]
≤ max_{0≤j≤m(k)} [f(x_{k−j})] + σ[(2 − μ)/2] α_k g_k^T d_k
≤ max_{0≤j≤m(k)} [f(x_{k−j})] − σ[(2 − μ)/2] ε α_k ‖d_k‖
≤ max_{0≤j≤m(k)} [f(x_{k−j})] − [σ(2 − μ)ε / (2U_0)] α_k ‖d_k‖, ∀k, (25)

where we may take U_0 ≥ 1 in (22) without loss of generality.

Lemma 3.1 Assume that the conditions in Theorem 3.1 hold and (25) holds. Let

η_0 = σ(2 − μ)ε / (2U_0).

Then,

max_{1≤j≤M} [f(x_{Ml+j})] ≤ max_{1≤i≤M} [f(x_{M(l−1)+i})] − η_0 min_{0≤j≤M−1} α_{Ml+j−1}‖d_{Ml+j−1}‖, (26)

and thus

Σ_{l=1}^{∞} min_{0≤j≤M−1} α_{Ml+j}‖d_{Ml+j}‖ < +∞. (27)

Proof By (H1) and (25), it suffices to show that the following inequality holds for j = 1, 2, ..., M:

f(x_{Ml+j}) ≤ max_{1≤i≤M} [f(x_{M(l−1)+i})] − η_0 min_{0≤j≤M−1} α_{Ml+j−1}‖d_{Ml+j−1}‖. (28)
Since the new modified nonmonotone Armijo line search and (25) imply

f(x_{Ml+1}) ≤ max_{0≤i≤m(Ml)} [f(x_{Ml−i})] − η_0 α_{Ml}‖d_{Ml}‖, (29)

it follows from this and m(Ml) ≤ M that (28) holds for j = 1. Suppose that (28) holds for all indices up to some j with 1 ≤ j ≤ M − 1. With the descent property of d_k, we obtain

max_{1≤i≤j} [f(x_{Ml+i})] ≤ max_{1≤i≤M} [f(x_{M(l−1)+i})]. (30)

By the new modified Armijo line search, the induction hypothesis, m(Ml + j) ≤ M, (25), and (30), we obtain

f(x_{Ml+j+1}) ≤ max_{0≤i≤m(Ml+j)} [f(x_{Ml+j−i})] − η_0 α_{Ml+j}‖d_{Ml+j}‖
≤ max{ max_{1≤i≤M} [f(x_{M(l−1)+i})], max_{1≤i≤j} [f(x_{Ml+i})] } − η_0 α_{Ml+j}‖d_{Ml+j}‖
≤ max_{1≤i≤M} [f(x_{M(l−1)+i})] − η_0 α_{Ml+j}‖d_{Ml+j}‖.

Thus, (28) is also true for j + 1. By induction, (28) holds for 1 ≤ j ≤ M. This shows that (26) holds. Since f(x) is bounded below by (H1), it follows that

max_{1≤i≤M} [f(x_{Mj+i})] > −∞.

By summing (26) over l, we get

Σ_{l=1}^{∞} min_{0≤j≤M−1} α_{Ml+j}‖d_{Ml+j}‖ < +∞.

Therefore, (27) holds. The proof is completed.
Proof of Theorem 3.1 From Lemma 3.1, there exists an infinite subset K of {0, 1, 2, ...} such that

lim_{k∈K, k→∞} α_k‖d_k‖ = 0. (31)

If there is an infinite subset K_1 = {k ∈ K | α_k = s_k}, then we have

0 = lim_{k∈K_1, k→∞} α_k‖d_k‖ = lim_{k∈K_1, k→∞} s_k‖d_k‖ = lim_{k∈K_1, k→∞} [−g_k^T d_k / (L_k‖d_k‖)] ≥ (1/U_0) lim_{k∈K_1, k→∞} [−g_k^T d_k / ‖d_k‖],

which contradicts (24). If there is an infinite subset K_2 = {k ∈ K | α_k < s_k}, then we have α_k ρ^{−1} ≤ s_k. By the modified nonmonotone Armijo line search (D), we have

f(x_k + α_k ρ^{−1} d_k) > max_{0≤j≤m(k)} [f(x_{k−j})] + σα_k ρ^{−1}[g_k^T d_k + (1/2)α_k ρ^{−1} μL_k‖d_k‖²].

Therefore,

f(x_k + α_k ρ^{−1} d_k) − f_k > σα_k ρ^{−1} g_k^T d_k.

Using the mean value theorem on the left-hand side of the above inequality, we can find θ_k ∈ [0, 1] such that

α_k ρ^{−1} g(x_k + θ_k α_k ρ^{−1} d_k)^T d_k > σα_k ρ^{−1} g_k^T d_k.

Therefore,

g(x_k + θ_k α_k ρ^{−1} d_k)^T d_k > σ g_k^T d_k. (32)

By (H2), the Cauchy–Schwarz inequality, (32), and (31), we obtain

0 = lim_{k∈K_2, k→∞} ‖g(x_k + θ_k α_k ρ^{−1} d_k) − g_k‖ ≥ lim_{k∈K_2, k→∞} [g(x_k + θ_k α_k ρ^{−1} d_k) − g_k]^T d_k / ‖d_k‖ > (1 − σ) lim_{k∈K_2, k→∞} [−g_k^T d_k / ‖d_k‖],

which also contradicts (24). The contradictions show that (23) holds. The proof of Theorem 3.1 is completed.
Lemma 3.2 Assume that (H1) and (H2′) hold, the search direction d_k satisfies (3), and α_k is determined by the modified nonmonotone Armijo line search (D). Algorithm D generates an infinite sequence {x_k}, and L_k satisfies

uL ≤ L_k ≤ UL, (33)

where u ∈ (0, 1), U ∈ [1, +∞). Then, there exists η_0 > 0 such that, for all k ≥ M,

f(x_k + α_k d_k) ≤ max_{0≤j≤m(k)} [f(x_{k−j})] − η_0 (g_k^T d_k)² / ‖d_k‖². (34)

Proof Let

K_1 = {k | α_k = s_k}, K_2 = {k | α_k < s_k}.

If k ∈ K_1, then

f(x_k + α_k d_k) ≤ max_{0≤j≤m(k)} [f(x_{k−j})] + σα_k[g_k^T d_k + (1/2)α_k μL_k‖d_k‖²].

Thus, substituting α_k = s_k = −g_k^T d_k / (L_k‖d_k‖²),

f(x_k + α_k d_k) ≤ max_{0≤j≤m(k)} [f(x_{k−j})] + σ s_k [1 − (1/2)μ] g_k^T d_k
= max_{0≤j≤m(k)} [f(x_{k−j})] − [σ(2 − μ) / (2L_k)] (g_k^T d_k)² / ‖d_k‖², k ∈ K_1. (35)

Letting

η_k′ = σ(2 − μ) / (2L_k), k ∈ K_1,

by (33) we have

η_k′ ≥ σ(2 − μ) / (2UL) > 0.

Letting

η′ = σ(2 − μ) / (2UL),
by (35), we obtain η_k′ ≥ η′ and

f_{k+1} ≤ max_{0≤j≤m(k)} [f(x_{k−j})] − η′ (g_k^T d_k)² / ‖d_k‖², k ∈ K_1. (36)

If k ∈ K_2, then α_k < s_k and thus α_k ρ^{−1} ≤ s_k. By the modified nonmonotone Armijo line search (D), we have

f(x_k + α_k ρ^{−1} d_k) > max_{0≤j≤m(k)} [f(x_{k−j})] + σα_k ρ^{−1}[g_k^T d_k + (1/2)α_k ρ^{−1} μL_k‖d_k‖²].

Therefore,

f(x_k + α_k ρ^{−1} d_k) − f_k > σα_k ρ^{−1} g_k^T d_k.

Using the mean value theorem on the left-hand side of the above inequality, we can find θ_k ∈ [0, 1] such that

α_k ρ^{−1} g(x_k + θ_k α_k ρ^{−1} d_k)^T d_k > σα_k ρ^{−1} g_k^T d_k.

Therefore,

g(x_k + θ_k α_k ρ^{−1} d_k)^T d_k > σ g_k^T d_k. (37)

By (H2′), the Cauchy–Schwarz inequality, and (37), we obtain

α_k ρ^{−1} L ‖d_k‖² ≥ ‖g(x_k + θ_k α_k ρ^{−1} d_k) − g_k‖ · ‖d_k‖ ≥ [g(x_k + θ_k α_k ρ^{−1} d_k) − g_k]^T d_k > −(1 − σ) g_k^T d_k.

Therefore,

α_k > ρ(1 − σ)[−g_k^T d_k] / (L‖d_k‖²), k ∈ K_2. (38)

By letting

s_k′ = ρ(1 − σ)[−g_k^T d_k] / (L‖d_k‖²), k ∈ K_2,

we have

s_k > α_k > s_k′, k ∈ K_2. (39)
By (21) and (39), we have

f_{k+1} ≤ max_{0≤j≤m(k)} [f(x_{k−j})] + σα_k[g_k^T d_k + (1/2)α_k μL_k‖d_k‖²]
≤ max_{0≤j≤m(k)} [f(x_{k−j})] + σα_k[g_k^T d_k + (1/2)s_k μL_k‖d_k‖²]
≤ max_{0≤j≤m(k)} [f(x_{k−j})] + σ[(2 − μ)/2] α_k g_k^T d_k
≤ max_{0≤j≤m(k)} [f(x_{k−j})] + σ[(2 − μ)/2] s_k′ g_k^T d_k
= max_{0≤j≤m(k)} [f(x_{k−j})] − [σρ(1 − σ)(2 − μ) / (2L)] (g_k^T d_k)² / ‖d_k‖².

Letting

η″ = σρ(1 − σ)(2 − μ) / (2L), (40)

we have

f_{k+1} ≤ max_{0≤j≤m(k)} [f(x_{k−j})] − η″ (g_k^T d_k)² / ‖d_k‖², k ∈ K_2. (41)

Let

η‴ = σρ(1 − σ)(2 − μ) / (2UL).

Since U ≥ 1, we have η″ ≥ η‴ and

f_{k+1} ≤ max_{0≤j≤m(k)} [f(x_{k−j})] − η‴ (g_k^T d_k)² / ‖d_k‖², k ∈ K_2. (42)

Letting η_0 = min(η′, η‴), by (36) and (42) we have

f_{k+1} ≤ max_{0≤j≤m(k)} [f(x_{k−j})] − η_0 (g_k^T d_k)² / ‖d_k‖², ∀k ≥ M. (43)

The proof is completed.

Remark 3.1 This lemma shows that the sequence {f_k} need not be a decreasing sequence, but it is decreasing within every M steps.
Lemma 3.3 Assume that (H1) and (H2′) hold, the search direction d_k satisfies (3), α_k is determined by the modified nonmonotone Armijo line search (D), Algorithm D generates an infinite sequence {x_k}, and (33) holds at each iteration. Then,

max_{1≤j≤M} [f(x_{Ml+j})] ≤ max_{1≤i≤M} [f(x_{M(l−1)+i})] − η_0 min_{0≤j≤M−1} (g_{Ml+j−1}^T d_{Ml+j−1})² / ‖d_{Ml+j−1}‖², (44)

and thus

Σ_{l=1}^{∞} min_{0≤j≤M−1} (g_{Ml+j}^T d_{Ml+j})² / ‖d_{Ml+j}‖² < +∞. (45)

The proof is similar to that of Lemma 3.1.

Corollary 3.1 Assume that (H1) and (H2) hold, and that the conditions in Theorem 3.1 hold. Then

lim_{k→∞} f_k = lim_{k→∞} f(x_{l(k)}), (46)

where l(k) satisfies

f(x_{l(k)}) = max_{0≤j≤m(k)} [f(x_{k−j})].

Proof Since m(k + 1) ≤ m(k) + 1, we have

f(x_{l(k+1)}) = max_{0≤j≤m(k+1)} [f(x_{k+1−j})] ≤ max_{0≤j≤m(k)+1} [f(x_{k+1−j})] = max{f(x_{l(k)}), f(x_{k+1})} = f(x_{l(k)}).

Thus {f(x_{l(k)})} is a monotone non-increasing sequence. (H1) implies that {f(x_{l(k)})} is bounded from below and thus has a limit. By (26), we obtain that (46) holds.

Corollary 3.2 Assume that (H1) and (H2) hold, the search direction d_k satisfies (5), α_k is determined by the modified nonmonotone Armijo line search (D), Algorithm D generates an infinite sequence {x_k}, and (22) holds at each iteration. Then,

lim inf_{k→+∞} ‖g_k‖ = 0.

Proof It is easy to prove this by (5) and (46). The proof is completed.
Theorem 3.2 Suppose that (H1) and (H2′) hold, d_k satisfies (4) and

‖d_k‖² ≤ c_1 + c_2 k, ∀k, (47)

where c_1 and c_2 are positive constants. The step-size α_k is determined by the modified nonmonotone Armijo line search (D) and (33) holds. Algorithm D generates an infinite sequence {x_k}. Then,

lim inf_{k→∞} ‖g_k‖ = 0. (48)

Proof We proceed by contradiction. Assume that relation (48) does not hold. Then there exists some τ > 0 such that

‖g_k‖ ≥ τ, ∀k. (49)

This and (4) imply

−g_k^T d_k ≥ c‖g_k‖² ≥ cτ², ∀k. (50)

Then, it follows from (47) and (50) that

Σ_{l=1}^{∞} min_{0≤i≤M−1} (g_{Ml+i}^T d_{Ml+i})² / ‖d_{Ml+i}‖² = Σ_{l=1}^{∞} (g_{Ml+i_0}^T d_{Ml+i_0})² / ‖d_{Ml+i_0}‖² ≥ Σ_{l=1}^{∞} c²τ⁴ / ‖d_{Ml+i_0}‖² ≥ Σ_{l=1}^{∞} c²τ⁴ / [c_1 + c_2(Ml + i_0)] = +∞,

where i_0 = i_0(l) satisfies

(g_{Ml+i_0}^T d_{Ml+i_0})² / ‖d_{Ml+i_0}‖² = min_{0≤i≤M−1} (g_{Ml+i}^T d_{Ml+i})² / ‖d_{Ml+i}‖².

The above relation contradicts (45). The contradiction shows the truth of (48).

Remark 3.2 Theorem 3.2 shows that the method is globally convergent even if ‖d_k‖ → ∞ as k → +∞.

Theorem 3.3 Suppose that (H1) and (H2′) hold and d_k satisfies (5). The step-size α_k is determined by the modified nonmonotone Armijo line search (D) and (33) holds. Algorithm D generates an infinite sequence {x_k}. Then,

lim_{k→∞} ‖g_k‖ = 0. (51)
Proof By (45), we have

lim_{l→+∞} min_{0≤j≤M−1} (g_{Ml+j}^T d_{Ml+j})² / ‖d_{Ml+j}‖² = 0.

By (5), we have

lim_{l→+∞} min_{0≤j≤M−1} ‖g_{Ml+j}‖ = 0.

Letting

‖g_{Ml+φ(l)}‖ = min_{0≤j≤M−1} ‖g_{Ml+j}‖,

we obtain

lim_{l→∞} ‖g_{Ml+φ(l)}‖ = 0. (52)

It follows from (H2′) and (33) that

‖g_{k+1}‖ ≤ ‖g_{k+1} − g_k‖ + ‖g_k‖ ≤ Lα_k‖d_k‖ + ‖g_k‖ ≤ Ls_k‖d_k‖ + ‖g_k‖ = L[−g_k^T d_k] / (L_k‖d_k‖) + ‖g_k‖ ≤ (L/L_k)‖g_k‖ + ‖g_k‖ ≤ (1 + 1/u)‖g_k‖.

By letting c_3 = 1 + 1/u, we have

‖g_{k+1}‖ ≤ c_3‖g_k‖. (53)

Therefore, for 1 ≤ i ≤ M,

‖g_{Ml+1+i}‖ ≤ c_3‖g_{Ml+i}‖ ≤ ... ≤ c_3^{2M}‖g_{Ml+φ(l)}‖.

By (52), we obtain that (51) holds and the proof is finished.
4 Convergence rate

In order to analyze the convergence rate, we restrict our discussion to the case of uniformly convex objective functions. We assume:

(H3) f is twice continuously differentiable and uniformly convex on R^n.

Lemma 4.1 Assume that (H3) holds. Then (H1) and (H2′) hold, f(x) has a unique minimizer x*, and there exist 0 < m ≤ M such that

m‖y‖² ≤ y^T ∇²f(x) y ≤ M‖y‖², ∀x, y ∈ R^n; (54)

and thus

(1/2) m ‖x − x*‖² ≤ f(x) − f(x*) ≤ (1/2) M ‖x − x*‖², ∀x ∈ R^n; (55)

m‖x − y‖² ≤ [g(x) − g(y)]^T (x − y) ≤ M‖x − y‖², ∀x, y ∈ R^n; (56)

m‖x − x*‖² ≤ g(x)^T (x − x*) ≤ M‖x − x*‖², ∀x ∈ R^n. (57)

By (57) and (56) we can also obtain, from the Cauchy–Schwarz inequality,

m‖x − x*‖ ≤ ‖g(x)‖ ≤ M‖x − x*‖, ∀x ∈ R^n, (58)

and

‖g(x) − g(y)‖ ≤ M‖x − y‖, ∀x, y ∈ R^n. (59)

By (55) and (58) we can also obtain the following relation:

[m / (2M²)] ‖g(x)‖² ≤ f(x) − f(x*) ≤ [M / (2m²)] ‖g(x)‖². (60)

Its proof can be found in [3, 31].

Theorem 4.1 Assume that (H3) holds, the search direction d_k satisfies the angle property (5), the step-size α_k is determined by the modified nonmonotone Armijo line search (D), and (33) holds. Algorithm D generates an infinite sequence {x_k}. Then {x_k} → x* at least R-linearly, i.e., there exists 0 < ω < 1 such that

R_1{x_k} = lim sup_{k→+∞} ‖x_k − x*‖^{1/k} ≤ ω. (61)
Proof Since (H3) implies (H1) and (H2′), the conditions of Theorem 3.3 hold in this case; thus (51) holds. By (58) it follows that

lim_{k→∞} x_k = x*. (62)

First, we obtain from (60) and (53) that

f(x_{k+1}) − f(x*) ≤ b [f(x_k) − f(x*)], k ≥ 1, (63)

where

b = c_3² M³ / m³ > 1.

For any l ≥ 0, let ψ(l) be an index in [Ml + 1, M(l + 1)] for which

f(x_{ψ(l)}) = max_{1≤i≤M} [f(x_{Ml+i})]. (64)

By the definition of ψ(l), (5), and (44), we can get

f(x_{ψ(l)}) ≤ f(x_{ψ(l−1)}) − c_6 min_{0≤j≤M−1} ‖g_{Ml+j}‖², (65)

where c_6 = τ_0² η_0 is a positive constant. Similarly to the proof of Theorem 3.1 in [5], we can prove that there exist constants c_4 > 0 and c_5 ∈ (0, 1) such that

f(x_k) − f(x*) ≤ c_4 c_5^k [f(x_1) − f(x*)]. (66)

By (55) and (66) we have

(1/2) m ‖x_k − x*‖² ≤ f(x_k) − f(x*) ≤ c_4 c_5^k [f(x_1) − f(x*)],

and thus

‖x_k − x*‖ ≤ sqrt{ (2c_4/m) c_5^k [f(x_1) − f(x*)] }.

Letting

ω = sqrt{c_5},

we have

R_1{x_k} = lim sup_{k→+∞} ‖x_k − x*‖^{1/k} ≤ lim_{k→+∞} [ (2c_4/m)(f(x_1) − f(x*)) ]^{1/(2k)} · sqrt{c_5} = ω < 1,

which shows that {x_k} converges to x* at least R-linearly.
5 Numerical results

In this section we discuss the implementation of the new algorithm. The technique for choosing the parameters is reasonable and effective for solving practical problems, in both theoretical and numerical respects.

5.1 Parametric estimation

In the modified Armijo line search, the parameter L_k needs to be estimated. As we know, L_k should approximate the Lipschitz constant M of the gradient of the objective function f. If M were given, we would certainly take L_k = M. However, M is not known a priori in many situations.

First of all, we let

δ_{k−1} = x_k − x_{k−1}, y_{k−1} = g_k − g_{k−1},

and estimate

L_k = ‖y_{k−1}‖ / ‖δ_{k−1}‖, (68)

or

L_k = max{ ‖y_{k−i}‖ / ‖δ_{k−i}‖ | i = 1, 2, ..., M } (69)

whenever k ≥ M + 1, where M is a positive integer.

Next, the BB method [2, 4, 25, 26] also motivates us to find a way of estimating M. Solving the minimization problem

min_{L∈R^1} ‖Lδ_{k−1} − y_{k−1}‖,

we can obtain

L_k = δ_{k−1}^T y_{k−1} / ‖δ_{k−1}‖². (70)

Obviously, if k ≥ M + 1, we can also take

L_k = max{ δ_{k−i}^T y_{k−i} / ‖δ_{k−i}‖² | i = 1, 2, ..., M }. (71)

On the other hand, we can take

L_k = ‖y_{k−1}‖² / (δ_{k−1}^T y_{k−1}), (72)

or

L_k = max{ ‖y_{k−i}‖² / (δ_{k−i}^T y_{k−i}) | i = 1, 2, ..., M }, (73)

whenever k ≥ M + 1.
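The one-step estimates (68), (70), and (72) are cheap to compute from the last step; a small Python sketch under our own naming:

```python
import numpy as np

def lipschitz_estimates(delta, y):
    """One-step Lipschitz-constant estimates from
    delta_{k-1} = x_k - x_{k-1} and y_{k-1} = g_k - g_{k-1}:
    (68) ||y||/||delta||, (70) delta^T y/||delta||^2 (the BB-type
    least-squares fit), and (72) ||y||^2/(delta^T y)."""
    dy = float(delta @ y)
    L68 = float(np.linalg.norm(y) / np.linalg.norm(delta))
    L70 = dy / float(delta @ delta)
    L72 = float(y @ y) / dy
    return L68, L70, L72
```

By the Cauchy–Schwarz inequality the three values always satisfy (70) ≤ (68) ≤ (72) whenever δᵀy > 0, and they coincide when y is an exact multiple of δ, e.g. for a quadratic f whose Hessian is a multiple of the identity.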
There are many other techniques for estimating the Lipschitz constant M [35]. We will use (68)–(73) to estimate M, and the corresponding algorithms are denoted accordingly.

5.2 Numerical results

In what follows, we discuss the numerical performance of the new line search method. The test problems are chosen from [18] and the implementable algorithm is stated as follows.

Algorithm A

Step 0. Given some parameters σ ∈ (0, 1/2), ρ ∈ (0, 1), μ ∈ [0, 2), and L_0 = 1; let x_1 ∈ R^n and set k := 0;
Step 1. If ‖g_k‖ = 0, then stop; else go to Step 2;
Step 2. Choose d_k satisfying the angle property (5), for example, taking d_k = −g_k;
Step 3. x_{k+1} = x_k + α_k d_k, where α_k is defined by the modified nonmonotone Armijo line search;
Step 4. δ_k = x_{k+1} − x_k, y_k = g_{k+1} − g_k, and L_{k+1} is determined by one of (68)–(73);
Step 5. k := k + 1, go to Step 1.

In the above algorithm, we set σ = 10⁻⁴, ρ = 0.87, μ = 1, and M = 2, and compare with the original Armijo line search method with the same parameters. We find that the step-size in the new line search method is easier to find than in the original one; in other words, the new line search method needs fewer evaluations of the gradient and objective function at each iteration.

We tested the new line search methods and the original Armijo line search method in double precision on a portable computer. The codes were written in Visual C++. Our test problems and the initial points used are drawn from [18]. For each problem, the limiting number of function evaluations is set to 1,000, and the stopping condition is a prescribed tolerance on ‖g_k‖.

We report our numerical results in Table 1, where Armijo, N-Armijo, New68, New70, and New72 stand for the Armijo line search method, the nonmonotone Armijo line search method, and the new line search methods with L_k taken by (68), (70), and (72), respectively. The symbols n, IN, and Nf mean the dimension of the problem, the number of iterations, and the number of function evaluations. The unconstrained optimization problems are numbered in the same way as in [18]; for example, P5 means Problem 5 in [18]. We will set d_k = −g_k at each
Table 1 IN and Nf for reaching the same precision, M = 2

Problem n Armijo N-Armijo New68 New70 New72
P5 2 8/12 8/9 6/9 7/11 7/8
P /32 19/22 16/19 14/17 17/23
P /50 33/45 28/34 26/33 30/42
P /72 12/56 16/63 12/58 11/56
P /17 13/13 12/13 12/13 11/11
P /21 19/19 16/23 12/14 11/15
P /30 18/27 17/22 16/25 15/20
P /42 30/38 28/34 26/36 26/38
P /58 32/49 30/34 28/32 30/52
P /87 59/61 43/67 48/72 42/61
P /67 49/64 45/59 38/38 47/52
P /121 14/34 11/32 16/78 9/83
P /30 14/17 14/19 12/16 15/18
P /28 15/24 14/21 16/19 12/18
CPU 23s 19s 17s 14s 16s

iteration. In this case, if α_k = s_k at each iteration, then New70 and New72 reduce to BB methods [2, 4, 25, 26], because

α_k = s_k = −g_k^T d_k / (L_k‖d_k‖²) = 1/L_k = ‖δ_{k−1}‖² / (δ_{k−1}^T y_{k−1})

corresponding to (70), or

α_k = δ_{k−1}^T y_{k−1} / ‖y_{k−1}‖²

corresponding to (72). As we know, the BB method is an efficient method without line search [24]. In fact, a powerful inexact line search should reduce the number of function evaluations at each iteration. The new line search method proposed in this paper needs fewer function evaluations at each iteration than other line search methods such as the Armijo line search method, the nonmonotone Armijo line search method, etc. Moreover, how to estimate the parameter L_k is still a valuable problem; we should seek other ways to estimate L_k so as to cut down the number of function evaluations at each iteration. We use CPU to denote the average computational time [8] for solving the corresponding problems.

From Table 1, we can see that, for some problems, the three new algorithms need fewer function evaluations than the original Armijo line search method. However, for some problems, the Armijo line search method performs as well as the new line search methods. Therefore, our numerical results show that the new line search methods are superior to the original Armijo line search method in many situations. In particular, the new method needs fewer function
evaluations than the original Armijo line search method; i.e., in many cases, α_k always takes the value s_k in the new line search. Moreover, the estimation of L_k, and essentially of s_k, is very important in the new algorithm.

In the numerical experiments, we take d_k = −g_k, which is a special case of descent directions; we may take other descent directions as d_k at each step. If we take u = 1/1000 and U = 1000, then the convergence condition (33) reduces to

L/1000 ≤ L_k ≤ 1000L. (75)

Thus the algorithm will converge even for a rough estimation of L.

All the above-mentioned problems are very small scale problems, for which the advantage of the new modification of the nonmonotone Armijo line search is not obvious; we should implement it for large scale problems. For large scale problems [8–10], we implemented the modified nonmonotone Armijo line search in the same software environment. In Table 2, a dash denotes that the number of iterations IN ≥ 10,000. The numerical results are listed in Table 2.

From Table 2, we can see that the previous Armijo-type line searches, such as Armijo and N-Armijo, need more iterations and more function evaluations than the new modifications New68, New70, and New72 when reaching the same precision. We found that the previous Armijo-type line search can result in zigzag phenomena when solving some large scale problems: the corresponding method cannot stop in the local area of the minimizer because of the constant choice of the initial possible acceptable step size at each iteration. This shows that the new modification of the nonmonotone Armijo line search can avoid zigzag phenomena thanks to the good estimation of the initial possible acceptable step size s_k.
Table 2 IN and Nf for reaching the same precision, M = 2

Problem n Armijo N-Armijo New68 New70 New72
TORSION1 15, /8, /8, /6, /6, /7,120
CLPLATEA 4, /5, /2, /1, /1, /1,542
JNLBRNGA 15, /7, /6, /2, /3, /2,163
SVANBERG 5, /8, /4, /3, /3, /3,480
BRATU2D 5, /4, /1, /1, /1, /1,311
BRATU3D 4, /2, /1, /2, /1, /1,345
TRIGGER 5, /3, /2, /3, /2, /2,230
BDVALUE 5, /4, /3, /3, /3, /1,768
VAREIGVL 5,000 / 324/1, /1, /1, /1,352
MSQRTA 10,000 / / 413/4, /6, /4,551
NLMSURF 15,625 / / 965/8, /9, /8,252
OBSTCLBU 15,625 / 814/7, /7, /6, /3,487
PRODPL /381 28/127 26/106 26/106 25/98
ODNAMUR 11,276 / 515/7, /3, /2, /1,927
CPU 196s 143s 158s
Table 3 IN and Nf for reaching the same precision, M = 15

Problem n Armijo N-Armijo New69 New71 New73
P5 2 8/12 9/13 10/14 9/18 12/15
P /32 13/20 17/17 14/16 15/21
P /50 32/41 26/35 26/32 31/43
P /72 14/17 16/24 12/34 11/43
P /17 14/15 16/17 12/15 12/18
P /21 19/19 16/23 12/14 11/15
P /30 23/23 17/17 18/22 15/28
P /42 34/34 31/39 28/32 25/28
P /58 32/32 28/32 28/32 30/32
P /87 52/52 43/43 45/52 42/54
P /67 53/65 48/65 42/37 48/55
P /121 14/14 12/12 16/18 9/23
P /30 15/15 14/17 12/14 15/15
P /28 15/15 14/16 14/17 16/16
CPU 23s 19s 21s 18s 18s

As to the nonmonotone strategy, taking M = 2 seems to be too small; we should do more numerical experiments to test the performance of the new nonmonotone line search when M takes larger values. We use (69), (71), and (73) to replace (68), (70), and (72) in the algorithm whenever k ≥ M + 1. From Table 3, we found that we should take a small M for small scale problems. When M = 15 and n ≤ 20, the performance of the nonmonotone line search seems to be bad, while it seems to be better when n is large. We are not sure how M should be chosen so that the nonmonotone line search performs well in practical computation. We have tried to find some relationship between M and n, but have not found one yet; it may depend on the specific objective function. We used M = 5 and M = 10 to test the nonmonotone

Table 4 IN and Nf for reaching the same precision, M = 15

Problem n Armijo N-Armijo New69 New71 New73
TORSION1 15, /8, /2, /4, /5, /5,641
CLPLATEA 4, /5, /2, /1, /1, /1,241
JNLBRNGA 15, /7, /5, /2, /2, /1,982
SVANBERG 5, /8, /3, /3, /3, /3,241
BRATU2D 5, /4, /1, /1, /1, /1,289
BRATU3D 4, /2, /1, /2, /1, /1,426
TRIGGER 5, /3, /2, /3, /2, /2,374
BDVALUE 5, /4, /3, /2, /3, /1,528
VAREIGVL 5,000 / 317/1, /1, /1, /1,456
MSQRTA 10,000 / 424/5, /4, /6, /4,825
NLMSURF 15,625 / 934/8, /8, /9, /8,153
OBSTCLBU 15,625 / 834/7, /7, /6, /3,723
PRODPL /381 26/147 28/121 26/98 22/95
ODNAMUR 11,276 / 525/7, /3, /2, /1,906
CPU 183s 172s 147s 153s
line search, and found that the situation with M = 5 is very similar to that with M = 2, while the situation with M = 10 is very similar to that with M = 15. See Table 4 for the performance of the nonmonotone line search when M = 15.

In Tables 1, 2, 3 and 4, Algorithms New70 and New71 seem to be the best among all the mentioned algorithms, based on the average computational time. Actually, we want to find the smallest Lipschitz constant of the gradient of the objective function. In fact,

δ_{k−1}^T y_{k−1} / ‖δ_{k−1}‖² ≤ ‖y_{k−1}‖ / ‖δ_{k−1}‖ ≤ ‖y_{k−1}‖² / (δ_{k−1}^T y_{k−1}).

Therefore, (70) and (71) are the best choices of Lipschitz constant estimation and thus result in the best estimations of the step sizes at the k-th iteration.

6 Conclusion

A new modification of the matrix-free nonmonotone Armijo line search was developed and the related descent method was investigated. It was proved that the new modified nonmonotone Armijo line search method has global convergence and a linear convergence rate under some mild conditions. We can choose a larger step-size in each line search procedure and sometimes avoid the iterates running into the narrow curved valley of the objective function. The new modification of the nonmonotone Armijo line search can avoid zigzag phenomena in some cases. The new line search approach enables us to design new line search methods in a wider sense; in particular, the new modified line search method reduces to the BB method [2, 4, 25, 26] in some special cases.

As we have seen, the Lipschitz constant M of the gradient g(x) of the objective function f(x) needs to be estimated at each step, and we have discussed some techniques for choosing L_k. In the numerical experiments, we take d_k = −g_k at each step. Indeed, we can take other descent directions as d_k, such as conjugate gradient directions, limited-memory quasi-Newton directions, etc. For future research, we can establish the global convergence of other line search methods with the modified nonmonotone Goldstein or modified nonmonotone Wolfe line searches.
Of course, we expect to decrease the number of gradient and function evaluations for some line searches. Since the new line search needs to estimate L_k, we can find other ways to estimate L_k and choose diverse parameters such as σ, μ, and ρ, so as to find suitable parameters for solving special unconstrained optimization problems. Moreover, we can investigate the L-BFGS and CG methods with the modified nonmonotone Armijo line search.

Acknowledgements The authors are very grateful to the anonymous referees and the editor for their careful reading and valuable comments and suggestions that greatly improved the paper.
References

1. Armijo, L.: Minimization of functions having Lipschitz continuous first partial derivatives. Pac. J. Math. 16
2. Barzilai, J., Borwein, J.M.: Two-point step size gradient methods. IMA J. Numer. Anal. 8
3. Cohen, A.I.: Stepsize analysis for descent methods. J. Optim. Theory Appl. 33(2)
4. Dai, Y.H., Liao, L.Z.: R-linear convergence of the Barzilai and Borwein gradient method. IMA J. Numer. Anal. 22
5. Dai, Y.H.: On the nonmonotone line search. J. Optim. Theory Appl. 112(2)
6. Deng, N., Xiao, Y., Zhou, F.: Nonmonotone trust region algorithms. J. Optim. Theory Appl. 76
7. Dennis, J.E. Jr., Schnabel, R.B.: Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Prentice-Hall, Englewood Cliffs
8. Dolan, E., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91
9. Gould, N.I.M., Orban, D., Toint, Ph.L.: GALAHAD, a library of thread-safe Fortran 90 packages for large-scale nonlinear optimization. ACM Trans. Math. Softw. 29
10. Bongartz, I., Conn, A.R., Gould, N., Toint, Ph.L.: CUTE: constrained and unconstrained testing environment. ACM Trans. Math. Softw. 21
11. Grippo, L., Lampariello, F., Lucidi, S.: A class of nonmonotone stability methods in unconstrained optimization. Numer. Math. 62
12. Grippo, L., Lampariello, F., Lucidi, S.: A truncated Newton method with nonmonotone line search for unconstrained optimization. J. Optim. Theory Appl. 60
13. Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for Newton's method. SIAM J. Numer. Anal. 23
14. Goldstein, A.A.: On steepest descent. SIAM J. Control 3
15. Ke, X., Han, J.: A nonmonotone trust region algorithm for equality constrained optimization. Sci. China 38A
16. Ke, X., Liu, G., Xu, D.: A nonmonotone trust region algorithm for unconstrained optimization. Chin. Sci. Bull. 41
17. Liu, G., Han, J., Sun, D.: Global convergence of the BFGS algorithm with nonmonotone line search. Optimization 34
18. Moré, J.J., Garbow, B.S., Hillstrom, K.E.: Testing unconstrained optimization software.
ACM rans. Math. Softw. 7, Nocedal, J., Wriht, J.S.: Numerical Optimization. Spriner, New Yor Nocedal, J.: heory of alorithms for unconstrained optimization. Acta Numer. 1, Powell, M.J.D.: Direct search alorithms for optimization calculations. Acta Numer. 7, Pola, E.: Optimization: Alorithms and Consistent Approximations. Spriner, New Yor Panier, E., its, A.: Avoidin the Maratos effect by means of a nonmonotone line search, I: eneral constrained problems. SIAM J. Numer. Anal. 28, Raydan, M., Svaiter, B.E.: Relaxed steepest descent and Cauchy-Barzilai-Borwein method. Comput. Optim. Appl. 21, Raydan, M.: he Barzilai and Borwein method for the lare scale unconstrained minimization problem. SIAM J. Optim. 7, Raydan, M.: On the Barzilai Borwein radient choice of steplenth for the radient method. IMA J. Numer. Anal. 13, Sun, W.Y., Han, J.Y., Sun, J.: Global converence of nonmonotone descent methods for unconstrained optimization problems. J. Comput. Appl. Math. 146, Shi, Z.J., Shen, J.: Converence of nonmonotone line search method. J. Comput. Appl. Math. 193, Shi, Z.J., Shen, J.: New inexact line search method for unconstrained optimization. J. Optim. heory Appl. 127,
25 Numer Alor : Shi, Z.J., Shen, J.: Converence of descent method without line search. Appl. Math. Comput. 167, Shi, Z.J.: Converence of line search methods for unconstrained optimiztaion. Appl. Math. Comput. 157, oint, P.L.: An assessment of nonmonotone line search techniques for unconstrained optimization. SIAM J. Sci. Comput. 17, oint, P.L.: A nonmonotone trust-reion alorithm for nonlinear optimization subject to convex constraints. Math. Proram. 77, odd, M.J.: On converence properties of alorithms for unconstrained minimization. IMA J. Numer. Anal. 9, Vrahatis, M.N., Androulais, G.S., Lambrinos, J.N., Maoulas, G.D.: A class of radient unconstrained minimization alorithms with adaptive stepsize. J. Comput. Appl. Math. 114, Vrahatis, M.N., Androulais, G.S., Manoussais, G.E.: A new unconstrained optimization method for imprecise function and radient values. J. Math. Anal. Appl. 197, Wei, Z., Qi, L., Jian, H.: Some converence properties of descent methods. J. Optim. heory Appl. 951, Wolfe, P.: Converence condition for ascent methods II: some corrections. SIAM Rev. 13, Wolfe, P.: Converence condition for ascent methods. SIAM Rev. 11, Xiao, Y., Zhou, F.J.: Nonmonotone trust reion methods with curvilinear path in unconstrained minimization. Computin 48, Zhan, J.L., Zhan, X.S.: A nonmonotone adaptive trust reion method and its converence. Comput. Math. Appl. 45,