GLOBAL CONVERGENCE OF CONJUGATE GRADIENT METHODS WITHOUT LINE SEARCH
Jie Sun¹, Department of Decision Sciences, National University of Singapore, Republic of Singapore

Jiapu Zhang², Department of Mathematics, University of Melbourne, Melbourne, Australia

Abstract. Global convergence results are derived for well-known conjugate gradient methods in which the line search step is replaced by a step whose length is determined by a formula. The results include the following cases: 1. the Fletcher-Reeves method, the Hestenes-Stiefel method, and the Dai-Yuan method applied to a strongly convex $LC^1$ objective function; 2. the Polak-Ribière method and the Conjugate Descent method applied to a general, not necessarily convex, $LC^1$ objective function.

Keywords: Conjugate gradient methods, convergence of algorithms, line search.

AMS subject classification: 90C25, 90C30.

¹ The corresponding author. E-mail: jsun@nus.edu.sg. This author is a fellow of the Singapore-MIT Alliance. His research was partially supported by grants of the National University of Singapore.

² This author's work was partially supported by a scholarship from the National University of Singapore.
1 Introduction

The conjugate gradient method is quite useful in finding an unconstrained minimum of a high-dimensional function $f(x)$. In general, the method has the following form:
$$d_k = \begin{cases} -g_k & \text{for } k = 1, \\ -g_k + \beta_k d_{k-1} & \text{for } k > 1, \end{cases} \qquad (1)$$
$$x_{k+1} = x_k + \alpha_k d_k, \qquad (2)$$
where $g_k$ denotes the gradient $\nabla f(x_k)$, $\alpha_k$ is a steplength obtained by a line search, $d_k$ is the search direction, and $\beta_k$ is chosen so that $d_k$ becomes the $k$th conjugate direction when the function is quadratic and the line search is exact. Varieties of this method differ in the way of selecting $\beta_k$. Some well-known formulae for $\beta_k$ are given by
$$\beta_k^{FR} = \frac{\|g_k\|^2}{\|g_{k-1}\|^2}, \quad \text{(Fletcher-Reeves [7])} \qquad (3)$$
$$\beta_k^{PR} = \frac{g_k^T (g_k - g_{k-1})}{\|g_{k-1}\|^2}, \quad \text{(Polak-Ribière [13])} \qquad (4)$$
$$\beta_k^{HS} = \frac{g_k^T (g_k - g_{k-1})}{d_{k-1}^T (g_k - g_{k-1})}, \quad \text{(Hestenes-Stiefel [9])} \qquad (5)$$
and
$$\beta_k^{CD} = -\frac{\|g_k\|^2}{d_{k-1}^T g_{k-1}}, \quad \text{(the Conjugate Descent method [6])} \qquad (6)$$
where $\|\cdot\|$ is the Euclidean norm and $T$ stands for the transpose. Recently Dai and Yuan [5] also introduced a formula for $\beta_k$:
$$\beta_k^{DY} = \frac{\|g_k\|^2}{d_{k-1}^T (g_k - g_{k-1})}. \qquad (7)$$
For ease of presentation we call the methods corresponding to (3)-(7) the FR method, the PR method, the HS method, the CD method, and the DY method, respectively.

The global convergence of methods (3)-(7) has been studied by many authors, including Al-Baali [1], Gilbert and Nocedal [8], Hestenes and Stiefel [9], Hu and Storey [10], Liu, Han and Yin [11], Powell [14, 15], Touati-Ahmed and Storey [16], and Zoutendijk [18], among others. A key factor of global convergence is how to select the steplength $\alpha_k$. The most natural choice of $\alpha_k$ is the exact line search, i.e., to let $\alpha_k = \arg\min_{\alpha \ge 0} f(x_k + \alpha d_k)$. However, somewhat surprisingly, this natural choice could result in a non-convergent sequence in the case of the PR and HS methods, as shown by Powell [15]. Inspired by Powell's work, Gilbert and Nocedal [8] conducted an elegant analysis and showed that the PR method is globally convergent if $\beta_k^{PR}$ is restricted to be non-negative and $\alpha_k$ is determined by a line search step satisfying the sufficient descent condition $g_k^T d_k \le -c\|g_k\|^2$ in addition to the standard Wolfe conditions [17]. Recently, Dai and Yuan [3, 4] showed that both the CD method and the FR method are globally convergent if the following line search conditions for $\alpha_k$ are satisfied:
$$\begin{cases} f(x_k) - f(x_k + \alpha_k d_k) \ge -\delta \alpha_k g_k^T d_k, \\ \sigma_1 g_k^T d_k \le g(x_k + \alpha_k d_k)^T d_k \le -\sigma_2 g_k^T d_k, \end{cases} \qquad (8)$$
where $0 < \delta < \sigma_1 < 1$ and $0 < \sigma_2 < 1$ for the CD method, and in addition $\sigma_1 + \sigma_2 \le 1$ for the FR method.
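To make the classical framework (1)-(2) concrete, the following is a minimal Python sketch of the direction update with the five choices of $\beta_k$ above. It is an illustration only; the `steplength` argument is a placeholder standing in for whatever rule supplies $\alpha_k$ (a line search, or formula (10) introduced below), and the names and defaults are assumptions, not part of the paper.

```python
import numpy as np

def beta(rule, g, g_prev, d_prev):
    """Conjugate-gradient parameter beta_k, formulas (3)-(7)."""
    y = g - g_prev                       # g_k - g_{k-1}
    if rule == "FR":                     # Fletcher-Reeves (3)
        return g.dot(g) / g_prev.dot(g_prev)
    if rule == "PR":                     # Polak-Ribiere (4)
        return g.dot(y) / g_prev.dot(g_prev)
    if rule == "HS":                     # Hestenes-Stiefel (5)
        return g.dot(y) / d_prev.dot(y)
    if rule == "CD":                     # Conjugate Descent (6)
        return -g.dot(g) / d_prev.dot(g_prev)
    if rule == "DY":                     # Dai-Yuan (7)
        return g.dot(g) / d_prev.dot(y)
    raise ValueError(rule)

def conjugate_gradient(f, grad, x0, steplength, rule="FR", iters=100):
    """Generic nonlinear CG iteration (1)-(2); `steplength(f, grad, x, d)` returns alpha_k."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                               # d_1 = -g_1
    for _ in range(iters):
        alpha = steplength(f, grad, x, d)
        x_new = x + alpha * d            # (2)
        g_new = grad(x_new)
        d = -g_new + beta(rule, g_new, g, d) * d   # (1)
        x, g = x_new, g_new
    return x
```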
In this paper we consider the problem from a different angle. We show that a unified formula for $\alpha_k$ (see (10) below) can ensure global convergence in many cases, which include: 1. the FR method, the HS method, and the DY method applied to a strongly convex, first-order Lipschitz continuous ($LC^1$ for short) objective function; 2. the PR method and the CD method applied to a general, not necessarily convex, $LC^1$ objective function. For convenience we will refer to these methods as conjugate gradient methods without line search, since the steplength is determined by a formula rather than by a line search process.

2 The Proof for Convergence

We adopt the following assumption on the function $f$, which is commonly used in the literature.

Assumption 1. The function $f$ is $LC^1$ in a neighborhood $N$ of the level set $L := \{x \in R^n : f(x) \le f(x_1)\}$, and $L$ is bounded. Here, by $LC^1$ we mean that the gradient $\nabla f(x)$ is Lipschitz continuous with modulus $\mu$, i.e., there exists $\mu > 0$ such that
$$\|\nabla f(x_{k+1}) - \nabla f(x_k)\| \le \mu \|x_{k+1} - x_k\| \quad \text{for any } x_{k+1}, x_k \in N.$$

Assumption 1 is sufficient for the global convergence of the PR method and the CD method, but seems not to be enough for the convergence of the FR method, the HS method, and the DY method without line search. Thus, for these three methods we impose the following stronger assumption.

Assumption 2. The function $f$ is $LC^1$ and strongly convex on $N$. In other words, there exists $\lambda > 0$ such that
$$[\nabla f(x_{k+1}) - \nabla f(x_k)]^T (x_{k+1} - x_k) \ge \lambda \|x_{k+1} - x_k\|^2 \quad \text{for any } x_{k+1}, x_k \in N.$$

Note that Assumption 2 implies Assumption 1, since a strongly convex function has bounded level sets.

Let $\{Q_k\}$ be a sequence of positive definite matrices. Assume that there exist $\nu_{\min} > 0$ and $\nu_{\max} > 0$ such that
$$\nu_{\min} d^T d \le d^T Q_k d \le \nu_{\max} d^T d \quad \text{for all } d \in R^n. \qquad (9)$$
This condition would be satisfied, for instance, if $Q_k \equiv Q$ and $Q$ is positive definite. We analyze the conjugate gradient methods that use the following steplength formula:
$$\alpha_k = -\delta \, g_k^T d_k / \|d_k\|_{Q_k}^2, \qquad (10)$$
where $\|d_k\|_{Q_k}^2 := d_k^T Q_k d_k$ and $\delta \in (0, \nu_{\min}/\mu)$. Note that the specification of $\delta$ ensures $\delta\mu/\nu_{\min} < 1$.
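As an illustration, here is a minimal sketch of the steplength rule (10) with the simplest choice $Q_k = I$ (so $\nu_{\min} = \nu_{\max} = 1$), plugged into the `conjugate_gradient` sketch from the previous code block. The Lipschitz modulus `mu` is a hypothetical user-supplied parameter, and the quadratic test problem below is an arbitrary illustrative choice.

```python
import numpy as np

def formula_steplength(mu, delta_frac=0.5):
    """Steplength alpha_k = -delta * g_k^T d_k / ||d_k||_{Q_k}^2, formula (10),
    with Q_k = I (so nu_min = nu_max = 1).  `mu` is an assumed Lipschitz modulus
    of the gradient; delta is taken inside the admissible interval (0, nu_min/mu)."""
    delta = delta_frac * (1.0 / mu)                    # delta in (0, 1/mu)
    def steplength(f, grad, x, d):
        return -delta * grad(x).dot(d) / d.dot(d)      # ||d||_I^2 = d^T d
    return steplength

# Example use on a strongly convex quadratic f(x) = 0.5 x^T A x,
# whose gradient A x has Lipschitz modulus mu = lambda_max(A) = 9.
A = np.diag([1.0, 4.0, 9.0])
f = lambda x: 0.5 * x.dot(A @ x)
grad = lambda x: A @ x
x_star = conjugate_gradient(f, grad, np.ones(3), rule="FR",
                            steplength=formula_steplength(mu=9.0))
```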
Lemma 1. Suppose that $x_k$ is given by (1), (2), and (10). Then
$$g_{k+1}^T d_k = \rho_k \, g_k^T d_k \qquad (11)$$
holds for all $k$, where
$$\rho_k = 1 - \delta \phi_k \|d_k\|^2 / \|d_k\|_{Q_k}^2 \qquad (12)$$
and
$$\phi_k = \begin{cases} 0 & \text{for } \alpha_k = 0, \\ (g_{k+1} - g_k)^T (x_{k+1} - x_k) / \|x_{k+1} - x_k\|^2 & \text{for } \alpha_k \ne 0. \end{cases} \qquad (13)$$

Proof. In the case of $\alpha_k = 0$ we have $\rho_k = 1$ and $g_{k+1} = g_k$, hence (11) is valid. We now prove the case of $\alpha_k \ne 0$. From (2) and (10) we have
$$\begin{aligned}
g_{k+1}^T d_k &= g_k^T d_k + (g_{k+1} - g_k)^T d_k
= g_k^T d_k + \alpha_k^{-1} (g_{k+1} - g_k)^T (x_{k+1} - x_k) \\
&= g_k^T d_k + \alpha_k^{-1} \phi_k \|x_{k+1} - x_k\|^2
= g_k^T d_k + \alpha_k \phi_k \|d_k\|^2 \\
&= g_k^T d_k + \phi_k \|d_k\|^2 \left( -\delta g_k^T d_k / \|d_k\|_{Q_k}^2 \right)
= \left( 1 - \delta \phi_k \|d_k\|^2 / \|d_k\|_{Q_k}^2 \right) g_k^T d_k = \rho_k \, g_k^T d_k. \qquad (14)
\end{aligned}$$
The proof is complete.

Corollary 2. In formulae (6) and (7) we have $\beta_k^{DY} = \beta_k^{CD} / (1 - \rho_{k-1})$.

Proof. This is straightforward from (6), (7), and (11).

Corollary 3. There holds
$$1 - \delta\mu/\nu_{\min} \le \rho_k \le 1 + \delta\mu/\nu_{\min} \qquad (15)$$
for all $k$ if Assumption 1 is valid; and this estimate can be sharpened to
$$0 < \rho_k \le 1 - \delta\lambda/\nu_{\max} \qquad (16)$$
if Assumption 2 is valid.

Proof. By (12) and (13) we have
$$\rho_k = 1 - \delta \phi_k \frac{\|d_k\|^2}{\|d_k\|_{Q_k}^2}
= 1 - \delta \, \frac{(g_{k+1} - g_k)^T (x_{k+1} - x_k)}{\|x_{k+1} - x_k\|^2} \cdot \frac{\|d_k\|^2}{\|d_k\|_{Q_k}^2}. \qquad (17)$$
Then from Assumption 1 or Assumption 2 we have either
$$\|g_{k+1} - g_k\| \le \mu \|x_{k+1} - x_k\| \qquad (18)$$
or
$$(g_{k+1} - g_k)^T (x_{k+1} - x_k) \ge \lambda \|x_{k+1} - x_k\|^2, \qquad (19)$$
which, together with (9) and (17), leads to the corresponding bounds for $\rho_k$.
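A quick numerical sanity check of identity (11) and the bound (16) on a strongly convex quadratic may help fix ideas. This is a sketch only: the test matrix, starting point, $\delta$, and the choice $Q_k = I$ are illustrative assumptions, not part of the paper.

```python
import numpy as np

# f(x) = 0.5 x^T A x, so g(x) = A x; here mu = 5 and lambda = 1 (extreme eigenvalues of A).
A = np.diag([1.0, 3.0, 5.0])
grad = lambda x: A @ x

delta = 0.1                       # delta in (0, nu_min/mu) = (0, 0.2) with Q_k = I
x = np.array([1.0, -2.0, 0.5])
g = grad(x)
d = -g                            # first direction, d_1 = -g_1

alpha = -delta * g.dot(d) / d.dot(d)                 # steplength formula (10) with Q_k = I
x_new = x + alpha * d
g_new = grad(x_new)

phi = (g_new - g).dot(x_new - x) / np.linalg.norm(x_new - x) ** 2   # (13)
rho = 1.0 - delta * phi                              # (12); with Q_k = I the norm ratio is 1

print(np.isclose(g_new.dot(d), rho * g.dot(d)))      # identity (11): prints True
print(0.0 < rho <= 1.0 - delta * 1.0 / 1.0)          # bound (16) with lambda = 1, nu_max = 1
```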
Lemma 4. Suppose that Assumption 1 holds and that $x_k$ is given by (1), (2), and (10). Then
$$\sum_k (g_k^T d_k)^2 / \|d_k\|^2 < \infty. \qquad (20)$$

Proof. By the mean-value theorem we have
$$f(x_{k+1}) - f(x_k) = \bar{g}^T (x_{k+1} - x_k), \qquad (21)$$
where $\bar{g} = \nabla f(\bar{x})$ for some $\bar{x} \in [x_k, x_{k+1}]$. Now by the Cauchy-Schwarz inequality, (2), (10), and Assumption 1 we obtain
$$\begin{aligned}
\bar{g}^T (x_{k+1} - x_k) &= g_k^T (x_{k+1} - x_k) + (\bar{g} - g_k)^T (x_{k+1} - x_k) \\
&\le g_k^T (x_{k+1} - x_k) + \|\bar{g} - g_k\| \, \|x_{k+1} - x_k\|
\le g_k^T (x_{k+1} - x_k) + \mu \|x_{k+1} - x_k\|^2 \\
&= \alpha_k g_k^T d_k + \mu \alpha_k^2 \|d_k\|^2
= \alpha_k g_k^T d_k - \mu \alpha_k \delta \, g_k^T d_k \|d_k\|^2 / \|d_k\|_{Q_k}^2 \\
&= \alpha_k g_k^T d_k \left( 1 - \mu\delta \|d_k\|^2 / \|d_k\|_{Q_k}^2 \right)
\le -\delta (1 - \mu\delta/\nu_{\min}) (g_k^T d_k)^2 / \|d_k\|_{Q_k}^2; \qquad (22)
\end{aligned}$$
i.e.,
$$f(x_{k+1}) - f(x_k) \le -\delta (1 - \mu\delta/\nu_{\min}) (g_k^T d_k)^2 / \|d_k\|_{Q_k}^2, \qquad (23)$$
which implies $f(x_{k+1}) \le f(x_k)$. It follows by Assumption 1 that $\lim_{k\to\infty} f(x_k)$ exists. Thus, from (9) and (23) we obtain
$$\sum_k \frac{(g_k^T d_k)^2}{\|d_k\|^2} \le \nu_{\max} \sum_k \frac{(g_k^T d_k)^2}{\|d_k\|_{Q_k}^2} \le \frac{\nu_{\max}}{\delta (1 - \mu\delta/\nu_{\min})} \sum_k \left[ f(x_k) - f(x_{k+1}) \right] < \infty. \qquad (24)$$
This finishes our proof.

Lemma 5. Suppose that Assumption 1 holds and that $x_k$ is given by (1), (2), and (10). Then $\liminf_{k\to\infty} \|g_k\| \ne 0$ implies
$$\sum_k \|g_k\|^4 / \|d_k\|^2 < \infty. \qquad (25)$$

Proof. If $\liminf_{k\to\infty} \|g_k\| \ne 0$, then there exists $\gamma > 0$ such that $\|g_k\| \ge \gamma$ for all $k$. Let
$$\lambda_k = |g_k^T d_k| / \|d_k\|. \qquad (26)$$
Then by Lemma 4 we have $\sum_k \lambda_k^2 < \infty$, so $\lambda_k \to 0$ and there holds
$$\lambda_k \le \gamma/4 \qquad (27)$$
for all large $k$. By Lemma 1 and Corollary 3 (recall $\delta\mu/\nu_{\min} < 1$) we have
$$|g_k^T d_{k-1}| = |\rho_{k-1}| \, |g_{k-1}^T d_{k-1}| \le (1 + \delta\mu/\nu_{\min}) |g_{k-1}^T d_{k-1}| < 2 |g_{k-1}^T d_{k-1}|. \qquad (28)$$
Considering (1), we have
$$g_k = \beta_k d_{k-1} - d_k. \qquad (29)$$
By multiplying $g_k^T$ on both sides of (29), we obtain
$$\|g_k\|^2 = \beta_k g_k^T d_{k-1} - g_k^T d_k. \qquad (30)$$
From (30) and (28) it follows that
$$\begin{aligned}
\frac{\|g_k\|^2}{\|d_k\|} &\le \frac{|\beta_k| \, |g_k^T d_{k-1}|}{\|d_k\|} + \frac{|g_k^T d_k|}{\|d_k\|}
\le \lambda_k + 2\lambda_{k-1} \frac{\|\beta_k d_{k-1}\|}{\|d_k\|}
= \lambda_k + 2\lambda_{k-1} \frac{\|d_k + g_k\|}{\|d_k\|} \\
&\le \lambda_k + 2\lambda_{k-1} + 2\lambda_{k-1} \frac{\|g_k\|}{\|d_k\|}
\le \lambda_k + 2\lambda_{k-1} + \frac{2\lambda_{k-1}}{\gamma} \cdot \frac{\|g_k\|^2}{\|d_k\|}
\quad \left( \text{since } \|g_k\| \ge \gamma \right),
\end{aligned}$$
i.e., by (27),
$$\frac{\|g_k\|^2}{\|d_k\|} \le \lambda_k + 2\lambda_{k-1} + \frac{\|g_k\|^2}{2\|d_k\|}. \qquad (31)$$
The above relation can be re-written as
$$\frac{\|g_k\|^2}{\|d_k\|} \le 2\lambda_k + 4\lambda_{k-1} \le 4(\lambda_k + \lambda_{k-1}). \qquad (32)$$
Hence we have
$$\|g_k\|^4 / \|d_k\|^2 \le 16 (\lambda_k + \lambda_{k-1})^2 \le 32 (\lambda_k^2 + \lambda_{k-1}^2). \qquad (33)$$
By Lemma 4 we know that $\sum_k \lambda_k^2 < \infty$. Hence we obtain from (33)
$$\sum_k \|g_k\|^4 / \|d_k\|^2 < \infty.$$

Remarks. Note that the two lemmas above remain true if Assumption 2 holds, because Assumption 2 implies Assumption 1. The proof of Lemma 5 is similar to that of Theorem 2.3 and Corollary 2.4 in [2]; the difference is that here we do not need to assume the strong Wolfe conditions.

Lemma 6. Under Assumption 1 the CD method satisfies
$$\frac{\|d_k\|^2}{\|g_k\|^4} \le \frac{\|d_{k-1}\|^2}{\|g_{k-1}\|^4} + \frac{\Omega}{\|g_k\|^2}, \qquad (34)$$
where $\Omega$ is a positive constant.

Proof. For the CD method, by (1), (6), Lemma 1 and Corollary 3 we have
$$g_k^T d_k = g_k^T \left( -g_k - \frac{\|g_k\|^2}{g_{k-1}^T d_{k-1}} \, d_{k-1} \right)
= -\|g_k\|^2 - \|g_k\|^2 \, \frac{g_k^T d_{k-1}}{g_{k-1}^T d_{k-1}}
= -\|g_k\|^2 - \|g_k\|^2 \, \frac{\rho_{k-1} g_{k-1}^T d_{k-1}}{g_{k-1}^T d_{k-1}}
= -(1 + \rho_{k-1}) \|g_k\|^2 < 0. \qquad (35)$$
Note that from the last equality above and Corollary 3 we have
$$(g_k^T d_k)^2 \ge \|g_k\|^4, \qquad (36)$$
which is due to $\rho_{k-1} \ge 0$, a consequence of Corollary 3 and $\delta \in (0, \nu_{\min}/\mu)$. Applying (1), (6), Lemma 1 and (36), we obtain
$$\begin{aligned}
\|d_k\|^2 &= \left\| -g_k + \beta_k^{CD} d_{k-1} \right\|^2
= \left\| -g_k - \frac{\|g_k\|^2}{g_{k-1}^T d_{k-1}} \, d_{k-1} \right\|^2 \\
&= \|g_k\|^2 + \frac{\|g_k\|^4}{(g_{k-1}^T d_{k-1})^2} \|d_{k-1}\|^2 + 2 \|g_k\|^2 \, \frac{g_k^T d_{k-1}}{g_{k-1}^T d_{k-1}}
= (1 + 2\rho_{k-1}) \|g_k\|^2 + \frac{\|g_k\|^4}{(g_{k-1}^T d_{k-1})^2} \|d_{k-1}\|^2 \\
&\le \left[ 3 + 2\delta\mu/\nu_{\min} \right] \|g_k\|^2 + \frac{\|g_k\|^4}{\|g_{k-1}\|^4} \|d_{k-1}\|^2, \qquad (37)
\end{aligned}$$
where the bound $\|g_k\|^4 \|d_{k-1}\|^2 / (g_{k-1}^T d_{k-1})^2 \le \|g_k\|^4 \|d_{k-1}\|^2 / \|g_{k-1}\|^4$ was obtained by using (36) with $k$ replaced by $k-1$. By defining $\Omega := 3 + 2\delta\mu/\nu_{\min}$ and dividing both sides by $\|g_k\|^4$, we get
$$\frac{\|d_k\|^2}{\|g_k\|^4} \le \frac{\|d_{k-1}\|^2}{\|g_{k-1}\|^4} + \frac{\Omega}{\|g_k\|^2}. \qquad (38)$$
The proof for the CD method is complete.

Lemma 7. Suppose Assumption 2 holds. Then the FR method satisfies
$$\frac{\|d_k\|^2}{\|g_k\|^4} \le \frac{\|d_{k-1}\|^2}{\|g_{k-1}\|^4} + \frac{\Omega}{\|g_k\|^2}, \qquad (39)$$
where $\Omega$ is a positive constant.

Proof. By Lemma 1 and (1),
$$g_k^T d_{k-1} = \rho_{k-1} g_{k-1}^T d_{k-1}
= \rho_{k-1} g_{k-1}^T \left( -g_{k-1} + \beta_{k-1}^{FR} d_{k-2} \right)
= -\rho_{k-1} \|g_{k-1}\|^2 + \rho_{k-1} \frac{\|g_{k-1}\|^2}{\|g_{k-2}\|^2} \, g_{k-1}^T d_{k-2}. \qquad (40)$$
Thus, we have a recursive equation, which leads to
$$\begin{aligned}
g_k^T d_{k-1} &= -\rho_{k-1} \|g_{k-1}\|^2 - \rho_{k-1}\rho_{k-2} \|g_{k-1}\|^2
+ \rho_{k-1}\rho_{k-2} \frac{\|g_{k-1}\|^2}{\|g_{k-3}\|^2} \, g_{k-2}^T d_{k-3} = \cdots \\
&= -\|g_{k-1}\|^2 \left( \rho_{k-1} + \rho_{k-1}\rho_{k-2} + \cdots + \rho_{k-1}\rho_{k-2}\cdots\rho_1 \right)
\ge -\|g_{k-1}\|^2 \sum_{k=1}^{\infty} (1 - \delta\lambda/\nu_{\max})^k =: -\Omega_1 \|g_{k-1}\|^2, \qquad (41)
\end{aligned}$$
where the last inequality uses $0 < \rho_j \le 1 - \delta\lambda/\nu_{\max}$ from Corollary 3.
Hence we obtain
$$\begin{aligned}
\frac{\|d_k\|^2}{\|g_k\|^4}
&= \left( \|g_k\|^2 - 2\beta_k^{FR} g_k^T d_{k-1} + (\beta_k^{FR})^2 \|d_{k-1}\|^2 \right) / \|g_k\|^4 \\
&\le \left( \|g_k\|^2 + 2\Omega_1 \beta_k^{FR} \|g_{k-1}\|^2 + (\beta_k^{FR})^2 \|d_{k-1}\|^2 \right) / \|g_k\|^4
= (1 + 2\Omega_1)/\|g_k\|^2 + \|d_{k-1}\|^2 / \|g_{k-1}\|^4. \qquad (42)
\end{aligned}$$
By defining $\Omega := 1 + 2\Omega_1$, the proof is complete.

Lemma 8. Under Assumption 2 the DY method satisfies
$$\frac{\|d_k\|^2}{\|g_k\|^4} = \left( \frac{1 - \rho_{k-2}}{1 - \rho_{k-1}} \right)^2 \frac{\|d_{k-1}\|^2}{\|g_{k-1}\|^4} + \frac{1 + \rho_{k-1}}{(1 - \rho_{k-1}) \|g_k\|^2}. \qquad (43)$$

Proof. For the DY method, by (1), (7), Lemma 1 and Corollary 3 we have
$$g_k^T d_k = g_k^T \left( -g_k + \frac{\|g_k\|^2}{d_{k-1}^T (g_k - g_{k-1})} \, d_{k-1} \right)
= -\|g_k\|^2 + \frac{\|g_k\|^2 \, g_k^T d_{k-1}}{g_k^T d_{k-1} - g_{k-1}^T d_{k-1}}
= -\|g_k\|^2 + \frac{\|g_k\|^2 \, \rho_{k-1} g_{k-1}^T d_{k-1}}{(\rho_{k-1} - 1) g_{k-1}^T d_{k-1}}
= -\frac{\|g_k\|^2}{1 - \rho_{k-1}} < 0, \qquad (44)$$
where $0 < \rho_{k-1} < 1$ because of Corollary 3. Applying (1), (7), Lemma 1, Corollary 2 and (44), we obtain
$$\begin{aligned}
\|d_k\|^2 &= \left\| -g_k + \beta_k^{DY} d_{k-1} \right\|^2
= \left\| -g_k + \frac{\beta_k^{CD}}{1 - \rho_{k-1}} \, d_{k-1} \right\|^2
= \left\| -g_k - \frac{\|g_k\|^2}{(1 - \rho_{k-1}) \, g_{k-1}^T d_{k-1}} \, d_{k-1} \right\|^2 \\
&= \|g_k\|^2 + \frac{\|g_k\|^4 \|d_{k-1}\|^2}{(1 - \rho_{k-1})^2 (g_{k-1}^T d_{k-1})^2} + \frac{2 \rho_{k-1} \|g_k\|^2}{1 - \rho_{k-1}}
= \frac{1 + \rho_{k-1}}{1 - \rho_{k-1}} \|g_k\|^2 + \left( \frac{1 - \rho_{k-2}}{1 - \rho_{k-1}} \right)^2 \frac{\|g_k\|^4 \|d_{k-1}\|^2}{\|g_{k-1}\|^4}. \qquad (45)
\end{aligned}$$
Note that in the last step above we used equation (44), applied with $k$ replaced by $k-1$. Dividing both sides of the above formula by $\|g_k\|^4$, the proof is complete.

Theorem 9. Suppose that Assumption 1 holds. Then the CD method will generate a sequence $\{x_k\}$ such that $\liminf_{k\to\infty} \|g_k\| = 0$. The same conclusion holds for the FR and DY methods under Assumption 2.
Proof. We first show this for the CD and FR methods. Suppose to the contrary that $\liminf_{k\to\infty} \|g_k\| \ne 0$. Then there exists $\gamma > 0$ such that $\|g_k\| \ge \gamma$ for all $k$, and by Lemma 5 we have
$$\sum_k \|g_k\|^4 / \|d_k\|^2 < \infty. \qquad (46)$$
From Lemmas 6 and 7 the two methods satisfy
$$\|d_k\|^2 / \|g_k\|^4 \le \|d_{k-1}\|^2 / \|g_{k-1}\|^4 + \Omega / \|g_k\|^2 \qquad (47)$$
under Assumption 1 and Assumption 2, respectively. Hence we get
$$\|d_k\|^2 / \|g_k\|^4 \le \|d_{k-1}\|^2 / \|g_{k-1}\|^4 + \Omega/\gamma^2
\le \|d_{k-2}\|^2 / \|g_{k-2}\|^4 + 2\Omega/\gamma^2 \le \cdots
\le \|d_1\|^2 / \|g_1\|^4 + (k-1)\Omega/\gamma^2. \qquad (48)$$
Thus, (48) means that
$$\|g_k\|^4 / \|d_k\|^2 \ge \frac{1}{ka - a + b}, \qquad (49)$$
where $a = \Omega/\gamma^2$ and $b = \|d_1\|^2/\|g_1\|^4 = 1/\|g_1\|^2$ are both constants. From (49) we have
$$\sum_k \|g_k\|^4 / \|d_k\|^2 = +\infty, \qquad (50)$$
which contradicts (46). Hence the theorem is valid for the CD and FR methods.

We next consider the DY method. Suppose again to the contrary that $\liminf_{k\to\infty} \|g_k\| \ne 0$. Then there exists $\gamma > 0$ such that $\|g_k\| \ge \gamma$ for all $k$, and by Lemma 5 we have
$$\sum_k \|g_k\|^4 / \|d_k\|^2 < \infty. \qquad (51)$$
From Lemma 8 the method satisfies, under Assumption 2,
$$\frac{\|d_k\|^2}{\|g_k\|^4} = \left( \frac{1 - \rho_{k-2}}{1 - \rho_{k-1}} \right)^2 \frac{\|d_{k-1}\|^2}{\|g_{k-1}\|^4} + \frac{1 + \rho_{k-1}}{(1 - \rho_{k-1}) \|g_k\|^2}. \qquad (52)$$
Hence we get
$$\begin{aligned}
\frac{\|d_k\|^2}{\|g_k\|^4}
&\le \left( \frac{1 - \rho_{k-2}}{1 - \rho_{k-1}} \right)^2 \frac{\|d_{k-1}\|^2}{\|g_{k-1}\|^4} + \frac{1 + \rho_{k-1}}{(1 - \rho_{k-1}) \gamma^2} \\
&\le \left( \frac{1 - \rho_{k-3}}{1 - \rho_{k-1}} \right)^2 \frac{\|d_{k-2}\|^2}{\|g_{k-2}\|^4}
+ \left[ \frac{1 - \rho_{k-2}^2}{(1 - \rho_{k-1})^2} + \frac{1 + \rho_{k-1}}{1 - \rho_{k-1}} \right] \frac{1}{\gamma^2} \le \cdots \\
&\le \left( \frac{1 - \rho_1}{1 - \rho_{k-1}} \right)^2 \frac{\|d_2\|^2}{\|g_2\|^4}
+ \left[ \frac{1 - \rho_2^2}{(1 - \rho_{k-1})^2} + \cdots + \frac{1 - \rho_{k-2}^2}{(1 - \rho_{k-1})^2} + \frac{1 + \rho_{k-1}}{1 - \rho_{k-1}} \right] \frac{1}{\gamma^2}. \qquad (53)
\end{aligned}$$
The quantities
$$\left( \frac{1 - \rho_1}{1 - \rho_{k-1}} \right)^2, \quad \frac{1 - \rho_2^2}{(1 - \rho_{k-1})^2}, \quad \ldots, \quad \frac{1 - \rho_{k-2}^2}{(1 - \rho_{k-1})^2}, \quad \frac{1 + \rho_{k-1}}{1 - \rho_{k-1}} \qquad (54)$$
have a common upper bound due to Corollary 3, because $0 < \rho_j \le 1 - \delta\lambda/\nu_{\max}$ implies $\delta\lambda/\nu_{\max} \le 1 - \rho_j < 1$ for every $j$. Denoting this common bound by $\Omega$, from (53) we obtain
$$\|g_k\|^4 / \|d_k\|^2 \ge \frac{1}{ka - 2a + b}, \qquad (55)$$
where $a = \Omega/\gamma^2$ and $b = \Omega \|d_2\|^2 / \|g_2\|^4$ are constants. From (55) we have
$$\sum_k \|g_k\|^4 / \|d_k\|^2 = +\infty, \qquad (56)$$
which contradicts (51). Hence the proof for the DY method is complete and the theorem is valid.

Now we turn to the PR and the HS methods.

Theorem 10. Suppose that Assumption 1 holds. Then the PR method will generate a sequence $\{x_k\}$ such that $\liminf_{k\to\infty} \|g_k\| = 0$. If Assumption 2 holds, then the same conclusion holds for the HS method as well.

Proof. We consider the PR method first. Assume to the contrary that $\|g_k\| \ge \gamma > 0$ for all $k$. From Lemma 4 we have
$$\sum_k \|x_{k+1} - x_k\|^2 = \sum_k \alpha_k^2 \|d_k\|^2
= \sum_k \frac{\delta^2 (g_k^T d_k)^2 \|d_k\|^2}{\|d_k\|_{Q_k}^4}
\le \frac{\delta^2}{\nu_{\min}^2} \sum_k \frac{(g_k^T d_k)^2}{\|d_k\|^2} < \infty. \qquad (57)$$
Hence $\|x_{k+1} - x_k\| \to 0$ and
$$\beta_k^{PR} = g_k^T (g_k - g_{k-1}) / \|g_{k-1}\|^2 \to 0 \qquad (58)$$
as $k \to \infty$, because $\|g_k - g_{k-1}\| \le \mu \|x_k - x_{k-1}\| \to 0$ and $\|g_{k-1}\| \ge \gamma$. Since $L$ is bounded, both $\{x_k\}$ and $\{g_k\}$ are bounded. By using
$$\|d_k\| \le \|g_k\| + |\beta_k| \, \|d_{k-1}\|, \qquad (59)$$
one can show that $\|d_k\|$ is uniformly bounded. Thus we have, for large $k$,
$$-g_k^T d_k = \|g_k\|^2 - \beta_k^{PR} g_k^T d_{k-1}
\ge \|g_k\|^2 - |\beta_k^{PR}| \, \|g_k\| \, \|d_{k-1}\|
\ge \|g_k\|^2 / 2 \quad \left( \text{since } |\beta_k^{PR}| \, \|d_{k-1}\| \le \|g_k\|/2 \text{ for large } k \right). \qquad (60)$$
Thus we have, for large $k$,
$$(g_k^T d_k)^2 \ge \|g_k\|^4 / 4. \qquad (61)$$
Since $\|g_k\| \ge \gamma$ and $\|d_k\|$ is bounded above, we conclude that there is $\varepsilon > 0$ such that
$$(g_k^T d_k)^2 / \|d_k\|^2 \ge \varepsilon, \qquad (62)$$
which implies
$$\sum_k (g_k^T d_k)^2 / \|d_k\|^2 = \infty. \qquad (63)$$
This is a contradiction to Lemma 4. The proof for the PR method is complete.

We now consider the HS method. Again, we assume $\|g_k\| \ge \gamma > 0$ for all $k$, and note that $\|g_k - g_{k-1}\| \to 0$. Since
$$g_k^T d_k = g_k^T \left( -g_k + \beta_k^{HS} d_{k-1} \right)
= -\|g_k\|^2 + \frac{g_k^T (g_k - g_{k-1})}{-(1 - \rho_{k-1}) \, d_{k-1}^T g_{k-1}} \, \rho_{k-1} d_{k-1}^T g_{k-1}
= -\|g_k\|^2 - \frac{\rho_{k-1}}{1 - \rho_{k-1}} \, g_k^T (g_k - g_{k-1}), \qquad (64)$$
we have
$$(g_k^T d_k)^2 = \left[ \|g_k\|^2 + \frac{\rho_{k-1}}{1 - \rho_{k-1}} \, g_k^T (g_k - g_{k-1}) \right]^2 \ge \frac{1}{2} \|g_k\|^4 \qquad (65)$$
for large $k$, because $\rho_{k-1}/(1 - \rho_{k-1})$ is bounded by Corollary 3 and $g_k^T (g_k - g_{k-1}) \to 0$. This is the analogue of (61). Moreover, from
$$|\beta_k^{HS}| = \frac{|g_k^T (g_k - g_{k-1})|}{(1 - \rho_{k-1}) \, |d_{k-1}^T g_{k-1}|}
\le \frac{2}{\delta\lambda/\nu_{\max}} \cdot \frac{|g_k^T (g_k - g_{k-1})|}{\|g_{k-1}\|^2} \to 0, \qquad (66)$$
where we used $1 - \rho_{k-1} \ge \delta\lambda/\nu_{\max}$ and, from (65), $|d_{k-1}^T g_{k-1}| \ge \|g_{k-1}\|^2/2$ for large $k$, we conclude that $\|d_k\|$ is bounded through (59). The rest of the proof is the same as the proof for the PR method.

3 Final Remarks

We have shown that by taking the steplength $\alpha_k$ defined by formula (10), the conjugate gradient method is globally convergent for several popular choices of $\beta_k$. The result discloses an interesting property of the conjugate gradient method: its global convergence can be guaranteed by taking a pre-determined steplength rather than by following a set of line search rules. This steplength might be practical in cases where the line search is expensive or difficult.

Our proofs require that the function be at least $LC^1$ (sometimes strongly convex in addition) and that the level set $L$ be bounded. We point out that the latter condition can be relaxed, in the case of the CD method, to the requirement that the function be bounded below on $L$, because an upper bound on $\|g_k\|$ is not assumed in the corresponding proof.

We allow certain flexibility in selecting the sequence $\{Q_k\}$ in practical computation. In the simplest case we may just let every $Q_k$ be the identity matrix. It would be more interesting to define $Q_k$ as a matrix of certain simple structure that carries some second-order information of the objective function. This might be a topic of further research.
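As a small illustration of this last remark, the sketch below runs the FR method with the formula steplength (10) on a strongly convex quadratic, once with $Q_k = I$ and once with $Q_k$ equal to the diagonal of the Hessian. The test function, the diagonal-Hessian choice, and all parameter values are illustrative assumptions of this sketch, not prescriptions from the paper.

```python
import numpy as np

# Strongly convex quadratic f(x) = 0.5 x^T A x with gradient g(x) = A x.
A = np.diag(np.arange(1.0, 11.0))        # eigenvalues 1..10, so mu = 10, lambda = 1
grad = lambda x: A @ x

def run_fr(Q, delta, iters=200):
    """FR direction update (1),(3) combined with the formula steplength (10)."""
    x = np.ones(10)
    g = grad(x)
    d = -g
    for _ in range(iters):
        alpha = -delta * g.dot(d) / d.dot(Q @ d)             # (10)
        x = x + alpha * d
        g_new = grad(x)
        d = -g_new + (g_new.dot(g_new) / g.dot(g)) * d       # (1) with (3)
        g = g_new
    return np.linalg.norm(g)                                 # final gradient norm

# Q_k = I: nu_min = nu_max = 1, so delta must lie in (0, nu_min/mu) = (0, 0.1).
print(run_fr(np.eye(10), delta=0.09))
# Q_k = diag(Hessian): still nu_min = 1, but the Q_k-norm in (10) shortens the step
# along high-curvature coordinates, one simple way of injecting second-order information.
print(run_fr(np.diag(np.diag(A)), delta=0.09))
```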
Acknowledgments. The authors are grateful to the referees for their comments, which improved the paper. The second author is much indebted to Professor C. Wang for his cultivation and to Professor L. Qi for his support.

References

[1] M. Al-Baali, Descent property and global convergence of the Fletcher-Reeves method with inexact line search, IMA J. Numer. Anal.

[2] Y.H. Dai, J.Y. Han, G.H. Liu, D.F. Sun, H.X. Yin and Y.X. Yuan, Convergence properties of nonlinear conjugate gradient methods, SIAM J. Optim.

[3] Y.H. Dai and Y. Yuan, Convergence properties of the conjugate descent method, Advances in Mathematics.

[4] Y.H. Dai and Y. Yuan, Convergence properties of the Fletcher-Reeves method, IMA J. Numer. Anal.

[5] Y.H. Dai and Y. Yuan, A nonlinear conjugate gradient method with a strong global convergence property, SIAM J. Optim.

[6] R. Fletcher, Practical Methods of Optimization, Vol. I: Unconstrained Optimization, 2nd edition, Wiley, New York, 1987.

[7] R. Fletcher and C. Reeves, Function minimization by conjugate gradients, Comput. J.

[8] J.C. Gilbert and J. Nocedal, Global convergence properties of conjugate gradient methods for optimization, SIAM J. Optim.

[9] M.R. Hestenes and E. Stiefel, Methods of conjugate gradients for solving linear systems, J. Res. Nat. Bur. Standards.

[10] Y. Hu and C. Storey, Global convergence result for conjugate gradient methods, Journal of Optimization Theory and Applications.

[11] G. Liu, J. Han and H. Yin, Global convergence of the Fletcher-Reeves algorithm with inexact line search, Appl. Math. J. Chinese Univ. Ser. B.

[12] B.T. Polyak, The conjugate gradient method in extremal problems, USSR Comput. Math. and Math. Phys.

[13] E. Polak and G. Ribière, Note sur la convergence des méthodes de directions conjuguées, Rev. Fran. Informat. Rech. Opér.

[14] M.J.D. Powell, Nonconvex minimization calculations and the conjugate gradient method, Lecture Notes in Mathematics.

[15] M.J.D. Powell, Convergence properties of algorithms for nonlinear optimization, SIAM Review.

[16] D. Touati-Ahmed and C. Storey, Efficient hybrid conjugate gradient techniques, Journal of Optimization Theory and Applications.

[17] P. Wolfe, Convergence conditions for ascent methods, SIAM Review.

[18] G. Zoutendijk, Nonlinear programming, computational methods, in: Integer and Nonlinear Programming, J. Abadie, ed., North-Holland, 1970.