A modified quadratic hybridization of Polak-Ribière-Polyak and Fletcher-Reeves conjugate gradient method for unconstrained optimization problems
An International Journal of Optimization and Control: Theories & Applications (IJOCTA)
Vol. 7, No. 2, pp. 177-185 (2017)

RESEARCH ARTICLE

A modified quadratic hybridization of Polak-Ribière-Polyak and Fletcher-Reeves conjugate gradient method for unconstrained optimization problems

Sindhu Narayanan, P. Kaelo* and M.V. Thuto
Department of Mathematics, University of Botswana, Private Bag UB 00704, Gaborone, Botswana
sindhu.op@gmail.com, kaelop@mopipi.ub.bw, thutomv@mopipi.ub.bw
*Corresponding author

ARTICLE INFO
Article History: Received 5 December 2016; Accepted 23 February 2017; Available 5 July 2017
Keywords: Hybridization; Conjugate gradient; Wolfe line search conditions; Global convergence
AMS Classification 2010: 90C06, 90C30, 65K05

ABSTRACT
This article presents a modified quadratic hybridization of the Polak-Ribière-Polyak and Fletcher-Reeves conjugate gradient method for solving unconstrained optimization problems. Global convergence, with the strong Wolfe line search conditions, of the proposed quadratic hybrid conjugate gradient method is established. The new method is tested on a number of benchmark problems that have been extensively used in the literature, and the numerical results show the competitiveness of the new hybrid method.

1. Introduction

The nonlinear conjugate gradient method is a very powerful technique for solving large-scale unconstrained optimization problems

    min{ f(x) : x ∈ ℝⁿ },    (1)

where f : ℝⁿ → ℝ is a continuously differentiable function. It has advantages over Newton and quasi-Newton methods in that it only needs the first-order derivative, and hence less storage capacity is needed. It is also relatively simple to program. Given an initial guess x_0 ∈ ℝⁿ, the nonlinear conjugate gradient method generates a sequence {x_k} for problem (1) as

    x_{k+1} = x_k + α_k d_k,  k = 0, 1, 2, ...,    (2)

where α_k is a step length determined by a line search and d_k is a descent direction of f at x_k generated as

    d_k = −g_k,                  if k = 0,
    d_k = −g_k + β_k d_{k−1},    if k ≥ 1,    (3)

where g_k = ∇f(x_k) is the gradient of f at x_k and β_k is a parameter.
Conjugate gradient methods differ in their way of defining the parameter β_k. Over the years, several choices of β_k, which give rise to different conjugate gradient methods, have been proposed. The most famous formulas for β_k are the Fletcher-Reeves (FR) method [20]

    β_k^FR = ‖g_k‖² / ‖g_{k−1}‖²,

the Polak-Ribière-Polyak (PRP) method [32, 33]

    β_k^PRP = g_k^T y_{k−1} / ‖g_{k−1}‖²,

the Dai-Yuan (DY) method [12]

    β_k^DY = ‖g_k‖² / (d_{k−1}^T y_{k−1}),

and the Hestenes-Stiefel (HS) method [18, 23]
    β_k^HS = g_k^T y_{k−1} / (d_{k−1}^T y_{k−1}),

where y_{k−1} = g_k − g_{k−1} and ‖·‖ denotes the Euclidean norm of vectors. These were the first scalars β_k proposed for nonlinear conjugate gradient methods. Since then, other parameters β_k have been proposed in the literature (see, for example, [1, 2, 4-6, 14, 15, 17, 19, 22, 28, 35, 39, 40] and references therein). From the literature, it is well known that the FR and DY methods have strong convergence properties. However, they may not perform well in practice. On the other side, the PRP and HS methods are known to perform better numerically but may not converge in general. Given this, researchers try to devise new methods which have the advantages of both kinds of methods. This has been done mostly by combining two or more β_k parameters in the same conjugate gradient method to come up with hybrid methods. Thus, hybrids try to combine attractive features of different algorithms. For example, Touati-Ahmed and Storey [36] proposed the hybrid method

    β_k^TS = max{ 0, min(β_k^FR, β_k^PRP) }

to take advantage of the attractive convergence properties of β_k^FR and the numerical performance of β_k^PRP. Many other hybrids have been proposed by parametrically combining different parameters β_k. In Dai and Yuan [11], for instance, a one-parameter family of conjugate gradient methods is proposed as

    β_k = ‖g_k‖² / ( λ_k ‖g_{k−1}‖² + (1 − λ_k) d_{k−1}^T y_{k−1} ),

where the parameter λ_k is such that λ_k ∈ [0, 1]. Liu and Li [28] propose a convex combination of β_k^LS and β_k^DY to get

    β_k = (1 − γ_k) β_k^LS + γ_k β_k^DY,

where β_k^LS = g_k^T y_{k−1} / (−d_{k−1}^T g_{k−1}) is the Liu-Storey (LS) [26] parameter and γ_k ∈ [0, 1]. Other hybrid conjugate gradient methods can be found in [2, 4-8, 13, 21, 22, 24, 25, 27, 29, 35, 38, 41]. The step length α_k is often chosen to satisfy certain line search conditions. It is very important in the convergence analysis and implementation of conjugate gradient methods.
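As a plain-Python sketch of the classical and parametric hybrid parameters above (the paper's own experiments were coded in MATLAB; all function names here are ours, for illustration only):

```python
def beta_classical(g_new, g_old, d_old, kind):
    """Classical CG parameters; g_new = g_k, g_old = g_{k-1},
    d_old = d_{k-1} and y = g_k - g_{k-1}, as in the text."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    y = [a - b for a, b in zip(g_new, g_old)]
    if kind == "FR":   # Fletcher-Reeves: ||g_k||^2 / ||g_{k-1}||^2
        return dot(g_new, g_new) / dot(g_old, g_old)
    if kind == "PRP":  # Polak-Ribiere-Polyak: g_k^T y_{k-1} / ||g_{k-1}||^2
        return dot(g_new, y) / dot(g_old, g_old)
    if kind == "DY":   # Dai-Yuan: ||g_k||^2 / (d_{k-1}^T y_{k-1})
        return dot(g_new, g_new) / dot(d_old, y)
    if kind == "HS":   # Hestenes-Stiefel: g_k^T y_{k-1} / (d_{k-1}^T y_{k-1})
        return dot(g_new, y) / dot(d_old, y)
    raise ValueError(kind)

def beta_TS(b_fr, b_prp):
    # Touati-Ahmed and Storey hybrid: max{0, min(beta^FR, beta^PRP)}
    return max(0.0, min(b_fr, b_prp))

def beta_DY_family(g_new, g_old, d_old, lam):
    # Dai-Yuan one-parameter family, lam in [0, 1];
    # lam = 1 recovers FR and lam = 0 recovers DY.
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    y = [a - b for a, b in zip(g_new, g_old)]
    return dot(g_new, g_new) / (lam * dot(g_old, g_old)
                                + (1.0 - lam) * dot(d_old, y))
```

Note that the DY family interpolates the denominators of FR and DY rather than the parameters themselves, which is what makes it a one-parameter family rather than a convex combination of two β values.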
The line search in conjugate gradient methods is often based on the weak Wolfe conditions

    f(x_k + α_k d_k) ≤ f(x_k) + µ α_k g_k^T d_k,    (4)
    g(x_k + α_k d_k)^T d_k ≥ σ g_k^T d_k,    (5)

or the stronger version of the Wolfe line search conditions

    f(x_k + α_k d_k) ≤ f(x_k) + µ α_k g_k^T d_k,    (6)
    |g(x_k + α_k d_k)^T d_k| ≤ −σ g_k^T d_k,    (7)

where 0 < µ < σ < 1. More information on these and other line search methods can be found in the literature [9, 14, 25, 31, 34, 37, 39, 41].

In this paper, we suggest another approach to get a new hybrid nonlinear conjugate gradient method. The rest of the paper is organised as follows. In Section 2, we present the proposed method. In Section 3 we prove that the proposed method globally converges. Section 4 presents some numerical experiments, and a conclusion is given in Section 5.

2. A new hybrid conjugate gradient method

We now present our proposed hybrid conjugate gradient method. The hybrid method we propose is motivated by the work of Babaie-Kafaki [4, 5] and Mo, Gu and Wei [29]. Babaie-Kafaki [4, 5] suggested a quadratic hybridization of β_k^FR and β_k^PRP of the form

    β_k^HQ± = β_k^+(θ_k^±),   if θ_k^± ∈ [−1, 1],
              β_k^PRP+,       if θ_k^± ∈ ℂ∖ℝ,
              β_k^FR,         if θ_k^± < −1,
              β_k^FR,         if θ_k^± > 1,    (8)

where

    β_k^+(θ_k^±) = (1 − (θ_k^±)²) β_k^PRP + θ_k^± β_k^FR,   θ_k^± ∈ [−1, 1],
and

    θ_k^± = [ β_k^FR ± sqrt( (β_k^FR)² − 4 β_k^PRP (β_k^HS − β_k^PRP) ) ] / (2 β_k^PRP)

are the solutions of the quadratic equation

    β_k^PRP θ² − β_k^FR θ + β_k^HS − β_k^PRP = 0.

Thus, the author suggested two methods, β_k^HQ+ and β_k^HQ−. The parameter β_k^PRP+ = max{0, β_k^PRP} is a hybrid parameter that was suggested by Gilbert and Nocedal [21] to improve on the convergence properties of β_k^PRP. In Mo, Gu and Wei [29], the authors suggest a β_k defined by

    β_k* = β_k^PRP + 2 g_k^T g_{k−1} / ‖g_{k−1}‖²,    (9)

which then modifies the Touati-Ahmed and Storey method [36] to give

    β_k = max{ 0, min(β_k^FR, β_k^PRP, β_k*) }.

This method by Mo et al. [29] was shown to be very competitive with the other hybrids in the literature, and it was shown to perform much better than the original β_k^PRP. Now, motivated by this suggestion (9) from [29] and the work of Babaie-Kafaki [4, 5], in this work we modify Babaie-Kafaki's method by introducing β_k^S as

    β_k^S = β_k^+(θ_k),        if θ_k ∈ [−1, 1],
            max{0, β_k*},      if θ_k ∈ ℂ∖ℝ,
            β_k^FR,            if θ_k < −1,
            β_k^FR,            if θ_k > 1,

where

    θ_k = [ β_k^FR − sqrt( (β_k^FR)² − 4 β_k* (β_k^HS − β_k*) ) ] / (2 β_k*)    (10)

and

    β_k^+(θ_k) = (1 − θ_k²) max{0, β_k*} + θ_k β_k^FR,   θ_k ∈ [−1, 1],    (11)

where β_k* is as defined in (9), and then define

    d_k = −g_k,                    if k = 0,
    d_k = −g_k + β_k^S d_{k−1},    if k ≥ 1.    (12)

This leads to our hybrid conjugate gradient method presented below.

Algorithm 1. New Hybrid β_k^S Conjugate Gradient Method
Step 1. Give an initial guess x_0 ∈ ℝⁿ and the parameters 0 < µ < σ < 1 and ε > 0.
Step 2. Set d_0 = −g_0 and k = 0. If ‖g_0‖ < ε, stop.
Step 3. Compute α_k using the strong Wolfe conditions (6) and (7).
Step 4. Set x_{k+1} = x_k + α_k d_k and k = k + 1.
Step 5. If ‖g_k‖ < ε, stop.
Step 6. Compute β_k^S using (10) and (11).
Step 7. Compute d_k = −g_k + β_k^S d_{k−1} and go to Step 3.

3. Global convergence of the proposed method

The global convergence analysis in this section follows that of Babaie-Kafaki [4, 5]. To analyze the global convergence property of our hybrid method, the following assumptions are required. These assumptions have been used extensively in the literature for the global convergence analysis of conjugate gradient methods.

Assumption 1. The level set Ω = {x ∈ ℝⁿ : f(x) ≤ f(x_0)}, where x_0 is the initial guess, is bounded.
That is, there exists a positive constant B such that

    ‖x‖ ≤ B,  ∀ x ∈ Ω.    (13)

Assumption 2. In some neighbourhood N of Ω, the function f is continuously differentiable and its gradient, g(x) = ∇f(x), is Lipschitz continuous; that is, there exists a constant L > 0 such that

    ‖g(x) − g(y)‖ ≤ L ‖x − y‖   for all x, y ∈ N.

These assumptions imply that there exists a positive constant γ̂ such that

    ‖g(x)‖ ≤ γ̂,  ∀ x ∈ Ω.    (14)
Also, under Assumptions 1 and 2, the following lemma can be established.

Lemma 1 (Zoutendijk lemma). Consider any iteration of the form x_{k+1} = x_k + α_k d_k, where d_k is a descent direction and α_k satisfies the weak Wolfe conditions (4) and (5). Suppose Assumptions 1 and 2 hold. Then

    Σ_{k≥0} cos²θ_k ‖g_k‖² < ∞,

where θ_k here denotes the angle between d_k and −g_k. It follows from Lemma 1 and the sufficient descent condition with the Wolfe line search that

    Σ_{k≥0} ‖g_k‖⁴ / ‖d_k‖² < ∞.    (15)

Lemma 2. Suppose that Assumptions 1 and 2 hold. Consider any conjugate gradient method of the form (2), in which, for all k ≥ 0, the search direction d_k is a descent direction and the step length α_k is determined to satisfy the Wolfe conditions. If

    Σ_{k≥0} 1 / ‖d_k‖² = ∞,    (16)

then the method converges in the sense that

    lim inf_{k→∞} ‖g_k‖ = 0.    (17)

Lemma 3. Suppose that Assumptions 1 and 2 hold. Consider any conjugate gradient method of the form (2) and (12), with the conjugate gradient parameter β_k^+(θ_k) defined by (11), in which the step length α_k is determined to satisfy the strong Wolfe conditions (6) and (7). Also assume that the descent condition

    d_k^T g_k < 0,  ∀ k ≥ 0,    (18)

holds and that there exists a positive constant ξ such that

    |θ_{k+1}| ≤ ξ α_k,  ∀ k ≥ 0.    (19)

If, for a positive constant γ, we have

    ‖g_k‖ ≥ γ,  ∀ k ≥ 0,    (20)

then d_k ≠ 0 and

    Σ_{k≥0} ‖u_{k+1} − u_k‖² < ∞,    (21)

where u_k = d_k / ‖d_k‖.

Proof. Firstly, note that the descent condition (18) guarantees that d_k ≠ 0. So, u_k is well-defined. Moreover, from (20) and Lemma 2, we have

    Σ_{k≥0} 1 / ‖d_k‖² < ∞,    (22)

since otherwise (17) holds, contradicting (20). Now, we divide β_k^+(θ_k) into two parts as

    β_k^(1) = (1 − θ_k²) max(0, β_k*),   β_k^(2) = θ_k β_k^FR,

and, for all k ≥ 0, we define

    r_{k+1} = v_{k+1} / ‖d_{k+1}‖,   δ_{k+1} = β_{k+1}^(1) ‖d_k‖ / ‖d_{k+1}‖,

where v_{k+1} = −g_{k+1} + β_{k+1}^(2) d_k. Therefore, from (12) we obtain

    u_{k+1} = r_{k+1} + δ_{k+1} u_k.    (23)

Since ‖u_k‖ = ‖u_{k+1}‖ = 1, from (23) we can write

    ‖r_{k+1}‖ = ‖u_{k+1} − δ_{k+1} u_k‖ = ‖δ_{k+1} u_{k+1} − u_k‖.    (24)

Because θ_k ∈ [−1, 1], we have δ_{k+1} ≥ 0. Using the condition δ_{k+1} ≥ 0, the triangle inequality and (24), we get

    ‖u_{k+1} − u_k‖ ≤ (1 + δ_{k+1}) ‖u_{k+1} − u_k‖
                   ≤ ‖u_{k+1} − δ_{k+1} u_k‖ + ‖δ_{k+1} u_{k+1} − u_k‖
                   = 2 ‖r_{k+1}‖.    (25)

Also, from (13), (14), (19) and (20) we have
    ‖v_{k+1}‖ = ‖ −g_{k+1} + θ_{k+1} (‖g_{k+1}‖² / ‖g_k‖²) d_k ‖
              ≤ ‖g_{k+1}‖ + |θ_{k+1}| (‖g_{k+1}‖² / ‖g_k‖²) ‖d_k‖
              ≤ γ̂ + ξ α_k (γ̂² / γ²) ‖d_k‖
              = γ̂ + (ξ γ̂² / γ²) ‖x_{k+1} − x_k‖
              ≤ γ̂ + (ξ γ̂² / γ²) (‖x_{k+1}‖ + ‖x_k‖)
              ≤ γ̂ + 2 B ξ γ̂² / γ².    (26)

Now, from (22), (25) and (26) we have

    Σ_{k≥0} ‖u_{k+1} − u_k‖² ≤ 4 Σ_{k≥0} ‖r_{k+1}‖²
        = 4 Σ_{k≥0} ‖v_{k+1}‖² / ‖d_{k+1}‖²
        ≤ 4 (γ̂ + 2 B ξ γ̂² / γ²)² Σ_{k≥0} 1 / ‖d_{k+1}‖²
        < ∞.    (27)

We now define the following property, called property (*).

Definition 1. [10] Consider any conjugate gradient method of the form (2). Suppose that, for a positive constant γ, the inequality (20) holds. Under this assumption, we say that the method has property (*) if and only if there exist constants b > 1 and λ > 0 such that, for all k ≥ 0,

    |β_k| ≤ b,    (28)

and

    ‖α_k d_k‖ ≤ λ  ⟹  |β_{k+1}| ≤ 1/(2b).    (29)

Theorem 1. Suppose that Assumptions 1 and 2 hold. Consider any conjugate gradient method of the form (2) and (12), with the conjugate gradient parameter β_k^+(θ_k) defined by (11), in which the step length α_k is determined to satisfy the strong Wolfe conditions (6) and (7). If the search directions satisfy the descent condition (18) and there exists a positive constant η such that

    |θ_{k+1}| ≤ η ‖α_k d_k‖,  ∀ k ≥ 0,    (30)

then the method converges in the sense that lim inf_{k→∞} ‖g_k‖ = 0.

Proof. Because of the descent condition and the strong Wolfe conditions, the sequence {x_k}_{k≥0} is a subset of the level set Ω. Also, since all the assumptions of Lemma 3 hold, the inequality (21) holds. Now, to prove the convergence, it is enough to show that the method has property (*). Since θ_k ∈ [−1, 1], from (11), (14) and (20) we have

    |β_{k+1}^+(θ_{k+1})| = |(1 − θ_{k+1}²) max{0, β_{k+1}*} + θ_{k+1} β_{k+1}^FR|
        ≤ |β_{k+1}^PRP| + 2 |g_{k+1}^T g_k| / ‖g_k‖² + β_{k+1}^FR
        ≤ ‖g_{k+1}‖ (‖g_{k+1}‖ + ‖g_k‖) / ‖g_k‖² + 2 ‖g_{k+1}‖ ‖g_k‖ / ‖g_k‖² + ‖g_{k+1}‖² / ‖g_k‖²
        ≤ 2 γ̂²/γ² + 2 γ̂²/γ² + γ̂²/γ²
        = 5 γ̂²/γ².    (31)

Moreover, from Assumption 2 and (11), (14), (20) and (30) we get

    |β_{k+1}^+(θ_{k+1})| ≤ |β_{k+1}^PRP| + |θ_{k+1}| ( 2 |g_{k+1}^T g_k| / ‖g_k‖² + β_{k+1}^FR )
        ≤ ‖g_{k+1}‖ ‖g_{k+1} − g_k‖ / ‖g_k‖² + |θ_{k+1}| ( 2 ‖g_{k+1}‖ ‖g_k‖ / ‖g_k‖² + ‖g_{k+1}‖² / ‖g_k‖² )
        ≤ (L γ̂ / γ²) ‖x_{k+1} − x_k‖ + (2 η γ̂² / γ²) ‖α_k d_k‖ + (η γ̂² / γ²) ‖α_k d_k‖
        = ((L γ̂ + 3 η γ̂²) / γ²) ‖α_k d_k‖,    (32)

since x_{k+1} − x_k = α_k d_k. So, from (31) and (32), if we let

    b = 5 γ̂² / γ²   and   λ = γ² / ( 2 b (L γ̂ + 3 η γ̂²) ),

then (28) and (29) hold and, consequently, the method has property (*).

4. Numerical experiments

We now present numerical experiments obtained by our method on some test problems chosen from Moré et al. [30] and Andrei [3] to analyse its efficiency and effectiveness. A number of these test problems are widely used in the literature for testing unconstrained optimization methods. We present these test problems in Table 1, where the columns Prob and Dim, respectively, give the name and dimension of the test problem; the dimensions of the problems range from 2 to large-scale sizes. We compare our proposed new hybrid conjugate gradient method (β_k^S) with the quadratic hybridization β_k^HQ− of Babaie-Kafaki [4, 5] and the method β_k* by Mo, Gu and Wei [29]. In [4], β_k^HQ− was shown to be the better hybridization as compared to β_k^HQ+; hence our comparison will only focus on β_k^HQ−. For all the methods, we considered the stopping tolerance to be ε = 10⁻⁵; that is, the algorithms were stopped once the condition ‖g_k‖ < 10⁻⁵ was satisfied, or the maximum number of iterations of 5000 was reached. For the line search, the strong Wolfe conditions (6) and (7) were used to find the step length α_k, with 0 < µ < σ = 0.6. All the methods were coded in MATLAB R2015b, and the numerical results are compared based on the number of gradient evaluations, function evaluations and CPU time. In Table 1, we present the number of function evaluations (NFE) and gradient evaluations (NGE) obtained for the methods β_k^HQ−, β_k^S and β_k*, where the best results for each problem are indicated in bold. We observe from the table that, overall, the incorporation of β_k* in the quadratic hybridization has a positive effect on β_k^HQ−, even though for some problems it is worse off.
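The two main ingredients of the setup above, the strong Wolfe test (6)-(7) and the β_k^S update of (9)-(11) (Step 6 of Algorithm 1), can be sketched in plain Python (the paper's code was MATLAB; helper names, the value µ = 1e-4 and the guard for β_k* = 0 are our assumptions, not the authors'):

```python
import math

def wolfe_conditions(f, grad, x, d, alpha, mu=1e-4, sigma=0.6):
    """Check the weak ((4)-(5)) and strong ((6)-(7)) Wolfe conditions at a
    trial step alpha. f and grad are callables on lists; mu = 1e-4 is an
    assumed value (the paper states only 0 < mu < sigma = 0.6)."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    x_new = [xi + alpha * di for xi, di in zip(x, d)]
    gtd = dot(grad(x), d)                          # g_k^T d_k (< 0 for descent)
    gtd_new = dot(grad(x_new), d)                  # g(x_k + alpha d_k)^T d_k
    armijo = f(x_new) <= f(x) + mu * alpha * gtd   # sufficient decrease (4)/(6)
    return (armijo and gtd_new >= sigma * gtd,         # weak Wolfe (5)
            armijo and abs(gtd_new) <= -sigma * gtd)   # strong Wolfe (7)

def beta_S(g_new, g_old, d_old):
    """beta_k^S of (9)-(11): g_new = g_k, g_old = g_{k-1}, d_old = d_{k-1}.
    The fallback when beta_k^* = 0 (theta_k undefined) is our guard,
    not discussed in the paper."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    y = [a - b for a, b in zip(g_new, g_old)]
    gg_old = dot(g_old, g_old)
    b_fr = dot(g_new, g_new) / gg_old
    b_prp = dot(g_new, y) / gg_old
    b_hs = dot(g_new, y) / dot(d_old, y)
    b_star = b_prp + 2.0 * dot(g_new, g_old) / gg_old          # (9)
    if b_star == 0.0:
        return b_fr                                            # assumed guard
    disc = b_fr ** 2 - 4.0 * b_star * (b_hs - b_star)
    if disc < 0.0:                       # theta_k complex -> max{0, beta_k^*}
        return max(0.0, b_star)
    theta = (b_fr - math.sqrt(disc)) / (2.0 * b_star)          # (10)
    if -1.0 <= theta <= 1.0:             # quadratic hybrid (11)
        return (1.0 - theta ** 2) * max(0.0, b_star) + theta * b_fr
    return b_fr                          # |theta_k| > 1 -> beta_k^FR
```

In a full implementation of Algorithm 1, `wolfe_conditions` would sit inside a bracketing line search that adjusts alpha until the strong Wolfe pair returns true, and `beta_S` would supply the coefficient for the direction update d_k = −g_k + β_k^S d_{k−1}.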
We also compare the methods using the performance profiles tool suggested by Dolan and Moré [16] which, over the years, has been used extensively to judge the performance of different methods on a given set of test problems. The tool evaluates and then compares the performance of a set S of methods on a set P of test problems. That is, using the ratio

    r_{p,s} = t_{p,s} / min{ t_{p,s} : s ∈ S },

where t_{p,s} is the number of evaluations (function, gradient, or CPU time) required to solve problem p by method s, the overall performance profile function is

    ρ_s(τ) = (1/n_p) size{ p ∈ P : log(r_{p,s}) ≤ τ },

where n_p is the total number of problems in P and τ ≥ 0. In case method s fails to solve problem p, the ratio r_{p,s} is set to some sufficiently large number. The function ρ_s(τ) is then plotted against τ to give the performance profile. Notice that the function ρ_s(τ) takes values ρ_s(τ) ∈ [0, 1], and so the inequality ρ_s(τ*) < ρ_t(τ*) shows that method t outperforms method s at τ*. We now present the plots of these performance profiles on function evaluations, gradient evaluations and CPU time as figures. The function evaluations performance profile is presented in Figure 1, gradient evaluations in Figure 2 and CPU time in Figure 3. It is clear from the figures that replacing β_k^PRP by β_k* in the quadratic hybridization β_k^HQ− has a positive effect. From the figures, we observe that β_k* is the best method overall. As for β_k^S and β_k^HQ−, we see that in Figures 1 and 2, for τ ≤ 0.5, β_k^HQ− is slightly better than β_k^S. However, in Figure 3, the plot shows that in terms of CPU time there is not much difference between β_k^S and β_k^HQ−, with the methods being very competitive. Overall, the figures show the positive influence of β_k* on the quadratic hybridization over the use of β_k^PRP.
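A minimal sketch of the Dolan-Moré profile computation described above (the log base is not specified in the text; base 2 is used here as an assumption, and plotting is omitted):

```python
import math

def performance_profile(T, taus):
    """Dolan-More performance profiles. T[s] is the list of costs t_{p,s}
    (NFE, NGE or CPU time) of solver s on each problem, with math.inf
    marking a failure; taus is a grid of tau values."""
    n_p = len(next(iter(T.values())))
    profiles = {}
    for s, costs in T.items():
        ratios = []
        for p in range(n_p):
            best = min(T[sol][p] for sol in T)   # min over all solvers
            r = costs[p] / best if costs[p] < math.inf else math.inf
            ratios.append(math.log2(r) if r < math.inf else math.inf)
        # rho_s(tau) = fraction of problems with log2(r_{p,s}) <= tau
        profiles[s] = [sum(r <= tau for r in ratios) / n_p for tau in taus]
    return profiles
```

Plotting `profiles[s]` against the tau grid for each solver reproduces figures of the kind shown in Figures 1-3; a higher curve means a more efficient (at small tau) and more robust (at large tau) method.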
Table 1. Results of test problems: NFE and NGE for β_k^HQ−, β_k^S and β_k* on each problem (columns Prob, Dim, then NFE and NGE per method; best results in bold; the numeric entries are not recoverable from this transcription). Problems: Rosenbrock, Freudenstein & Roth, Beale, Helical valley, Bard, Gaussian, Box, Powell singular, Wood, Biggs EXP6, Osborne 1, Broyden tridiagonal, Ext. TET, Gen. White & Holst, Ext. Penalty, Ext. Maratos, Gen. Rosenbrock, Fletcher, Ext. Rosenbrock, Ext. Powell singular, Raydan 1, Ext. Beale, Ext. Himmelblau, Ext. DENSCHNB, Ext. DENSCHNF, Ext. Freudenstein & Roth, Ext. White & Holst, Ext. Wood, NONSCOMP, Quartic.

Figure 1. Function evaluations profile (ρ_s(τ) against τ for β_k^HQ−, β_k^S and β_k*).
Figure 2. Gradient evaluations profile (ρ_s(τ) against τ for β_k^HQ−, β_k^S and β_k*).
Figure 3. CPU time profile (ρ_s(τ) against τ for β_k^HQ−, β_k^S and β_k*).

5. Conclusion

In this article, a modified quadratic hybridization of the Polak-Ribière-Polyak and Fletcher-Reeves conjugate gradient method (β_k^S) was presented. Its global convergence under the strong Wolfe line search conditions was also established. The β_k^S method was tested on a number of unconstrained problems that have been extensively used in the literature and compared to the original quadratic hybridization of the Polak-Ribière-Polyak and Fletcher-Reeves conjugate gradient method, β_k^HQ−. The numerical results show that this proposed modification has a positive effect on the performance of β_k^HQ−. However, the numerical results from this study also show that further research to improve the efficiency and effectiveness of β_k^HQ− and other conjugate gradient hybrids is still needed. A number of hybrid conjugate gradient methods have been proposed in the literature, but there are many problems that are currently not properly handled by these methods; hence the need for more research in this field.

Acknowledgments

The authors thank the anonymous referees and the editors whose comments and suggestions largely improved this work.

References

[1] Al-Baali, M., Narushima, Y. and Yabe, H., A family of three-term conjugate gradient methods with sufficient descent property for unconstrained optimization, Comput. Optim. Appl., 60, 89-110 (2015).
[2] Andrei, N., A new three-term conjugate gradient algorithm for unconstrained optimization, Numer. Algorithms, 68 (2015).
[3] Andrei, N., An unconstrained optimization test functions collection, Adv. Model. Optim., 10(1), 147-161 (2008).
[4] Babaie-Kafaki, S., A quadratic hybridization of Polak-Ribière-Polyak and Fletcher-Reeves conjugate gradient methods, J. Optim. Theory Appl., 154(3) (2012).
[5] Babaie-Kafaki, S., A note on the global convergence of the quadratic hybridization of Polak-Ribière-Polyak and Fletcher-Reeves conjugate gradient methods, J. Optim.
Theory Appl., 157(1) (2013).
[6] Babaie-Kafaki, S., Two modified scaled nonlinear conjugate gradient methods, J. Comput. Appl. Math., 261 (2014).
[7] Babaie-Kafaki, S., Fatemi, M. and Mahdavi-Amiri, N., Two effective hybrid conjugate gradient algorithms based on modified BFGS updates, Numer. Algorithms, 58 (2011).
[8] Babaie-Kafaki, S. and Ghanbari, R., A descent extension of the Polak-Ribière-Polyak conjugate gradient method, Comput. Math. Appl., 68 (2014).
[9] Dai, Y.-H., Conjugate gradient methods with Armijo-type line searches, Acta Math. Appl. Sin. Engl. Ser., 18(1) (2002).
[10] Dai, Y.-H. and Liao, L.Z., New conjugacy conditions and related nonlinear conjugate gradient methods, Appl. Math. Optim., 43, 87-101 (2001).
[11] Dai, Y.-H. and Yuan, Y., A class of globally convergent conjugate gradient methods, Sci. China Ser. A, 46(2) (2003).
[12] Dai, Y.-H. and Yuan, Y., A nonlinear conjugate gradient method with a strong global convergence property, SIAM J. Optim., 10(1) (1999).
[13] Dai, Y.-H. and Yuan, Y., An efficient hybrid conjugate gradient method for unconstrained optimization, Ann. Oper. Res., 103 (2001).
[14] Dai, Z. and Wen, F., Another improved Wei-Yao-Liu nonlinear conjugate gradient method with sufficient descent property, Appl. Math. Comput., 218 (2012).
[15] Dai, Z.-F. and Tian, B.-S., Global convergence of some modified PRP nonlinear conjugate gradient methods, Optim. Lett., 5 (2011).
[16] Dolan, E.D. and Moré, J.J., Benchmarking optimization software with performance profiles, Math. Program., 91(2) (2002).
[17] Dong, X.L., Liu, H. and He, Y., A self-adjusting conjugate gradient method with sufficient descent condition and conjugacy condition, J. Optim. Theory Appl., 165 (2015).
[18] Dong, X.L., Liu, H.W., He, Y.B. and Yang, X.M., A modified Hestenes-Stiefel conjugate gradient method with sufficient descent condition and conjugacy condition, J. Comput. Appl. Math., 281 (2015).
[19] Dong, Y., A practical PR+ conjugate gradient method only using gradient, Appl. Math. Comput., 219(4) (2012).
[20] Fletcher, R. and
Reeves, C., Function minimization by conjugate gradients, Comput. J., 7 (1964).
[21] Gilbert, J.C. and Nocedal, J., Global convergence properties of conjugate gradient methods for optimization, SIAM J. Optim., 2(1), 21-42 (1992).
[22] Hager, W.W. and Zhang, H., A survey of nonlinear conjugate gradient methods, Pac. J. Optim., 2(1) (2006).
[23] Hestenes, M.R. and Stiefel, E., Methods of conjugate gradients for solving linear systems, J. Res. Natl. Bur. Standards, 49 (1952).
[24] Jia, W., Zong, J. and Wang, X., An improved mixed conjugate gradient method, Systems Engineering Procedia, 2 (2012).
[25] Liu, H., A mixture conjugate gradient method for unconstrained optimization, Third International Symposium on Intelligent Information Technology and Security Informatics, IEEE (2010).
[26] Liu, Y. and Storey, C., Efficient generalized conjugate gradient algorithms, Part 1: theory, J. Optim. Theory Appl., 69 (1992).
[27] Liu, J., A hybrid nonlinear conjugate gradient method, Lobachevskii J. Math., 33(3) (2012).
[28] Liu, J.K. and Li, S.J., New hybrid conjugate gradient method for unconstrained optimization, Appl. Math. Comput., 245 (2014).
[29] Mo, J., Gu, N. and Wei, Z., Hybrid conjugate gradient methods for unconstrained optimization, Optim. Methods Softw., 22(2) (2007).
[30] Moré, J.J., Garbow, B.S. and Hillstrom, K.E., Testing unconstrained optimization software, ACM Trans. Math. Software, 7, 17-41 (1981).
[31] Nocedal, J. and Wright, S.J., Numerical Optimization, 2nd Edition, Springer Science+Business Media, LLC, United States of America (2006).
[32] Polak, E. and Ribière, G., Note sur la convergence de méthodes de directions conjuguées, Rev. Française d'Inform. Recherche Opérationnelle, 3(16) (1969).
[33] Polyak, B.T., The conjugate gradient method in extreme problems, USSR Comput. Math. Math. Phys., 9(4), 94-112 (1969).
[34] Shi, Z.-J., Convergence of line search methods for unconstrained optimization, Appl. Math. Comput., 157 (2004).
[35] Sun, M. and Liu, J., Three modified Polak-Ribière-Polyak conjugate gradient methods with sufficient descent property, Journal of Inequalities and Applications (2015).
[36] Touati-Ahmed, D. and Storey, C., Efficient hybrid conjugate gradient techniques, J. Optim. Theory Appl., 64(2) (1990).
[37] Yabe, H. and Sakaiwa, N., A new nonlinear conjugate gradient method for unconstrained optimization, J. Oper. Res., 48(4) (2005).
[38] Yan, H., Chen, L. and
Jiao, B., HS-LS-CD hybrid conjugate gradient algorithm for unconstrained optimization, Second International Workshop on Computer Science and Engineering, IEEE (2009).
[39] Yuan, G. and Lu, X., A modified PRP conjugate gradient method, Ann. Oper. Res., 166 (2009).
[40] Zhang, L., Zhou, W. and Li, D.-H., A descent modified Polak-Ribière-Polyak conjugate gradient method and its global convergence, IMA J. Numer. Anal., 26 (2006).
[41] Zhou, A., Zhu, Z., Fan, H. and Qing, Q., Three new hybrid conjugate gradient methods for optimization, Appl. Math., 2(3) (2011).

S. Narayanan graduated with an MSc in Mathematics from the Department of Mathematics, University of Botswana, in 2015. She is currently pursuing her PhD in Mathematics at the Department of Mathematics, University of Botswana. Her research interests are in Optimization and Analysis.

P. Kaelo is a lecturer in the Department of Mathematics, University of Botswana. He obtained his PhD in Mathematics from the University of the Witwatersrand, Johannesburg, South Africa. His research interests include Optimization, Optimal Control Theory, Numerical Analysis and Functional Analysis.

M.V. Thuto is a lecturer in the Department of Mathematics, University of Botswana. He obtained his PhD in Mathematics from the University of Exeter, England, United Kingdom. His research interests are in Control Theory, Optimization and Functional Analysis.

An International Journal of Optimization and Control: Theories & Applications. This work is licensed under a Creative Commons Attribution 4.0 International License. The authors retain ownership of the copyright for their article, but they allow anyone to download, reuse, reprint, modify, distribute, and/or copy articles in IJOCTA, so long as the original authors and source are credited.
More informationConjugate gradient methods based on secant conditions that generate descent search directions for unconstrained optimization
Conjugate gradient methods based on secant conditions that generate descent search directions for unconstrained optimization Yasushi Narushima and Hiroshi Yabe September 28, 2011 Abstract Conjugate gradient
More informationAn Alternative Three-Term Conjugate Gradient Algorithm for Systems of Nonlinear Equations
International Journal of Mathematical Modelling & Computations Vol. 07, No. 02, Spring 2017, 145-157 An Alternative Three-Term Conjugate Gradient Algorithm for Systems of Nonlinear Equations L. Muhammad
More informationGlobal convergence of a regularized factorized quasi-newton method for nonlinear least squares problems
Volume 29, N. 2, pp. 195 214, 2010 Copyright 2010 SBMAC ISSN 0101-8205 www.scielo.br/cam Global convergence of a regularized factorized quasi-newton method for nonlinear least squares problems WEIJUN ZHOU
More informationand P RP k = gt k (g k? g k? ) kg k? k ; (.5) where kk is the Euclidean norm. This paper deals with another conjugate gradient method, the method of s
Global Convergence of the Method of Shortest Residuals Yu-hong Dai and Ya-xiang Yuan State Key Laboratory of Scientic and Engineering Computing, Institute of Computational Mathematics and Scientic/Engineering
More informationon descent spectral cg algorithm for training recurrent neural networks
Technical R e p o r t on descent spectral cg algorithm for training recurrent neural networs I.E. Livieris, D.G. Sotiropoulos 2 P. Pintelas,3 No. TR9-4 University of Patras Department of Mathematics GR-265
More informationGlobal Convergence of Perry-Shanno Memoryless Quasi-Newton-type Method. 1 Introduction
ISSN 1749-3889 (print), 1749-3897 (online) International Journal of Nonlinear Science Vol.11(2011) No.2,pp.153-158 Global Convergence of Perry-Shanno Memoryless Quasi-Newton-type Method Yigui Ou, Jun Zhang
More informationAdaptive two-point stepsize gradient algorithm
Numerical Algorithms 27: 377 385, 2001. 2001 Kluwer Academic Publishers. Printed in the Netherlands. Adaptive two-point stepsize gradient algorithm Yu-Hong Dai and Hongchao Zhang State Key Laboratory of
More informationNew Accelerated Conjugate Gradient Algorithms for Unconstrained Optimization
ew Accelerated Conjugate Gradient Algorithms for Unconstrained Optimization eculai Andrei Research Institute for Informatics, Center for Advanced Modeling and Optimization, 8-0, Averescu Avenue, Bucharest,
More informationPreconditioned conjugate gradient algorithms with column scaling
Proceedings of the 47th IEEE Conference on Decision and Control Cancun, Mexico, Dec. 9-11, 28 Preconditioned conjugate gradient algorithms with column scaling R. Pytla Institute of Automatic Control and
More informationA new class of Conjugate Gradient Methods with extended Nonmonotone Line Search
Appl. Math. Inf. Sci. 6, No. 1S, 147S-154S (2012) 147 Applied Mathematics & Information Sciences An International Journal A new class of Conjugate Gradient Methods with extended Nonmonotone Line Search
More informationOn Descent Spectral CG algorithms for Training Recurrent Neural Networks
29 3th Panhellenic Conference on Informatics On Descent Spectral CG algorithms for Training Recurrent Neural Networs I.E. Livieris, D.G. Sotiropoulos and P. Pintelas Abstract In this paper, we evaluate
More informationGradient method based on epsilon algorithm for large-scale nonlinearoptimization
ISSN 1746-7233, England, UK World Journal of Modelling and Simulation Vol. 4 (2008) No. 1, pp. 64-68 Gradient method based on epsilon algorithm for large-scale nonlinearoptimization Jianliang Li, Lian
More informationA New Nonlinear Conjugate Gradient Coefficient. for Unconstrained Optimization
Applie Mathematical Sciences, Vol. 9, 05, no. 37, 83-8 HIKARI Lt, www.m-hiari.com http://x.oi.or/0.988/ams.05.4994 A New Nonlinear Conjuate Graient Coefficient for Unconstraine Optimization * Mohame Hamoa,
More informationIntroduction. New Nonsmooth Trust Region Method for Unconstraint Locally Lipschitz Optimization Problems
New Nonsmooth Trust Region Method for Unconstraint Locally Lipschitz Optimization Problems Z. Akbari 1, R. Yousefpour 2, M. R. Peyghami 3 1 Department of Mathematics, K.N. Toosi University of Technology,
More informationChapter 4. Unconstrained optimization
Chapter 4. Unconstrained optimization Version: 28-10-2012 Material: (for details see) Chapter 11 in [FKS] (pp.251-276) A reference e.g. L.11.2 refers to the corresponding Lemma in the book [FKS] PDF-file
More informationMultipoint secant and interpolation methods with nonmonotone line search for solving systems of nonlinear equations
Multipoint secant and interpolation methods with nonmonotone line search for solving systems of nonlinear equations Oleg Burdakov a,, Ahmad Kamandi b a Department of Mathematics, Linköping University,
More informationQuadrature based Broyden-like method for systems of nonlinear equations
STATISTICS, OPTIMIZATION AND INFORMATION COMPUTING Stat., Optim. Inf. Comput., Vol. 6, March 2018, pp 130 138. Published online in International Academic Press (www.iapress.org) Quadrature based Broyden-like
More informationResearch Article A Nonmonotone Weighting Self-Adaptive Trust Region Algorithm for Unconstrained Nonconvex Optimization
Discrete Dynamics in Nature and Society Volume 2015, Article ID 825839, 8 pages http://dx.doi.org/10.1155/2015/825839 Research Article A Nonmonotone Weighting Self-Adaptive Trust Region Algorithm for Unconstrained
More information230 L. HEI if ρ k is satisfactory enough, and to reduce it by a constant fraction (say, ahalf): k+1 = fi 2 k (0 <fi 2 < 1); (1.7) in the case ρ k is n
Journal of Computational Mathematics, Vol.21, No.2, 2003, 229 236. A SELF-ADAPTIVE TRUST REGION ALGORITHM Λ1) Long Hei y (Institute of Computational Mathematics and Scientific/Engineering Computing, Academy
More informationNonmonotonic back-tracking trust region interior point algorithm for linear constrained optimization
Journal of Computational and Applied Mathematics 155 (2003) 285 305 www.elsevier.com/locate/cam Nonmonotonic bac-tracing trust region interior point algorithm for linear constrained optimization Detong
More informationThe Wolfe Epsilon Steepest Descent Algorithm
Applied Mathematical Sciences, Vol. 8, 204, no. 55, 273-274 HIKARI Ltd, www.m-hiari.com http://dx.doi.org/0.2988/ams.204.4375 The Wolfe Epsilon Steepest Descent Algorithm Haima Degaichia Department of
More informationA DIMENSION REDUCING CONIC METHOD FOR UNCONSTRAINED OPTIMIZATION
1 A DIMENSION REDUCING CONIC METHOD FOR UNCONSTRAINED OPTIMIZATION G E MANOUSSAKIS, T N GRAPSA and C A BOTSARIS Department of Mathematics, University of Patras, GR 26110 Patras, Greece e-mail :gemini@mathupatrasgr,
More informationA new ane scaling interior point algorithm for nonlinear optimization subject to linear equality and inequality constraints
Journal of Computational and Applied Mathematics 161 (003) 1 5 www.elsevier.com/locate/cam A new ane scaling interior point algorithm for nonlinear optimization subject to linear equality and inequality
More informationOptimization II: Unconstrained Multivariable
Optimization II: Unconstrained Multivariable CS 205A: Mathematical Methods for Robotics, Vision, and Graphics Justin Solomon CS 205A: Mathematical Methods Optimization II: Unconstrained Multivariable 1
More informationNumerical Optimization Professor Horst Cerjak, Horst Bischof, Thomas Pock Mat Vis-Gra SS09
Numerical Optimization 1 Working Horse in Computer Vision Variational Methods Shape Analysis Machine Learning Markov Random Fields Geometry Common denominator: optimization problems 2 Overview of Methods
More informationResearch Article Finding Global Minima with a Filled Function Approach for Non-Smooth Global Optimization
Hindawi Publishing Corporation Discrete Dynamics in Nature and Society Volume 00, Article ID 843609, 0 pages doi:0.55/00/843609 Research Article Finding Global Minima with a Filled Function Approach for
More informationA Conjugate Gradient Method with Inexact. Line Search for Unconstrained Optimization
Applie Mathematical Sciences, Vol. 9, 5, no. 37, 83-83 HIKARI Lt, www.m-hiari.com http://x.oi.or/.988/ams.5.4995 A Conuate Graient Metho with Inexact Line Search for Unconstraine Optimization * Mohame
More informationON THE CONNECTION BETWEEN THE CONJUGATE GRADIENT METHOD AND QUASI-NEWTON METHODS ON QUADRATIC PROBLEMS
ON THE CONNECTION BETWEEN THE CONJUGATE GRADIENT METHOD AND QUASI-NEWTON METHODS ON QUADRATIC PROBLEMS Anders FORSGREN Tove ODLAND Technical Report TRITA-MAT-203-OS-03 Department of Mathematics KTH Royal
More informationOn efficiency of nonmonotone Armijo-type line searches
Noname manuscript No. (will be inserted by the editor On efficiency of nonmonotone Armijo-type line searches Masoud Ahookhosh Susan Ghaderi Abstract Monotonicity and nonmonotonicity play a key role in
More informationNONSMOOTH VARIANTS OF POWELL S BFGS CONVERGENCE THEOREM
NONSMOOTH VARIANTS OF POWELL S BFGS CONVERGENCE THEOREM JIAYI GUO AND A.S. LEWIS Abstract. The popular BFGS quasi-newton minimization algorithm under reasonable conditions converges globally on smooth
More informationIterative common solutions of fixed point and variational inequality problems
Available online at www.tjnsa.com J. Nonlinear Sci. Appl. 9 (2016), 1882 1890 Research Article Iterative common solutions of fixed point and variational inequality problems Yunpeng Zhang a, Qing Yuan b,
More informationGRADIENT METHODS FOR LARGE-SCALE NONLINEAR OPTIMIZATION
GRADIENT METHODS FOR LARGE-SCALE NONLINEAR OPTIMIZATION By HONGCHAO ZHANG A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE
More informationAcceleration Method for Convex Optimization over the Fixed Point Set of a Nonexpansive Mapping
Noname manuscript No. will be inserted by the editor) Acceleration Method for Convex Optimization over the Fixed Point Set of a Nonexpansive Mapping Hideaki Iiduka Received: date / Accepted: date Abstract
More informationSuppose that the approximate solutions of Eq. (1) satisfy the condition (3). Then (1) if η = 0 in the algorithm Trust Region, then lim inf.
Maria Cameron 1. Trust Region Methods At every iteration the trust region methods generate a model m k (p), choose a trust region, and solve the constraint optimization problem of finding the minimum of
More informationBOUNDARY VALUE PROBLEMS OF A HIGHER ORDER NONLINEAR DIFFERENCE EQUATION
U.P.B. Sci. Bull., Series A, Vol. 79, Iss. 4, 2017 ISSN 1223-7027 BOUNDARY VALUE PROBLEMS OF A HIGHER ORDER NONLINEAR DIFFERENCE EQUATION Lianwu Yang 1 We study a higher order nonlinear difference equation.
More informationA Modified Polak-Ribière-Polyak Conjugate Gradient Algorithm for Nonsmooth Convex Programs
A Modified Polak-Ribière-Polyak Conjugate Gradient Algorithm for Nonsmooth Convex Programs Gonglin Yuan Zengxin Wei Guoyin Li Revised Version: April 20, 2013 Abstract The conjugate gradient (CG) method
More informationExtra-Updates Criterion for the Limited Memory BFGS Algorithm for Large Scale Nonlinear Optimization 1
journal of complexity 18, 557 572 (2002) doi:10.1006/jcom.2001.0623 Extra-Updates Criterion for the Limited Memory BFGS Algorithm for Large Scale Nonlinear Optimization 1 M. Al-Baali Department of Mathematics
More informationBulletin of the. Iranian Mathematical Society
ISSN: 1017-060X (Print) ISSN: 1735-8515 (Online) Bulletin of the Iranian Mathematical Society Vol. 41 (2015), No. 5, pp. 1259 1269. Title: A uniform approximation method to solve absolute value equation
More informationStatistics 580 Optimization Methods
Statistics 580 Optimization Methods Introduction Let fx be a given real-valued function on R p. The general optimization problem is to find an x ɛ R p at which fx attain a maximum or a minimum. It is of
More information1 Numerical optimization
Contents 1 Numerical optimization 5 1.1 Optimization of single-variable functions............ 5 1.1.1 Golden Section Search................... 6 1.1. Fibonacci Search...................... 8 1. Algorithms
More informationFALL 2018 MATH 4211/6211 Optimization Homework 4
FALL 2018 MATH 4211/6211 Optimization Homework 4 This homework assignment is open to textbook, reference books, slides, and online resources, excluding any direct solution to the problem (such as solution
More informationGradient methods exploiting spectral properties
Noname manuscript No. (will be inserted by the editor) Gradient methods exploiting spectral properties Yaui Huang Yu-Hong Dai Xin-Wei Liu Received: date / Accepted: date Abstract A new stepsize is derived
More informationOn Lagrange multipliers of trust-region subproblems
On Lagrange multipliers of trust-region subproblems Ladislav Lukšan, Ctirad Matonoha, Jan Vlček Institute of Computer Science AS CR, Prague Programy a algoritmy numerické matematiky 14 1.- 6. června 2008
More informationEXISTENCE OF SOLUTIONS FOR KIRCHHOFF TYPE EQUATIONS WITH UNBOUNDED POTENTIAL. 1. Introduction In this article, we consider the Kirchhoff type problem
Electronic Journal of Differential Equations, Vol. 207 (207), No. 84, pp. 2. ISSN: 072-669. URL: http://ejde.math.txstate.edu or http://ejde.math.unt.edu EXISTENCE OF SOLUTIONS FOR KIRCHHOFF TYPE EQUATIONS
More informationModification of the Wolfe line search rules to satisfy the descent condition in the Polak-Ribière-Polyak conjugate gradient method
Laboratoire d Arithmétique, Calcul formel et d Optimisation UMR CNRS 6090 Modification of the Wolfe line search rules to satisfy the descent condition in the Polak-Ribière-Polyak conjugate gradient method
More informationOn the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method
Optimization Methods and Software Vol. 00, No. 00, Month 200x, 1 11 On the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method ROMAN A. POLYAK Department of SEOR and Mathematical
More informationFinite Convergence for Feasible Solution Sequence of Variational Inequality Problems
Mathematical and Computational Applications Article Finite Convergence for Feasible Solution Sequence of Variational Inequality Problems Wenling Zhao *, Ruyu Wang and Hongxiang Zhang School of Science,
More informationA class of Smoothing Method for Linear Second-Order Cone Programming
Columbia International Publishing Journal of Advanced Computing (13) 1: 9-4 doi:1776/jac1313 Research Article A class of Smoothing Method for Linear Second-Order Cone Programming Zhuqing Gui *, Zhibin
More informationA new initial search direction for nonlinear conjugate gradient method
International Journal of Mathematis Researh. ISSN 0976-5840 Volume 6, Number 2 (2014), pp. 183 190 International Researh Publiation House http://www.irphouse.om A new initial searh diretion for nonlinear
More informationNumerical Optimization of Partial Differential Equations
Numerical Optimization of Partial Differential Equations Part I: basic optimization concepts in R n Bartosz Protas Department of Mathematics & Statistics McMaster University, Hamilton, Ontario, Canada
More informationOn the Midpoint Method for Solving Generalized Equations
Punjab University Journal of Mathematics (ISSN 1016-56) Vol. 40 (008) pp. 63-70 On the Midpoint Method for Solving Generalized Equations Ioannis K. Argyros Cameron University Department of Mathematics
More informationHybrid particle swarm algorithm for solving nonlinear constraint. optimization problem [5].
Hybrid particle swarm algorithm for solving nonlinear constraint optimization problems BINGQIN QIAO, XIAOMING CHANG Computers and Software College Taiyuan University of Technology Department of Economic
More informationChapter 10 Conjugate Direction Methods
Chapter 10 Conjugate Direction Methods An Introduction to Optimization Spring, 2012 1 Wei-Ta Chu 2012/4/13 Introduction Conjugate direction methods can be viewed as being intermediate between the method
More informationMonotone variational inequalities, generalized equilibrium problems and fixed point methods
Wang Fixed Point Theory and Applications 2014, 2014:236 R E S E A R C H Open Access Monotone variational inequalities, generalized equilibrium problems and fixed point methods Shenghua Wang * * Correspondence:
More informationResearch Article Strong Convergence of a Projected Gradient Method
Applied Mathematics Volume 2012, Article ID 410137, 10 pages doi:10.1155/2012/410137 Research Article Strong Convergence of a Projected Gradient Method Shunhou Fan and Yonghong Yao Department of Mathematics,
More informationUnconstrained optimization
Chapter 4 Unconstrained optimization An unconstrained optimization problem takes the form min x Rnf(x) (4.1) for a target functional (also called objective function) f : R n R. In this chapter and throughout
More information5 Quasi-Newton Methods
Unconstrained Convex Optimization 26 5 Quasi-Newton Methods If the Hessian is unavailable... Notation: H = Hessian matrix. B is the approximation of H. C is the approximation of H 1. Problem: Solve min
More informationMS&E 318 (CME 338) Large-Scale Numerical Optimization
Stanford University, Management Science & Engineering (and ICME) MS&E 318 (CME 338) Large-Scale Numerical Optimization 1 Origins Instructor: Michael Saunders Spring 2015 Notes 9: Augmented Lagrangian Methods
More informationResearch Article A Two-Step Matrix-Free Secant Method for Solving Large-Scale Systems of Nonlinear Equations
Applied Mathematics Volume 2012, Article ID 348654, 9 pages doi:10.1155/2012/348654 Research Article A Two-Step Matrix-Free Secant Method for Solving Large-Scale Systems of Nonlinear Equations M. Y. Waziri,
More informationHigher-Order Methods
Higher-Order Methods Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. PCMI, July 2016 Stephen Wright (UW-Madison) Higher-Order Methods PCMI, July 2016 1 / 25 Smooth
More informationMaria Cameron. f(x) = 1 n
Maria Cameron 1. Local algorithms for solving nonlinear equations Here we discuss local methods for nonlinear equations r(x) =. These methods are Newton, inexact Newton and quasi-newton. We will show that
More informationTHE RELATIONSHIPS BETWEEN CG, BFGS, AND TWO LIMITED-MEMORY ALGORITHMS
Furman University Electronic Journal of Undergraduate Mathematics Volume 12, 5 20, 2007 HE RELAIONSHIPS BEWEEN CG, BFGS, AND WO LIMIED-MEMORY ALGORIHMS ZHIWEI (ONY) QIN Abstract. For the solution of linear
More informationMath 164: Optimization Barzilai-Borwein Method
Math 164: Optimization Barzilai-Borwein Method Instructor: Wotao Yin Department of Mathematics, UCLA Spring 2015 online discussions on piazza.com Main features of the Barzilai-Borwein (BB) method The BB
More informationModification of the Wolfe line search rules to satisfy the descent condition in the Polak-Ribière-Polyak conjugate gradient method
Laboratoire d Arithmétique, Calcul formel et d Optimisation UMR CNRS 6090 Modification of the Wolfe line search rules to satisfy the descent condition in the Polak-Ribière-Polyak conjugate gradient method
More informationDeterministic convergence of conjugate gradient method for feedforward neural networks
Deterministic convergence of conjugate gradient method for feedforard neural netorks Jian Wang a,b,c, Wei Wu a, Jacek M. Zurada b, a School of Mathematical Sciences, Dalian University of Technology, Dalian,
More informationA new Newton like method for solving nonlinear equations
DOI 10.1186/s40064-016-2909-7 RESEARCH Open Access A new Newton like method for solving nonlinear equations B. Saheya 1,2, Guo qing Chen 1, Yun kang Sui 3* and Cai ying Wu 1 *Correspondence: yksui@sina.com
More informationImproving L-BFGS Initialization for Trust-Region Methods in Deep Learning
Improving L-BFGS Initialization for Trust-Region Methods in Deep Learning Jacob Rafati http://rafati.net jrafatiheravi@ucmerced.edu Ph.D. Candidate, Electrical Engineering and Computer Science University
More information