
Mathematical Problems in Engineering, Volume 2015, Article ID 352524, 8 pages
http://dx.doi.org/10.1155/2015/352524

Research Article
A New Conjugate Gradient Algorithm with Sufficient Descent Property for Unconstrained Optimization

XiaoPing Wu,1 LiYing Liu,2 FengJie Xie,1 and YongFei Li1

1 School of Economic Management, Xi'an University of Posts and Telecommunications, Shaanxi, Xi'an 710061, China
2 School of Mathematics Science, Liaocheng University, Shandong, Liaocheng 252000, China

Correspondence should be addressed to XiaoPing Wu; wuxiaoping1978@126.com

Received 6 May 2015; Revised 24 September 2015; Accepted 29 September 2015

Academic Editor: Masoud Hajarian

Copyright 2015 XiaoPing Wu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

A new nonlinear conjugate gradient formula, which satisfies the sufficient descent condition, for solving unconstrained optimization problems is proposed. The global convergence of the algorithm is established under the weak Wolfe line search. Some numerical experiments show that the new WWPNPRP+ algorithm is competitive with the SWPPRP+ algorithm, the SWPHS+ algorithm, and the WWPDYHS+ algorithm.

1. Introduction

In this paper, we consider the following unconstrained optimization problem:

  \min \{ f(x) \mid x \in R^n \},  (1)

where f(x): R^n \to R is a twice continuously differentiable function whose gradient is denoted by g(x): R^n \to R^n. The iterative formula is given by

  x_{k+1} = x_k + t_k d_k,  (2)

where

  d_k = \begin{cases} -g_k, & \text{if } k = 1, \\ -g_k + \beta_k d_{k-1}, & \text{if } k \ge 2, \end{cases}  (3)

t_k is a step size which is computed by carrying out a line search, \beta_k is a scalar, and g_k denotes g(x_k). There are at least six famous formulas for \beta_k, which are given below:

  \beta_k^{HS} = \frac{g_k^T (g_k - g_{k-1})}{(g_k - g_{k-1})^T d_{k-1}}, \qquad
  \beta_k^{FR} = \frac{g_k^T g_k}{g_{k-1}^T g_{k-1}},
  \qquad
  \beta_k^{PRP} = \frac{g_k^T (g_k - g_{k-1})}{g_{k-1}^T g_{k-1}}, \qquad
  \beta_k^{CD} = -\frac{g_k^T g_k}{d_{k-1}^T g_{k-1}},
  \qquad
  \beta_k^{LS} = -\frac{g_k^T (g_k - g_{k-1})}{d_{k-1}^T g_{k-1}}, \qquad
  \beta_k^{DY} = \frac{g_k^T g_k}{(g_k - g_{k-1})^T d_{k-1}}.  (4)

To establish the global convergence results of the above conjugate gradient (CG) methods, it is usually required that the step size t_k satisfy some line search conditions, such as the weak Wolfe-Powell (WWP) line search

  f(x_k + t_k d_k) \le f(x_k) + \delta t_k g_k^T d_k,  (5)

  g(x_k + t_k d_k)^T d_k \ge \sigma g_k^T d_k,  (6)

and the strong Wolfe-Powell (SWP) line search, which consists of (5) and

  |g(x_k + t_k d_k)^T d_k| \le -\sigma g_k^T d_k,  (7)

where \delta \in (0, 1/2) and \sigma \in (\delta, 1). In what follows, Wolfe-Powell is abbreviated to Wolfe.
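For concreteness, the weak Wolfe conditions (5)-(6) can be realized by a standard bracketing/bisection search. The sketch below is ours, not the authors' code (their experiments, reported in Section 4, were run in MATLAB); the defaults delta = 0.01 and sigma = 0.1 are the parameter values used in Section 4, and f and grad are assumed to be callables on NumPy vectors.

import numpy as np

def weak_wolfe_line_search(f, grad, x, d, delta=0.01, sigma=0.1,
                           t0=1.0, max_iter=60):
    # Bisection/doubling search for a step t satisfying (5) and (6).
    # Assumes d is a descent direction, i.e. grad(x).dot(d) < 0.
    fx = f(x)
    gTd = grad(x).dot(d)
    lo, hi, t = 0.0, np.inf, t0
    for _ in range(max_iter):
        if f(x + t * d) > fx + delta * t * gTd:
            hi = t                                  # Armijo condition (5) fails: shrink
            t = 0.5 * (lo + hi)
        elif grad(x + t * d).dot(d) < sigma * gTd:
            lo = t                                  # curvature condition (6) fails: grow
            t = 2.0 * t if np.isinf(hi) else 0.5 * (lo + hi)
        else:
            return t                                # both (5) and (6) hold
    return t                                        # fallback: best step found

The same routine accepts a strong Wolfe test (7) by replacing the curvature branch with an absolute-value check; we keep the weak form since that is what Algorithm 3 below requires.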

Considerable attention has been paid to the global convergence behavior of the above methods. Zoutendijk [1] proved that the FR method with exact line search is globally convergent. Al-Baali [2] extended this result to the strong Wolfe line search conditions. In [3], Dai and Yuan proposed the DY method, which produces a descent search direction at every iteration and converges globally provided that the line search satisfies the weak Wolfe conditions. In [4], Wei et al. discussed the global convergence of the PRP conjugate gradient method (CGM) with inexact line search for nonconvex unconstrained optimization. Recently, based on [5-7], Jiang et al. [8] proposed a hybrid CGM with

  \beta_k^{JHJ} = \frac{\|g_k\|^2 - \max \{0, (\|g_k\|/\|d_{k-1}\|) g_k^T d_{k-1}, (\|g_k\|/\|g_{k-1}\|) g_k^T g_{k-1}\}}{d_{k-1}^T (g_k - g_{k-1})}.  (8)

Under the Wolfe line search, the method possesses global convergence and efficient numerical performance.

In some studies of the conjugate gradient methods, the sufficient descent condition

  g_k^T d_k \le -c \|g_k\|^2, \quad c > 0,  (9)

is often used to analyze the global convergence of a nonlinear conjugate gradient method with inexact line search techniques. For instance, Touati-Ahmed and Storey [9], Al-Baali [2], Gilbert and Nocedal [10], and Hu and Storey [11] hinted that the sufficient descent condition may be crucial for conjugate gradient methods. Unfortunately, this condition is hard to enforce. It has been shown that the PRP method with the strong Wolfe-Powell line search does not ensure this condition at each iteration. Grippo and Lucidi [12] therefore sought line searches which ensure the sufficient descent condition and presented a new line search with this property; the convergence of the PRP method with this line search was established. Yu et al. [13] analyzed the global convergence of a modified PRP CGM with the sufficient descent property. Gilbert and Nocedal [10] gave another way to discuss the global convergence of the PRP method with the weak Wolfe line search. By using a complicated line search, they were able to establish the global convergence result of the PRP and HS methods by restricting the parameter \beta_k in (3) to be nonnegative; that is,

  \beta_k^{+} = \max \{0, \beta_k^{PRP}\},  (10)

which yields a globally convergent CG method that is also computationally efficient [14]. In spite of the numerical efficiency of the PRP method, as an important defect, the method lacks the following descent property:

  g_k^T d_k \le 0, \quad \forall k \ge 0,  (11)

even for uniformly convex objective functions [15]. This motivated researchers to pay much attention to finding extensions of the PRP method with the descent property. In this context, Yu et al. [16] proposed a modified form of \beta_k^{PRP} as follows:

  \beta_k^{DPRP} = \beta_k^{PRP} - C \frac{\|y_{k-1}\|^2}{\|g_{k-1}\|^4} g_k^T d_{k-1},  (12)

where y_{k-1} = g_k - g_{k-1}, with a constant C > 1/4, leading to a CG method with the sufficient descent property. Dai and Kou [17] proposed a family of conjugate gradient methods and an improved Wolfe line search; meanwhile, to accelerate the algorithm, an adaptive restart along negative gradients was introduced. Jiang and Jian [18] proposed two modified CGMs with disturbance factors based on a variant of the PRP method; the two proposed methods not only generate a sufficient descent direction at each iteration but also converge globally for nonconvex minimization if the strong Wolfe line search is used. In [19], a new hybrid conjugate gradient method was presented for unconstrained optimization. The proposed method generates descent directions at every iteration; moreover, this property is independent of the step-length line search. Under the Wolfe line search, the proposed method possesses global convergence [19].

The main purpose of this paper is to design an efficient algorithm which possesses the properties of global convergence, sufficient descent, and good numerical results. In the next section, we present a new CG formula and give its properties. In Section 3, the new algorithm and its global convergence result are established. To test and compare the numerical performance of the proposed method, in the last part of this work, a large amount of medium-scale numerical experiments are reported in tables and performance profiles.

2. The Formula and Its Property

Because sufficient descent condition (9) is a very useful and important property for analyzing the global convergence of CG methods, we hope to find \beta_k such that d_k satisfies (9). In the following, we propose a sequence \{\beta_k\} and prove that it has this property. Firstly, we give a definition of a descent sequence (or a sufficient descent sequence): a sequence \{\beta_k\} is called a descent sequence (or a sufficient descent sequence) for the CG methods if there exists a constant \tau \in (0, 1) (or \tau \in [0, 1)) such that, for all k \ge 2,

  \beta_k g_k^T d_{k-1} \le \tau \|g_k\|^2.  (13)

By using (3), we have, for all k \ge 2,

  g_k^T d_k = -\|g_k\|^2 + \beta_k g_k^T d_{k-1}.  (14)
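As a quick numerical illustration of (9), (13), and (14), a candidate \beta_k can be checked as follows. The helper name and default constants are ours, not the paper's; with the Section 4 parameters (lambda = 0.3, mu1 = 1, mu2 = 2), Proposition 2 below gives c = lambda(1 - mu1/mu2) = 0.15.

def descent_checks(g, d_prev, beta, tau=0.5, c=0.15):
    # g: current gradient g_k; d_prev: previous direction d_{k-1} (NumPy vectors).
    gg = g.dot(g)                        # ||g_k||^2
    gTd_prev = g.dot(d_prev)
    gTd = -gg + beta * gTd_prev          # identity (14) for g_k^T d_k
    is_descent_seq = beta * gTd_prev <= tau * gg    # condition (13)
    is_suff_descent = gTd <= -c * gg                # condition (9)
    return is_descent_seq, is_suff_descent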

From the above discussion, we require that

  -\|g_k\|^2 + \beta_k g_k^T d_{k-1} < 0.  (15)

Note that (15) holds whenever (13) is satisfied. In [20], the authors proposed a variation of the FR formula:

  \beta_k^{VFR}(\mu) = \frac{\mu_1 \|g_k\|^2}{\mu_2 |g_k^T d_{k-1}| + \mu_3 \|g_{k-1}\|^2},  (16)

where \mu_1 \in (0, +\infty), \mu_2 \in [\mu_1 + \varepsilon, +\infty), \mu_3 \in (0, +\infty), and \varepsilon is any given positive constant. It is easy to prove that \{\beta_k^{VFR}\} is a descent sequence (with \tau = \mu_1/\mu_2) for CG methods if g_{k-1}^T d_{k-1} < 0. Formula (16) thus shows that there exist nonlinear conjugate gradient formulae possessing the sufficient descent property without any line search. Another such formula is

  \beta_k^{WYL} = \frac{g_k^T \left( g_k - (\|g_k\|/\|g_{k-1}\|) g_{k-1} \right)}{g_{k-1}^T g_{k-1}}.  (17)

By restricting the parameter \sigma \le 1/4 under the SWP line search condition, the WYL method possesses the sufficient descent condition [21]. In [22], the authors designed the following variation of the PRP formula, which possesses the sufficient descent property without any line search:

  \beta_k^{N}(\mu) = \begin{cases} \dfrac{\|g_k\|^2 - |g_k^T d_{k-1}|}{\mu |g_k^T d_{k-1}| + \|g_{k-1}\|^2}, & \text{if } \|g_k\|^2 \ge |g_k^T d_{k-1}|, \\ 0, & \text{otherwise}, \end{cases}  (18)

in which \mu \ge 1.

Motivated by the ideas in [20, 22], which obtain sufficient descent without any line search, and taking into account the good convergence properties of [10] and the good numerical performance in [14], we propose a new class of formulas for \beta_k as follows:

  \beta_k^{NPRP} = \frac{\lambda \mu_1 \|g_k\|^2 + (1 - \lambda) \mu_1 \left( \|g_k\|^2 - |g_k^T g_{k-1}| \right)}{\mu_2 |g_k^T d_{k-1}| + \mu_1 \|g_{k-1}\|^2},  (19)

where the definitions of \mu_1, \mu_2 are the same as those in formula (16) and \lambda \in (0, 1). In order to ensure the nonnegativity of the parameter \beta_k, we define

  \beta_k^{NPRP+} = \max \{0, \beta_k^{NPRP}\}.  (20)

Thus, if a negative value of \beta_k^{NPRP} occurs, this strategy will restart the iteration along the steepest descent direction. The following two propositions show that \{\beta_k^{NPRP+}\} is a descent sequence, so that d_k makes sufficient descent condition (9) hold.

Proposition 1. Suppose that \beta_k is defined by (19)-(20); then one has

  \beta_k^{NPRP+} \le \bar{\sigma} \frac{\|g_k\|^2}{|g_k^T d_{k-1}|},  (21)

where 0 < \bar{\sigma} = \mu_1/\mu_2 < 1.

Proof. It is clear that inequality (21) holds when \beta_k^{NPRP+} = 0. Now we consider the case where \beta_k^{NPRP+} = \beta_k^{NPRP}. We have

  \beta_k^{NPRP+} = \frac{\lambda \mu_1 \|g_k\|^2 + (1 - \lambda) \mu_1 (\|g_k\|^2 - |g_k^T g_{k-1}|)}{\mu_2 |g_k^T d_{k-1}| + \mu_1 \|g_{k-1}\|^2}
  \le \frac{\lambda \mu_1 \|g_k\|^2 + (1 - \lambda) \mu_1 \|g_k\|^2}{\mu_2 |g_k^T d_{k-1}|}
  = \bar{\sigma} \frac{\|g_k\|^2}{|g_k^T d_{k-1}|}.  (22)

Hence \beta_k^{NPRP+} makes (21) hold. Furthermore, \{\beta_k^{NPRP+}\} is a descent sequence without any line search.

Proposition 2. Suppose that \beta_k is defined by (19)-(20); then d_k satisfies the sufficient descent condition (9) for all k, where c = \lambda(1 - \mu_1/\mu_2).

Proof. For any k > 1, suppose that g_{k-1}^T d_{k-1} < 0. If \beta_k^{NPRP+} = 0, then d_k = -g_k, so we have

  g_k^T d_k = -\|g_k\|^2 \le -c \|g_k\|^2,  (23)

where c = \lambda(1 - \mu_1/\mu_2) < 1. Otherwise, from the definition of \beta_k^{NPRP+}, we can obtain

  g_k^T d_k = g_k^T \left[ -g_k + \beta_k^{NPRP} d_{k-1} \right]
  \le -\|g_k\|^2 + \frac{\lambda \mu_1 \|g_k\|^2}{\mu_2 |g_k^T d_{k-1}|} |g_k^T d_{k-1}| + \frac{(1 - \lambda) \mu_1 \|g_k\|^2}{\mu_2 |g_k^T d_{k-1}|} |g_k^T d_{k-1}|
  \le -\|g_k\|^2 + \lambda \frac{\mu_1}{\mu_2} \|g_k\|^2 + (1 - \lambda) \|g_k\|^2
  = -\lambda \|g_k\|^2 + \lambda \frac{\mu_1}{\mu_2} \|g_k\|^2
  = -\lambda \left( 1 - \frac{\mu_1}{\mu_2} \right) \|g_k\|^2 = -c \|g_k\|^2.  (24)

Since g_1^T d_1 = -\|g_1\|^2 < 0, we can deduce that d_k makes sufficient descent condition (9) hold for all k.

By the proof of Proposition 2, we can see that the condition \mu_1/\mu_2 < 1 is necessary; otherwise, the sufficient descent condition cannot hold.
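In code, (19)-(20) amount to a few dot products. The following is a minimal sketch under the reconstruction above (the function name and argument conventions are ours; the defaults lam = 0.3, mu1 = 1, mu2 = 2 are the parameter values reported in Section 4 and satisfy mu1/mu2 < 1):

def beta_nprp_plus(g, g_prev, d_prev, lam=0.3, mu1=1.0, mu2=2.0):
    # Numerator of (19): lam mixes an FR-type term with a PRP-type term.
    num = (lam * mu1 * g.dot(g)
           + (1.0 - lam) * mu1 * (g.dot(g) - abs(g.dot(g_prev))))
    # Denominator as in (16)/(19); mu2 > mu1 guarantees mu1/mu2 < 1.
    den = mu2 * abs(g.dot(d_prev)) + mu1 * g_prev.dot(g_prev)
    return max(0.0, num / den)           # truncation (20)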

3. Global Convergence

In this section, we propose an algorithm related to \beta_k^{NPRP+} and then study the global convergence property of this algorithm. Firstly, we make the following two assumptions, which have been widely used in the literature to analyze the global convergence of CG methods with inexact line searches.

Assumption A. The level set

  \Omega = \{x \in R^n \mid f(x) \le f(x_1)\}  (25)

is bounded.

Assumption B. The gradient g(x) is Lipschitz continuous; that is, there exists a constant L > 0 such that, for any x, y \in \Omega,

  \|g(x) - g(y)\| \le L \|x - y\|.  (26)

Now we give the algorithm.

Algorithm 3.
Step 0. Given x_1 \in R^n, set d_1 = -g_1 and k = 1. If \|g_1\| = 0, then stop. Otherwise go to Step 1.
Step 1. Find t_k > 0 satisfying weak Wolfe conditions (5) and (6).
Step 2. Let x_{k+1} = x_k + t_k d_k and g_{k+1} = g(x_{k+1}). If \|g_{k+1}\| = 0, then stop. Otherwise go to Step 3.
Step 3. Compute \beta_{k+1}^{NPRP+} by formulas (19) and (20). Then generate d_{k+1} by (3).
Step 4. Set k := k + 1; go to Step 1.
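Putting the pieces together, Algorithm 3 can be rendered as the short loop below, reusing weak_wolfe_line_search and beta_nprp_plus from the earlier sketches. This is an illustrative NumPy sketch, not the authors' MATLAB implementation; the tolerance eps = 1e-6 follows the stopping criterion used in Section 4 in place of the exact test \|g_k\| = 0.

import numpy as np

def nprp_cg(f, grad, x1, eps=1e-6, max_iter=10000):
    x = np.asarray(x1, dtype=float)
    g = grad(x)
    d = -g                                           # Step 0: d_1 = -g_1
    for k in range(1, max_iter + 1):
        if np.linalg.norm(g) <= eps:                 # stopping test
            return x, k
        t = weak_wolfe_line_search(f, grad, x, d)    # Step 1: (5)-(6)
        x_new = x + t * d                            # Step 2
        g_new = grad(x_new)
        beta = beta_nprp_plus(g_new, g, d)           # Step 3: (19)-(20)
        d = -g_new + beta * d                        # direction update (3)
        x, g = x_new, g_new                          # Step 4
    return x, max_iter

For example, nprp_cg(lambda x: x.dot(x), lambda x: 2 * x, np.ones(10)) converges to the origin in a few iterations.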

Since \{f(x_k)\} is a decreasing sequence, it is clear that the sequence \{x_k\} is contained in \Omega, and there exists a constant f^* such that

  \lim_{k \to \infty} f(x_k) = f^*.  (27)

By using Assumptions A and B, we can deduce that there exists M > 0 such that

  \|g(x)\| \le M, \quad \forall x \in \Omega.  (28)

The following important result was obtained by Zoutendijk [1] and Wolfe [23, 24].

Lemma 4. Suppose f(x) is bounded below and g(x) satisfies the Lipschitz condition. Consider any iteration method of the form (2), where d_k satisfies d_k^T g_k < 0 and t_k is obtained by the weak Wolfe line search. Then

  \sum_{k=1}^{\infty} \frac{(g_k^T d_k)^2}{\|d_k\|^2} < +\infty.  (29)

The following lemma was obtained by Dai and Yuan [25].

Lemma 5. Assume that a positive series \{a_i\} satisfies the following inequality for all k:

  \sum_{i=1}^{k} a_i \ge l + c k,  (30)

where l > 0 and c > 0 are constants. Then one has

  \sum_{i=1}^{\infty} \frac{a_i^2}{i} = +\infty, \qquad \sum_{i=1}^{\infty} \frac{a_i^2}{\sum_{j=1}^{i} a_j} = +\infty.  (31)

Theorem 6. Suppose that Assumptions A and B hold and \{x_k\} is a sequence generated by Algorithm 3. Then one has

  \liminf_{k \to \infty} \|g_k\| = 0.  (32)

Proof. Equation (3) indicates that, for all k \ge 2,

  d_k + g_k = \beta_k d_{k-1}.  (33)

Squaring both sides of (33), we obtain

  \|d_k\|^2 = -\|g_k\|^2 - 2 g_k^T d_k + \beta_k^2 \|d_{k-1}\|^2.  (34)

Suppose that \beta_k^{NPRP+} = \beta_k^{NPRP} in (20). Then

  \|d_k\|^2 = -\|g_k\|^2 - 2 g_k^T d_k + \left( \frac{\lambda \mu_1 \|g_k\|^2 + (1-\lambda)\mu_1 (\|g_k\|^2 - |g_k^T g_{k-1}|)}{\mu_2 |g_k^T d_{k-1}| + \mu_1 \|g_{k-1}\|^2} \right)^2 \|d_{k-1}\|^2
  \le -\|g_k\|^2 - 2 g_k^T d_k + \frac{\|g_k\|^4}{\|g_{k-1}\|^4} \|d_{k-1}\|^2.  (35)

Dividing both sides of (35) by \|g_k\|^4, we have

  h_k \le h_{k-1} - \frac{1}{\|g_k\|^2} + \frac{2 \gamma_k}{\|g_k\|^2},  (36)

where h_k = \|d_k\|^2 / \|g_k\|^4 and \gamma_k = -g_k^T d_k / \|g_k\|^2. Note that h_1 = 1/\|g_1\|^2 and \gamma_1 = 1. It follows from (36) that

  h_k \le -\sum_{i=1}^{k} \frac{1}{\|g_i\|^2} + 2 \sum_{i=1}^{k} \frac{\gamma_i}{\|g_i\|^2}.  (37)

Suppose that conclusion (32) does not hold. Then there exists a positive scalar \varepsilon such that, for all k,

  \|g_k\| \ge \varepsilon.  (38)

Thus, it follows from (28) and (38) that

  h_k \le -\frac{k}{M^2} + \frac{2}{\varepsilon^2} \sum_{i=1}^{k} \gamma_i.  (39)

Further, we have

  h_k \le \frac{2}{\varepsilon^2} \sum_{i=1}^{k} \gamma_i.  (40)

On the other hand, using h_k \ge 0, relation (39) implies that

  \sum_{i=1}^{k} \gamma_i \ge \frac{\varepsilon^2 k}{2 M^2}.  (41)

Using Lemma 5 and (40), it follows that

  \sum_{k=1}^{\infty} \frac{(g_k^T d_k)^2}{\|d_k\|^2} = \sum_{k=1}^{\infty} \frac{\gamma_k^2}{h_k} = +\infty,  (42)

which contradicts the Zoutendijk condition (29). This shows that (32) holds. The proof of the theorem is complete.

From the proof of the above theorem, we can conclude that any conjugate gradient method with the formula \beta_k^{NPRP+} and a step size technique which ensures that Zoutendijk condition (29) holds is globally convergent. In particular, the formula \beta_k^{NPRP+} with the weak Wolfe conditions generates a globally convergent method.

4. Numerical Results

All the methods above are tested on 56 test problems, where the first 48 test problems (from arwhead to woods) in Table 1 are taken from the CUTE library in Bongartz et al. [26] and the others are taken from Moré et al. [27]. Here WWPDYHS denotes the hybrid method with

  \beta_k = \max \{0, \min \{\beta_k^{HS}, \beta_k^{DY}\}\},  (43)

which is generated by Grippo and Lucidi [12].

All codes were written in Matlab 7.5 and run on an HP computer with 1.87 GB RAM and the Windows XP operating system. The parameters are \sigma = 0.1, \delta = 0.01, \mu_1 = 1, \mu_2 = 2, and \lambda = 0.3. The iteration is stopped if the criterion \|g_k\| \le \varepsilon = 10^{-6} is satisfied. In Table 1, "Name" denotes the abbreviation of the test problem, n denotes the dimension of the test problem, Itr/NF/NG denote the numbers of iterations, function evaluations, and gradient evaluations, respectively, and Tcpu denotes the CPU time for computing a test problem (units: seconds).

On the other hand, to show the performance differences clearly between the SWPHS+, SWPPRP+, WWPNPRP+, and WWPDYHS methods, we adopt the performance profiles given by Dolan and Moré [28] to compare the performance according to Itr, NF, NG, and Tcpu, respectively.

Figure 1: Performance profile on Tcpu.
Figure 2: Performance profile on NF.

Benchmark results are generated by running a solver on a set P of problems and recording information of interest such as NF and Tcpu. Let S be the set of solvers in comparison. Assume that S consists of n_s solvers and P consists of n_p problems. For each problem p \in P and solver s \in S, denote by t_{p,s} the computing time (or the number of function evaluations, etc.) required to solve problem p by solver s; the comparison between different solvers is based on the performance ratio defined by

  r_{p,s} = \frac{t_{p,s}}{\min \{t_{p,s} : s \in S\}}.  (44)

Assume that a large enough parameter r_M \ge r_{p,s} for all p, s is chosen, and r_{p,s} = r_M if and only if solver s does not solve problem p. Define

  \rho_s(\tau) = \frac{1}{n_p} \, \mathrm{size} \{p \in P : \log r_{p,s} \le \tau\},  (45)

where size A means the number of elements in set A; then \rho_s(\tau) is the probability for solver s \in S that a performance ratio r_{p,s} is within a factor \tau \in R.
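The profile curves \rho_s(\tau) of (44)-(45) are easy to compute from a matrix of t_{p,s} values. Below is a sketch (all names are ours, not the paper's plotting code); it assumes NaN encodes the failures marked F in Table 1 and, like (45), uses a logarithmic scale for the ratios.

import numpy as np

def performance_profiles(T, tau_grid):
    # T[p, s] = t_{p,s} (e.g. Tcpu) for problem p and solver s; NaN = failure.
    T = np.asarray(T, dtype=float)
    best = np.nanmin(T, axis=1, keepdims=True)   # per-problem best over solvers
    r = T / best                                 # performance ratios (44)
    r_M = 2.0 * np.nanmax(r)                     # any r_M >= all ratios works
    r = np.where(np.isnan(r), r_M, r)            # failed runs get ratio r_M
    # rho_s(tau) = fraction of problems with log r_{p,s} <= tau, as in (45)
    return np.array([[np.mean(np.log(r[:, s]) <= tau) for tau in tau_grid]
                     for s in range(T.shape[1])])

Plotting each row of the returned array against tau_grid yields curves in the style of Figures 1-4.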

Table 1: Numerical test report (entries are Itr/NF/NG/Tcpu; F/F/F/F means the method failed on the problem).

Name/n | SWPHS+ | SWPPRP+ | WWPNPRP+ | WWPDYHS
arwhead 1000 | 0/293/98/0.078 | 204/5484/606/.462 | 6/92/6/0.026 | 23/275/32/0.073
arwhead 2000 | 6/44/39/0.84 | 26/6824/825/2.753 | 6/97/6/0.035 | 23/325/43/0.35
arwhead 10000 | 3/952/30/.554 | 366/324/339/7.892 | 8/54/3/0.29 | 8/38/2/0.206
chainwoo 1000 | /329/9/0.058 | 28/3745/70/0.632 | 26/305/43/0.053 | 32/373/44/0.062
chnrosnb 20 | 3/72/3/0.004 | 3/72/3/0.004 | 3/72/7/0.004 | 3/72/3/0.004
chnrosnb 50 | 6/49/37/0.009 | 8/88/58/0.03 | 5/89/4/0.005 | 5/92/2/0.006
cosine 1000 | 3/368/53/0.236 | 3/3/38/0.59 | 30/2/32/0.048 | 400/246/534/3.743
cragglvy 5000 | //2/0.026 | //2/0.026 | //2/0.026 | //2/0.026
cragglvy 10000 | //2/0.027 | //2/0.027 | //2/0.026 | //2/0.026
dixmaana 3000 | 348/42287/953/85.754 | 5/3/4/0.535 | 2/9/3/0.28 | 6/4/6/0.45
dixmaanb 3000 | 3/47/23/0.253 | 5/70/5/0.272 | 7/0/7/0.066 | 6/9/6/0.058
dixmaanc 3000 | 4/49/3/0.288 | 5/7/8/0.286 | 8/2/8/0.077 | 9//9/0.086
dixmaand 3000 | 44/422/686/7.85 | 5/72/2/0.289 | /6//0.09 | 2/3/2/0.6
dixmaani 3000 | F/F/F/F | 4/98/33/0.403 | F/F/F/F | F/F/F/F
dixmaanj 3000 | F/F/F/F | 6/69/23/0.305 | 086/602/676/2.069 | 522/420/2937/8.554
dixmaanl 3000 | 3/355/34/.506 | 3/7/8/0.272 | F/F/F/F | F/F/F/F
dqdrtic 1000 | F/F/F/F | 804/5595/22587/6.84 | 205/470/205/0.205 | 7/484/7/0.064
dqdrtic 3000 | F/F/F/F | 805/56040/22893/3.054 | 82/30/82/0.334 | 96/648/96/0.65
dqdrtic 5000 | F/F/F/F | 805/56078/23060/9.535 | 42/029/42/0.390 | 79/530/79/0.204
dqrtic 6000 | /63/9/0.237 | /63/9/0.238 | /27/2/0.085 | /27/2/0.085
dqrtic 5000 | /30/2/0.234 | /30/2/0.235 | /30/2/0.234 | /30/2/0.235
dqrtic 20000 | /3/2/0.325 | /3/2/0.323 | /3/2/0.325 | /3/2/0.324
edensch 1000 | 297/39852/832/35.054 | 202/3704/7643/2.938 | 34/8/39/0.7 | 58/263/77/0.265
engval 3000 | 4/00/33/0.033 | 608/8698/7376/6.380 | 6/54/7/0.08 | 7/2/33/0.042
engval 10000 | 4/00/28/0.092 | 580/7809/6994/7.446 | 7/58/0/0.056 | 9/46/45/0.55
engval 20000 | 4/00/29/0.84 | 565/7366/687/33.92 | 7/58/0/0. | 20/52/28/0.304
errinros 50 | 49/4029/445/0.32 | 43/2363/4708/0.986 | F/F/F/F | F/F/F/F
genhumps 2000 | 32/566/245/0.408 | 22/29/9/0.237 | /02/30/0.07 | 2/97/25/0.085
genhumps 5000 | 8/78/6/.83 | 4/282/2/.767 | 22/7/37/0.65 | 3/87/56/.229
genhumps 20000 | 4/78/23/0.69 | 5/02/36/0.928 | 26/88/73/.28 | 4/37/0/0.275
genrose 10000 | 209/369/2775/35.424 | 2/63/4/0.056 | 789/7297/792/7.23 | 263/2365/265/2.333
genrose 20000 | 97/6000/2284/7.729 | 2/64/7/0.9 | 53/4754/55/9.346 | 297/2676/298/5.283
nondia 10000 | /4/0/0.028 | /4/0/0.028 | /2/2/0.05 | /2/2/0.03
nondia 5000 | /4/0/0.053 | /4/0/0.044 | /22/2/0.026 | /22/2/0.02
nondia 20000 | /42/9/0.063 | /42/9/0.06 | /22/2/0.029 | /22/2/0.029
nondquar 3000 | 2/63//0.208 | 40/404/75/7.728 | 472/896/660/6.86 | 662/956/045/8.098
nondquar 5000 | 2/65/8/.025 | 57/49/507/29.773 | 96/630/30/6.855 | 822/496/395/55.029
penalty 100 | 7/439/38/0.24 | 6/46/49/0.063 | 86/26/86/0.072 | 8/2/8/0.08
penalty 1000 | 9/750/57/7.833 | 2/328/99/2.655 | /49/4/0.438 | 24/65/29/0.72
power 10000 | /28/2/0.0 | /28/2/0.02 | /28/2/0.02 | /28/2/0.02
power 5000 | /53/2/0.035 | /53/2/0.036 | /29/2/0.07 | /29/2/0.08
power 20000 | /30/2/0.025 | /30/2/0.025 | /30/2/0.025 | /30/2/0.025
quartc 1000 | /54/7/0.038 | /54/7/0.037 | /22/2/0.03 | /22/2/0.03
quartc 6000 | /63/9/0.236 | /63/9/0.237 | /27/2/0.087 | /27/2/0.085
quartc 10000 | /29/2/0.52 | /29/2/0.52 | /29/2/0.52 | /29/2/0.5
srosenbr 5000 | 3/94/22/0.032 | 90/5870/2002/2.40 | 39/354/4/0.3 | 7/8/8/0.028
tridia 100 | F/F/F/F | F/F/F/F | 553/5057/553/0.372 | 348/3098/348/0.223
tridia 1000 | F/F/F/F | F/F/F/F | 932/24283/932/3.235 | 443/7580/443/2.335
bv 2000 | 6/358/6/20.822 | 434/3397/5234/752.704 | 4//4/0.636 | 707/220/707/40.003
lin0 10000 | /90/6/86.643 | /90/6/86.659 | /65/2/58.80 | /65/2/58.86
lin 3000 | /84//9.26 | /84//9.09 | /60/2/6.84 | /60/2/6.97
lin 20000 | /93/8/22.849 | /93/8/22.946 | /68/2/82.87 | /68/2/82.707
pen 1000 | /3/2/0.780 | /3/2/0.770 | /3/2/0.768 | /3/2/0.77
pen 5000 | /38/2/7.46 | /38/2/7.45 | /38/2/7.425 | /38/2/7.45
vardim 1000 | /68/2/0.76 | /68/2/0.685 | /68/2/0.689 | /68/2/0.687
vardim 5000 | /84/2/9.569 | /84/2/20.066 | /84/2/9.628 | /84/2/9.720

The function \rho_s is the (cumulative) distribution function for the performance ratio. The value of \rho_s(1) is the probability that the solver will win over the rest of the solvers. Based on the theory of the performance profile above, four performance figures, that is, Figures 1-4, can be generated according to Table 1.

Figure 3: Performance profile on NG.
Figure 4: Performance profile on Itr.

From the four figures, we can see that the NPRP method is superior to the other three CGMs on the test problems.

5. Conclusion

In this paper, we carefully studied a combination of variations of the formulas \beta_k^{FR} and \beta_k^{PRP}. We have found that the new formula possesses the following features: (1) \{\beta_k^{NPRP+}\} is a descent sequence without any line search; (2) the new method possesses the sufficient descent property and converges globally; (3) the strategy restarts the iteration automatically along the steepest descent direction if a negative value of \beta_k^{NPRP} occurs; (4) the initial numerical results are promising.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors are very grateful to the anonymous referees for their useful suggestions and comments, which improved the quality of this paper. This work is supported by the Natural Science Foundation of Shaanxi Province (2012JQ9004), the New Star Team of Xi'an University of Posts and Telecommunications (XY20506), the Science Foundation of Liaocheng University (380303), and a General Project of the National Social Science Foundation (5BGL04).

References

[1] G. Zoutendijk, "Nonlinear programming, computational methods," in Integer and Non-Linear Programming, J. Abadie, Ed., pp. 37-86, North-Holland Publishing, Amsterdam, The Netherlands, 1970.
[2] M. Al-Baali, "Descent property and global convergence of the Fletcher-Reeves method with inexact line search," IMA Journal of Numerical Analysis, vol. 5, no. 1, pp. 121-124, 1985.
[3] Y. H. Dai and Y. Yuan, "An efficient hybrid conjugate gradient method for unconstrained optimization," Annals of Operations Research, vol. 103, no. 1-4, pp. 33-47, 2001.
[4] Z. X. Wei, G. Y. Li, and L. Q. Qi, "Global convergence of the Polak-Ribière-Polyak conjugate gradient method with an Armijo-type inexact line search for nonconvex unconstrained optimization problems," Mathematics of Computation, vol. 77, no. 264, pp. 2173-2193, 2008.
[5] Y. H. Dai and Y. Yuan, "A nonlinear conjugate gradient method with a strong global convergence property," SIAM Journal on Optimization, vol. 10, no. 1, pp. 177-182, 2000.
[6] S. W. Yao, Z. X. Wei, and H. Huang, "A note about WYL's conjugate gradient method and its applications," Applied Mathematics and Computation, vol. 191, no. 2, pp. 381-388, 2007.
[7] X. Z. Jiang, G. D. Ma, and J. B. Jian, "A new global convergent conjugate gradient method with Wolfe line search," Chinese Journal of Engineering Mathematics, vol. 28, no. 6, pp. 779-786, 2011.
[8] X. Z. Jiang, L. Han, and J. B. Jian, "A globally convergent mixed conjugate gradient method with Wolfe line search," Mathematica Numerica Sinica, vol. 34, no. 1, pp. 103-112, 2012.
[9] D. Touati-Ahmed and C. Storey, "Efficient hybrid conjugate gradient techniques," Journal of Optimization Theory and Applications, vol. 64, no. 2, pp. 379-397, 1990.
[10] J. C. Gilbert and J. Nocedal, "Global convergence properties of conjugate gradient methods for optimization," SIAM Journal on Optimization, vol. 2, no. 1, pp. 21-42, 1992.
[11] Y. F. Hu and C. Storey, "Global convergence result for conjugate gradient methods," Journal of Optimization Theory and Applications, vol. 71, no. 2, pp. 399-405, 1991.

[12] L. Grippo and S. Lucidi, "A globally convergent version of the Polak-Ribière conjugate gradient method," Mathematical Programming, Series B, vol. 78, no. 3, pp. 375-391, 1997.
[13] G. H. Yu, L. T. Guan, and G. Y. Li, "Global convergence of modified Polak-Ribière-Polyak conjugate gradient methods with sufficient descent property," Journal of Industrial and Management Optimization, vol. 4, no. 3, pp. 565-579, 2008.
[14] N. Andrei, "Numerical comparison of conjugate gradient algorithms for unconstrained optimization," Studies in Informatics and Control, vol. 16, no. 4, pp. 333-352, 2007.
[15] Y. H. Dai, Analyses of conjugate gradient methods [Ph.D. thesis], Mathematics and Scientific/Engineering Computing, Chinese Academy of Sciences, 1997.
[16] G. Yu, L. Guan, and G. Li, "Global convergence of modified Polak-Ribière-Polyak conjugate gradient methods with sufficient descent property," Journal of Industrial and Management Optimization, vol. 4, no. 3, pp. 565-579, 2008.
[17] Y.-H. Dai and C.-X. Kou, "A nonlinear conjugate gradient algorithm with an optimal property and an improved Wolfe line search," SIAM Journal on Optimization, vol. 23, no. 1, pp. 296-320, 2013.
[18] X.-Z. Jiang and J.-B. Jian, "Two modified nonlinear conjugate gradient methods with disturbance factors for unconstrained optimization," Nonlinear Dynamics, vol. 77, no. 1-2, pp. 387-397, 2014.
[19] J. B. Jian, L. Han, and X. Z. Jiang, "A hybrid conjugate gradient method with descent property for unconstrained optimization," Applied Mathematical Modelling, vol. 39, pp. 1281-1290, 2015.
[20] Z. Wei, G. Li, and L. Qi, "New nonlinear conjugate gradient formulas for large-scale unconstrained optimization problems," Applied Mathematics and Computation, vol. 179, no. 2, pp. 407-430, 2006.
[21] H. Huang, Z. Wei, and Y. Shengwei, "The proof of the sufficient descent condition of the Wei-Yao-Liu conjugate gradient method under the strong Wolfe-Powell line search," Applied Mathematics and Computation, vol. 189, no. 2, pp. 1241-1245, 2007.
[22] G. Yu, Y. Zhao, and Z. Wei, "A descent nonlinear conjugate gradient method for large-scale unconstrained optimization," Applied Mathematics and Computation, vol. 187, no. 2, pp. 636-643, 2007.
[23] P. Wolfe, "Convergence conditions for ascent methods," SIAM Review, vol. 11, no. 2, pp. 226-235, 1969.
[24] P. Wolfe, "Convergence conditions for ascent methods. II: Some corrections," SIAM Review, vol. 13, no. 2, pp. 185-188, 1971.
[25] Y. Dai and Y. Yuan, Nonlinear Conjugate Gradient Methods, Science Press of Shanghai, Shanghai, China, 2000.
[26] I. Bongartz, A. R. Conn, N. Gould, and P. L. Toint, "CUTE: constrained and unconstrained testing environment," ACM Transactions on Mathematical Software, vol. 21, no. 1, pp. 123-160, 1995.
[27] J. J. Moré, B. S. Garbow, and K. E. Hillstrom, "Testing unconstrained optimization software," ACM Transactions on Mathematical Software, vol. 7, no. 1, pp. 17-41, 1981.
[28] E. D. Dolan and J. J. Moré, "Benchmarking optimization software with performance profiles," Mathematical Programming, Series B, vol. 91, no. 2, pp. 201-213, 2002.
