A derivative-free nonmonotone line search and its application to the spectral residual method
IMA Journal of Numerical Analysis (2009) 29, doi:10.1093/imanum/drn019. Advance Access publication on November 14, 2008

A derivative-free nonmonotone line search and its application to the spectral residual method

WANYOU CHENG, College of Software, Dongguan University of Technology, Dongguan, China

AND

DONG-HUI LI, College of Mathematics and Econometrics, Hunan University, Changsha, China

[Received on 31 May 2007; revised on 23 February 2008]

In this paper we propose a derivative-free nonmonotone line search for solving large-scale nonlinear systems of equations. Under appropriate conditions, we show that the spectral residual method with this line search is globally convergent. We also present some numerical experiments. The results show that the spectral residual method with the new nonmonotone line search is promising.

Keywords: large-scale nonlinear systems; spectral residual method; nonmonotone line search.

1. Introduction

We consider the nonlinear system of equations

    $F(x) = 0$,    (1.1)

where $F$ is a continuously differentiable mapping from $\mathbb{R}^n$ into itself. We are interested in large-scale systems for which the Jacobian of $F(x)$ is not available. In this case, derivative-free methods are needed to solve the problem. Derivative-free methods for solving (1.1) include the well-known quasi-Newton methods (Dennis & Moré, 1977; Martínez, 1990, 1992, 2000; Li & Fukushima, 1999; Zhou & Li, 2007, 2008) and the recently developed spectral residual method (La Cruz & Raydan, 2003; La Cruz et al., 2006; Zhang & Zhou, 2006). In a derivative-free method, a derivative-free line search technique is necessary. To the authors' knowledge, the earliest derivative-free line search is due to Griewank (1986). A favourable property of the line search in Griewank (1986) is that it produces an iterative method possessing a norm descent property. However, when the gradient of the function $\|F(x_k)\|^2$ is orthogonal to $F(x_k)$, that line search may fail.
The first well-defined derivative-free line search for solving nonlinear systems of equations was proposed by Li & Fukushima (2000). The line search in Li & Fukushima (2000) provides the iterative method with an approximate norm descent property. It was shown that the Broyden-like quasi-Newton method with this line search is globally and superlinearly convergent (Li & Fukushima, 2000). Birgin et al. (2003) proposed another derivative-free line search in which the amount of norm reduction required is proportional to the residual norm. Based on this line search, they proposed an inexact

†Corresponding author. Email: chengwanyou421@yahoo.com.cn; dhli@hnu.cn

© The author 2008. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
quasi-Newton method and established its global convergence. Solodov & Svaiter (1999) proposed a nice derivative-free line search and developed an inexact Newton method for solving monotone equations. We refer to a recent review paper (Li & Cheng, 2007) for a summary of derivative-free line searches.

Quite recently, La Cruz et al. (2006) proposed a new derivative-free line search in which the stepsize $\alpha_k$ is determined by the following inequality:

    $f(x_k + \alpha_k d_k) \le \max_{0 \le j \le \min\{k, M-1\}} f(x_{k-j}) + \theta_k - \gamma_1 \alpha_k^2 f(x_k)$,    (1.2)

where $f$ is a merit function such that $f(x) = 0$ if and only if $F(x) = 0$, $d_k$ is the direction generated by some iterative method, $M$ is a positive integer, $\gamma_1 \in (0, 1)$ and the positive sequence $\{\theta_k\}$ satisfies $\sum_{k=0}^{\infty} \theta_k < \infty$. The line search (1.2) is a combination of the nonmonotone line search in Grippo et al. (1986) and the derivative-free line search in Li & Fukushima (2000). The term $\max_{0 \le j \le \min\{k, M-1\}} f(x_{k-j})$ comes from the well-known nonmonotone line search in Grippo et al. (1986) for solving unconstrained optimization problems. The purpose of this term is to enlarge the possibly small stepsize generated by the line search in Li & Fukushima (2000). With this line search, the spectral residual method is globally convergent (La Cruz et al., 2006). The numerical results reported in La Cruz et al. (2006) showed that this line search works very well. As pointed out by Zhang & Hager (2004), although the nonmonotone technique in Grippo et al. (1986) works well in many cases, it has some drawbacks. First, a good function value generated in any iteration is essentially discarded due to the use of the max term. Second, in some cases the numerical performance is very dependent on the choice of $M$ (Grippo et al., 1986; Raydan, 1997).
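The acceptance test (1.2) and its first drawback can be made concrete with a small Python sketch. This is an illustration only, not the authors' code: the helper name, the history of function values and the parameter values are all made up. With a stale large value still inside the memory window, a trial point much worse than the current one is accepted.

```python
# Sketch of the acceptance test (1.2): a step alpha is accepted when
#   f(x_k + alpha d_k) <= max_{0<=j<=min(k,M-1)} f(x_{k-j})
#                         + theta_k - gamma1 * alpha^2 * f(x_k).
# f_hist holds f(x_0), ..., f(x_k); all numbers below are illustrative.

def accepts_12(f_trial, f_hist, theta_k, gamma1, alpha, M=10):
    f_max = max(f_hist[-min(len(f_hist), M):])   # the max term of (1.2)
    return f_trial <= f_max + theta_k - gamma1 * alpha**2 * f_hist[-1]

# Current merit value is 5.0, but the old value 9.0 is still in the window,
# so a trial value of 8.9 (a large increase) is accepted.
print(accepts_12(f_trial=8.9, f_hist=[9.0, 3.0, 5.0],
                 theta_k=0.0, gamma1=1e-4, alpha=1.0))   # True

# Once 9.0 leaves the memory, the same trial value is rejected.
print(accepts_12(f_trial=8.9, f_hist=[3.0, 5.0],
                 theta_k=0.0, gamma1=1e-4, alpha=1.0))   # False
```

This is precisely the "good value discarded" effect: the good value 3.0 plays no role in the test as long as 9.0 remains in the window.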
Furthermore, Dai (2002) gave an example showing that, although an iterative method generates R-linearly convergent iterates for a strongly convex function, the iterates may not satisfy the nonmonotone line search condition of Grippo et al. (1986) for $k$ sufficiently large, for any fixed bound $M$ on the memory. We refer to Dai (2002) and Zhang & Hager (2004) for details. To overcome these drawbacks, Zhang & Hager (2004) recently proposed a new nonmonotone line search that requires that an average of the successive function values decreases. Under appropriate conditions, they established the global convergence and R-linear convergence of some iterative methods, including the limited memory BFGS method, for strongly convex minimization problems. The reported numerical results showed that the new nonmonotone line search is superior to the traditional nonmonotone line search of Grippo et al. (1986). The purpose of this paper is to extend the nonmonotone line search proposed by Zhang & Hager (2004) to the spectral residual method for solving nonlinear systems of equations.

The remainder of the paper is organized as follows. In Section 2 we propose the derivative-free nonmonotone line search and the algorithm. In Section 3 we establish the global convergence of the algorithm. We report some numerical results in Section 4. Throughout the paper we use $J(x)$ to denote the Jacobian matrix of $F(x)$, $\|\cdot\|$ to denote the Euclidean norm of vectors, and $N$ to denote the set of positive integers.

2. The derivative-free nonmonotone line search and the algorithm

In this section, based on the nonmonotone line search proposed by Zhang & Hager (2004), we propose a derivative-free nonmonotone line search and apply it to the spectral residual method proposed by La Cruz et al. (2006). Let us briefly recall the Zhang–Hager nonmonotone line search technique for solving
the unconstrained optimization problem

    $\min_{x \in \mathbb{R}^n} f(x)$,

where $f: \mathbb{R}^n \to \mathbb{R}$ is continuously differentiable. Suppose that $d_k \in \mathbb{R}^n$ is a descent direction of $f$ at $x_k$, i.e. $\nabla f(x_k)^T d_k < 0$, where $\nabla f(x_k)$ denotes the gradient of $f$ at $x_k$. In the Zhang–Hager line search the stepsize $\alpha_k$ satisfies the following Armijo-type condition:

    $f(x_k + \alpha_k d_k) \le C_k + \beta \alpha_k \nabla f(x_k)^T d_k$,

where $\beta \in (0, 1)$, $C_0 = f(x_0)$ and $C_k$ is updated by the following rules:

    $Q_{k+1} = \eta_k Q_k + 1$,
    $C_{k+1} = \dfrac{\eta_k Q_k C_k + f_{k+1}}{Q_{k+1}}$,

with $Q_0 = 1$ and $\eta_k \in [0, 1]$, where $f_{k+1}$ is an abbreviation of $f(x_{k+1})$. This line search strategy ensures that an average of the successive function values decreases. The choice of $\eta_k$ controls the degree of nonmonotonicity. In fact, if $\eta_k = 0$ for all $k$, then the line search is the usual monotone Armijo line search. As $\eta_k \to 1$ the line search becomes more nonmonotone, treating all the previous function values with equal weight when computing $C_k$.

In what follows we extend this idea to develop a derivative-free nonmonotone line search for nonlinear systems of equations. Let $\alpha_k$ satisfy the following condition:

    $f(x_k + \alpha_k d_k) \le C_k + \epsilon_k - \gamma \alpha_k^2 f(x_k)$,    (2.1)

where $\gamma \in (0, 1)$, $C_0 = f_0$, the positive sequence $\{\epsilon_k\}$ satisfies $\sum_{k=0}^{\infty} \epsilon_k < \infty$, $f$ is a merit function such that $f(x) = 0$ if and only if $F(x) = 0$, and $C_k$ is updated by the following rules:

    $Q_{k+1} = \eta_k Q_k + 1$,
    $C_{k+1} = \dfrac{\eta_k Q_k (C_k + \epsilon_k) + f_{k+1}}{Q_{k+1}}$,

with $Q_0 = 1$ and $\eta_k \in [0, 1]$. The line search condition (2.1) is a combination of the nonmonotone line search in Zhang & Hager (2004) and the derivative-free line search in Li & Fukushima (2000). If $\epsilon_k = 0$ then the update rule of $C_k$ is the same as that in Zhang & Hager (2004). We will apply the derivative-free nonmonotone line search (2.1) to the spectral residual method (La Cruz et al., 2006). In the latter part of the paper we let $f(x) = \frac{1}{2}\|F(x)\|^2$.
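The update of $C_k$ can be illustrated numerically. The Python sketch below is illustrative only; the sequence of merit values and the $\epsilon_k$ are made up. It iterates $Q_{k+1} = \eta_k Q_k + 1$ and $C_{k+1} = (\eta_k Q_k (C_k + \epsilon_k) + f_{k+1})/Q_{k+1}$ for two constant choices of $\eta_k$: with $\eta_k = 0$ the rule collapses to $C_k = f_k$ (the monotone Armijo case), while a larger $\eta_k$ keeps memory of earlier, worse values.

```python
# Illustrative sketch of the update (2.4) used with line search (2.1);
# the merit values f_seq are made-up data, not from the paper's experiments.

def update_C(C, Q, f_next, eta, eps):
    """One step of Q_{k+1} = eta_k Q_k + 1 and
    C_{k+1} = (eta_k Q_k (C_k + eps_k) + f_{k+1}) / Q_{k+1}."""
    Q_next = eta * Q + 1.0
    C_next = (eta * Q * (C + eps) + f_next) / Q_next
    return C_next, Q_next

f_seq = [4.0, 2.5, 3.1, 1.2, 0.8]                 # hypothetical f_k values
eps_seq = [0.5 / (k + 1) ** 2 for k in range(len(f_seq))]  # summable eps_k

results = {}
for eta in (0.0, 0.85):
    C, Q = f_seq[0], 1.0                          # C_0 = f_0, Q_0 = 1
    for k in range(len(f_seq) - 1):
        C, Q = update_C(C, Q, f_seq[k + 1], eta, eps_seq[k])
    results[eta] = C

# eta = 0 reproduces the latest value f_k exactly; eta = 0.85 yields a
# weighted average that still remembers the earlier, larger values.
print(results)
```

The $\eta = 0.85$ run ends with $C_k$ well above the latest merit value $f_k = 0.8$, which is exactly the extra slack that allows nonmonotone steps.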
We now state the steps of the spectral residual method with the derivative-free line search (2.1) for solving nonlinear systems of equations.

ALGORITHM 2.1 (N-DF-SANE)

Step 1: Given the starting point $x_0$ and constants $\epsilon > 0$, $0 \le \eta_{\min} \le \eta_{\max} \le 1$, $0 < \rho_{\min} < \rho_{\max} < 1$, $0 < \sigma_{\min} < \sigma_{\max}$ and $\gamma \in (0, 1)$, choose a positive sequence $\{\epsilon_k\}$ satisfying

    $\sum_{k=0}^{\infty} \epsilon_k \le \epsilon$.    (2.2)

Set $C_0 = f(x_0)$, $Q_0 = 1$, $\sigma_0 = 1$ and $k = 0$.
Step 2: If $\|F_k\| \le \epsilon$ then stop.

Step 3: Compute $d_k = -\sigma_k F(x_k)$, where $\sigma_k \in [\sigma_{\min}, \sigma_{\max}]$ is the spectral coefficient. Set $\alpha_+ = 1$ and $\alpha_- = 1$.

Step 4: Nonmonotone line search. If

    $f(x_k + \alpha_+ d_k) \le C_k + \epsilon_k - \gamma \alpha_+^2 f(x_k)$    (2.3)

then set $\alpha_k = \alpha_+$ and $x_{k+1} = x_k + \alpha_k d_k$. Else, if

    $f(x_k - \alpha_- d_k) \le C_k + \epsilon_k - \gamma \alpha_-^2 f(x_k)$,

then set $\alpha_k = \alpha_-$, $d_k = -d_k$ and $x_{k+1} = x_k + \alpha_k d_k$. Else choose $\alpha_{+\mathrm{new}} \in [\rho_{\min}\alpha_+, \rho_{\max}\alpha_+]$ and $\alpha_{-\mathrm{new}} \in [\rho_{\min}\alpha_-, \rho_{\max}\alpha_-]$, replace $\alpha_+$ by $\alpha_{+\mathrm{new}}$ and $\alpha_-$ by $\alpha_{-\mathrm{new}}$, and go to Step 4.

Step 5: Choose $\eta_k \in [\eta_{\min}, \eta_{\max}]$ and compute

    $Q_{k+1} = \eta_k Q_k + 1$,
    $C_{k+1} = \dfrac{\eta_k Q_k (C_k + \epsilon_k) + f_{k+1}}{Q_{k+1}}$.    (2.4)

Set $k = k + 1$ and go to Step 2.

The following lemma shows that, for any choice of $\eta_k \in [0, 1]$, $C_k$ lies between $f_k$ and

    $A_k = \dfrac{1}{k+1} \sum_{i=0}^{k} (f_i + i\epsilon_{i-1})$,    (2.5)

where $\epsilon_{-1} = 0$. This implies that the line search process is well defined.

LEMMA 2.2 The iterates generated by Algorithm 2.1 satisfy $f_k \le C_k \le A_k$ for all $k \ge 0$, where $A_k$ is defined by (2.5). Moreover, the sequence $\{C_k\}$ satisfies

    $C_k \le C_{k-1} + \epsilon_{k-1}$.    (2.6)

Proof. First, by Step 4 of Algorithm 2.1, we have

    $f_k \le C_{k-1} + \epsilon_{k-1}$.    (2.7)

By (2.4) and (2.7), we get

    $C_k = \dfrac{\eta_{k-1} Q_{k-1} (C_{k-1} + \epsilon_{k-1}) + f_k}{Q_k} \ge \dfrac{\eta_{k-1} Q_{k-1} f_k + f_k}{Q_k} = f_k$.

This establishes the lower bound for $C_k$. We also have from (2.4) and (2.7) that $C_k \le C_{k-1} + \epsilon_{k-1}$, which is (2.6). We now derive $C_k \le A_k$ by induction. Since $C_0 = f_0$ and $\epsilon_{-1} = 0$, we obviously have $C_0 = A_0$. Suppose that $C_j \le A_j$ for all $0 \le j < k$. Since $\eta_k \in [0, 1]$ and $Q_0 = 1$, we have

    $Q_{j+1} = \eta_j Q_j + 1 \le Q_j + 1 \le j + 2$.    (2.8)
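A compact Python sketch of Algorithm 2.1 is given below. It is an illustrative reading of N-DF-SANE, not the authors' Fortran implementation, and it makes several simplifying assumptions: $\eta_k$ is fixed, the backtracking uses a single factor $\rho$ instead of the interval $[\rho_{\min}, \rho_{\max}]$, the spectral coefficient is a safeguarded residual-difference quotient $\sigma_{k+1} = s_k^T s_k / s_k^T y_k$ (taking its absolute value, a simplification of the rule in La Cruz et al., 2006), $\epsilon_k = \|F(x_0)\|/(1+k)^2$ as in Section 4, and the small linear test system is made up.

```python
import numpy as np

def n_df_sane(F, x0, tol=1e-8, eta=0.85, gamma=1e-4, rho=0.5,
              sig_min=1e-10, sig_max=1e10, max_it=1000):
    """Illustrative sketch of Algorithm 2.1 with f(x) = 0.5*||F(x)||^2."""
    x = np.asarray(x0, dtype=float)
    Fx = F(x)
    f = 0.5 * float(Fx @ Fx)
    C, Q = f, 1.0                        # C_0 = f(x_0), Q_0 = 1
    sigma = 1.0                          # sigma_0 = 1
    eps0 = float(np.linalg.norm(Fx))
    for k in range(max_it):
        if np.linalg.norm(Fx) <= tol:    # Step 2: stopping test
            break
        eps_k = eps0 / (1 + k) ** 2      # eps_k as in Section 4
        d = -sigma * Fx                  # Step 3: spectral residual direction
        ap = am = 1.0                    # alpha_+ and alpha_-
        while True:                      # Step 4: line search (2.1)/(2.3)
            xp = x + ap * d
            Fp = F(xp)
            fp = 0.5 * float(Fp @ Fp)
            if fp <= C + eps_k - gamma * ap**2 * f:
                x_new, F_new, f_new, s = xp, Fp, fp, ap * d
                break
            xm = x - am * d              # opposite direction trial
            Fm = F(xm)
            fm = 0.5 * float(Fm @ Fm)
            if fm <= C + eps_k - gamma * am**2 * f:
                x_new, F_new, f_new, s = xm, Fm, fm, -am * d
                break
            ap *= rho                    # backtrack both trial steps
            am *= rho
        y = F_new - Fx
        sty = float(s @ y)
        # Safeguarded spectral coefficient sigma_{k+1} = s^T s / s^T y
        sigma = float(s @ s) / sty if abs(sty) > 1e-30 else 1.0
        sigma = min(max(abs(sigma), sig_min), sig_max)
        Q_new = eta * Q + 1.0            # Step 5: update C_k via (2.4)
        C = (eta * Q * (C + eps_k) + f_new) / Q_new
        Q = Q_new
        x, Fx, f = x_new, F_new, f_new
    return x, float(np.linalg.norm(Fx))

# Made-up linear test system A x = b with symmetric positive definite A;
# here F(x) = A x - b, whose unique zero is x = (0.2, 0.4).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star, res = n_df_sane(lambda x: A @ x - b, [0.0, 0.0])
```

Since $\epsilon_k > 0$ and $f_k \le C_k$ (Lemma 2.2), the inner `while` loop always terminates, mirroring Remark 2.3 below.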
Define $h_k: \mathbb{R}_+ \to \mathbb{R}_+$ by

    $h_k(t) = \dfrac{t(C_{k-1} + \epsilon_{k-1}) + f_k}{t + 1}$.

It is easy to see from (2.7) that $h_k$ is monotonically nondecreasing in $t$. So we have

    $C_k = h_k(\eta_{k-1} Q_{k-1}) = h_k(Q_k - 1) \le h_k(k)$.    (2.9)

By the inductive assumption, we obtain

    $h_k(k) = \dfrac{k(C_{k-1} + \epsilon_{k-1}) + f_k}{k + 1} \le \dfrac{k A_{k-1} + k\epsilon_{k-1} + f_k}{k + 1} = A_k$.

The last inequality together with (2.9) implies that $C_k \le A_k$.

REMARK 2.3 Since $\epsilon_k > 0$, after a finite number of reductions of $\alpha_+$ the condition

    $f(x_k + \alpha_+ d_k) \le f_k + \epsilon_k - \gamma \alpha_+^2 f(x_k)$

necessarily holds. From Lemma 2.2 we know that $f_k \le C_k$. So the line search process, i.e. Step 4 of Algorithm 2.1, is well defined.

3. Global convergence

This section is devoted to the global convergence of Algorithm 2.1. Let $\Omega$ be the level set defined by

    $\Omega = \{x \in \mathbb{R}^n \mid f(x) \le f(x_0) + \epsilon\}$,

where $\epsilon$ is a positive constant satisfying (2.2). We first prove the following two lemmas.

LEMMA 3.1 The sequence $\{x_k\}$ generated by Algorithm 2.1 is contained in $\Omega$.

Proof. From Step 4 of Algorithm 2.1 we have, for each $k$,

    $f(x_{k+1}) \le C_k + \epsilon_k$.

It then follows from (2.6) that

    $f(x_{k+1}) \le C_{k-1} + \epsilon_{k-1} + \epsilon_k \le \cdots \le f(x_0) + \sum_{i=0}^{k} \epsilon_i \le f(x_0) + \epsilon$.

The proof is complete.

LEMMA 3.2 Let the sequence $\{x_k\}$ be generated by Algorithm 2.1. Then there exists an infinite index set $K \subseteq N$ such that

    $\lim_{k \in K,\, k \to \infty} \alpha_k^2 f(x_k) = 0$.    (3.1)

Moreover, if $\eta_{\max} < 1$ then

    $\lim_{k \to \infty} \alpha_k^2 f(x_k) = 0$.    (3.2)
Proof. From Step 4 of Algorithm 2.1 we have

    $f(x_{k+1}) \le C_k + \epsilon_k - \gamma \alpha_k^2 f_k$.

Together with (2.4) this implies that

    $C_{k+1} = \dfrac{\eta_k Q_k (C_k + \epsilon_k) + f_{k+1}}{Q_{k+1}} \le \dfrac{(\eta_k Q_k + 1)(C_k + \epsilon_k) - \gamma \alpha_k^2 f_k}{Q_{k+1}} = C_k + \epsilon_k - \dfrac{\gamma \alpha_k^2 f_k}{Q_{k+1}}$.    (3.3)

So we get from (2.2) that

    $\sum_{k=0}^{\infty} \dfrac{\alpha_k^2 f_k}{Q_{k+1}} < \infty$.    (3.4)

If $\liminf_{k \to \infty} \alpha_k^2 f_k \ne 0$ then (3.4) would be violated, since $Q_{k+1} \le k + 2$ by (2.8). Hence (3.1) holds. If $\eta_{\max} < 1$ then

    $Q_{k+1} = 1 + \sum_{j=0}^{k} \prod_{i=0}^{j} \eta_{k-i} \le 1 + \sum_{j=0}^{k} \eta_{\max}^{j+1} \le \sum_{j=0}^{\infty} \eta_{\max}^{j} = \dfrac{1}{1 - \eta_{\max}}$.

Consequently, (3.2) follows immediately from (3.4).

The following theorem establishes the global convergence of Algorithm 2.1. It is similar to, but slightly stronger than, Theorem 1 in La Cruz et al. (2006).

THEOREM 3.3 Let the sequence $\{x_k\}$ be generated by Algorithm 2.1 and let $\eta_{\max} < 1$. Then every limit point $x_*$ of $\{x_k\}$ satisfies

    $F(x_*)^T J(x_*) F(x_*) = 0$.    (3.5)

In particular, if $F$ is strict, namely $F$ or $-F$ is strictly monotone, then the whole sequence $\{x_k\}$ converges to the unique solution of (1.1).

Proof. Let $x_*$ be an arbitrary limit point of $\{x_k\}$. Then there exists an infinite index set $K_1 \subseteq N$ such that $\lim_{k \in K_1} x_k = x_*$. By (3.2), we have $\lim_{k \in K_1} \alpha_k^2 f_k = 0$.

Case I: If $\limsup_{k \in K_1} \alpha_k \ne 0$ then there exists an infinite index set $K_2 \subseteq K_1$ such that $\{\alpha_k\}_{K_2}$ is bounded away from zero. By (3.2), we have $\lim_{k \in K_2} f(x_k) = 0$. This implies (3.5).

Case II: If

    $\lim_{k \in K_1} \alpha_k = 0$    (3.6)

then there exists an index $k_0 \in K_1$ such that $\alpha_k < 1$ for all $k \ge k_0$ with $k \in K_1$. Let $m_k$ denote the number of inner iterations in Step 4 (i.e. the inequalities in Step 4 were violated $m_k$ times). Let $\alpha_k^+$ and
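The geometric-series bound on $Q_{k+1}$ used above can be checked numerically. The sketch below is illustrative: it takes the worst case $\eta_k \equiv \eta_{\max}$ (with $\eta_{\max} = 0.85$, the value used later in Section 4) and compares the iterates of $Q_{k+1} = \eta_k Q_k + 1$ with the limit $1/(1 - \eta_{\max})$.

```python
# Worst case of the recursion Q_{k+1} = eta_k Q_k + 1 with Q_0 = 1: taking
# eta_k = eta_max constant gives Q_{k+1} = 1 + eta_max + ... + eta_max^{k+1},
# which stays below the geometric-series limit 1/(1 - eta_max) from Lemma 3.2.
eta_max = 0.85          # illustrative value; any eta_max < 1 works
Q = 1.0
history = [Q]
for _ in range(200):
    Q = eta_max * Q + 1.0
    history.append(Q)
bound = 1.0 / (1.0 - eta_max)
# Q_k increases monotonically towards, but never beyond, the bound.
print(history[-1], bound)
```

Because the $Q_{k+1}$ are uniformly bounded when $\eta_{\max} < 1$, the summability in (3.4) transfers directly to $\sum \alpha_k^2 f_k < \infty$, which is exactly how (3.2) is obtained.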
$\alpha_k^-$ be the values of $\alpha_+$ and $\alpha_-$, respectively, in the last unsuccessful line search step of N-DF-SANE step $k$. Then we have

    $\alpha_k \ge \rho_{\min}^{m_k}$ for all $k > k_0$, $k \in K_1$.

From the choice of $\alpha_{+\mathrm{new}}$ and $\alpha_{-\mathrm{new}}$ we have

    $\alpha_k^+ \le \rho_{\max}^{m_k - 1}$ and $\alpha_k^- \le \rho_{\max}^{m_k - 1}$.

Since $\rho_{\max} < 1$ and $\lim_{k \in K_1} m_k = \infty$, we get

    $\lim_{k \in K_1} \alpha_k^+ = \lim_{k \in K_1} \alpha_k^- = 0$.

By the line search rule, we obtain, for all $k \in K_1$ with $k \ge k_0$,

    $f(x_k + \alpha_k^+ d_k) > C_k + \epsilon_k - \gamma (\alpha_k^+)^2 f_k$    (3.7)

and

    $f(x_k - \alpha_k^- d_k) > C_k + \epsilon_k - \gamma (\alpha_k^-)^2 f_k$.

Since $C_k \ge f_k$ and $\epsilon_k > 0$, inequality (3.7) implies that

    $f(x_k + \alpha_k^+ d_k) > f_k - \gamma (\alpha_k^+)^2 f_k$.

From Lemma 3.1 we know that $f_k \le c := f(x_0) + \epsilon$. So we have

    $f(x_k + \alpha_k^+ d_k) - f_k > -c\gamma (\alpha_k^+)^2$.

In a way similar to the proof of Theorem 1 in La Cruz et al. (2006), repeating the above process, we can prove (3.5).

4. Numerical experiments

In this section we test Algorithm 2.1, which we call the N-DF-SANE method, and compare it with the DF-SANE method (La Cruz et al., 2006). The set of test problems is described in La Cruz et al. (2004). The N-DF-SANE code was written in Fortran 77 in double-precision arithmetic. The runs were carried out on a PC (1.6 GHz CPU, 256 MB memory) with a Windows operating system. We implemented N-DF-SANE with the following parameters: $\eta_k = 0.85$, $\sigma_{\min} = 10^{-10}$, $\sigma_{\max} = 10^{10}$, $\rho_{\min} = 0.1$, $\rho_{\max} = 0.5$, $\gamma = 10^{-4}$ and $\epsilon_k = \|F(x_0)\| / (1 + k)^2$ for all $k \ge 0$. For each test problem we used the same termination criterion as in La Cruz et al. (2006). Specifically, we stopped the iteration when the following inequality was satisfied:

    $\dfrac{\|F(x_k)\|}{\sqrt{n}} \le e_a + e_r \dfrac{\|F(x_0)\|}{\sqrt{n}}$,    (4.1)
TABLE 1 Results of N-DF-SANE and DF-SANE (columns: Pro, Dim; IT, NF and T for each method). [Table entries not preserved in this transcription.]
TABLE 2 Results of N-DF-SANE and DF-SANE, continued (columns: Pro, Dim; IT, NF and T for each method). [Table entries not preserved in this transcription.]
where $e_a = 10^{-5}$ and the value of $e_r$ is as given there. A limit of 1000 iterations was also imposed. We chose $\alpha_{+\mathrm{new}}$, $\alpha_{-\mathrm{new}}$ and the spectral stepsize $\sigma_k$ in the same way as in La Cruz et al. (2006). We implemented the DF-SANE algorithm with the following parameters: nexp $= 2$, $\sigma_{\min} = 10^{-10}$, $\sigma_{\max} = 10^{10}$, $\sigma_0 = 1$, $\tau_{\min} = 0.1$, $\tau_{\max} = 0.5$, $\gamma = 10^{-4}$, $M = 10$ and $\eta_k = \|F(x_0)\| / (1 + k)^2$ for all $k \in N$. The DF-SANE code was provided by Prof. Raydan.

TABLE 3 DF-SANE with $M = 5$, $10$ and $20$ (columns: Pro, Dim; IT, NF and T for each $M$). [Table entries not preserved in this transcription.]

TABLE 4 N-DF-SANE with $\eta_k = 0.1$, $0.5$ and $0.9$ (columns: Pro, Dim; IT, NF and T for each $\eta_k$). [Table entries not preserved in this transcription.]

In Tables 1 and 2 we report the dimension of each test problem (Dim), the number of iterations (IT), the number of function evaluations (NF) and the CPU time in seconds (T). In the tables, Pro denotes the number of the test problem as it appears in La Cruz et al. (2004, 2006), and a symbol indicates that the related algorithm failed. We see from Tables 1 and 2 that in many cases the numbers of iterations, the numbers of function evaluations and the CPU times of the two algorithms are identical. In summary we observed the following:

- 30 problems where N-DF-SANE was superior to DF-SANE in IT;
- 26 problems where N-DF-SANE was superior to DF-SANE in NF;
- 17 problems where N-DF-SANE was superior to DF-SANE in CPU time;
- 15 problems where DF-SANE was superior to N-DF-SANE in IT;
- 18 problems where DF-SANE was superior to N-DF-SANE in NF;
- 12 problems where DF-SANE was superior to N-DF-SANE in CPU time.

The results in Tables 1 and 2 show that the proposed method is computationally efficient. We then tested the sensitivity of the algorithms to the parameters $M$ and $\eta_k$. Following an anonymous referee's suggestion, we tested the two algorithms on problems 2, 4, 7, 20, 34 and 42 with different parameters. First, we tested DF-SANE with four values of $M$: 5, 10, 20 and 40. The performance of DF-SANE with $M = 40$ is almost the same as with $M = 20$, so in Table 3 we only list the results for $M = 5$, 10 and 20. We observe from Table 3 that the behaviour of DF-SANE is sensitive to the choice of $M$. Second, we tested N-DF-SANE with three values of $\eta_k$: 0.1, 0.5 and 0.9. The results are listed in Table 4. We see from Table 4 that the performance of N-DF-SANE with $\eta_k = 0.1$ is inferior to that with $\eta_k = 0.5$ and 0.9. One possible reason is that the choice of $\eta_k$ controls the degree of nonmonotonicity: as $\eta_k \to 0$ the line search (2.1) comes closer to the approximate norm descent line search in Li & Fukushima (2000). We also see from Tables 1, 2 and 4 that when $\eta_k \ge 0.5$ the numerical behaviour of N-DF-SANE is not very sensitive to the choice of $\eta_k$.

Acknowledgements

The authors would like to thank the two anonymous referees for their valuable suggestions and comments, which improved this paper greatly. We are grateful to Prof. M. Raydan for providing us with the test problems and the DF-SANE code.

Funding

The National Development Project on Key Basic Research (2004CB719402); National Science Foundation project of China ( ).

REFERENCES

BIRGIN, E. G., KREJIC, N. K. & MARTÍNEZ, J. M.
(2003) Globally convergent inexact quasi-Newton methods for solving nonlinear systems. Numer. Algorithms, 32.

DAI, Y. H. (2002) On the nonmonotone line search. J. Optim. Theory Appl., 112.

DENNIS, J. E. & MORÉ, J. J. (1977) Quasi-Newton methods, motivation and theory. SIAM Rev., 19.

GRIEWANK, A. (1986) The global convergence of Broyden-like methods with suitable line search. J. Aust. Math. Soc. Ser. B, 28.

GRIPPO, L., LAMPARIELLO, F. & LUCIDI, S. (1986) A nonmonotone line search technique for Newton's method. SIAM J. Numer. Anal., 23.

LA CRUZ, W., MARTÍNEZ, J. M. & RAYDAN, M. (2004) Spectral residual method without gradient information for solving large-scale nonlinear systems: theory and experiments. Technical Report RT. Departamento de Computación, UCV.

LA CRUZ, W., MARTÍNEZ, J. M. & RAYDAN, M. (2006) Spectral residual method without gradient information for solving large-scale nonlinear systems of equations. Math. Comput., 75.
LA CRUZ, W. & RAYDAN, M. (2003) Nonmonotone spectral methods for large-scale nonlinear systems. Optim. Methods Softw., 18.

LI, D. H. & CHENG, W. Y. (2007) Recent progress in the global convergence of quasi-Newton methods for nonlinear equations. Hokkaido Math. J., 36.

LI, D. H. & FUKUSHIMA, M. (1999) A globally and superlinearly convergent Gauss–Newton-based BFGS method for symmetric nonlinear equations. SIAM J. Numer. Anal., 37.

LI, D. H. & FUKUSHIMA, M. (2000) A derivative-free line search and global convergence of Broyden-like method for nonlinear equations. Optim. Methods Softw., 13.

MARTÍNEZ, J. M. (1990) A family of quasi-Newton methods for nonlinear equations with direct secant updates of matrix factorizations. SIAM J. Numer. Anal., 27.

MARTÍNEZ, J. M. (1992) Fixed-point quasi-Newton methods. SIAM J. Numer. Anal., 29.

MARTÍNEZ, J. M. (2000) Practical quasi-Newton methods for solving nonlinear systems. J. Comput. Appl. Math., 124.

RAYDAN, M. (1997) The Barzilai and Borwein gradient method for the large scale unconstrained minimization problem. SIAM J. Optim., 7.

SOLODOV, M. V. & SVAITER, B. F. (1999) A globally convergent inexact Newton method for systems of monotone equations. Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods (M. Fukushima & L. Qi eds). Kluwer.

ZHANG, H. & HAGER, W. W. (2004) A nonmonotone line search technique and its application to unconstrained optimization. SIAM J. Optim., 14.

ZHANG, L. & ZHOU, W. J. (2006) Spectral gradient projection method for solving nonlinear monotone equations. J. Comput. Appl. Math., 196.

ZHOU, W. J. & LI, D. H. (2007) Limited memory BFGS method for nonlinear monotone equations. J. Comput. Math., 25.

ZHOU, W. J. & LI, D. H. (2008) A globally convergent BFGS method for nonlinear monotone equations without any merit functions. Math. Comput., 77.
ISSN: 1017-060X (Print) ISSN: 1735-8515 (Online) Bulletin of the Iranian Mathematical Society Vol. 41 (2015), No. 5, pp. 1259 1269. Title: A uniform approximation method to solve absolute value equation
More information1. Introduction. We consider the classical variational inequality problem [1, 3, 7] VI(F, C), which is to find a point x such that
SIAM J. CONTROL OPTIM. Vol. 37, No. 3, pp. 765 776 c 1999 Society for Industrial and Applied Mathematics A NEW PROJECTION METHOD FOR VARIATIONAL INEQUALITY PROBLEMS M. V. SOLODOV AND B. F. SVAITER Abstract.
More informationTHE solution of the absolute value equation (AVE) of
The nonlinear HSS-like iterative method for absolute value equations Mu-Zheng Zhu Member, IAENG, and Ya-E Qi arxiv:1403.7013v4 [math.na] 2 Jan 2018 Abstract Salkuyeh proposed the Picard-HSS iteration method
More informationNonmonotonic back-tracking trust region interior point algorithm for linear constrained optimization
Journal of Computational and Applied Mathematics 155 (2003) 285 305 www.elsevier.com/locate/cam Nonmonotonic bac-tracing trust region interior point algorithm for linear constrained optimization Detong
More informationApplying a type of SOC-functions to solve a system of equalities and inequalities under the order induced by second-order cone
Applying a type of SOC-functions to solve a system of equalities and inequalities under the order induced by second-order cone Xin-He Miao 1, Nuo Qi 2, B. Saheya 3 and Jein-Shan Chen 4 Abstract: In this
More informationGradient method based on epsilon algorithm for large-scale nonlinearoptimization
ISSN 1746-7233, England, UK World Journal of Modelling and Simulation Vol. 4 (2008) No. 1, pp. 64-68 Gradient method based on epsilon algorithm for large-scale nonlinearoptimization Jianliang Li, Lian
More informationA CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE
Journal of Applied Analysis Vol. 6, No. 1 (2000), pp. 139 148 A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE A. W. A. TAHA Received
More informationHandling Nonpositive Curvature in a Limited Memory Steepest Descent Method
Handling Nonpositive Curvature in a Limited Memory Steepest Descent Method Fran E. Curtis and Wei Guo Department of Industrial and Systems Engineering, Lehigh University, USA COR@L Technical Report 14T-011-R1
More informationA projection-type method for generalized variational inequalities with dual solutions
Available online at www.isr-publications.com/jnsa J. Nonlinear Sci. Appl., 10 (2017), 4812 4821 Research Article Journal Homepage: www.tjnsa.com - www.isr-publications.com/jnsa A projection-type method
More informationDifferentiable exact penalty functions for nonlinear optimization with easy constraints. Takuma NISHIMURA
Master s Thesis Differentiable exact penalty functions for nonlinear optimization with easy constraints Guidance Assistant Professor Ellen Hidemi FUKUDA Takuma NISHIMURA Department of Applied Mathematics
More informationQuadrature based Broyden-like method for systems of nonlinear equations
STATISTICS, OPTIMIZATION AND INFORMATION COMPUTING Stat., Optim. Inf. Comput., Vol. 6, March 2018, pp 130 138. Published online in International Academic Press (www.iapress.org) Quadrature based Broyden-like
More informationProgramming, numerics and optimization
Programming, numerics and optimization Lecture C-3: Unconstrained optimization II Łukasz Jankowski ljank@ippt.pan.pl Institute of Fundamental Technological Research Room 4.32, Phone +22.8261281 ext. 428
More informationOn the Convergence and O(1/N) Complexity of a Class of Nonlinear Proximal Point Algorithms for Monotonic Variational Inequalities
STATISTICS,OPTIMIZATION AND INFORMATION COMPUTING Stat., Optim. Inf. Comput., Vol. 2, June 204, pp 05 3. Published online in International Academic Press (www.iapress.org) On the Convergence and O(/N)
More informationON A HYBRID PROXIMAL POINT ALGORITHM IN BANACH SPACES
U.P.B. Sci. Bull., Series A, Vol. 80, Iss. 3, 2018 ISSN 1223-7027 ON A HYBRID PROXIMAL POINT ALGORITHM IN BANACH SPACES Vahid Dadashi 1 In this paper, we introduce a hybrid projection algorithm for a countable
More informationGlobal Convergence of Perry-Shanno Memoryless Quasi-Newton-type Method. 1 Introduction
ISSN 1749-3889 (print), 1749-3897 (online) International Journal of Nonlinear Science Vol.11(2011) No.2,pp.153-158 Global Convergence of Perry-Shanno Memoryless Quasi-Newton-type Method Yigui Ou, Jun Zhang
More informationCONVERGENCE BEHAVIOUR OF INEXACT NEWTON METHODS
MATHEMATICS OF COMPUTATION Volume 68, Number 228, Pages 165 1613 S 25-5718(99)1135-7 Article electronically published on March 1, 1999 CONVERGENCE BEHAVIOUR OF INEXACT NEWTON METHODS BENEDETTA MORINI Abstract.
More informationQuasi-Newton Methods
Quasi-Newton Methods Werner C. Rheinboldt These are excerpts of material relating to the boos [OR00 and [Rhe98 and of write-ups prepared for courses held at the University of Pittsburgh. Some further references
More informationKeywords: Nonlinear least-squares problems, regularized models, error bound condition, local convergence.
STRONG LOCAL CONVERGENCE PROPERTIES OF ADAPTIVE REGULARIZED METHODS FOR NONLINEAR LEAST-SQUARES S. BELLAVIA AND B. MORINI Abstract. This paper studies adaptive regularized methods for nonlinear least-squares
More information5 Quasi-Newton Methods
Unconstrained Convex Optimization 26 5 Quasi-Newton Methods If the Hessian is unavailable... Notation: H = Hessian matrix. B is the approximation of H. C is the approximation of H 1. Problem: Solve min
More informationRandomized Block Coordinate Non-Monotone Gradient Method for a Class of Nonlinear Programming
Randomized Block Coordinate Non-Monotone Gradient Method for a Class of Nonlinear Programming Zhaosong Lu Lin Xiao June 25, 2013 Abstract In this paper we propose a randomized block coordinate non-monotone
More informationA Smoothing Newton Method for Solving Absolute Value Equations
A Smoothing Newton Method for Solving Absolute Value Equations Xiaoqin Jiang Department of public basic, Wuhan Yangtze Business University, Wuhan 430065, P.R. China 392875220@qq.com Abstract: In this paper,
More informationAn accelerated Newton method of high-order convergence for solving a class of weakly nonlinear complementarity problems
Available online at www.isr-publications.com/jnsa J. Nonlinear Sci. Appl., 0 (207), 4822 4833 Research Article Journal Homepage: www.tjnsa.com - www.isr-publications.com/jnsa An accelerated Newton method
More informationA class of Smoothing Method for Linear Second-Order Cone Programming
Columbia International Publishing Journal of Advanced Computing (13) 1: 9-4 doi:1776/jac1313 Research Article A class of Smoothing Method for Linear Second-Order Cone Programming Zhuqing Gui *, Zhibin
More informationA double projection method for solving variational inequalities without monotonicity
A double projection method for solving variational inequalities without monotonicity Minglu Ye Yiran He Accepted by Computational Optimization and Applications, DOI: 10.1007/s10589-014-9659-7,Apr 05, 2014
More informationA NOVEL FILLED FUNCTION METHOD FOR GLOBAL OPTIMIZATION. 1. Introduction Consider the following unconstrained programming problem:
J. Korean Math. Soc. 47, No. 6, pp. 53 67 DOI.434/JKMS..47.6.53 A NOVEL FILLED FUNCTION METHOD FOR GLOBAL OPTIMIZATION Youjiang Lin, Yongjian Yang, and Liansheng Zhang Abstract. This paper considers the
More informationAn Accelerated Hybrid Proximal Extragradient Method for Convex Optimization and its Implications to Second-Order Methods
An Accelerated Hybrid Proximal Extragradient Method for Convex Optimization and its Implications to Second-Order Methods Renato D.C. Monteiro B. F. Svaiter May 10, 011 Revised: May 4, 01) Abstract This
More informationA Randomized Nonmonotone Block Proximal Gradient Method for a Class of Structured Nonlinear Programming
A Randomized Nonmonotone Block Proximal Gradient Method for a Class of Structured Nonlinear Programming Zhaosong Lu Lin Xiao June 8, 2014 Abstract In this paper we propose a randomized nonmonotone block
More informationConvex Optimization. Problem set 2. Due Monday April 26th
Convex Optimization Problem set 2 Due Monday April 26th 1 Gradient Decent without Line-search In this problem we will consider gradient descent with predetermined step sizes. That is, instead of determining
More informationOn nonexpansive and accretive operators in Banach spaces
Available online at www.isr-publications.com/jnsa J. Nonlinear Sci. Appl., 10 (2017), 3437 3446 Research Article Journal Homepage: www.tjnsa.com - www.isr-publications.com/jnsa On nonexpansive and accretive
More informationNearest Correlation Matrix
Nearest Correlation Matrix The NAG Library has a range of functionality in the area of computing the nearest correlation matrix. In this article we take a look at nearest correlation matrix problems, giving
More informationSearch Directions for Unconstrained Optimization
8 CHAPTER 8 Search Directions for Unconstrained Optimization In this chapter we study the choice of search directions used in our basic updating scheme x +1 = x + t d. for solving P min f(x). x R n All
More informationThe cyclic Barzilai Borwein method for unconstrained optimization
IMA Journal of Numerical Analysis Advance Access published March 24, 2006 IMA Journal of Numerical Analysis Pageof24 doi:0.093/imanum/drl006 The cyclic Barzilai Borwein method for unconstrained optimization
More informationA convergence result for an Outer Approximation Scheme
A convergence result for an Outer Approximation Scheme R. S. Burachik Engenharia de Sistemas e Computação, COPPE-UFRJ, CP 68511, Rio de Janeiro, RJ, CEP 21941-972, Brazil regi@cos.ufrj.br J. O. Lopes Departamento
More informationAdaptive First-Order Methods for General Sparse Inverse Covariance Selection
Adaptive First-Order Methods for General Sparse Inverse Covariance Selection Zhaosong Lu December 2, 2008 Abstract In this paper, we consider estimating sparse inverse covariance of a Gaussian graphical
More informationError bounds for symmetric cone complementarity problems
to appear in Numerical Algebra, Control and Optimization, 014 Error bounds for symmetric cone complementarity problems Xin-He Miao 1 Department of Mathematics School of Science Tianjin University Tianjin
More informationAlgorithms for Constrained Optimization
1 / 42 Algorithms for Constrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University April 19, 2015 2 / 42 Outline 1. Convergence 2. Sequential quadratic
More informationCubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization
Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization J. M. Martínez M. Raydan November 15, 2015 Abstract In a recent paper we introduced a trust-region
More informationSpectral Projected Gradient Methods
Spectral Projected Gradient Methods E. G. Birgin J. M. Martínez M. Raydan January 17, 2007 Keywords: Spectral Projected Gradient Methods, projected gradients, nonmonotone line search, large scale problems,
More informationWorst Case Complexity of Direct Search
Worst Case Complexity of Direct Search L. N. Vicente October 25, 2012 Abstract In this paper we prove that the broad class of direct-search methods of directional type based on imposing sufficient decrease
More informationA PENALIZED FISCHER-BURMEISTER NCP-FUNCTION. September 1997 (revised May 1998 and March 1999)
A PENALIZED FISCHER-BURMEISTER NCP-FUNCTION Bintong Chen 1 Xiaojun Chen 2 Christian Kanzow 3 September 1997 revised May 1998 and March 1999 Abstract: We introduce a new NCP-function in order to reformulate
More informationTrust-region methods for rectangular systems of nonlinear equations
Trust-region methods for rectangular systems of nonlinear equations Margherita Porcelli Dipartimento di Matematica U.Dini Università degli Studi di Firenze Joint work with Maria Macconi and Benedetta Morini
More informationA SUFFICIENTLY EXACT INEXACT NEWTON STEP BASED ON REUSING MATRIX INFORMATION
A SUFFICIENTLY EXACT INEXACT NEWTON STEP BASED ON REUSING MATRIX INFORMATION Anders FORSGREN Technical Report TRITA-MAT-2009-OS7 Department of Mathematics Royal Institute of Technology November 2009 Abstract
More informationNumerical Methods for Large-Scale Nonlinear Systems
Numerical Methods for Large-Scale Nonlinear Systems Handouts by Ronald H.W. Hoppe following the monograph P. Deuflhard Newton Methods for Nonlinear Problems Springer, Berlin-Heidelberg-New York, 2004 Num.
More informationSTRONG CONVERGENCE OF AN ITERATIVE METHOD FOR VARIATIONAL INEQUALITY PROBLEMS AND FIXED POINT PROBLEMS
ARCHIVUM MATHEMATICUM (BRNO) Tomus 45 (2009), 147 158 STRONG CONVERGENCE OF AN ITERATIVE METHOD FOR VARIATIONAL INEQUALITY PROBLEMS AND FIXED POINT PROBLEMS Xiaolong Qin 1, Shin Min Kang 1, Yongfu Su 2,
More informationResearch Article Finding Global Minima with a Filled Function Approach for Non-Smooth Global Optimization
Hindawi Publishing Corporation Discrete Dynamics in Nature and Society Volume 00, Article ID 843609, 0 pages doi:0.55/00/843609 Research Article Finding Global Minima with a Filled Function Approach for
More informationAn efficient Newton-type method with fifth-order convergence for solving nonlinear equations
Volume 27, N. 3, pp. 269 274, 2008 Copyright 2008 SBMAC ISSN 0101-8205 www.scielo.br/cam An efficient Newton-type method with fifth-order convergence for solving nonlinear equations LIANG FANG 1,2, LI
More informationA PROJECTED HESSIAN GAUSS-NEWTON ALGORITHM FOR SOLVING SYSTEMS OF NONLINEAR EQUATIONS AND INEQUALITIES
IJMMS 25:6 2001) 397 409 PII. S0161171201002290 http://ijmms.hindawi.com Hindawi Publishing Corp. A PROJECTED HESSIAN GAUSS-NEWTON ALGORITHM FOR SOLVING SYSTEMS OF NONLINEAR EQUATIONS AND INEQUALITIES
More informationAn inexact subgradient algorithm for Equilibrium Problems
Volume 30, N. 1, pp. 91 107, 2011 Copyright 2011 SBMAC ISSN 0101-8205 www.scielo.br/cam An inexact subgradient algorithm for Equilibrium Problems PAULO SANTOS 1 and SUSANA SCHEIMBERG 2 1 DM, UFPI, Teresina,
More informationGLOBAL CONVERGENCE OF CONJUGATE GRADIENT METHODS WITHOUT LINE SEARCH
GLOBAL CONVERGENCE OF CONJUGATE GRADIENT METHODS WITHOUT LINE SEARCH Jie Sun 1 Department of Decision Sciences National University of Singapore, Republic of Singapore Jiapu Zhang 2 Department of Mathematics
More informationOPER 627: Nonlinear Optimization Lecture 14: Mid-term Review
OPER 627: Nonlinear Optimization Lecture 14: Mid-term Review Department of Statistical Sciences and Operations Research Virginia Commonwealth University Oct 16, 2013 (Lecture 14) Nonlinear Optimization
More informationQuasi-Newton methods for minimization
Quasi-Newton methods for minimization Lectures for PHD course on Numerical optimization Enrico Bertolazzi DIMS Universitá di Trento November 21 December 14, 2011 Quasi-Newton methods for minimization 1
More informationA NOTE ON Q-ORDER OF CONVERGENCE
BIT 0006-3835/01/4102-0422 $16.00 2001, Vol. 41, No. 2, pp. 422 429 c Swets & Zeitlinger A NOTE ON Q-ORDER OF CONVERGENCE L. O. JAY Department of Mathematics, The University of Iowa, 14 MacLean Hall Iowa
More informationCONVERGENCE PROPERTIES OF COMBINED RELAXATION METHODS
CONVERGENCE PROPERTIES OF COMBINED RELAXATION METHODS Igor V. Konnov Department of Applied Mathematics, Kazan University Kazan 420008, Russia Preprint, March 2002 ISBN 951-42-6687-0 AMS classification:
More informationInexact Newton Methods Applied to Under Determined Systems. Joseph P. Simonis. A Dissertation. Submitted to the Faculty
Inexact Newton Methods Applied to Under Determined Systems by Joseph P. Simonis A Dissertation Submitted to the Faculty of WORCESTER POLYTECHNIC INSTITUTE in Partial Fulfillment of the Requirements for
More informationMerit functions and error bounds for generalized variational inequalities
J. Math. Anal. Appl. 287 2003) 405 414 www.elsevier.com/locate/jmaa Merit functions and error bounds for generalized variational inequalities M.V. Solodov 1 Instituto de Matemática Pura e Aplicada, Estrada
More informationOpen Problems in Nonlinear Conjugate Gradient Algorithms for Unconstrained Optimization
BULLETIN of the Malaysian Mathematical Sciences Society http://math.usm.my/bulletin Bull. Malays. Math. Sci. Soc. (2) 34(2) (2011), 319 330 Open Problems in Nonlinear Conjugate Gradient Algorithms for
More informationThe Generalized Viscosity Implicit Rules of Asymptotically Nonexpansive Mappings in Hilbert Spaces
Applied Mathematical Sciences, Vol. 11, 2017, no. 12, 549-560 HIKARI Ltd, www.m-hikari.com https://doi.org/10.12988/ams.2017.718 The Generalized Viscosity Implicit Rules of Asymptotically Nonexpansive
More information