An Implicit Multi-Step Diagonal Secant-Type Method for Solving Large-Scale Systems of Nonlinear Equations
Applied Mathematical Sciences, Vol. 6, 2012, no. 114

An Implicit Multi-Step Diagonal Secant-Type Method for Solving Large-Scale Systems of Nonlinear Equations

1 M. Y. Waziri, 2 Z. A. Majid and 3 H. Aisha

2 Institute for Mathematical Research, Universiti Putra Malaysia, Serdang, Selangor, Malaysia
1,2 Department of Mathematics, Faculty of Science, Universiti Putra Malaysia, Serdang, Malaysia
1,3 Department of Mathematical Sciences, Faculty of Science, Bayero University Kano, Nigeria
mywaziri@gmail.com

Abstract

This paper presents an improved diagonal secant-like method that uses a two-step approach for solving large-scale systems of nonlinear equations. In this scheme, instead of using a direct updating matrix at every iteration to construct the interpolation curves, we use an implicit updating approach to obtain an enhanced approximation of the Jacobian matrix that requires only vector storage. The fact that the proposed method solves systems of nonlinear equations without the cost of storing the weighted matrix is a clear advantage of this method over some variants of Newton's method.

Mathematics Subject Classification: 65H11, 65K05

Keywords: Diagonal, Newton's method

1 Introduction

Consider the system of nonlinear equations

\[
F(x) = 0, \tag{1}
\]

where $F : \mathbb{R}^n \to \mathbb{R}^n$ is a nonlinear mapping. The mapping $F$ is often assumed to satisfy the following assumptions:

A1. There exists an $x^* \in \mathbb{R}^n$ such that $F(x^*) = 0$;
A2. $F$ is continuously differentiable in a neighborhood of $x^*$;

A3. $F'(x^*)$ is invertible.

The most prominent method for finding a solution of (1) is Newton's method, which generates a sequence of iterates $\{x_k\}$ from a given initial guess $x_0$ via

\[
x_{k+1} = x_k - \big(F'(x_k)\big)^{-1} F(x_k), \qquad k = 0, 1, 2, \ldots \tag{2}
\]

Nevertheless, Newton's method requires computing a matrix that entails the first-order derivatives of the system. In practice, computing some of these derivatives is quite costly, and sometimes they are not available or cannot be obtained precisely; in such cases Newton's method cannot be applied directly. Significant efforts have been made to eliminate the well-known shortcomings of Newton's method for solving systems of nonlinear equations, particularly large-scale systems: for example, the Chord Newton method, inexact Newton methods, quasi-Newton methods, etc. (see e.g. [1], [3], [16], [6], [9]). On the other hand, most of these variants of Newton's method still share some of its shortcomings. For example, Broyden's method and the Chord Newton method need to store an $n \times n$ matrix, and their floating-point operation counts are $O(n^2)$. To do away with these disadvantages, a single-step diagonal Newton-like method was suggested by Leong et al. [5], who showed that their approach is significantly cheaper than Newton's method and some of its variants. To incorporate higher-order accuracy in approximating the Jacobian matrix, Waziri et al. [7] proposed a two-step approach to develop a diagonal updating scheme $D_{k+1}$ that satisfies the weak secant equation

\[
\rho_k^T D_{k+1} \rho_k = \rho_k^T \mu_k, \tag{3}
\]

where $\rho_k = s_k - \alpha_k s_{k-1}$ and $\mu_k = y_k - \alpha_k y_{k-1}$.
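Iteration (2) can be made concrete with a short sketch. The example below is illustrative only (the system, starting point and helper names are not from the paper): it applies Newton's method to the 2x2 system $F(x, y) = (x^2 + y^2 - 4,\; x - y)$, whose positive root is $(\sqrt{2}, \sqrt{2})$, solving each linear Newton step by Cramer's rule.

```python
import math

def newton_2x2(F, J, x, y, tol=1e-12, max_iter=50):
    """Newton's method x_{k+1} = x_k - F'(x_k)^{-1} F(x_k) for a 2x2 system."""
    for _ in range(max_iter):
        f1, f2 = F(x, y)
        if max(abs(f1), abs(f2)) <= tol:
            break
        a, b, c, d = J(x, y)           # Jacobian entries [[a, b], [c, d]]
        det = a * d - b * c            # assumed nonsingular (cf. assumption A3)
        dx = (f1 * d - b * f2) / det   # solve J * (dx, dy) = F by Cramer's rule
        dy = (a * f2 - c * f1) / det
        x, y = x - dx, y - dy          # Newton step
    return x, y

# Illustrative system: x^2 + y^2 = 4 and x = y, with positive root (sqrt(2), sqrt(2)).
F = lambda x, y: (x * x + y * y - 4.0, x - y)
J = lambda x, y: (2.0 * x, 2.0 * y, 1.0, -1.0)
root = newton_2x2(F, J, 1.0, 1.0)
```

Starting from $(1, 1)$, the iterates converge quadratically to $(\sqrt{2}, \sqrt{2})$; the cost per step is dominated by forming and solving with the Jacobian, which is exactly the expense the diagonal schemes below avoid.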
For this purpose, equation (3) is obtained by constructing interpolating quadratic curves $x(\nu)$ and $y(\nu)$, where $x(\nu)$ interpolates the iterates $x_{k-1}$, $x_k$ and $x_{k+1}$, and $y(\nu)$ interpolates the function values $F_{k-1}$, $F_k$ and $F_{k+1}$. A diagonal updating matrix $D_{k+1}$ that satisfies (3) is obtained via the relation

\[
D_{k+1} = D_k + \frac{\rho_k^T \mu_k - \rho_k^T D_k \rho_k}{\mathrm{Tr}(H_k^2)}\, H_k, \tag{4}
\]

where $H_k = \mathrm{diag}\big((\rho_k^{(1)})^2, (\rho_k^{(2)})^2, \ldots, (\rho_k^{(n)})^2\big)$, $\mathrm{Tr}(H_k^2) = \sum_{i=1}^n (\rho_k^{(i)})^4$ and $\mathrm{Tr}$ is the trace operator. In order to find the values of the $\nu_j$, consider the metric defined by

\[
\|Z\|_M = \{Z^T M Z\}^{1/2}. \tag{5}
\]
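Update (4) touches only a diagonal, so it runs in $O(n)$ time and storage. The sketch below (toy data; function and variable names are illustrative, not from the paper) applies (4) with $D_k$ stored as a vector; because the correction is a multiple of $H_k$, the coefficient absorbs the residual $\rho_k^T \mu_k - \rho_k^T D_k \rho_k$ exactly, so $D_{k+1}$ satisfies the weak secant equation (3) by construction.

```python
def diagonal_update(D, rho, mu):
    """Update the diagonal Jacobian approximation D_k (stored as a vector)
    via (4): D_{k+1} = D_k + (rho^T mu - rho^T D rho) / Tr(H^2) * H,
    where H = diag(rho_i^2) and Tr(H^2) = sum(rho_i^4)."""
    rho_T_mu = sum(r * m for r, m in zip(rho, mu))
    rho_T_D_rho = sum(d * r * r for d, r in zip(D, rho))
    trace_H2 = sum(r ** 4 for r in rho)                 # Tr(H_k^2)
    coeff = (rho_T_mu - rho_T_D_rho) / trace_H2
    return [d + coeff * r * r for d, r in zip(D, rho)]  # O(n) work and storage

# Toy data (illustrative): n = 3, starting from D_0 = I.
rho = [0.5, -1.0, 2.0]
mu = [1.2, -2.5, 3.9]
D_new = diagonal_update([1.0, 1.0, 1.0], rho, mu)
weak_secant_lhs = sum(d * r * r for d, r in zip(D_new, rho))  # rho^T D_{k+1} rho
weak_secant_rhs = sum(r * m for r, m in zip(rho, mu))         # rho^T mu
```

After the update, `weak_secant_lhs` and `weak_secant_rhs` agree to machine precision, which is the defining property (3) of the scheme.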
By letting $\nu_2 = 0$ and $M = D_k$, $\nu_0$ and $\nu_1$ can be computed as follows [7]:

\[
\nu_2 - \nu_1 = \|x(\nu_2) - x(\nu_1)\|_{D_k} = \|x_{k+1} - x_k\|_{D_k} = \|s_k\|_{D_k} = (s_k^T D_k s_k)^{1/2}, \tag{6}
\]

and

\[
\nu_2 - \nu_0 = \|x(\nu_2) - x(\nu_0)\|_{D_k} = \|x_{k+1} - x_{k-1}\|_{D_k} = \|s_k + s_{k-1}\|_{D_k} = \big((s_k + s_{k-1})^T D_k (s_k + s_{k-1})\big)^{1/2}. \tag{7}
\]

Waziri et al. [7] incorporated the attributes of the multi-step approach to improve on the single-step method. Moreover, they used the diagonal updating matrix $D_k$ as the weighted matrix that parameterizes the interpolating polynomials at every iteration. The numerical performance of the multi-step approach is very encouraging. Based on this fact, it is interesting to present an approach that further improves the Jacobian approximation in diagonal form. This is the idea behind this paper, which is organized as follows: the next section presents the details of the proposed method, some numerical results are reported in Section 3, and conclusions are drawn in Section 4.

2 Derivation process of the IMDS update

A new two-step diagonal secant-like method for solving large-scale systems of nonlinear equations is presented in this section. The main idea is to employ an implicit strategy to obtain a weighted matrix different from the one proposed by Waziri et al. [7]. The anticipation is to improve the accuracy of the Jacobian approximation; this is made possible by choosing a new approach for interpolating the curve, while keeping the storage and time requirements of the proposed method at linear order (see [13] for more details). The performance of this approach depends strongly on how the $\{\nu_j\}_{j=0}^{2}$ are obtained; hence a diagonal weighted matrix, say $\widetilde{D}_{k-1}$, is used to obtain more precise values of $\rho_k$, $\mu_k$ and $\beta$. It is important to note that $\widetilde{D}_{k-1}$ is quite different from $D_k$. Therefore, the weighted diagonal
matrix of this scheme is given as

\[
\widetilde{D}_{k-1} = \mathrm{Diag}(D_k, \tilde{\rho}_{k-1}, \tilde{\mu}_{k-1}). \tag{8}
\]

Note that in [7] the weighted matrix is taken to be $D_k$. The multi-step implicit updating scheme is given as

\[
\nu_1 = \{s_k^T \widetilde{D}_{k-1} s_k\}^{1/2}, \tag{9}
\]

\[
\nu_0 = \{(s_k + 2 s_{k-1})^T \widetilde{D}_{k-1} s_k + s_{k-1}^T \widetilde{D}_{k-1} s_{k-1}\}^{1/2}, \tag{10}
\]

together with

\[
\tilde{\rho}_{k-1} = s_k - \frac{\tilde{\beta}}{\tilde{\beta} + 2}\, s_{k-1}, \tag{11}
\]

\[
\tilde{\mu}_{k-1} = y_k - \frac{\tilde{\beta}}{\tilde{\beta} + 2}\, y_{k-1}, \tag{12}
\]

where $\tilde{\beta}$ is given by

\[
\tilde{\beta} = \frac{\nu_2 - \nu_0}{\nu_1 - \nu_0}. \tag{13}
\]

We require that $\widetilde{D}_{k-1}$ satisfies the secant equation, i.e.

\[
\widetilde{D}_{k-1}\, \tilde{\rho}_{k-1} = \tilde{\mu}_{k-1}. \tag{14}
\]

Since it is difficult for a diagonal matrix to satisfy the secant equation, in particular because Jacobian approximations are not usually carried out element-wise, we use the weak secant condition [4] instead:

\[
\tilde{\rho}_{k-1}^T \widetilde{D}_{k-1}\, \tilde{\rho}_{k-1} = \tilde{\rho}_{k-1}^T \tilde{\mu}_{k-1}. \tag{15}
\]

Using the same approach as in [7], the new weighted matrix is obtained as

\[
\widetilde{D}_{k-1} = D_k + \frac{\tilde{\rho}_{k-1}^T \tilde{\mu}_{k-1} - \tilde{\rho}_{k-1}^T D_k \tilde{\rho}_{k-1}}{\mathrm{Tr}(\widetilde{G}_{k-1}^2)}\, \widetilde{G}_{k-1}, \tag{16}
\]

where $\widetilde{G}_{k-1} = \mathrm{diag}\big((\tilde{\rho}_{k-1}^{(1)})^2, (\tilde{\rho}_{k-1}^{(2)})^2, \ldots, (\tilde{\rho}_{k-1}^{(n)})^2\big)$. Since $\widetilde{D}_{k-1}$ is a diagonal matrix, it is clear that it can be obtained very cheaply. Hence, by using the new
weighted matrix, the values of the $\{\nu_j\}_{j=0}^{2}$ can be obtained, which leads to the following:

\[
\rho_k = s_k - \frac{\beta^2}{1 + 2\beta}\, s_{k-1}, \tag{17}
\]

\[
\mu_k = y_k - \frac{\beta^2}{1 + 2\beta}\, y_{k-1}, \tag{18}
\]

where $\beta$ is given by

\[
\beta = \frac{\nu_2 - \nu_0}{\nu_1 - \nu_0}. \tag{19}
\]

To this end, the updating formula for $D$ is given as follows:

\[
D_{k+1} = D_k + \frac{\rho_k^T \mu_k - \rho_k^T D_k \rho_k}{\mathrm{Tr}(G_k^2)}\, G_k, \tag{20}
\]

where $G_k = \mathrm{diag}\big((\rho_k^{(1)})^2, (\rho_k^{(2)})^2, \ldots, (\rho_k^{(n)})^2\big)$, $\mathrm{Tr}(G_k^2) = \sum_{i=1}^n (\rho_k^{(i)})^4$ and $\mathrm{Tr}$ is the trace operator. To safeguard against the possibility of generating an undefined $D_{k+1}$, the update is only applied when $\|\rho_k\|$ is not too small:

\[
D_{k+1} =
\begin{cases}
D_k + \dfrac{\rho_k^T \mu_k - \rho_k^T D_k \rho_k}{\mathrm{Tr}(G_k^2)}\, G_k, & \|\rho_k\| > 10^{-4}, \\[2mm]
D_k, & \text{otherwise.}
\end{cases}
\]

Hence, the stages of the proposed method are given as follows.

Algorithm (IMDS)

Step 1: Choose an initial guess $x_0$ and $D_0 = I$; set $k := 0$.

Step 2: Compute $F(x_k)$. If $\|F(x_k)\| \le \epsilon_1$, stop, where $\epsilon_1 = 10^{-4}$.

Step 3: If $k = 0$, define $x_1 = x_0 - D_0^{-1} F(x_0)$; in general, the new iterate is $x_{k+1} = x_k - D_k^{-1} F(x_k)$, with $s_k = x_{k+1} - x_k$ and $y_k = F(x_{k+1}) - F(x_k)$. If $k = 1$, set $\rho_k = s_k$ and $\mu_k = y_k$ and go to Step 5.

Step 4: If $k \ge 2$, compute $\nu_1$, $\nu_0$ and $\tilde{\beta}$ via (9), (10) and (13), respectively, and find $\tilde{\rho}_{k-1}$ and $\tilde{\mu}_{k-1}$ using (11) and (12), respectively. If $\tilde{\rho}_{k-1}^T \tilde{\mu}_{k-1} \le 10^{-4}\, \|\tilde{\rho}_{k-1}\|_2\, \|\tilde{\mu}_{k-1}\|_2$, set $\tilde{\rho}_{k-1} = s_k$ and $\tilde{\mu}_{k-1} = y_k$; else retain the values computed from (11) and (12) and compute the weighted matrix $\widetilde{D}_{k-1}$ using (16).

Step 5: Compute $\rho_k$ and $\mu_k$ using (17) and (18). If $\rho_k^T \mu_k \le 10^{-4}\, \|\rho_k\|_2\, \|\mu_k\|_2$, set $\rho_k = s_k$ and $\mu_k = y_k$; else retain the values computed from (17) and (18).

Step 6: If $\|\rho_k\|_2 \ge \epsilon_1$, compute $D_{k+1}$ via (20); else set $D_{k+1} = D_k$.

Step 7: Set $k := k + 1$ and go to Step 2.
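To make the flow of Steps 1 through 7 concrete, here is a simplified, self-contained sketch of the iteration. It is illustrative only and departs from the paper in two ways that should be kept in mind: it uses the current $D_k$ itself as the weighting matrix when forming $\nu_1$, $\nu_0$ and $\beta$ (rather than the implicit matrix of (16)), and it folds the Step 4 and Step 5 safeguards into a single check. The toy test system $f_i(x) = 2x_i + \sin(x_i)$, whose Jacobian is the diagonal matrix $\mathrm{diag}(2 + \cos x_i)$ with unique root $x = 0$, is likewise not from the paper's experiments.

```python
import math

def _wnorm(D, v):
    """Weighted norm ||v||_D = (v^T D v)^(1/2) for a diagonal D stored as a vector."""
    return math.sqrt(sum(d * vi * vi for d, vi in zip(D, v)))

def _norm(v):
    return math.sqrt(sum(vi * vi for vi in v))

def imds_like(F, x0, tol=1e-8, max_iter=100):
    """Simplified two-step diagonal secant-type iteration (sketch)."""
    x = list(x0)
    D = [1.0] * len(x)                            # Step 1: D_0 = I
    Fx = F(x)
    s_prev, y_prev = None, None
    for _ in range(max_iter):
        if _norm(Fx) <= tol:                      # Step 2: convergence test
            break
        x_new = [xi - fi / di for xi, fi, di in zip(x, Fx, D)]   # Step 3
        Fx_new = F(x_new)
        s = [a - b for a, b in zip(x_new, x)]     # s_k
        y = [a - b for a, b in zip(Fx_new, Fx)]   # y_k
        rho, mu = s, y                            # single-step fallback
        if s_prev is not None:
            # Steps 4-5 (simplified): nu_2 = 0, nu's measured with the current D.
            nu1 = -_wnorm(D, s)
            nu0 = -_wnorm(D, [a + b for a, b in zip(s, s_prev)])
            denom = nu1 - nu0
            if abs(denom) > 1e-12:
                beta = (0.0 - nu0) / denom        # beta = (nu_2 - nu_0)/(nu_1 - nu_0)
                if abs(1.0 + 2.0 * beta) > 1e-8:
                    t = beta * beta / (1.0 + 2.0 * beta)
                    cand_rho = [a - t * b for a, b in zip(s, s_prev)]   # (17)
                    cand_mu = [a - t * b for a, b in zip(y, y_prev)]    # (18)
                    dot = sum(a * b for a, b in zip(cand_rho, cand_mu))
                    if dot > 1e-4 * _norm(cand_rho) * _norm(cand_mu):   # safeguard
                        rho, mu = cand_rho, cand_mu
        if _norm(rho) > 1e-4:                     # Step 6: safeguarded update (20)
            coeff = (sum(r * m for r, m in zip(rho, mu))
                     - sum(d * r * r for d, r in zip(D, rho))) / sum(r ** 4 for r in rho)
            D = [d + coeff * r * r for d, r in zip(D, rho)]
        x, Fx, s_prev, y_prev = x_new, Fx_new, s, y   # Step 7: advance
    return x, Fx

# Toy system with diagonal Jacobian diag(2 + cos x_i); unique root x = 0.
F = lambda x: [2.0 * xi + math.sin(xi) for xi in x]
x_sol, F_sol = imds_like(F, [0.5, 0.5, 0.5])
```

Throughout, the approximation $D$ is a plain vector, so both the update and the step cost $O(n)$, which is the central selling point of the diagonal family of methods.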
3 Numerical results

In this section, the performance of the IMDS update is compared with that of Newton's method (NM), Broyden's method (BM), the MFDN method proposed by Leong et al. [5] and the 2-MFDN update presented by Waziri et al. [7]. The comparison is based on the following criteria: number of iterations and CPU time in seconds. The computations were performed in MATLAB 7.0 in double precision. The stopping criterion used is

\[
\|\rho_k\| + \|F(x_k)\| \le \epsilon_1. \tag{21}
\]

The identity matrix was chosen as the initial approximate Jacobian. The benchmark problems were run at seven different dimensions, ranging upward from 50. A run was also terminated whenever one of the following occurred: (i) the number of iterations reached 600 without producing a point $x_k$ satisfying (21); (ii) the CPU time reached 600 seconds; (iii) there was insufficient memory to initiate the run. All results are presented using the performance-profile indices of robustness and efficiency proposed in [2]. Some details on the benchmark test problems are given below.

Problem 1 (trigonometric system of Byeong [14]):
$f_i(x) = \cos(x_i) - 1$, $i = 1, 2, \ldots, n$, and $x_0 = (0.87, 0.87, \ldots, 0.87)$.

Problem 2 [7]:
$f_i(x) = \ln(x_i)\, \cos\big(1 - (1 + (x^T x)^2)^{-1}\big)\, \exp\big(1 - (1 + (x^T x)^2)^{-1}\big)$, $i = 1, 2, \ldots, n$, and $x_0 = (2.5, 2.5, \ldots, 2.5)$.

Problem 3 (sparse system of Byeong [14]):
$f_i(x) = x_i x_{i+1} - 1$, $i = 1, 2, \ldots, n-1$, $f_n(x) = x_n x_1 - 1$, and $x_0 = (0.5, 0.5, \ldots, 0.5)$.
Problem 4 [7]:
$f_i(x) = n(x_i - 3)^2 + \cos(x_i - 3) - 2x_i - 2\exp(x_i - 3) + \log(x_i^2 + 1)$, $i = 1, 2, \ldots, n$, and $x_0 = (-3, -3, \ldots, -3)$.

Problem 5 [15]:
$f_1(x) = x_1$, $f_i(x) = \cos(x_{i-1}) + x_i - 1$, $i = 2, 3, \ldots, n$, and $x_0 = (0.5, 0.5, \ldots, 0.5)$.

Figure 1. Efficiency profile of the NM, BM, MFDN, 2-MFDN and IMDS methods as the dimension increases (in terms of CPU time).
Figure 2. Robustness profile of the NM, BM, MFDN, 2-MFDN and IMDS methods as the dimension increases (in terms of CPU time).
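For reference, the residual functions of Problems 1 and 3 translate directly into code; this is an illustrative sketch (the function names are ours), and the known zeros, $x^* = 0$ for Problem 1 and $x^* = (1, \ldots, 1)$ for Problem 3, are easy to verify.

```python
import math

def problem1(x):
    """Trigonometric system of Byeong [14]: f_i(x) = cos(x_i) - 1."""
    return [math.cos(xi) - 1.0 for xi in x]

def problem3(x):
    """Sparse system of Byeong [14]: f_i = x_i x_{i+1} - 1, f_n = x_n x_1 - 1."""
    n = len(x)
    # The index (i + 1) % n wraps the last equation around to x_1.
    return [x[i] * x[(i + 1) % n] - 1.0 for i in range(n)]

# Both residuals vanish at their known solutions.
r1 = problem1([0.0] * 5)
r3 = problem3([1.0] * 5)
```

Written this way, each residual is an $O(n)$ map with no stored matrix, so the problems scale to the large dimensions used in the experiments.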
As expected, Figures 1 and 2 indicate that the diagonal updating proposed in this paper improves notably on the performance of the NM, BM, MFDN and 2-MFDN methods, having the greatest efficiency and robustness indices. This is not surprising: any method that requires storage of the full Jacobian or its inverse, as the NM and BM methods do, is necessarily at a disadvantage when solving large-scale systems. Moreover, Figures 1 and 2 detail the growth of the CPU time of the NM, BM, MFDN, 2-MFDN and IMDS methods as the dimension increases, showing that the CPU time of the IMDS method grows linearly, as opposed to that of some variants of Newton's method. We therefore conclude that the IMDS method significantly improves on the performance of the 2-MFDN method in terms of CPU time for handling large-scale systems of nonlinear equations.

4 Conclusions

A new two-step diagonal variant of the secant method for solving large-scale systems of nonlinear equations has been presented. The fundamental idea behind this approach is to use an implicit updating scheme to obtain an enhanced approximation of the Jacobian matrix in diagonal form that requires only vector storage. Numerical testing provides strong evidence that the IMDS method exhibits enhanced performance on all the tested problems (as measured by CPU time, floating-point operations and matrix storage requirements) in comparison with the other variants of Newton's method, an advantage which becomes more pronounced as the dimension of the problems increases.

References

[1] J. E. Dennis, Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Prentice-Hall, Englewood Cliffs, New Jersey, 1983.

[2] I. Bogle and J. D. Perkins, A new sparsity preserving quasi-Newton update for solving nonlinear equations, SIAM J. Sci. Statist. Comput., 11 (1990).

[3] Natasa K. and Zorna L., Newton-like method with modification of the right-hand vector, Math. Comput., 71 (2001).

[4] J. E. Dennis and H. Wolkowicz, Sizing and least-change secant methods, SIAM J. Numer. Anal., 30 (1993).

[5] W. J. Leong, M. A. Hassan and M. Y. Waziri, A matrix-free quasi-Newton method for solving large-scale nonlinear systems, Comput. Math. Appl., 62 (5) (2011).

[6] M. Y. Waziri, W. J. Leong, M. A. Hassan and M. Monsi, A new Newton method with diagonal Jacobian approximation for systems of nonlinear equations, Journal of Mathematics and Statistics, Science Publications, 6 (3) (2010).

[7] M. Y. Waziri, W. J. Leong and M. Mamat, A two-step matrix-free secant method for solving large-scale systems of nonlinear equations, Journal of Applied Mathematics, vol. 2012, Article ID 348654, 9 pages, doi:10.1155/2012/348654.

[8] C. T. Kelley, Iterative Methods for Linear and Nonlinear Equations, SIAM, Philadelphia, PA.

[9] M. Y. Waziri, W. J. Leong and M. A. Hassan, Diagonal Broyden-like method for large-scale systems of nonlinear equations, Malaysian Journal of Mathematical Sciences, 6 (1) (2012).

[10] J. A. Ford and L. A. Moghrabi, Alternating multi-step quasi-Newton methods for unconstrained optimization, J. Comput. Appl. Math. (1997).

[11] J. A. Ford and L. A. Moghrabi, Multi-step quasi-Newton methods for optimization, J. Comput. Appl. Math. (1994).

[12] M. Farid, W. J. Leong and M. A. Hassan, A new two-step gradient-type method for large-scale unconstrained optimization, Comput. Math. Appl., 59.

[13] J. A. Ford and S. Thrmlikit, New implicit updates in multi-step quasi-Newton methods for unconstrained optimization, J. Comput. Appl. Math. (2003).

[14] C. S. Byeong, M. T. Darvishi and H. K. Chang, A comparison of the Newton-Krylov method with high order Newton-like methods to solve nonlinear systems, Appl. Math. Comput., 217.

[15] A. Roose, V. L. M. Kulla and T. Meressoo, Test Examples of Systems of Nonlinear Equations, Estonian Software and Computer Service Company, Tallinn.

[16] M. Y. Waziri, W. J. Leong, M. A. Hassan and M. Monsi, Mathematical Problems in Engineering, vol. 2011, Article ID 467017, 12 pages, doi:10.1155/2011/467017.

Received: June, 2012
Applied Math 205 Last time: piecewise polynomial interpolation, least-squares fitting Today: underdetermined least squares, nonlinear least squares Homework 1 (and subsequent homeworks) have several parts
More informationIMPROVING THE CONVERGENCE ORDER AND EFFICIENCY INDEX OF QUADRATURE-BASED ITERATIVE METHODS FOR SOLVING NONLINEAR EQUATIONS
136 IMPROVING THE CONVERGENCE ORDER AND EFFICIENCY INDEX OF QUADRATURE-BASED ITERATIVE METHODS FOR SOLVING NONLINEAR EQUATIONS 1Ogbereyivwe, O. and 2 Ojo-Orobosa, V. O. Department of Mathematics and Statistics,
More informationMATH 350: Introduction to Computational Mathematics
MATH 350: Introduction to Computational Mathematics Chapter IV: Locating Roots of Equations Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Spring 2011 fasshauer@iit.edu
More informationGeneralization Of The Secant Method For Nonlinear Equations
Applied Mathematics E-Notes, 8(2008), 115-123 c ISSN 1607-2510 Available free at mirror sites of http://www.math.nthu.edu.tw/ amen/ Generalization Of The Secant Method For Nonlinear Equations Avram Sidi
More informationQuasi-Newton methods: Symmetric rank 1 (SR1) Broyden Fletcher Goldfarb Shanno February 6, / 25 (BFG. Limited memory BFGS (L-BFGS)
Quasi-Newton methods: Symmetric rank 1 (SR1) Broyden Fletcher Goldfarb Shanno (BFGS) Limited memory BFGS (L-BFGS) February 6, 2014 Quasi-Newton methods: Symmetric rank 1 (SR1) Broyden Fletcher Goldfarb
More informationQuasi-Newton Methods
Newton s Method Pros and Cons Quasi-Newton Methods MA 348 Kurt Bryan Newton s method has some very nice properties: It s extremely fast, at least once it gets near the minimum, and with the simple modifications
More informationA COMBINED CLASS OF SELF-SCALING AND MODIFIED QUASI-NEWTON METHODS
A COMBINED CLASS OF SELF-SCALING AND MODIFIED QUASI-NEWTON METHODS MEHIDDIN AL-BAALI AND HUMAID KHALFAN Abstract. Techniques for obtaining safely positive definite Hessian approximations with selfscaling
More informationMotivation: We have already seen an example of a system of nonlinear equations when we studied Gaussian integration (p.8 of integration notes)
AMSC/CMSC 460 Computational Methods, Fall 2007 UNIT 5: Nonlinear Equations Dianne P. O Leary c 2001, 2002, 2007 Solving Nonlinear Equations and Optimization Problems Read Chapter 8. Skip Section 8.1.1.
More informationA Fifth-Order Iterative Method for Solving Nonlinear Equations
International Journal of Mathematics and Statistics Invention (IJMSI) E-ISSN: 2321 4767, P-ISSN: 2321 4759 www.ijmsi.org Volume 2 Issue 10 November. 2014 PP.19-23 A Fifth-Order Iterative Method for Solving
More informationCourse Notes: Week 1
Course Notes: Week 1 Math 270C: Applied Numerical Linear Algebra 1 Lecture 1: Introduction (3/28/11) We will focus on iterative methods for solving linear systems of equations (and some discussion of eigenvalues
More informationGradient method based on epsilon algorithm for large-scale nonlinearoptimization
ISSN 1746-7233, England, UK World Journal of Modelling and Simulation Vol. 4 (2008) No. 1, pp. 64-68 Gradient method based on epsilon algorithm for large-scale nonlinearoptimization Jianliang Li, Lian
More informationA NEW INTEGRATOR FOR SPECIAL THIRD ORDER DIFFERENTIAL EQUATIONS WITH APPLICATION TO THIN FILM FLOW PROBLEM
Indian J. Pure Appl. Math., 491): 151-167, March 218 c Indian National Science Academy DOI: 1.17/s13226-18-259-6 A NEW INTEGRATOR FOR SPECIAL THIRD ORDER DIFFERENTIAL EQUATIONS WITH APPLICATION TO THIN
More informationAN ALTERNATING MINIMIZATION ALGORITHM FOR NON-NEGATIVE MATRIX APPROXIMATION
AN ALTERNATING MINIMIZATION ALGORITHM FOR NON-NEGATIVE MATRIX APPROXIMATION JOEL A. TROPP Abstract. Matrix approximation problems with non-negativity constraints arise during the analysis of high-dimensional
More informationApplied Computational Economics Workshop. Part 3: Nonlinear Equations
Applied Computational Economics Workshop Part 3: Nonlinear Equations 1 Overview Introduction Function iteration Newton s method Quasi-Newton methods Practical example Practical issues 2 Introduction Nonlinear
More informationMethods that avoid calculating the Hessian. Nonlinear Optimization; Steepest Descent, Quasi-Newton. Steepest Descent
Nonlinear Optimization Steepest Descent and Niclas Börlin Department of Computing Science Umeå University niclas.borlin@cs.umu.se A disadvantage with the Newton method is that the Hessian has to be derived
More informationMath 411 Preliminaries
Math 411 Preliminaries Provide a list of preliminary vocabulary and concepts Preliminary Basic Netwon s method, Taylor series expansion (for single and multiple variables), Eigenvalue, Eigenvector, Vector
More informationCubic regularization in symmetric rank-1 quasi-newton methods
Math. Prog. Comp. (2018) 10:457 486 https://doi.org/10.1007/s12532-018-0136-7 FULL LENGTH PAPER Cubic regularization in symmetric rank-1 quasi-newton methods Hande Y. Benson 1 David F. Shanno 2 Received:
More informationDocument downloaded from:
Document downloaded from: http://hdl.handle.net/1051/56036 This paper must be cited as: Cordero Barbero, A.; Torregrosa Sánchez, JR.; Penkova Vassileva, M. (013). New family of iterative methods with high
More informationShiqian Ma, MAT-258A: Numerical Optimization 1. Chapter 3. Gradient Method
Shiqian Ma, MAT-258A: Numerical Optimization 1 Chapter 3 Gradient Method Shiqian Ma, MAT-258A: Numerical Optimization 2 3.1. Gradient method Classical gradient method: to minimize a differentiable convex
More informationNEW ITERATIVE METHODS BASED ON SPLINE FUNCTIONS FOR SOLVING NONLINEAR EQUATIONS
Bulletin of Mathematical Analysis and Applications ISSN: 181-191, URL: http://www.bmathaa.org Volume 3 Issue 4(011, Pages 31-37. NEW ITERATIVE METHODS BASED ON SPLINE FUNCTIONS FOR SOLVING NONLINEAR EQUATIONS
More informationOn the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method
Optimization Methods and Software Vol. 00, No. 00, Month 200x, 1 11 On the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method ROMAN A. POLYAK Department of SEOR and Mathematical
More information15 Nonlinear Equations and Zero-Finders
15 Nonlinear Equations and Zero-Finders This lecture describes several methods for the solution of nonlinear equations. In particular, we will discuss the computation of zeros of nonlinear functions f(x).
More informationOn Solving Large Algebraic. Riccati Matrix Equations
International Mathematical Forum, 5, 2010, no. 33, 1637-1644 On Solving Large Algebraic Riccati Matrix Equations Amer Kaabi Department of Basic Science Khoramshahr Marine Science and Technology University
More information5 Quasi-Newton Methods
Unconstrained Convex Optimization 26 5 Quasi-Newton Methods If the Hessian is unavailable... Notation: H = Hessian matrix. B is the approximation of H. C is the approximation of H 1. Problem: Solve min
More informationPART I Lecture Notes on Numerical Solution of Root Finding Problems MATH 435
PART I Lecture Notes on Numerical Solution of Root Finding Problems MATH 435 Professor Biswa Nath Datta Department of Mathematical Sciences Northern Illinois University DeKalb, IL. 60115 USA E mail: dattab@math.niu.edu
More informationApplied Optimization: Formulation and Algorithms for Engineering Systems Slides
Applied Optimization: Formulation and Algorithms for Engineering Systems Slides Ross Baldick Department of Electrical and Computer Engineering The University of Texas at Austin Austin, TX 78712 Copyright
More informationMATH 4211/6211 Optimization Basics of Optimization Problems
MATH 4211/6211 Optimization Basics of Optimization Problems Xiaojing Ye Department of Mathematics & Statistics Georgia State University Xiaojing Ye, Math & Stat, Georgia State University 0 A standard minimization
More informationNew hybrid conjugate gradient methods with the generalized Wolfe line search
Xu and Kong SpringerPlus (016)5:881 DOI 10.1186/s40064-016-5-9 METHODOLOGY New hybrid conjugate gradient methods with the generalized Wolfe line search Open Access Xiao Xu * and Fan yu Kong *Correspondence:
More informationLinear Solvers. Andrew Hazel
Linear Solvers Andrew Hazel Introduction Thus far we have talked about the formulation and discretisation of physical problems...... and stopped when we got to a discrete linear system of equations. Introduction
More informationA NOTE ON Q-ORDER OF CONVERGENCE
BIT 0006-3835/01/4102-0422 $16.00 2001, Vol. 41, No. 2, pp. 422 429 c Swets & Zeitlinger A NOTE ON Q-ORDER OF CONVERGENCE L. O. JAY Department of Mathematics, The University of Iowa, 14 MacLean Hall Iowa
More informationTwo Point Methods For Non Linear Equations Neeraj Sharma, Simran Kaur
28 International Journal of Advance Research, IJOAR.org Volume 1, Issue 1, January 2013, Online: Two Point Methods For Non Linear Equations Neeraj Sharma, Simran Kaur ABSTRACT The following paper focuses
More information13. Nonlinear least squares
L. Vandenberghe ECE133A (Fall 2018) 13. Nonlinear least squares definition and examples derivatives and optimality condition Gauss Newton method Levenberg Marquardt method 13.1 Nonlinear least squares
More informationA new sixth-order scheme for nonlinear equations
Calhoun: The NPS Institutional Archive DSpace Repository Faculty and Researchers Faculty and Researchers Collection 202 A new sixth-order scheme for nonlinear equations Chun, Changbum http://hdl.handle.net/0945/39449
More informationMethods for Unconstrained Optimization Numerical Optimization Lectures 1-2
Methods for Unconstrained Optimization Numerical Optimization Lectures 1-2 Coralia Cartis, University of Oxford INFOMM CDT: Modelling, Analysis and Computation of Continuous Real-World Problems Methods
More information