On the influence of center-Lipschitz conditions in the convergence analysis of multi-point iterative methods

Ioannis K. Argyros (1), Santhosh George (2)

(1) Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA, iargyros@cameron.edu
(2) Department of Mathematical and Computational Sciences, National Institute of Technology Karnataka, India, sgeorge@nitk.edu.in

by the author(s). Distributed under a Creative Commons CC BY license.

Abstract. The aim of this article is to extend the local as well as the semilocal convergence analysis of multi-point iterative methods using center-Lipschitz conditions in combination with our idea of the restricted convergence region. It turns out that in this way a finer convergence analysis for these methods is obtained than in earlier works, without additional hypotheses. Numerical examples favoring our technique over earlier ones complete the article.

AMS Subject Classification: 65F08, 37F50, 65N12.

Key Words: Multi-point iterative methods; Banach space; local and semilocal convergence analysis.

1 Introduction

Let X, Y be Banach spaces and Ω ⊆ X be a nonempty open set. By B(X, Y) we denote the space of bounded linear operators from X into Y. Let also U(w, d) be the open ball centered at w ∈ X of radius d > 0 and Ū(w, d) be its closure. Many problems from diverse disciplines such as Mathematics, Optimization, Mathematical Programming, Chemistry, Biology, Physics, Economics, Statistics, Engineering and other fields [1]-[32] can be reduced to finding a solution x^* of the equation

H(x) = 0,    (1.1)

where H : Ω → Y is a continuous operator. Since a unique solution x^* of equation (1.1) in a neighborhood of some initial data x_0 can be obtained in closed form only in special cases, researchers construct iterative methods which generate a sequence converging to x^*. The most widely used iterative method is Newton's, defined for each n = 0, 1, 2, ... by

x_0 ∈ Ω,  x_{n+1} = x_n − H'(x_n)^{-1} H(x_n).    (1.2)
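For illustration only, here is a minimal Python/NumPy sketch of Newton's method (1.2) for a system in R^m; the names newton, H, dH and the tolerance are our choices and not part of the analysis.

    import numpy as np

    def newton(H, dH, x0, tol=1e-12, max_iter=50):
        # Newton's method (1.2): x_{n+1} = x_n - H'(x_n)^{-1} H(x_n),
        # solving the linear system instead of forming the inverse.
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            step = np.linalg.solve(dH(x), H(x))
            x = x - step
            if np.linalg.norm(step) < tol:
                break
        return x

    # illustrative use: the scalar equation x^3 - 8 = 0 written as a 1-d system
    root = newton(lambda x: np.array([x[0] ** 3 - 8.0]),
                  lambda x: np.array([[3.0 * x[0] ** 2]]),
                  x0=[3.0])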

The order of convergence is an important concern when dealing with iterative methods. The computational cost in general increases as the convergence order increases. That is why researchers and practitioners have developed iterative methods that, on the one hand, avoid the computation of higher-order derivatives and, on the other hand, achieve a high order of convergence. We consider the following multi-step iterative method, defined for each n = 0, 1, 2, ... by

u_n = v_n^{(0)},
v_n^{(1)} = v_n^{(0)} − H'(v_n^{(0)})^{-1} H(v_n^{(0)}),
v_n^{(2)} = v_n^{(1)} − H'(v_n^{(0)})^{-1} H(v_n^{(1)}),    (1.3)
...
u_{n+1} = v_n^{(k)} = v_n^{(k-1)} − H'(v_n^{(0)})^{-1} H(v_n^{(k-1)}).

The semi-local convergence of method (1.3) was given in [?]. It is well known that, in general, the convergence region shrinks as the convergence order increases. To avoid this problem, we introduce a center-Lipschitz-type condition that helps us determine a region, at least as small as the one used before, containing the iterates {u_n}. On this region the resulting Lipschitz constants are at least as small, and a tighter convergence analysis is obtained. Moreover, in earlier works the order of convergence was shown using Taylor expansions and conditions involving derivatives of H up to order k + 1, although these derivatives do not appear in method (1.3).

As an academic example, let X = Y = R and Ω = [−5/2, 3/2]. Define ϕ on Ω by

ϕ(x) = x^3 log x^2 + x^5 − x^4.

Then

ϕ'(x) = 3x^2 log x^2 + 5x^4 − 4x^3 + 2x^2,
ϕ''(x) = 6x log x^2 + 20x^3 − 12x^2 + 10x,
ϕ'''(x) = 6 log x^2 + 60x^2 − 24x + 22.

Obviously ϕ'''(x) is not bounded on Ω. So the convergence of method (1.3) is not guaranteed by the analyses in [?,?,?].

The rest of the article is organized as follows: Section 2 contains the local convergence analysis and Section 3 the semi-local convergence analysis. The numerical examples are given in Section 4, and Section 5 concludes the article.
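Again purely for illustration, the following sketch (function names, tolerance and the stopping rule are ours) shows how one cycle of the k-step method (1.3) reuses the single derivative H'(v_n^{(0)}):

    import numpy as np

    def multi_step(H, dH, u0, k=3, tol=1e-12, max_cycles=50):
        # One realization of method (1.3): k substeps per cycle, all reusing
        # the derivative H'(v_n^(0)) evaluated once at the start of the cycle.
        u = np.asarray(u0, dtype=float)
        for _ in range(max_cycles):
            J = dH(u)                        # H'(v_n^(0)), frozen for the cycle
            v = u.copy()
            for _ in range(k):               # v^(i+1) = v^(i) - H'(v^(0))^{-1} H(v^(i))
                v = v - np.linalg.solve(J, H(v))
            if np.linalg.norm(v - u) < tol:  # stop when a full cycle barely moves
                return v
            u = v                            # u_{n+1} = v_n^(k)
        return u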

2 Local convergence

Let L_0 > 0, L > 0 and L_1 ≥ 1 be parameters. Define the scalar quadratic polynomial p by

p(t) = (2L_0 + L) L_0 t^2 − (4L_0 + 4L_0 L_1 + L) t + 2.    (2.1)

The discriminant D of p is given by

D = (4L_0 + 4L_0 L_1 + L)^2 − 8L_0(2L_0 + L) = 16(L_0 L_1)^2 + L^2 + 32 L_0^2 L_1 + 8 L_0 L_1 L > 0,

so p has roots s_1 and s_2 with 0 < s_1 < s_2 by Descartes' rule of signs. Define also the parameters

γ = ( L / (2(1 − L_0 s_1)) + 2 L_0 L_1 / (1 − L_0 s_1)^2 ) s_1    (2.2)

and

r_A = 2 / (2L_0 + L).    (2.3)

Notice that γ ∈ (0, 1], since p(s_1) = 0.

The local convergence analysis of method (1.3) uses the conditions (A):

(a1) H : Ω → Y is a differentiable operator in the sense of Fréchet and there exists x^* ∈ Ω such that H(x^*) = 0 and H'(x^*)^{-1} ∈ B(Y, X).

(a2) There exists L_0 > 0 such that for each x ∈ Ω

‖H'(x^*)^{-1}(H'(x) − H'(x^*))‖ ≤ L_0 ‖x − x^*‖.

Set Ω_0 = Ω ∩ B(x^*, 1/L_0).

(a3) There exist L = L(L_0) > 0 and L_1 = L_1(L_0) ≥ 1 such that for each x, y ∈ Ω_0

‖H'(x^*)^{-1}(H'(y) − H'(x))‖ ≤ L ‖x − y‖

and

‖H'(x^*)^{-1} H(x)‖ ≤ L_1 ‖x − x^*‖.

(a4) B(x^*, s_1) ⊆ Ω.

(a5) There exists s_3 ≥ s_1 such that s_3 < 2/L_0. Set Ω_1 = Ω ∩ B̄(x^*, s_3).
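The quantities (2.1)-(2.3) are easy to evaluate numerically. The following sketch, with made-up sample constants, computes s_1 as the smaller root of p together with γ and r_A; it assumes the formulas exactly as displayed above and is not code from the paper.

    import math

    def local_radius(L0, L, L1):
        # s_1: smaller root of p(t) = (2L0+L)L0 t^2 - (4L0+4L0*L1+L) t + 2, cf. (2.1);
        # gamma from (2.2) and the Newton radius r_A from (2.3).
        a = (2.0 * L0 + L) * L0
        b = 4.0 * L0 + 4.0 * L0 * L1 + L
        disc = b * b - 8.0 * L0 * (2.0 * L0 + L)      # the discriminant D
        s1 = (b - math.sqrt(disc)) / (2.0 * a)        # smaller positive root
        gamma = (L / (2.0 * (1.0 - L0 * s1))
                 + 2.0 * L0 * L1 / (1.0 - L0 * s1) ** 2) * s1
        rA = 2.0 / (2.0 * L0 + L)
        return s1, gamma, rA

    s1, gamma, rA = local_radius(L0=1.0, L=1.5, L1=2.0)   # illustrative constants only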

Based on the preceding conditions and notation, we can show a local convergence result for method (1.3).

THEOREM 2.1 Under the conditions (A), further assume that u_0 ∈ B(x^*, s_1) \ {x^*}. Then, lim_{n→∞} u_n = x^* and the following estimates hold:

‖v_n^{(1)} − x^*‖ ≤ L ‖v_n^{(0)} − x^*‖^2 / (2(1 − L_0 ‖v_n^{(0)} − x^*‖)),    (2.4)

‖v_n^{(i)} − x^*‖ ≤ γ^i ‖v_n^{(0)} − x^*‖ for each i = 2, ..., k,    (2.5)

‖u_{n+1} − x^*‖ ≤ γ^{k+n} ‖u_0 − x^*‖.    (2.6)

Moreover, the point x^* is the unique solution of equation H(x) = 0 in the set Ω_1 given in (a5).

Proof. We use an induction-based proof to show the estimates (2.4)-(2.6). Let x ∈ B(x^*, s_1) \ {x^*}. By (a1) and (a2), we obtain

‖H'(x^*)^{-1}(H'(x) − H'(x^*))‖ ≤ L_0 ‖x − x^*‖ < L_0 s_1 < 1.    (2.7)

It follows from the Banach lemma on invertible operators [?] and (2.7) that H'(x)^{-1} ∈ B(Y, X) and

‖H'(x)^{-1} H'(x^*)‖ ≤ 1 / (1 − L_0 ‖x − x^*‖).    (2.8)

Then, u_0 = v_0^{(0)}, v_0^{(1)}, ..., v_0^{(k)} are well defined by method (1.3) for n = 0. We can write

v_0^{(1)} − x^* = v_0^{(0)} − x^* − H'(v_0^{(0)})^{-1} H(v_0^{(0)}).    (2.9)

Then, by using (a1), (a3), (2.8) and (2.9), we get in turn

‖v_0^{(1)} − x^*‖ = ‖v_0^{(0)} − x^* − H'(v_0^{(0)})^{-1} H(v_0^{(0)})‖
≤ ‖H'(v_0^{(0)})^{-1} H'(x^*)‖ ‖∫_0^1 H'(x^*)^{-1}[H'(x^* + θ(v_0^{(0)} − x^*)) − H'(v_0^{(0)})] dθ (v_0^{(0)} − x^*)‖
≤ L ‖v_0^{(0)} − x^*‖^2 / (2(1 − L_0 ‖v_0^{(0)} − x^*‖))
≤ (L s_1 / (2(1 − L_0 s_1))) ‖v_0^{(0)} − x^*‖ < ‖v_0^{(0)} − x^*‖ < s_1,    (2.10)

which shows (2.4) for n = 0 and v_0^{(1)} ∈ B(x^*, s_1). Similarly, by the second substep, for n = 0 and k = 2 we also get

v_0^{(2)} − x^* = v_0^{(1)} − x^* − H'(v_0^{(0)})^{-1} H(v_0^{(1)})
= v_0^{(1)} − x^* − H'(v_0^{(1)})^{-1} H(v_0^{(1)}) + H'(v_0^{(1)})^{-1}[(H'(v_0^{(0)}) − H'(x^*)) + (H'(x^*) − H'(v_0^{(1)}))] H'(v_0^{(0)})^{-1} H(v_0^{(1)}),    (2.11)

so by (a3), the definition of s_1 and (2.8) (applied for x = v_0^{(1)}, v_0^{(0)}), we get in turn

‖v_0^{(2)} − x^*‖ ≤ L ‖v_0^{(1)} − x^*‖^2 / (2(1 − L_0 ‖v_0^{(1)} − x^*‖))
+ L_0 (‖v_0^{(1)} − x^*‖ + ‖v_0^{(0)} − x^*‖) L_1 ‖v_0^{(1)} − x^*‖ / ((1 − L_0 ‖v_0^{(1)} − x^*‖)(1 − L_0 ‖v_0^{(0)} − x^*‖))
≤ ( L / (2(1 − L_0 s_1)) + 2 L_0 L_1 / (1 − L_0 s_1)^2 ) s_1 ‖v_0^{(1)} − x^*‖ = γ ‖v_0^{(1)} − x^*‖ ≤ γ s_1 < s_1,    (2.12)

which shows (2.5) for n = 0 and k = 2.

Similarly, from

v_0^{(i)} − x^* = v_0^{(i-1)} − x^* − H'(v_0^{(0)})^{-1} H(v_0^{(i-1)})
= v_0^{(i-1)} − x^* − H'(v_0^{(i-1)})^{-1} H(v_0^{(i-1)}) + H'(v_0^{(i-1)})^{-1}[(H'(v_0^{(0)}) − H'(x^*)) + (H'(x^*) − H'(v_0^{(i-1)}))] H'(v_0^{(0)})^{-1} H(v_0^{(i-1)}),    (2.13)

we get

‖v_0^{(i)} − x^*‖ ≤ γ ‖v_0^{(i-1)} − x^*‖ ≤ γ^i ‖v_0^{(0)} − x^*‖ < s_1    (2.14)

and

‖u_1 − x^*‖ ≤ γ^k ‖u_0 − x^*‖ < s_1,    (2.15)

which show (2.5) and (2.6) for n = 0, i = 2, 3, ..., k, and v_0^{(i)}, u_1 ∈ B(x^*, s_1). Similarly, by induction we obtain in turn (as in (2.10))

‖v_j^{(1)} − x^*‖ = ‖v_j^{(0)} − x^* − H'(v_j^{(0)})^{-1} H(v_j^{(0)})‖ ≤ L ‖v_j^{(0)} − x^*‖^2 / (2(1 − L_0 ‖v_j^{(0)} − x^*‖)) < s_1,    (2.16)

(as in (2.13))

v_j^{(i)} − x^* = v_j^{(i-1)} − x^* − H'(v_j^{(0)})^{-1} H(v_j^{(i-1)})
= v_j^{(i-1)} − x^* − H'(v_j^{(i-1)})^{-1} H(v_j^{(i-1)}) + H'(v_j^{(i-1)})^{-1}[(H'(v_j^{(0)}) − H'(x^*)) + (H'(x^*) − H'(v_j^{(i-1)}))] H'(v_j^{(0)})^{-1} H(v_j^{(i-1)}),

leading to (as in (2.14))

‖v_j^{(i)} − x^*‖ ≤ γ^i ‖v_j^{(0)} − x^*‖ < s_1    (2.17)

and (as in (2.15))

‖u_{n+1} − x^*‖ = ‖v_n^{(k)} − x^*‖ ≤ γ^{k+n} ‖u_0 − x^*‖ < s_1,    (2.18)

which completes the induction for (2.4)-(2.6) and also shows that u_{n+1} ∈ B(x^*, s_1). It also follows from (2.18) that lim_{n→∞} u_n = x^*, since γ ≤ 1 and, for u_0 ≠ x^*, the above estimates actually hold with γ replaced by the strictly smaller value obtained from (2.2) when s_1 is replaced by ‖u_0 − x^*‖ < s_1.

Let x^{**} ∈ Ω_1 with H(x^{**}) = 0. It then follows from the definition of Ω_1, (a2) and T = ∫_0^1 H'(x^* + τ(x^{**} − x^*)) dτ that

‖H'(x^*)^{-1}(T − H'(x^*))‖ ≤ L_0 ∫_0^1 τ ‖x^{**} − x^*‖ dτ ≤ (L_0/2) s_3 < 1,

so T^{-1} ∈ B(Y, X). Then, from the identity

0 = H(x^{**}) − H(x^*) = T(x^{**} − x^*),    (2.19)

we get x^{**} = x^*.
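As an informal numerical check of the estimates (2.4)-(2.5) (not part of the proof), the sketch below runs one cycle of (1.3) on a simple scalar equation and reports the error ratios, which stay well below 1 for a starting point close to x^*; the test equation and starting point are illustrative choices of ours.

    import numpy as np

    def one_cycle_ratios(H, dH, xstar, u0, k=3):
        # One cycle of (1.3) on a scalar equation, reporting the error ratios
        # |v^(i) - x*| / |v^(0) - x*| for i = 1, ..., k (cf. (2.4)-(2.5)).
        J = dH(u0)                       # H'(v^(0)), frozen during the cycle
        v, ratios = u0, []
        for _ in range(k):
            v = v - H(v) / J             # v^(i+1) = v^(i) - H'(v^(0))^{-1} H(v^(i))
            ratios.append(abs(v - xstar) / abs(u0 - xstar))
        return ratios

    # illustrative check on H(x) = e^x - 1 with solution x* = 0
    print(one_cycle_ratios(lambda x: np.exp(x) - 1.0, lambda x: np.exp(x),
                           xstar=0.0, u0=0.1))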

REMARK 2.2 (a) In view of (a2), we can write

H'(x^*)^{-1} H'(x) = H'(x^*)^{-1}[(H'(x) − H'(x^*)) + H'(x^*)],

so

‖H'(x^*)^{-1} H'(x)‖ ≤ 1 + ‖H'(x^*)^{-1}(H'(x) − H'(x^*))‖ ≤ 1 + L_0 ‖x − x^*‖.    (2.20)

Hence the second condition in (a3) can be dropped and we can choose L_1 = 2, since

‖H'(x^*)^{-1} H(x)‖ ≤ ∫_0^1 ‖H'(x^*)^{-1} H'(x^* + θ(x − x^*))‖ dθ ‖x − x^*‖ ≤ (1 + L_0 ‖x − x^*‖) ‖x − x^*‖ < 2 ‖x − x^*‖,

because ‖x − x^*‖ ≤ s_1 < 1/L_0.

(b) It follows from the definitions of s_1 and r_A that s_1 < r_A. That is, the radius of convergence s_1 cannot be larger than the radius of convergence r_A of Newton's method obtained by us in [?,?,?,?,?].

(c) The local convergence of method (1.3) was not studied in [?]. But if it had been, the corresponding radius would have been s̄_1, the smallest positive solution of p̄(t) = 0, where

p̄(t) = (2L_0 + L̄) L_0 t^2 − (4L_0 + 4L_0 L̄_1 + L̄) t + 2    (2.21)

and L̄, L̄_1 are the constants for which the conditions in (a3) hold on Ω. But we have

L ≤ L̄    (2.22)

and

L_1 ≤ L̄_1,    (2.23)

since Ω_0 ⊆ Ω. Hence, we also have

γ ≤ γ̄    (2.24)

and

s̄_1 ≤ s_1.    (2.25)

Moreover, if strict inequality holds in (2.22) or (2.23), then s̄_1 < s_1. Furthermore, by (2.25), our error bounds are more precise than the ones using L_0, L̄, L̄_1 and γ̄. Hence, we have extended the applicability of method (1.3) in the local convergence case.
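Remark 2.2(c) can be explored numerically by comparing the smaller roots of (2.1) and (2.21) for the same center constant L_0. The sketch below does this with made-up constants satisfying L ≤ L̄ and L_1 ≤ L̄_1; it is an illustration of ours, not data from the paper.

    import math

    def smaller_root(L0, L, L1):
        # smaller positive root of (2L0+L)L0 t^2 - (4L0+4L0*L1+L) t + 2,
        # i.e. the radius from (2.1) (restricted constants) or (2.21) (full-domain constants)
        a = (2.0 * L0 + L) * L0
        b = 4.0 * L0 + 4.0 * L0 * L1 + L
        return (b - math.sqrt(b * b - 8.0 * L0 * (2.0 * L0 + L))) / (2.0 * a)

    L0 = 1.0
    s1_new = smaller_root(L0, L=1.2, L1=1.5)   # constants obtained on Omega_0
    s1_old = smaller_root(L0, L=2.0, L1=2.5)   # larger constants obtained on all of Omega
    assert s1_old <= s1_new                    # (2.25): the new radius is at least as large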

In a similar way, we improve the semi-local convergence analysis of method (1.3) given in [?]. This work is presented in the next section.

3 Semi-local convergence

We need the following auxiliary result on majorizing sequences for method (1.3).

LEMMA 3.1 Let K_0 > 0, K > 0 and r_0^1 ≥ 0 be parameters. Denote by δ the unique root in the interval (0, 1) of the polynomial ϕ given by

ϕ(t) = 2K_0 t^{k+1} + K(t^k + t^{k-1} − 2).

Define the scalar sequences {q_n} and {r_n^i} for each n = 0, 1, 2, ... and i = 1, 2, ..., k − 1 by

q_0 = r_0^0 = 0,  r_n^0 = q_n,
r_n^{i+1} = r_n^i + K(r_n^i − q_n + r_n^{i-1} − q_n)(r_n^i − r_n^{i-1}) / (1 − 2K_0 q_n),
q_{n+1} = r_n^k,    (3.1)
r_{n+1}^1 = q_{n+1} + K(q_{n+1} − q_n + r_n^{k-1} − q_n)(q_{n+1} − r_n^{k-1}) / (1 − 2K_0 q_{n+1}).

Moreover, suppose that

0 < K(q_1 + r_0^{k-1}) / (1 − 2K_0 q_1) ≤ δ < 1 − 2K_0 r_0^1.    (3.2)

Then, the sequence {q_n} is increasing, bounded from above by q^{**} = r_0^1 / (1 − δ), and converges to its unique least upper bound q^* satisfying q_1 ≤ q^* ≤ q^{**}. Moreover,

r_n^i − r_n^{i-1} ≤ δ (r_n^{i-1} − r_n^{i-2}) ≤ ... ≤ δ^{kn+i-1} r_0^1,    (3.3)

r_{n+1}^1 − r_n^k ≤ δ (r_n^k − r_n^{k-1}) ≤ δ^{k(n+1)} r_0^1    (3.4)

and

q_n = r_n^0 ≤ r_n^1 ≤ r_n^2 ≤ ... ≤ r_n^{k-1} ≤ r_n^k = q_{n+1}.    (3.5)

Proof. Replace t_n, s_n^i, L_0, L, α in [?] by q_n, r_n^i, K_0, K, δ, respectively.

Next, we present the semi-local convergence analysis of method (1.3).

THEOREM 3.2 Let H : Ω → Y be a continuously differentiable operator in the sense of Fréchet and let [·, ·; H] : Ω × Ω → B(X, Y) be a divided difference of order one for H. Suppose there exist x_0 ∈ Ω and K_0 > 0 such that

H'(x_0)^{-1} ∈ B(Y, X),    (3.6)

‖H'(x_0)^{-1} H(x_0)‖ ≤ r_0^1,    (3.7)

‖H'(x_0)^{-1}([x, y; H] − H'(x_0))‖ ≤ K_0 (‖x − x_0‖ + ‖y − x_0‖) for each x, y ∈ Ω.    (3.8)

Set Ω_2 = Ω ∩ B(x_0, 1/(2K_0)). Moreover, suppose that for each x, y, z, w ∈ Ω_2

‖H'(x_0)^{-1}([x, y; H] − [z, w; H])‖ ≤ K (‖x − z‖ + ‖y − w‖)    (3.9)

and that the hypotheses of Lemma 3.1 hold. Then, {u_n} ⊂ B(u_0, q^*), lim_{n→∞} u_n = u^* ∈ B̄(u_0, q^*), H(u^*) = 0 and

‖u^* − u_n‖ ≤ q^* − q_n.    (3.10)

Moreover, u^* is the unique solution of equation H(x) = 0 in B̄(u_0, q^*).

Proof. Replace x_n, y_n, s_n^i, t_n, α, L_0, L in [?] by u_n, v_n^{(i)}, r_n^i, q_n, δ, K_0, K, respectively.
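The root δ of Lemma 3.1 can be computed by bisection, since ϕ(0) = −2K < 0 and ϕ(1) = 2K_0 > 0. The following sketch (with made-up constants) assumes the polynomial ϕ exactly as written in the lemma; it is an illustration of ours.

    def delta_root(K0, K, k, tol=1e-14):
        # unique root in (0,1) of phi(t) = 2*K0*t^(k+1) + K*(t^k + t^(k-1) - 2),
        # cf. Lemma 3.1; phi(0) = -2K < 0 and phi(1) = 2*K0 > 0, so bisection applies.
        phi = lambda t: 2.0 * K0 * t ** (k + 1) + K * (t ** k + t ** (k - 1) - 2.0)
        lo, hi = 0.0, 1.0
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if phi(mid) < 0.0:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    delta = delta_root(K0=1.0, K=0.8, k=3)     # illustrative constants only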

REMARK 3.3 The condition

‖H'(x_0)^{-1}([x, y; H] − [z, w; H])‖ ≤ L (‖x − z‖ + ‖y − w‖)    (3.11)

for each x, y, z, w ∈ Ω and some L > 0 is used in [?] instead of (3.9). But we have

K_0 = L_0,    (3.12)

K ≤ L    (3.13)

and

δ ≤ α,    (3.14)

since Ω_2 ⊆ Ω. Denote by q̄_n and r̄_n^i the majorizing sequences used in [?], defined as the sequences q_n and r_n^i but with K_0 replaced by L_0 and K replaced by L. Then we have, by a simple induction argument, that

q_n ≤ q̄_n,    (3.15)

r_n^i ≤ r̄_n^i,    (3.16)

0 ≤ r_n^i − q_n ≤ r̄_n^i − q̄_n,    (3.17)

0 ≤ r_n^i − r_n^{i-1} ≤ r̄_n^i − r̄_n^{i-1},    (3.18)

0 ≤ q_{n+1} − r_n^{k-1} ≤ q̄_{n+1} − r̄_n^{k-1}    (3.19)

and

q^* ≤ q̄^*.    (3.20)

Moreover, if K < L, then (3.15)-(3.20) hold as strict inequalities. Let us consider the set Ω_3 = Ω ∩ B(x_1, 1/(2K) − r_0^1), provided that r_0^1 < 1/(2K). Moreover, suppose that for each x, y, z, w ∈ Ω_3

‖H'(x_0)^{-1}([x, y; H] − [z, w; H])‖ ≤ λ (‖x − z‖ + ‖y − w‖).    (3.21)

Notice that Ω_3 ⊆ Ω_2, so λ ≤ K. Then, Ω_3, (3.21) and λ can replace Ω_2, (3.9) and K, respectively, in Theorem 3.2. Clearly, the corresponding majorizing sequence, call it {q̃_n}, is even tighter than {q_n}. Hence, we have extended the applicability of method (1.3) in the semi-local convergence case too. These improvements are obtained under the same conditions as in [?], since in practice the computation of the constant L requires the computation of K as a special case. Examples where the new constants are smaller than the old ones can be found in the numerical section that follows and in [?,?,?,?,?].

4 Numerical examples

We present the following examples to test the convergence criteria. Define the divided difference by

[x, y; H] = ∫_0^1 H'(τ x + (1 − τ) y) dτ.

EXAMPLE 4.1 Let X = Y = R^3, Ω = U(0, 1) and x^* = (0, 0, 0)^T. Define H on Ω by

H(x) = H(x_1, x_2, x_3) = (e^{x_1} − 1, ((e − 1)/2) x_2^2 + x_2, x_3)^T.    (4.1)

For a point u = (u_1, u_2, u_3)^T, the Fréchet derivative is given by

H'(u) = diag(e^{u_1}, (e − 1) u_2 + 1, 1).

Using the maximum-row-sum norm, conditions (a3)-(a4) hold and, since H'(x^*) = diag(1, 1, 1), we can define the parameters for method (1.3) by L_0 = e − 1, L_1 = L = e^{1/(e−1)} and L̄ = L̄_1 = e. The corresponding radii s_1 (from (2.1)) and s̄_1 (from (2.21)) then satisfy s̄_1 < s_1, in agreement with Remark 2.2.
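A small computational illustration of Example 4.1 (ours, not taken from the paper): define H and H' from (4.1) and run a few cycles of method (1.3) with k = 3 from a point inside U(0, 1); the iterates converge to x^* = (0, 0, 0)^T.

    import numpy as np

    E = np.e

    def H(x):       # the operator of Example 4.1, cf. (4.1)
        return np.array([np.exp(x[0]) - 1.0,
                         0.5 * (E - 1.0) * x[1] ** 2 + x[1],
                         x[2]])

    def dH(x):      # its Frechet derivative (a diagonal Jacobian)
        return np.diag([np.exp(x[0]), (E - 1.0) * x[1] + 1.0, 1.0])

    u, k = np.array([0.2, 0.2, 0.2]), 3        # start inside U(0, 1), three substeps
    for _ in range(10):                        # a few cycles of method (1.3)
        J = dH(u)                              # frozen Jacobian H'(v_n^(0))
        for _ in range(k):
            u = u - np.linalg.solve(J, H(u))
        if np.linalg.norm(u, ord=np.inf) < 1e-14:
            break
    # u is now (numerically) the solution x* = (0, 0, 0)^T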

EXAMPLE 4.2 Let X = Y = R, Ω = Ū(x_0, 1 − ξ), x_0 = 1 and ξ ∈ [0, 1/2). Define the function H on Ω by H(x) = x^3 − ξ. Then, we get by (3.7)-(3.9) and (3.11) that:

(i) For k = 1: q_1 = r_0^1 = (1/3)(1 − ξ), L_0 = K_0 = (1/2)(3 − ξ), L = 2 − ξ and K = K_0. Notice that K < L. The conditions of Lemma 3.1 are satisfied for ξ ∈ I_1 = [ , 0.5), but the earlier conditions in [?] are satisfied only for ξ ∈ I_2 = [ , 0.5), with I_2 ⊂ I_1. Moreover, if λ = (−2ξ^2 + 5ξ + 6) / (3(3 − ξ)), then the conditions of Lemma 3.1 are satisfied for ξ ∈ I_3 = [ , 0.5).

(ii) For k = 2: the conditions of Lemma 3.1 are satisfied for ξ ∈ I_1 = [ , 0.7), but the earlier conditions in [?] are satisfied only for ξ ∈ I_2 = [ , 0.7), with I_2 ⊂ I_1. Moreover, if λ = (−2ξ^2 + 5ξ + 6) / (3(3 − ξ)), then the conditions of Lemma 3.1 are satisfied for ξ ∈ I_3 = [ , 0.7).

5 Conclusion

Our idea of the restricted convergence region, in connection with the center-Lipschitz condition, was utilized to provide a local as well as a semilocal convergence analysis of method (1.3). Since we locate a region, at least as small as in earlier works [?], containing the iterates, the new Lipschitz parameters are also at least as small. This technique leads to a finer convergence analysis (see also Remark 2.2, Remark 3.3 and the numerical examples). The novelty of the paper lies not only in the introduction of the new idea but also in the fact that the improvements are obtained using special cases of the Lipschitz parameters appearing in [?]. Hence, no work in addition to [?] is needed to arrive at these developments. This idea can be used to extend the applicability of other iterative methods appearing in [1]-[32] along the same lines.

References

[1] S. Amat, I. K. Argyros, S. Busquier and M. A. Hernández-Verón, On two high-order families of frozen Newton-type methods, Numerical Linear Algebra with Applications, 25(1).

[2] S. Amat, C. Bermúdez, M. A. Hernández-Verón and E. Martínez, On an efficient k-step iterative method for nonlinear equations, Journal of Computational and Applied Mathematics, 302.

[3] S. Amat, S. Busquier, C. Bermúdez and S. Plaza, On two families of high order Newton type methods, Applied Mathematics Letters, 25(12).

[4] S. Amat, S. Busquier and S. Plaza, Review of some iterative root-finding methods from a dynamical point of view, Scientia, 10(3), 35.

[5] I. K. Argyros, Computational Theory of Iterative Methods, Volume 15, Elsevier.

[6] I. K. Argyros, On the semilocal convergence of a fast two-step Newton method, Revista Colombiana de Matemáticas, 42(1), 15-24.

[7] I. K. Argyros, S. George and N. Thapa, Mathematical Modeling for the Solution of Equations and Systems of Equations with Applications, Volume I, Nova Publishers, NY.

[8] I. K. Argyros, S. George and N. Thapa, Mathematical Modeling for the Solution of Equations and Systems of Equations with Applications, Volume II, Nova Publishers, NY.

[9] I. K. Argyros, A. Cordero, A. A. Magreñán and J. R. Torregrosa, Third-degree anomalies of Traub's method, Journal of Computational and Applied Mathematics, 309.

[10] I. K. Argyros and S. Hilout, Weaker conditions for the convergence of Newton's method, Journal of Complexity, 28(3).

[11] I. K. Argyros and S. Hilout, Computational Methods in Nonlinear Analysis: Efficient Algorithms, Fixed Point Theory and Applications, World Scientific.

[12] I. K. Argyros, A. A. Magreñán, L. Orcos and J. A. Sicilia, Local convergence of a relaxed two-step Newton-like method with applications, Journal of Mathematical Chemistry, 55(7).

[13] I. K. Argyros and A. A. Magreñán, A Contemporary Study of Iterative Methods, Elsevier (Academic Press), New York.

[14] I. K. Argyros and A. A. Magreñán, Iterative Methods and Their Dynamics with Applications, CRC Press, New York, USA.

[15] A. Cordero, L. Guasp and J. R. Torregrosa, Choosing the most stable members of Kou's family of iterative methods, Journal of Computational and Applied Mathematics.

[16] A. Cordero and J. R. Torregrosa, Low-complexity root-finding iteration functions with no derivatives of any order of convergence, Journal of Computational and Applied Mathematics, 275.

[17] A. Cordero, J. R. Torregrosa and P. Vindel, Study of the dynamics of third-order iterative methods on quadratic polynomials, International Journal of Computer Mathematics, 89(13-14).

[18] J. A. Ezquerro and M. A. Hernández-Verón, How to improve the domain of starting points for Steffensen's method, Studies in Applied Mathematics, 132(4).

[19] J. A. Ezquerro and M. A. Hernández-Verón, Majorizing sequences for nonlinear Fredholm-Hammerstein integral equations, Studies in Applied Mathematics.

[20] J. A. Ezquerro, M. Grau-Sánchez, M. A. Hernández-Verón and M. Noguera, A family of iterative methods that uses divided differences of first and second orders, Numerical Algorithms, 70(3).

[21] J. A. Ezquerro, M. A. Hernández-Verón and A. I. Velasco, An analysis of the semilocal convergence for secant-like methods, Applied Mathematics and Computation, 266.

[22] M. A. Hernández-Verón, E. Martínez and C. Teruel, Semilocal convergence of a k-step iterative process and its application for solving a special kind of conservative problems, Numerical Algorithms, 76(2).

[23] L. V. Kantorovich and G. P. Akilov, Functional Analysis, Pergamon Press.

[24] M. Moccari and T. Lotfi, Using majorizing sequences for the semi-local convergence of a high order multi-point method along with stability analysis, International Journal of Nonlinear Sciences and Numerical Simulation.

[25] A. A. Magreñán, A. Cordero, J. M. Gutiérrez and J. R. Torregrosa, Real qualitative behavior of a fourth-order family of iterative methods by using the convergence plane, Mathematics and Computers in Simulation, 105, 49-61.

[26] A. A. Magreñán and I. K. Argyros, Improved convergence analysis for Newton-like methods, Numerical Algorithms, 71(4).

[27] A. A. Magreñán and I. K. Argyros, Two-step Newton methods, Journal of Complexity, 30(4).

[28] F. A. Potra and V. Pták, Nondiscrete Induction and Iterative Processes, Volume 103, Pitman Advanced Publishing Program.

[29] W. C. Rheinboldt, An adaptive continuation process for solving systems of nonlinear equations, Polish Academy of Sciences, Banach Center Publications, 3 (1978), no. 1.

[30] H. Ren and I. K. Argyros, On the convergence of King-Werner-type methods free of derivatives, Applied Mathematics and Computation, 256.

[31] W. T. Shaw, Complex Analysis with Mathematica, Cambridge University Press.

[32] J. F. Traub, Iterative Methods for the Solution of Equations, American Mathematical Society.
