CONVERGENCE BEHAVIOUR OF INEXACT NEWTON METHODS
MATHEMATICS OF COMPUTATION
Volume 68, Number 228, Pages 1605–1613
Article electronically published on March 1, 1999

BENEDETTA MORINI

Abstract. In this paper we investigate local convergence properties of inexact Newton and Newton-like methods for systems of nonlinear equations. Processes with a modified relative residual control are considered, and new sufficient conditions for linear convergence in an arbitrary vector norm are provided. For a special case the results are affine invariant.

1. Introduction

We consider the system of nonlinear equations

(1)    F(x) = 0,

where F is a given function from R^N to R^N, and we let F'(x) denote the Jacobian of F at x. Locally convergent iterative procedures commonly used to solve (1) have the general form:

    For k = 0 step 1 until convergence do
        Find the step s_k which satisfies
(2)         B_k s_k = −F(x_k)
        Set x_{k+1} = x_k + s_k

where x_0 is a given initial guess and B_k is an N × N nonsingular matrix. The process is Newton's method if B_k = F'(x_k), and it represents a Newton-like method if B_k = B(x_k) is an approximation to F'(x_k) ([1], [2]).

In [3] and [4] iterative processes of the following general form were formulated:

    For k = 0 step 1 until convergence do
        Find some step s_k which satisfies
(3)         B_k s_k = −F(x_k) + r_k, where ‖r_k‖/‖F(x_k)‖ ≤ η_k
        Set x_{k+1} = x_k + s_k.

Here {η_k} is a sequence of forcing terms such that 0 ≤ η_k < 1. These iterative processes are called inexact methods. In particular, we obtain inexact Newton methods taking B_k = F'(x_k), and inexact Newton-like methods if B_k approximates the Jacobian F'(x_k). We remark that inexact methods include the class of Newton iterative methods ([1]), where an iterative method is used to approximate the solution of the linear systems (2).

Received by the editor January 23, 1997 and, in revised form, January 6.
Mathematics Subject Classification. Primary 65H10.
Key words and phrases. Systems of nonlinear equations, inexact methods, affine invariant conditions.

©1999 American Mathematical Society
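As a concrete illustration of scheme (3), the following sketch runs an inexact Newton iteration on a small system. The test problem, the constant forcing term η_k = 0.1, and the way the approximate linear solve is modelled (perturbing the exact step by an explicit residual obeying the control) are all illustrative assumptions, not from the paper:

```python
import numpy as np

def F(x):
    # invented test system with solution x* = (1, 1)
    return np.array([x[0] ** 2 - 1.0, x[0] + x[1] - 2.0])

def J(x):
    # Jacobian F'(x) of the system above
    return np.array([[2.0 * x[0], 0.0],
                     [1.0, 1.0]])

def inexact_newton(x0, eta=0.1, tol=1e-10, max_it=100, seed=0):
    """Iterate F'(x_k) s_k = -F(x_k) + r_k with ||r_k|| <= eta ||F(x_k)||.

    The inexact solve is modelled by perturbing the exact Newton step
    with an explicit residual r_k satisfying the relative control."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(max_it):
        Fx = F(x)
        norm_F = np.linalg.norm(Fx)
        if norm_F <= tol:
            break
        d = rng.standard_normal(x.size)
        r = 0.9 * eta * norm_F * d / np.linalg.norm(d)  # ||r_k|| <= eta ||F(x_k)||
        s = np.linalg.solve(J(x), -Fx + r)              # B_k s_k = -F(x_k) + r_k
        x = x + s
    return x

root = inexact_newton([1.2, 0.8])
```

Because the forcing terms stay uniformly below one (and here small relative to the conditioning of F'(x*)), the iterates contract linearly toward x* = (1, 1); as η_k → 0 the scheme reverts to the plain Newton iteration (2).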
For inexact Newton methods, local and rate of convergence properties can be characterized in terms of the forcing sequence {η_k}. Let ‖·‖ denote any vector norm on R^N and also the subordinate matrix norm on R^{N×N}. In [3] it is shown that, if the usual assumptions for Newton's method hold and {η_k} is uniformly less than one, we can define a sequence {x_k} linearly convergent to a solution x* of (1) in the norm ‖y‖* = ‖F'(x*) y‖. Recently, several authors (see e.g. [5], [6], [7], [8]) have proposed applications of inexact methods in different fields of numerical analysis and pointed out difficulties in applying the linear convergence results of [3]. In fact, such results are norm-dependent and ‖·‖* is not computable. Consequently, they focused on the analysis of the stopping relative residual control ‖r_k‖/‖F(x_k)‖ ≤ η_k and its effect on convergence properties.

In this paper we consider inexact methods where a scaled relative residual control is performed at each iteration. Further, we determine conditions that assure linear convergence of inexact methods in terms of a sequence of forcing terms uniformly less than one and for an arbitrary norm on R^N. Such schemes include inexact methods (3) as a special case. The results obtained are valid under widely used hypotheses on F, and they merge into the theory of Newton's and Newton-like methods in the limiting case of vanishing residuals, i.e. η_k = 0 for each k. Further, for a special case, such convergence conditions are affine invariant and in agreement with the theory of [9].

2. Preliminaries

In this section we analyze local convergence results given in [3]. The following definition of rate of convergence can be found in [1].

Definition 2.1. Let {x_k} ⊂ R^N be any convergent sequence with limit x*. Then x_k → x* with Q-order at least p, p ≥ 1, if

    ‖x_{k+1} − x*‖ = O(‖x_k − x*‖^p),  k → ∞.

Moreover, consider the quantity Q_p = Q_p({x_k}, x*), called the Q-factor:

(4)    Q_p({x_k}, x*) = lim sup_{k→∞} ‖x_{k+1} − x*‖ / ‖x_k − x*‖^p.
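Definition 2.1 can be exercised numerically. In this sketch the geometric toy sequence x_k = x* + 2^(−k) is an invented example, not from the paper; its successive error ratios settle at 1/2, so Q_1 = 1/2 and convergence is Q-linear:

```python
x_star = 2.0
xs = [x_star + 0.5 ** k for k in range(25)]     # toy sequence x_k = x* + 2^(-k)
errors = [abs(x - x_star) for x in xs]          # ||x_k - x*||

# successive error ratios; their tail approximates the lim sup in (4) for p = 1
ratios = [errors[k + 1] / errors[k] for k in range(len(errors) - 1)]
q1 = max(ratios[-5:])
```

Since 0 < q1 < 1, the sequence converges Q-linearly in the sense of (5) with p = 1.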
In [1] it is noted that if Q_p < ∞, then for any ε > 0 there exists a k̄ such that

(5)    ‖x_{k+1} − x*‖ ≤ α ‖x_k − x*‖^p,  k ≥ k̄,

where α = Q_p + ε. With regard to p = 1 and for a given norm, the process has Q-linear convergence if 0 < Q_1 < 1, and Q-sublinear convergence if Q_1 ≥ 1. If, for p = 1, there exist α < 1 and k̄ such that (5) holds, convergence is Q-linear in that norm ([1]).

In [3] the following result is proved:

Theorem 2.1 ([3], Th. 2.3). Let F : R^N → R^N be a nonlinear mapping with the following properties:
(i) there exists an x* ∈ R^N with F(x*) = 0;
(ii) F is continuously differentiable in a neighborhood of x*, and F'(x*) is nonsingular.
Assume that η_k ≤ η_max < t < 1. Then there exists ε > 0 such that, if ‖x_0 − x*‖ ≤ ε, the sequence {x_k} of inexact Newton methods converges to x*. Moreover, the convergence is linear in the sense that

    ‖x_{k+1} − x*‖* ≤ t ‖x_k − x*‖*,  k ≥ 0.

This thesis holds for inexact Newton-like methods too, if we assume that B_k is a good approximation to F'(x_k), i.e.

    ‖B_k − F'(x_k)‖ ≤ γ  and  ‖B_k^{−1} − F'(x_k)^{−1}‖ ≤ γ,

where γ is a suitably small quantity.

Theorem 2.1 assures that inexact Newton methods converge Q-linearly in the norm ‖·‖* and

(6)    lim sup_{k→∞} ‖x_{k+1} − x*‖* / ‖x_k − x*‖* ≤ t.

Convergence conditions in the norm ‖·‖ can be easily derived with slight changes in the proof of Theorem 2.1. In particular, since ‖y‖ ≤ ‖F'(x*)^{−1}‖ ‖y‖* and ‖y‖* ≤ ‖F'(x*)‖ ‖y‖, we obtain

(7)    Q_1({x_k}, x*) = lim sup_{k→∞} ‖x_{k+1} − x*‖ / ‖x_k − x*‖ ≤ cond(F'(x*)) t,

where cond(F'(x*)) = ‖F'(x*)‖ ‖F'(x*)^{−1}‖ is the condition number of F'(x*). This observation points out an interesting relation between the Q_1-factor in the norm ‖·‖ and the number cond(F'(x*)) t, and shows that convergence in the norm ‖·‖ may be sublinear if cond(F'(x*)) t ≥ 1. Since (7) employs x*, it does not allow us to evaluate the upper bound t on {η_k} such that convergence is assured to be Q-linear in the norm ‖·‖. On the other hand, in the following section we will show that, for a well-known class of functions F and for an arbitrary norm on R^N, new linear convergence conditions leading to computable restrictions on the forcing terms can be derived. As in Theorem 2.1, the forcing sequence is uniformly less than one.

3. Inexact methods with scaled residual control

In this section we assume that F belongs to the class F(ω, Λ*) of nonlinear mappings that satisfy the following properties ([6], [9]–[13]):
(i) F : D(F) → R^N, D(F) an open set, is continuously differentiable in D(F);
(ii) there exists an x* ∈ D(F) such that F(x*) = 0 and F'(x*) is nonsingular;
(iii) S(x*, ω) = {x ∈ R^N : ‖x − x*‖ ≤ ω} ⊆ D(F), and x* is the only solution of F(x) = 0 in S(x*, ω);
(iv) for x, y ∈ S(x*, ω), ‖F'(x*)^{−1}(F'(y) − F'(x))‖ ≤ Λ* ‖y − x‖.

From [1], [2] and [11] we have the following lemmata:

Lemma 3.1 ([1]).
Let F ∈ F(ω, Λ*). Then there exists σ < min{ω, 1/Λ*} such that F'(x) is invertible in S(x*, σ), and for all x, y ∈ S(x*, σ) we have

(8)    E(y, x) = F'(x)^{−1}(F'(y) − F'(x)),  ‖E(y, x)‖ ≤ Λ_σ ‖y − x‖,

where Λ_σ = Λ*/(1 − Λ* σ).
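The norm-dependence expressed by (7) can be made concrete with two lines of arithmetic. In this sketch the matrix A standing in for F'(x*), the contraction factor t, and the two error vectors are invented for illustration: one step that contracts by t in the weighted norm ‖y‖* = ‖F'(x*) y‖ can still inflate the Euclidean error, precisely because cond(F'(x*)) t ≥ 1:

```python
import numpy as np

A = np.diag([100.0, 1.0])      # stand-in for F'(x*); cond(A) = 100
t = 0.5                        # contraction factor in the weighted norm

def wnorm(y):
    # the non-computable norm ||y||* = ||F'(x*) y|| of Theorem 2.1
    return np.linalg.norm(A @ y)

e_curr = np.array([1.0, 0.0])  # current error x_k - x*
e_next = np.array([0.0, 40.0]) # next error, admissible in the weighted norm

ratio_weighted = wnorm(e_next) / wnorm(e_curr)                 # 40/100 = 0.4 <= t
ratio_plain = np.linalg.norm(e_next) / np.linalg.norm(e_curr)  # 40/1 = 40
```

The step satisfies ‖e_next‖* ≤ t ‖e_curr‖*, yet the plain 2-norm error grows by a factor of 40, up to the bound cond(A)·t = 50 allowed by (7).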
Lemma 3.2 ([2], Lemma 4.1.9). Let F : R^N → R^N be continuously differentiable in an open convex set D ⊂ R^N. For any x, x + z ∈ D,

    F(x + z) − F(x) = ∫₀¹ F'(x + tz) z dt.

Lemma 3.3 ([11], Lemma 3.1). Let x, y ∈ S(x*, σ). Then

(9)    ∫₀¹ ‖E(x* + t(x − x*), y)‖ dt ≤ Λ_σ (‖x − x*‖/2 + m),

where m = min{‖x − y‖, ‖x* − y‖}.

In the following, for brevity, we write Λ = Λ_σ and S = S(x*, σ). Now let us consider inexact methods of the form:

    For k = 0 step 1 until convergence do
        Find some step ŝ_k which satisfies
(10)        B̂_k ŝ_k = −F(x̂_k) + r̂_k, where ‖P_k r̂_k‖/‖P_k F(x̂_k)‖ ≤ θ_k
        Set x̂_{k+1} = x̂_k + ŝ_k

where x̂_0 is a given initial guess and P_k is an invertible matrix for each k. If P_k = I for each k, (10) reduces to (3). It is worth noting that residuals of this form are used in iterative Newton methods if preconditioning is applied, and that P_k changes with the index k if B̂_k does.

First we examine inexact Newton and modified inexact Newton methods, which correspond to B̂_k = F'(ŷ_k) with ŷ_k = x̂_k and ŷ_k = x̂_0 for each k, respectively.

Theorem 3.1. Assume B̂_k = F'(ŷ_k), k ≥ 0, in (10). Let x̂_0 ∈ S, ‖x̂_0 − x*‖ ≤ δ, ν_k = θ_k cond(P_k F'(ŷ_k)) with ν_k ≤ ν < 1, and µ = (Λ/2) δ (1 + ν). Then the iterates of (10) are well-defined and {x̂_k} converges to x* if

(11)    α = µ(µ + ν) + ν < 1

for inexact Newton methods, and

(12)    α = µ(µ + ν) + 2µ + ν < 1

for modified inexact Newton methods. Further, for k sufficiently large we have

(13)    ‖x̂_{k+1} − x*‖ ≤ ν ‖x̂_k − x*‖,
(14)    ‖x̂_{k+1} − x*‖ ≤ (2µ + ν) ‖x̂_k − x*‖,

for inexact Newton and modified inexact Newton methods, respectively.

Proof. First, we point out that (11) or (12) implies ν < 1 and (µ + ν) < 1, and (12) yields (2µ + ν) < 1. Assume that x̂_k and ŷ_k are in S, and determine an upper bound for ‖x̂_{k+1} − x*‖. From (10) and Lemma 3.2 with ξ_k = x* + t(x̂_k − x*) we obtain

    x̂_{k+1} − x* = x̂_k − x* − F'(ŷ_k)^{−1}(F(x̂_k) − F(x*)) + F'(ŷ_k)^{−1} r̂_k
                 = −∫₀¹ E(ξ_k, ŷ_k) dt (x̂_k − x*) + F'(ŷ_k)^{−1} P_k^{−1} P_k r̂_k.
Further,

    ‖x̂_{k+1} − x*‖ ≤ ∫₀¹ ‖E(ξ_k, ŷ_k)‖ dt ‖x̂_k − x*‖ + θ_k ‖(P_k F'(ŷ_k))^{−1}‖ ‖P_k F(x̂_k)‖.

Since F(x̂_k) = F'(ŷ_k) ∫₀¹ (I + E(ξ_k, ŷ_k)) dt (x̂_k − x*), we have

    ‖x̂_{k+1} − x*‖ ≤ ∫₀¹ ‖E(ξ_k, ŷ_k)‖ dt ‖x̂_k − x*‖ + θ_k cond(P_k F'(ŷ_k)) [1 + ∫₀¹ ‖E(ξ_k, ŷ_k)‖ dt] ‖x̂_k − x*‖
                  ≤ [(1 + ν_k) ∫₀¹ ‖E(ξ_k, ŷ_k)‖ dt + ν_k] ‖x̂_k − x*‖,

and from Lemma 3.3 we derive

(15)    ‖x̂_{k+1} − x*‖ ≤ [Λ(1 + ν_k)(‖x̂_k − x*‖/2 + m_k) + ν_k] ‖x̂_k − x*‖,

where m_k = min{‖x̂_k − ŷ_k‖, ‖x* − ŷ_k‖}.

Concerning inexact Newton methods, we observe that m_k = 0 for each k, and if x̂_k ∈ S(x*, δ), from (15) we have

    ‖x̂_{k+1} − x*‖ ≤ [(Λ/2)(1 + ν_k) δ + ν_k] ‖x̂_k − x*‖ ≤ (µ + ν) ‖x̂_k − x*‖.

Then, from (11) it follows that ‖x̂_{k+1} − x*‖ < ‖x̂_k − x*‖. Moreover it can be shown that

(16)    ‖x̂_{k+1} − x*‖ ≤ α^k (µ + ν) δ,  k ≥ 0.

In fact, for k = 0 we have

    ‖x̂_1 − x*‖ ≤ [(Λ/2)(1 + ν_0) δ + ν_0] ‖x̂_0 − x*‖ ≤ (µ + ν) δ,

and (16) holds. Suppose now ‖x̂_k − x*‖ ≤ α^{k−1}(µ + ν)δ for a given k ≥ 1. If α < 1, then x̂_k ∈ S(x*, δ), and from (15) we obtain

    ‖x̂_{k+1} − x*‖ ≤ [Λ(1 + ν_k)(α^{k−1}(µ + ν)δ/2) + ν_k] α^{k−1}(µ + ν)δ ≤ [µ(µ + ν) + ν] α^{k−1}(µ + ν)δ = α^k (µ + ν)δ,

which proves (16).

To prove convergence for modified inexact Newton methods we show that (16) holds with α given in (12). In particular, m_0 = 0 and m_k = min{‖x̂_k − x̂_0‖, ‖x* − x̂_0‖} ≤ δ for k > 0, and (15) yields

    ‖x̂_1 − x*‖ ≤ [(Λ/2)(1 + ν_0) δ + ν_0] ‖x̂_0 − x*‖ ≤ (µ + ν) δ.

Then x̂_1 ∈ S(x*, δ), and (16) holds for k = 0. If as inductive hypothesis we assume x̂_k ∈ S(x*, δ) and ‖x̂_k − x*‖ ≤ α^{k−1}(µ + ν)δ, then from (15) we have ‖x̂_{k+1} − x*‖ ≤ α ‖x̂_k − x*‖, and (16) follows.

Finally, considering (15), if k is large enough so that

(17)    (Λ/2)(1 + ν_k) ‖x̂_k − x*‖ ≤ ν − ν_k,

then (13) and (14) easily follow. ∎
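Theorem 3.1 bounds each θ_k by ν/cond(P_k F'(ŷ_k)), so a scaling matrix P_k that lowers cond(P_k F'(ŷ_k)) directly enlarges the admissible forcing term. The following numerical check uses an invented badly scaled Jacobian and a cheap diagonal scaling; both are illustrative assumptions, not from the paper:

```python
import numpy as np

Jk = np.array([[1000.0, 1.0],   # badly scaled stand-in for F'(y_k)
               [0.0,    1.0]])
P = np.diag(1.0 / np.abs(np.diag(Jk)))  # diagonal scaling P_k

nu = 0.5                                # target bound of Theorem 3.1, nu < 1
theta_unscaled = nu / np.linalg.cond(Jk)      # admissible theta_k for P_k = I
theta_scaled = nu / np.linalg.cond(P @ Jk)    # admissible theta_k under P_k
```

Here cond(Jk) is about 10^3, so without scaling the residual control must be very tight (θ_k on the order of 5·10⁻⁴), while after the diagonal scaling cond(P Jk) is close to 1 and θ_k may approach ν itself.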
It is worth noting that, for a given ν < 1, at each iteration (11) and (12) yield the upper bound ν/cond(P_k F'(ŷ_k)) for θ_k. This theorem also gives an estimate of the radius of convergence δ ≤ σ. In particular, from α < 1 we have

    δ < 2(1 − ν) / (Λ(1 + ν)),    δ < (−(ν + 2) + √(ν² + 8)) / (Λ(1 + ν)),

for inexact Newton and modified inexact Newton methods, respectively, and we can conclude that δ decreases as ν approaches 1.

From Theorem 3.1, stated with Λ = Λ_σ, we have that the sequence {x̂_k} belongs to S(x*, δ). Moreover, from [1] we know that, for x, y ∈ S(x*, δ) and δ ≤ σ, (8) holds with Λ_δ = Λ*/(1 − Λ* δ). Then we can conclude that the thesis of Theorem 3.1 is still valid if we replace Λ with Λ_δ. Further, if (11) and (12) are formulated in terms of Λ_δ, then for ν = 0 we obtain the estimates

    δ < 2/(3Λ*),    δ < 2(√2 − 1) / ((2√2 − 1)Λ*),

given in [12] for Newton's method and modified Newton's method, respectively. In particular, the estimate for the radius of convergence of Newton's method is known to be sharp ([12]). Then, we can conclude that for vanishing residuals, Theorem 3.1 merges into the theory of Newton's method.

A result analogous to Theorem 3.1 can be proven also for inexact Newton-like methods, where B̂_k = B̂(x̂_k) approximates F'(x̂_k).

Theorem 3.2. Let B̂(x) be an approximation to the Jacobian F'(x) for x ∈ D(F), satisfying for x ∈ S the following properties: B̂(x) is invertible, ‖B̂(x)^{−1} F'(x) − I‖ ≤ τ₁, and ‖B̂(x)^{−1} F'(x)‖ ≤ τ₂. Let x̂_0 ∈ S, ‖x̂_0 − x*‖ ≤ δ, ν_k = θ_k cond(P_k B̂_k) and ν_k ≤ ν < 1. Then the sequence {x̂_k} of (10) is well-defined, and {x̂_k} converges to x* if

(18)    α = ρ(ρ + τ₁ + ντ₂) + τ₁ + ντ₂ < 1,

where ρ = (Λ/2) δ (1 + ν) τ₂. Moreover, for k sufficiently large,

(19)    ‖x̂_{k+1} − x*‖ ≤ (τ₁ + ντ₂) ‖x̂_k − x*‖.

Proof. To begin, note that (18) implies (τ₁ + ντ₂) < 1 and (ρ + τ₁ + ντ₂) < 1. We assume that x̂_k ∈ S and determine an upper bound for ‖x̂_{k+1} − x*‖. From (10) and Lemma 3.2 with ξ_k = x* + t(x̂_k − x*) we obtain

    x̂_{k+1} − x* = x̂_k − x* − B̂_k^{−1}(F(x̂_k) − F(x*)) + B̂_k^{−1} r̂_k
                 = x̂_k − x* − B̂_k^{−1} ∫₀¹ F'(ξ_k) dt (x̂_k − x*) + B̂_k^{−1} P_k^{−1} P_k r̂_k
                 = −B̂_k^{−1} F'(x̂_k) ∫₀¹ E(ξ_k, x̂_k) dt (x̂_k − x*) − B̂_k^{−1}(F'(x̂_k) − B̂_k)(x̂_k − x*) + B̂_k^{−1} P_k^{−1} P_k r̂_k.
Therefore, applying Lemma 3.3, we derive

    ‖x̂_{k+1} − x*‖ ≤ (τ₂ ∫₀¹ ‖E(ξ_k, x̂_k)‖ dt + τ₁) ‖x̂_k − x*‖ + θ_k ‖(P_k B̂_k)^{−1} P_k F(x̂_k)‖
                  ≤ ((Λ/2) τ₂ ‖x̂_k − x*‖ + τ₁) ‖x̂_k − x*‖ + ν_k τ₂ ∫₀¹ ‖I + E(ξ_k, x̂_k)‖ dt ‖x̂_k − x*‖,

and hence

(20)    ‖x̂_{k+1} − x*‖ ≤ [(Λ/2)(1 + ν_k) τ₂ ‖x̂_k − x*‖ + τ₁ + ν_k τ₂] ‖x̂_k − x*‖.

To prove convergence, note that ‖x̂_1 − x*‖ ≤ (ρ + τ₁ + ν₀τ₂) ‖x̂_0 − x*‖ ≤ (ρ + τ₁ + ντ₂) δ, and, by induction on k > 1, suppose that

(21)    ‖x̂_k − x*‖ ≤ α^{k−1} (ρ + τ₁ + ντ₂) δ,

with α defined in (18). We can conclude that x̂_k ∈ S(x*, δ) and, from (20),

    ‖x̂_{k+1} − x*‖ ≤ [(Λ/2)(1 + ν_k) τ₂ α^{k−1}(ρ + τ₁ + ντ₂) δ + τ₁ + ν_k τ₂] ‖x̂_k − x*‖
                  ≤ [ρ(ρ + τ₁ + ντ₂) + τ₁ + ντ₂] ‖x̂_k − x*‖ = α ‖x̂_k − x*‖,

which completes the induction. Further, for k sufficiently large, (17) holds and from (20) we derive (19). ∎

The results we proved state an inverse proportionality between each forcing term θ_k and cond(P_k B̂_k). Such conditions are sufficient for convergence, and the upper bounds on {θ_k} may be overly restrictive if the matrices P_k B̂_k are badly conditioned.

We now consider the sequence {x̂_k} given by inexact methods (10) with P_k ≠ I, and the sequence {x_k} obtained for P_k = I, k ≥ 0, and matrices B_k = B̂(x_k) in (3). From Theorem 3.2 it easily follows that if x_0 = x̂_0 and the sequences {θ_k cond(P_k B̂_k)} and {η_k cond(B_k)} are bounded by the same quantity ν, then {x_k} and {x̂_k} converge to x* satisfying (19). Further, consider the k-th equation (3),

    B_k s_k = −F(x_k) + r_k,

and assume that both residual controls

    ‖r_k‖/‖F(x_k)‖ ≤ η_k,    ‖P_k r_k‖/‖P_k F(x_k)‖ ≤ θ_k,

are applied. If we suppose η_k cond(B_k) = θ_k cond(P_k B_k), i.e.

    θ_k = (cond(B_k)/cond(P_k B_k)) η_k = γ_k η_k,

and if P_k is such that 1 ≤ cond(P_k B_k) ≤ cond(B_k), we obtain γ_k ∈ [1, cond(B_k)] and θ_k ∈ [η_k, ν_k]. We point out that θ_k does not depend on cond(P_k) but only on the conditioning of P_k B_k. Further, it follows that the choice of a suitable scaling
matrix P_k such that cond(P_k B_k) < cond(B_k) leads to a relaxation of the forcing terms, and as cond(P_k B_k) decreases, θ_k approaches the maximum ν_k. For P_k = B_k^{−1}, called natural scaling ([13]), we have the residual control

(22)    ‖B_k^{−1} r_k‖ / ‖B_k^{−1} F(x_k)‖ ≤ θ_k,

and the extremal properties cond(P_k B_k) = 1 and θ_k = ν_k hold. Moreover, regarding inexact Newton and modified inexact Newton methods with control (22), from Theorem 3.1 it is easy to derive

Corollary 3.1. Let the hypotheses of Theorem 3.1 hold. Then, given a sequence {θ_k} uniformly less than 1, there exists a δ̂ such that for x̂_0 ∈ S(x*, δ), δ < δ̂, the sequence {x̂_k} converges Q-linearly.

Proof. Since ν_k = θ_k < 1 and {θ_k} is bounded away from 1, for a suitable ν < 1 we have ν_k ≤ ν < 1. Further, there exists δ̂ > 0 such that (11) holds. Then, from Theorem 3.1 the thesis follows for inexact Newton methods. In the same way, if δ̂ is such that (12) is satisfied, the thesis holds for modified inexact Newton methods. ∎

We remark that θ_k < 1 is the only condition needed to define the process correctly ([3]), and that condition (22) is affine invariant, as it is insensitive to transformations of the mapping F(x) of the form F(x) → AF(x), A an invertible matrix, as long as the same affine transformation is also applied to B(x) (see e.g. inexact Newton, modified inexact Newton and inexact finite difference methods). Since Newton's iterates are affine invariant, in [13] convergence conditions were determined in affine invariant terms. Ypma provided an affine invariant study of local convergence also for inexact Newton methods, although they are formally no longer affine invariant unless the residuals vanish. Specifically, in [9] it was noted that even if the method

    F'(x_k) s_k = −F(x_k) + r_k,  k = 0, 1, ...,  x_{k+1} = x_k + s_k,

is affine invariant, the condition ‖r_k‖/‖F(x_k)‖ ≤ η_k is not. As a consequence, the alternative form (22) with B_k = F'(x_k) was proposed, and an affine convergence theory was provided for the resulting process.
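The affine invariance of the natural-scaling control (22) is easy to verify numerically: transforming F → AF, and hence B_k → AB_k and r_k → Ar_k, leaves the scaled ratio unchanged, since (AB_k)^{−1}(Ar_k) = B_k^{−1} r_k. The matrices and vectors below are arbitrary illustrative data, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
Bk = rng.standard_normal((3, 3)) + 4.0 * np.eye(3)  # nonsingular stand-in for B_k
Fx = rng.standard_normal(3)                         # F(x_k)
rk = 0.1 * rng.standard_normal(3)                   # residual of the inexact solve
A = rng.standard_normal((3, 3)) + 4.0 * np.eye(3)   # affine transformation F -> A F

def natural_ratio(B, f, r):
    # left-hand side of control (22): ||B^{-1} r|| / ||B^{-1} F(x)||
    Binv = np.linalg.inv(B)
    return np.linalg.norm(Binv @ r) / np.linalg.norm(Binv @ f)

before = natural_ratio(Bk, Fx, rk)
after = natural_ratio(A @ Bk, A @ Fx, A @ rk)  # same ratio on the transformed problem
```

By contrast, the unscaled ratio ‖r_k‖/‖F(x_k)‖ of (3) is generally altered by such a transformation, which is exactly the observation of [9].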
We point out that for P_k = B_k^{−1} and for methods such that (22) is affine invariant, Theorems 3.1 and 3.2 represent an affine convergence analysis of inexact Newton-like methods, and in the case of inexact Newton methods Corollary 3.1 agrees with the main theorem of [9] (see [9], Theorem 3.1).

Acknowledgments

The author would like to thank Prof. M.G. Gasparo, M. Macconi, and the referee for detailed and helpful suggestions.

References

[1] J.M. Ortega, W.C. Rheinboldt, Iterative solution of nonlinear equations in several variables, Academic Press, New York, 1970.
[2] J.E. Dennis, R.B. Schnabel, Numerical methods for unconstrained optimization and nonlinear equations, Prentice Hall, Englewood Cliffs, NJ, 1983.
[3] R.S. Dembo, S.C. Eisenstat, T. Steihaug, Inexact Newton methods, SIAM J. Numer. Anal., 19, 1982.
[4] T. Steihaug, Quasi-Newton methods for large scale nonlinear problems, Ph.D. Thesis, School of Organization and Management, Yale University.
[5] J.M. Martinez, L. Qi, Inexact Newton methods for solving nonsmooth equations, J. Comput. Appl. Math., 60, 1995.
[6] P. Deuflhard, Global inexact Newton methods for very large scale nonlinear problems, Impact Comput. Sci. Engrg., 3, 1991.
[7] P.N. Brown, A.C. Hindmarsh, Reduced storage matrix methods in stiff ODE systems, Appl. Math. Comput., 31, 1989.
[8] K.R. Jackson, The numerical solution of large systems of stiff IVPs for ODEs, Appl. Numer. Math., 20, 1996.
[9] T.J. Ypma, Local convergence of inexact Newton methods, SIAM J. Numer. Anal., 21, 1984.
[10] T.J. Ypma, Local convergence of difference Newton-like methods, Math. Comp., 41, 1983.
[11] J.L.M. van Dorsselaer, M.N. Spijker, The error committed by stopping the Newton iteration in the numerical solution of stiff initial value problems, IMA J. Numer. Anal., 14, 1994.
[12] T.J. Ypma, Affine invariant convergence for Newton's methods, BIT, 22, 1982.
[13] P. Deuflhard, G. Heindl, Affine invariant convergence theorems for Newton methods and extensions to related methods, SIAM J. Numer. Anal., 16, 1979.

Dipartimento di Energetica "Sergio Stecco", via C. Lombroso 6/17, 50134 Firenze, Italia
E-mail address: morini@riscmat.de.unifi.it
Convergence Analysis of Inexact Infeasible Interior Point Method for Linear Optimization Ghussoun Al-Jeiroudi Jacek Gondzio School of Mathematics The University of Edinburgh Mayfield Road, Edinburgh EH9
More informationAn Inexact Cayley Transform Method For Inverse Eigenvalue Problems
An Inexact Cayley Transform Method For Inverse Eigenvalue Problems Zheng-Jian Bai Raymond H. Chan Benedetta Morini Abstract The Cayley transform method is a Newton-like method for solving inverse eigenvalue
More informationIterative Solution of a Matrix Riccati Equation Arising in Stochastic Control
Iterative Solution of a Matrix Riccati Equation Arising in Stochastic Control Chun-Hua Guo Dedicated to Peter Lancaster on the occasion of his 70th birthday We consider iterative methods for finding the
More informationMathematical Programming Involving (α, ρ)-right upper-dini-derivative Functions
Filomat 27:5 (2013), 899 908 DOI 10.2298/FIL1305899Y Published by Faculty of Sciences and Mathematics, University of Niš, Serbia Available at: http://www.pmf.ni.ac.rs/filomat Mathematical Programming Involving
More informationConvergence of a Two-parameter Family of Conjugate Gradient Methods with a Fixed Formula of Stepsize
Bol. Soc. Paran. Mat. (3s.) v. 00 0 (0000):????. c SPM ISSN-2175-1188 on line ISSN-00378712 in press SPM: www.spm.uem.br/bspm doi:10.5269/bspm.v38i6.35641 Convergence of a Two-parameter Family of Conjugate
More informationA Hybrid of the Newton-GMRES and Electromagnetic Meta-Heuristic Methods for Solving Systems of Nonlinear Equations
J Math Model Algor (009) 8:45 443 DOI 10.1007/s1085-009-9117-1 A Hybrid of the Newton-GMRES and Electromagnetic Meta-Heuristic Methods for Solving Systems of Nonlinear Equations F. Toutounian J. Saberi-Nadjafi
More informationA CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE
Journal of Applied Analysis Vol. 6, No. 1 (2000), pp. 139 148 A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE A. W. A. TAHA Received
More informationA NOTE ON PAN S SECOND-ORDER QUASI-NEWTON UPDATES
A NOTE ON PAN S SECOND-ORDER QUASI-NEWTON UPDATES Lei-Hong Zhang, Ping-Qi Pan Department of Mathematics, Southeast University, Nanjing, 210096, P.R.China. Abstract This note, attempts to further Pan s
More informationDECAY AND GROWTH FOR A NONLINEAR PARABOLIC DIFFERENCE EQUATION
PROCEEDINGS OF THE AMERICAN MATHEMATICAL SOCIETY Volume 133, Number 9, Pages 2613 2620 S 0002-9939(05)08052-4 Article electronically published on April 19, 2005 DECAY AND GROWTH FOR A NONLINEAR PARABOLIC
More informationA Family of Preconditioned Iteratively Regularized Methods For Nonlinear Minimization
A Family of Preconditioned Iteratively Regularized Methods For Nonlinear Minimization Alexandra Smirnova Rosemary A Renaut March 27, 2008 Abstract The preconditioned iteratively regularized Gauss-Newton
More informationQuadrature based Broyden-like method for systems of nonlinear equations
STATISTICS, OPTIMIZATION AND INFORMATION COMPUTING Stat., Optim. Inf. Comput., Vol. 6, March 2018, pp 130 138. Published online in International Academic Press (www.iapress.org) Quadrature based Broyden-like
More informationEuler Equations: local existence
Euler Equations: local existence Mat 529, Lesson 2. 1 Active scalars formulation We start with a lemma. Lemma 1. Assume that w is a magnetization variable, i.e. t w + u w + ( u) w = 0. If u = Pw then u
More informationRITZ VALUE BOUNDS THAT EXPLOIT QUASI-SPARSITY
RITZ VALUE BOUNDS THAT EXPLOIT QUASI-SPARSITY ILSE C.F. IPSEN Abstract. Absolute and relative perturbation bounds for Ritz values of complex square matrices are presented. The bounds exploit quasi-sparsity
More informationGENERAL NONCONVEX SPLIT VARIATIONAL INEQUALITY PROBLEMS. Jong Kyu Kim, Salahuddin, and Won Hee Lim
Korean J. Math. 25 (2017), No. 4, pp. 469 481 https://doi.org/10.11568/kjm.2017.25.4.469 GENERAL NONCONVEX SPLIT VARIATIONAL INEQUALITY PROBLEMS Jong Kyu Kim, Salahuddin, and Won Hee Lim Abstract. In this
More informationAn Implicit Multi-Step Diagonal Secant-Type Method for Solving Large-Scale Systems of Nonlinear Equations
Applied Mathematical Sciences, Vol. 6, 2012, no. 114, 5667-5676 An Implicit Multi-Step Diagonal Secant-Type Method for Solving Large-Scale Systems of Nonlinear Equations 1 M. Y. Waziri, 2 Z. A. Majid and
More informationThe Inverse Function Theorem via Newton s Method. Michael Taylor
The Inverse Function Theorem via Newton s Method Michael Taylor We aim to prove the following result, known as the inverse function theorem. Theorem 1. Let F be a C k map (k 1) from a neighborhood of p
More informationTwo-Step Iteration Scheme for Nonexpansive Mappings in Banach Space
Mathematica Moravica Vol. 19-1 (2015), 95 105 Two-Step Iteration Scheme for Nonexpansive Mappings in Banach Space M.R. Yadav Abstract. In this paper, we introduce a new two-step iteration process to approximate
More informationNamur Center for Complex Systems University of Namur 8, rempart de la vierge, B5000 Namur (Belgium)
On the convergence of an inexact Gauss-Newton trust-region method for nonlinear least-squares problems with simple bounds by M. Porcelli Report naxys-01-011 3 January 011 Namur Center for Complex Systems
More informationA NUMERICAL ITERATIVE SCHEME FOR COMPUTING FINITE ORDER RANK-ONE CONVEX ENVELOPES
A NUMERICAL ITERATIVE SCHEME FOR COMPUTING FINITE ORDER RANK-ONE CONVEX ENVELOPES XIN WANG, ZHIPING LI LMAM & SCHOOL OF MATHEMATICAL SCIENCES, PEKING UNIVERSITY, BEIJING 87, P.R.CHINA Abstract. It is known
More informationA perturbed damped Newton method for large scale constrained optimization
Quaderni del Dipartimento di Matematica, Università di Modena e Reggio Emilia, n. 46, July 00. A perturbed damped Newton method for large scale constrained optimization Emanuele Galligani Dipartimento
More informationNew w-convergence Conditions for the Newton-Kantorovich Method
Punjab University Journal of Mathematics (ISSN 116-2526) Vol. 46(1) (214) pp. 77-86 New w-convergence Conditions for the Newton-Kantorovich Method Ioannis K. Argyros Department of Mathematicsal Sciences,
More informationA Regularized Directional Derivative-Based Newton Method for Inverse Singular Value Problems
A Regularized Directional Derivative-Based Newton Method for Inverse Singular Value Problems Wei Ma Zheng-Jian Bai September 18, 2012 Abstract In this paper, we give a regularized directional derivative-based
More informationPrediction-based adaptive control of a class of discrete-time nonlinear systems with nonlinear growth rate
www.scichina.com info.scichina.com www.springerlin.com Prediction-based adaptive control of a class of discrete-time nonlinear systems with nonlinear growth rate WEI Chen & CHEN ZongJi School of Automation
More informationAn Iteratively Regularized Projection Method with Quadratic Convergence for Nonlinear Ill-posed Problems
Int. Journal of Math. Analysis, Vol. 4, 1, no. 45, 11-8 An Iteratively Regularized Projection Method with Quadratic Convergence for Nonlinear Ill-posed Problems Santhosh George Department of Mathematical
More informationCHUN-HUA GUO. Key words. matrix equations, minimal nonnegative solution, Markov chains, cyclic reduction, iterative methods, convergence rate
CONVERGENCE ANALYSIS OF THE LATOUCHE-RAMASWAMI ALGORITHM FOR NULL RECURRENT QUASI-BIRTH-DEATH PROCESSES CHUN-HUA GUO Abstract The minimal nonnegative solution G of the matrix equation G = A 0 + A 1 G +
More informationThe aim of this paper is to obtain a theorem on the existence and uniqueness of increasing and convex solutions ϕ of the Schröder equation
PROCEEDINGS OF THE AMERICAN MATHEMATICAL SOCIETY Volume 125, Number 1, January 1997, Pages 153 158 S 0002-9939(97)03640-X CONVEX SOLUTIONS OF THE SCHRÖDER EQUATION IN BANACH SPACES JANUSZ WALORSKI (Communicated
More informationAccelerating the cubic regularization of Newton s method on convex problems
Accelerating the cubic regularization of Newton s method on convex problems Yu. Nesterov September 005 Abstract In this paper we propose an accelerated version of the cubic regularization of Newton s method
More informationNotes for Numerical Analysis Math 5465 by S. Adjerid Virginia Polytechnic Institute and State University. (A Rough Draft)
Notes for Numerical Analysis Math 5465 by S. Adjerid Virginia Polytechnic Institute and State University (A Rough Draft) 1 2 Contents 1 Error Analysis 5 2 Nonlinear Algebraic Equations 7 2.1 Convergence
More informationNumerical Methods for Large-Scale Nonlinear Systems
Numerical Methods for Large-Scale Nonlinear Systems Handouts by Ronald H.W. Hoppe following the monograph P. Deuflhard Newton Methods for Nonlinear Problems Springer, Berlin-Heidelberg-New York, 2004 Num.
More informationWorst Case Complexity of Direct Search
Worst Case Complexity of Direct Search L. N. Vicente May 3, 200 Abstract In this paper we prove that direct search of directional type shares the worst case complexity bound of steepest descent when sufficient
More informationyou expect to encounter difficulties when trying to solve A x = b? 4. A composite quadrature rule has error associated with it in the following form
Qualifying exam for numerical analysis (Spring 2017) Show your work for full credit. If you are unable to solve some part, attempt the subsequent parts. 1. Consider the following finite difference: f (0)
More informationResidual iterative schemes for largescale linear systems
Universidad Central de Venezuela Facultad de Ciencias Escuela de Computación Lecturas en Ciencias de la Computación ISSN 1316-6239 Residual iterative schemes for largescale linear systems William La Cruz
More informationsystem of equations. In particular, we give a complete characterization of the Q-superlinear
INEXACT NEWTON METHODS FOR SEMISMOOTH EQUATIONS WITH APPLICATIONS TO VARIATIONAL INEQUALITY PROBLEMS Francisco Facchinei 1, Andreas Fischer 2 and Christian Kanzow 3 1 Dipartimento di Informatica e Sistemistica
More informationREPORTS IN INFORMATICS
REPORTS IN INFORMATICS ISSN 0333-3590 A class of Methods Combining L-BFGS and Truncated Newton Lennart Frimannslund Trond Steihaug REPORT NO 319 April 2006 Department of Informatics UNIVERSITY OF BERGEN
More informationOn intermediate value theorem in ordered Banach spaces for noncompact and discontinuous mappings
Int. J. Nonlinear Anal. Appl. 7 (2016) No. 1, 295-300 ISSN: 2008-6822 (electronic) http://dx.doi.org/10.22075/ijnaa.2015.341 On intermediate value theorem in ordered Banach spaces for noncompact and discontinuous
More informationOPTIMAL TRANSPORTATION PLANS AND CONVERGENCE IN DISTRIBUTION
OPTIMAL TRANSPORTATION PLANS AND CONVERGENCE IN DISTRIBUTION J.A. Cuesta-Albertos 1, C. Matrán 2 and A. Tuero-Díaz 1 1 Departamento de Matemáticas, Estadística y Computación. Universidad de Cantabria.
More informationConvergence of a Third-order family of methods in Banach spaces
International Journal of Computational and Applied Mathematics. ISSN 1819-4966 Volume 1, Number (17), pp. 399 41 Research India Publications http://www.ripublication.com/ Convergence of a Third-order family
More informationInterval solutions for interval algebraic equations
Mathematics and Computers in Simulation 66 (2004) 207 217 Interval solutions for interval algebraic equations B.T. Polyak, S.A. Nazin Institute of Control Sciences, Russian Academy of Sciences, 65 Profsoyuznaya
More informationInexact Newton Methods Applied to Under Determined Systems. Joseph P. Simonis. A Dissertation. Submitted to the Faculty
Inexact Newton Methods Applied to Under Determined Systems by Joseph P. Simonis A Dissertation Submitted to the Faculty of WORCESTER POLYTECHNIC INSTITUTE in Partial Fulfillment of the Requirements for
More informationKantorovich s Majorants Principle for Newton s Method
Kantorovich s Majorants Principle for Newton s Method O. P. Ferreira B. F. Svaiter January 17, 2006 Abstract We prove Kantorovich s theorem on Newton s method using a convergence analysis which makes clear,
More informationMATH 205C: STATIONARY PHASE LEMMA
MATH 205C: STATIONARY PHASE LEMMA For ω, consider an integral of the form I(ω) = e iωf(x) u(x) dx, where u Cc (R n ) complex valued, with support in a compact set K, and f C (R n ) real valued. Thus, I(ω)
More informationASYMPTOTICALLY NONEXPANSIVE MAPPINGS IN MODULAR FUNCTION SPACES ABSTRACT
ASYMPTOTICALLY NONEXPANSIVE MAPPINGS IN MODULAR FUNCTION SPACES T. DOMINGUEZ-BENAVIDES, M.A. KHAMSI AND S. SAMADI ABSTRACT In this paper, we prove that if ρ is a convex, σ-finite modular function satisfying
More informationRenormings of c 0 and the minimal displacement problem
doi: 0.55/umcsmath-205-0008 ANNALES UNIVERSITATIS MARIAE CURIE-SKŁODOWSKA LUBLIN POLONIA VOL. LXVIII, NO. 2, 204 SECTIO A 85 9 ŁUKASZ PIASECKI Renormings of c 0 and the minimal displacement problem Abstract.
More informationMULTIGRID METHODS FOR NONLINEAR PROBLEMS: AN OVERVIEW
MULTIGRID METHODS FOR NONLINEAR PROBLEMS: AN OVERVIEW VAN EMDEN HENSON CENTER FOR APPLIED SCIENTIFIC COMPUTING LAWRENCE LIVERMORE NATIONAL LABORATORY Abstract Since their early application to elliptic
More informationM ath. Res. Lett. 15 (2008), no. 2, c International Press 2008 SMOOTH HYPERSURFACE SECTIONS CONTAINING A GIVEN SUBSCHEME OVER A FINITE FIELD
M ath. Res. Lett. 15 2008), no. 2, 265 271 c International Press 2008 SMOOTH HYPERSURFACE SECTIONS CONTAINING A GIEN SUBSCHEME OER A FINITE FIELD Bjorn Poonen 1. Introduction Let F q be a finite field
More information