TOWARDS A GENERAL CONVERGENCE THEORY FOR INEXACT NEWTON REGULARIZATIONS

ARMIN LECHLEITER AND ANDREAS RIEDER

July 13, 2009

Abstract. We develop a general convergence analysis for a class of inexact Newton-type regularizations for stably solving nonlinear ill-posed problems. Each of the methods under consideration consists of two components: the outer Newton iteration and an inner regularization scheme which, applied to the linearized system, provides the update. In this paper we give a novel and unified convergence analysis which is not confined to a specific inner regularization scheme but applies to a multitude of schemes including Landweber and steepest descent iterations, the iterated Tikhonov method, and the method of conjugate gradients.

Key words. Nonlinear ill-posed problems, inexact Newton iteration.

AMS subject classifications. 65J20, 65J.

1. Introduction. During the last two decades a broad variety of Newton-like methods for regularizing nonlinear ill-posed problems has been suggested and analyzed; see, e.g., [2, 13, 20] for an overview and original references. But as similar as some of these methods appear, as different are their analyses, even when the same structural assumptions on the nonlinearity are required. This situation is in contrast to the linear setting: there, a general theory is available whenever the linear regularization scheme is generated by a regularizing filter function, see, e.g., [6, 14, 17, 20], and properties of the scheme can be read off directly from properties of the generating filter function.

The present paper was driven by the wish to develop a similarly general theory for a class of regularization schemes of inexact Newton type for nonlinear ill-posed problems. This class has been introduced and named REGINN (REGularization based on INexact Newton iteration) by the second author [18, 19, 21]. Each of the REGINN methods consists of two components: the outer Newton iteration and the inner scheme providing the increment by regularizing the local linearization. Although the methods differ in their inner regularization schemes, we are able to present a common convergence analysis. To this end we compile five features which not only guarantee convergence but are also shared by inner regularization schemes as different as, for instance, the Landweber iteration, the steepest descent iteration, the implicit iteration, and the method of conjugate gradients.

Centre de Mathématiques Appliquées (CMAP), École Polytechnique, Palaiseau Cedex, France, alechle@cmapx.polytechnique.fr

Institut für Angewandte und Numerische Mathematik and Institut für Wissenschaftliches Rechnen und Mathematische Modellbildung, Universität Karlsruhe, Karlsruhe, Germany, andreas.rieder@math.uni-karlsruhe.de

Let us now set the stage for REGINN. We wish to solve the nonlinear ill-posed problem
$$F(x) = y^\delta \tag{1.1}$$
where $F\colon D(F) \subset X \to Y$ operates between the real Hilbert spaces $X$ and $Y$. Here, $D(F)$ denotes the domain of definition of $F$. The right-hand side $y^\delta$ is a noisy version of the exact but unknown data $y = F(x^+)$ satisfying
$$\|y - y^\delta\|_Y \le \delta. \tag{1.2}$$
The nonnegative noise level $\delta$ is assumed to be known.

The algorithm REGINN for solving (1.1) is a Newton-type algorithm which updates the actual iterate $x_n$ by adding a correction step $s_n^{\mathrm N}$ obtained from solving a linearization of (1.1):
$$x_{n+1} = x_n + s_n^{\mathrm N}, \quad n \in \mathbb N_0, \tag{1.3}$$
with an initial guess $x_0$. For obvious reasons we would like $s_n^{\mathrm N}$ to be as close as possible to the exact Newton step $s_n^{\mathrm e} = x^+ - x_n$. Assuming $F$ to be continuously Fréchet differentiable with derivative $F'\colon D(F) \to \mathcal L(X,Y)$, the exact Newton step satisfies the linear equation
$$F'(x_n)\, s_n^{\mathrm e} = y - F(x_n) - E(x^+, x_n) =: b_n \tag{1.4}$$
where
$$E(v,w) := F(v) - F(w) - F'(w)(v-w)$$
is the linearization error. In the sequel we will use the notation $A_n = F'(x_n)$. Unfortunately, the above right-hand side $b_n$ is not available; however, we know a perturbed version $b_n^\varepsilon := y^\delta - F(x_n)$ with
$$\|b_n - b_n^\varepsilon\|_Y \le \delta + \|E(x^+, x_n)\|_Y. \tag{1.5}$$
Therefore we determine the correction step $s_n^{\mathrm N}$ as a stable approximate solution of
$$A_n s = b_n^\varepsilon \tag{1.6}$$
by applying a regularization scheme, for instance, the Landweber iteration, the Showalter method, iterated Tikhonov regularization, the method of conjugate gradients, etc. To this end, let $\{s_{n,m}\}_m \subset X$ be the sequence of regularized approximations generated by the chosen regularization scheme applied to (1.6).

    REGINN(x, R, {µ_n}; x_{N(δ)})
    {
      n := 0; x_0 := x;
      while ‖F(x_n) − y^δ‖ > Rδ do
      {
        m := 0;
        repeat
          m := m + 1; compute s_{n,m} from (1.6);
        until ‖F'(x_n) s_{n,m} + F(x_n) − y^δ‖ < µ_n ‖F(x_n) − y^δ‖
        x_{n+1} := x_n + s_{n,m}; n := n + 1;
      }
      x_{N(δ)} := x_n;
    }

Figure 1.1: REGINN: REGularization based on INexact Newton iteration.

We now explain how we pick the Newton step $s_n^{\mathrm N}$ out of $\{s_{n,m}\}$: for an adequately chosen tolerance $\mu_n \in\ ]0,1[$ (see Lemma 2.3 below) define
$$m_n = \min\bigl\{ m \in \mathbb N : \|A_n s_{n,m} - b_n^\varepsilon\|_Y < \mu_n\, \|b_n^\varepsilon\|_Y \bigr\} \tag{1.7}$$
and set
$$s_n^{\mathrm N} := s_{n,m_n}. \tag{1.8}$$
In other words: the Newton step is the first element of $\{s_{n,m}\}$ whose residual $\|A_n s_{n,m} - b_n^\varepsilon\|_Y$ is smaller than $\mu_n \|b_n^\varepsilon\|_Y$. Finally, we stop the Newton iteration (1.3) by a discrepancy principle: choose $R > 0$ and accept the iterate $x_{N(\delta)}$ as an approximation to $x^+$ if
$$\|y^\delta - F(x_{N(\delta)})\|_Y \le R\delta < \|y^\delta - F(x_n)\|_Y, \quad n = 0,1,\dots,N(\delta)-1, \tag{1.9}$$
see Figure 1.1.
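For illustration, here is a minimal finite-dimensional Python sketch of the two nested loops of Figure 1.1. It is not the authors' implementation: the callables F and dF, the choice of the Landweber iteration as inner scheme, and all parameter values are assumptions of this sketch only.

```python
import numpy as np

def reginn(F, dF, x0, y_delta, delta, R=2.0, mu=0.8,
           max_newton=50, max_inner=5000):
    """Sketch of REGINN (Figure 1.1): outer Newton loop with the
    Landweber iteration as inner scheme for A_n s = b_n^eps."""
    x = x0.copy()
    for n in range(max_newton):
        b_eps = y_delta - F(x)                   # b_n^eps = y^delta - F(x_n)
        if np.linalg.norm(b_eps) <= R * delta:   # discrepancy principle (1.9)
            return x
        A = dF(x)                                # A_n = F'(x_n)
        omega = 0.9 / np.linalg.norm(A, 2) ** 2  # Landweber step size
        s = np.zeros_like(x)
        for m in range(max_inner):               # inner regularization
            s = s + omega * A.T @ (b_eps - A @ s)
            if np.linalg.norm(A @ s - b_eps) < mu * np.linalg.norm(b_eps):
                break                            # stopping index m_n of (1.7)
        x = x + s                                # Newton update (1.3)
    return x
```

The inner loop realizes exactly the repeat-until clause of Figure 1.1; swapping the Landweber update for any other scheme satisfying the assumptions below leaves the outer structure untouched.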

The remainder of the paper is organized as follows. In the next section we present a residual and level set based analysis of REGINN requiring only three rather elementary properties of the regularizing sequence $\{s_{n,m}\}$ together with a structural restriction on the nonlinearity $F$. In a certain sense, this restriction is equivalent to the meanwhile well-established tangential cone condition, see, e.g., [13, 20]. Under our assumptions REGINN is well defined and terminates. Moreover, all iterates stay in the level set $L = \{x \in D(F): \|F(x) - y^\delta\|_Y \le \|F(x_0) - y^\delta\|_Y\}$. Unfortunately, $L$ cannot be assumed to be bounded, which prohibits the use of a weak-compactness argument to verify at least weak convergence. Local convergence, however, is our topic in Section 3. Provided the regularizing sequence $\{s_{n,m}\}$ exhibits a specific monotone error decrease up to a stopping index, all REGINN iterates will stay in a ball about $x^+$. Finally, we prove strong convergence of $\{x_{N(\delta)}\}_\delta$ as $\delta \to 0$. Several regularization methods applied to (1.6) generate sequences $\{s_{n,m}\}$ meeting our assumptions. Some of the respective proofs, which do not fit comfortably into the body of the text, are given in two appendices. We did not include a section on numerical experiments as we do not present new methods but a new and unified convergence analysis for these methods. The interested reader should consult our paper [15], where we solved the inverse problem of impedance tomography using REGINN furnished with the conjugate gradient iteration for computing the Newton update.

2. Residual and level set based analysis. For the analysis of REGINN we require three properties of the regularizing sequence $\{s_{n,m}\}$, namely
$$\langle A_n s_{n,m}, b_n^\varepsilon \rangle_Y > 0 \quad \text{for } m \ge 1 \text{ whenever } A_n^* b_n^\varepsilon \ne 0, \tag{2.1}$$
and
$$\lim_{m\to\infty} A_n s_{n,m} = P_{\overline{R(A_n)}}\, b_n^\varepsilon. \tag{2.2}$$
The latter convergence guarantees the existence of a number $\vartheta_n \ge 1$ such that $\|A_n s_{n,m}\|_Y \le \vartheta_n \|b_n^\varepsilon\|_Y$ for all $m$. We, however, also require uniformity in $n$: there is a $\Theta \ge 1$ with
$$\|A_n s_{n,m}\|_Y \le \Theta\, \|b_n^\varepsilon\|_Y \quad \text{for all } m, n. \tag{2.3}$$
Typically, $\{s_{n,m}\}$ is generated by
$$s_{n,m} = g_m(A_n^* A_n)\, A_n^*\, b_n^\varepsilon$$
where $g_m\colon [0, \|A_n\|^2] \to \mathbb R$ is a so-called filter function. If
$$0 < \lambda\, g_m(\lambda) \le C_g \ \text{ for } \lambda > 0 \qquad \text{and} \qquad \lim_{m\to\infty} g_m(\lambda) = 1/\lambda \ \text{ for } \lambda > 0,$$
then all requirements (2.1), (2.2), and (2.3) are fulfilled with $\Theta \le C_g$. Here are some concrete examples:

Landweber iteration: $g_m(\lambda) = \bigl(1 - (1-\omega\lambda)^m\bigr)/\lambda$ where $\omega \in\ ]0, \|A_n\|^{-2}[$ and $C_g = 1$.

Tikhonov regularization: $g_m(\lambda) = 1/(\lambda + \alpha_m)$ where $\{\alpha_m\}_m$ is a positive sequence converging strictly monotonically to zero. Thus, $C_g = 1$.

Iterated Tikhonov regularization (implicit iteration): $g_m(\lambda) = \lambda^{-1}\bigl(1 - \prod_{k=1}^{m}(1 + \alpha_k^{-1}\lambda)^{-1}\bigr)$ where the positive sequence $\{\alpha_k\}_k$ is bounded away from zero, typically $\{\alpha_k\} \subset [\alpha_{\min}, \alpha_{\max}]$ with $0 < \alpha_{\min} < \alpha_{\max}$. Here, $C_g = 1$.

Showalter regularization:
$$g_m(\lambda) = \begin{cases} \lambda^{-1}\bigl(1 - \exp(-\alpha_m^{-1}\lambda)\bigr) & : \lambda > 0, \\ \alpha_m^{-1} & : \lambda = 0, \end{cases}$$
where the positive sequence $\{\alpha_m\}_m$ converges strictly monotonically to zero. Again, $C_g = 1$.

Semi-iterative $\nu$-methods ($\nu > 0$) due to Brakhage [3]: Let $A_n$ be scaled, that is, $\|A_n\| \le 1$. Then $g_m(\lambda) = \bigl(1 - P_m^\nu(\lambda)\bigr)/\lambda$ where
$$P_m^\nu(\lambda) = P_m^{(2\nu-1/2,\,-1/2)}(1 - 2\lambda)\big/ P_m^{(2\nu-1/2,\,-1/2)}(1)$$
with $P_m^{(\alpha,\beta)}$ denoting the Jacobi polynomials. As $P_m^\nu$ attains negative values in $]0,1[$ (all its roots lie within this interval) we have $C_g > 1$. Sharp estimates for $C_g$ or $\Theta$ are hard to obtain. (A numerical illustration of the two filter conditions is sketched after Remark 2.1 below.)

Also nonlinear regularization schemes, which cannot be represented by filter functions, satisfy (2.1), (2.2), and (2.3): the steepest descent method, where $\Theta \le 2$ (we strongly conjecture that $\Theta = 1$ for the steepest descent method, see Remark A.1 below), and the method of conjugate gradients (cg-method), where $\Theta = 1$, provided the starting iterate is $0$; see Appendix A for the respective proofs.

Remark 2.1. Recently, Jin and Tautenhahn [12] presented a subtle convergence analysis of generalized iteratively regularized Gauß-Newton methods,
$$x_{n+1} = x_n + g_{m_n}(A_n^* A_n)\, A_n^*\, b_n^\varepsilon + \bigl(I - g_{m_n}(A_n^* A_n)\, A_n^* A_n\bigr)(x_0 - x_n), \tag{2.4}$$
stopped by the discrepancy principle (1.9). The iteratively regularized Gauß-Newton method with Tikhonov regularization was introduced by Bakushinsky in his pioneering work [1]. The differences between (2.4) and REGINN consist in the rightmost term and in the a priori choice of the sequence $\{m_n\}_n$, which is assumed to be monotonically increasing at a certain rate. For a large class of filter functions including the Landweber and Showalter filters, Jin and Tautenhahn proved deep and far-reaching convergence results. Under weaker assumptions, not covered by Theorems 1, 2, or 3 in [12], we obtain weaker convergence results. However, the technique of Jin and Tautenhahn does not apply to REGINN [12, Remark 3] and cannot be extended to other nonlinear regularization schemes in a straightforward way.
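Returning to the filter-function examples above, the two filter conditions can be checked numerically for a sample $\lambda$. The following Python sketch does so for the Landweber, Tikhonov, and iterated Tikhonov filters; the function names and parameter values are assumptions of the sketch.

```python
import numpy as np

# Filters g_m from the list above, evaluated at a scalar lam > 0.
def g_landweber(lam, m, omega=0.9):          # omega * lam < 1 required
    return (1.0 - (1.0 - omega * lam) ** m) / lam

def g_tikhonov(lam, m, alpha0=1.0, q=0.5):   # alpha_m = alpha0 * q^m -> 0
    return 1.0 / (lam + alpha0 * q ** m)

def g_iter_tikhonov(lam, m, alpha=1.0):      # constant alpha_k, bounded away from 0
    return (1.0 - (1.0 + lam / alpha) ** (-m)) / lam

lam = 0.4
for g in (g_landweber, g_tikhonov, g_iter_tikhonov):
    vals = [g(lam, m) for m in (1, 10, 1000)]
    # condition 1: 0 < lam * g_m(lam) <= C_g = 1 for these three filters
    assert all(0.0 < lam * v <= 1.0 + 1e-12 for v in vals)
    # condition 2: g_m(lam) -> 1/lam as m grows
    print(g.__name__, vals[-1], "vs", 1.0 / lam)
```

In each case the printed value approaches $1/\lambda = 2.5$, in line with $\lim_{m\to\infty} g_m(\lambda) = 1/\lambda$.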

Now we present first results. By property (2.1), any direction $s_{n,m}$, $m \ge 1$, is a descent direction for the functional $\varphi = \frac12\, \|y^\delta - F(\cdot)\|_Y^2$.

Lemma 2.2. We have that $\langle \varphi'(x_n), s_{n,m}\rangle_X < 0$ for $m \ge 1$ whenever $A_n^* b_n^\varepsilon \ne 0$.

Proof. By $\varphi'(x) = -F'(x)^*\bigl(y^\delta - F(x)\bigr)$ we find that
$$\langle \varphi'(x_n), s_{n,m}\rangle_X = -\langle b_n^\varepsilon, A_n s_{n,m}\rangle_Y \overset{(2.1)}{<} 0$$
and the lemma is verified.

If $\mu_n$ is not too small then the Newton step $s_n^{\mathrm N} = s_{n,m_n}$ is indeed well defined.

Lemma 2.3. Assume (2.2) and $\|P_{R(A_n)^\perp}\, b_n^\varepsilon\|_Y < \|b_n^\varepsilon\|_Y$. Then, for any tolerance
$$\mu_n \in\ \Bigl]\, \frac{\|P_{R(A_n)^\perp}\, b_n^\varepsilon\|_Y}{\|b_n^\varepsilon\|_Y},\, 1 \Bigr]$$
the stopping index $m_n$ of (1.7) is well defined.

Proof. By (2.2),
$$\lim_{m\to\infty} \frac{\|A_n s_{n,m} - b_n^\varepsilon\|_Y}{\|b_n^\varepsilon\|_Y} = \frac{\|P_{\overline{R(A_n)}}\, b_n^\varepsilon - b_n^\varepsilon\|_Y}{\|b_n^\varepsilon\|_Y} = \frac{\|P_{R(A_n)^\perp}\, b_n^\varepsilon\|_Y}{\|b_n^\varepsilon\|_Y} < \mu_n,$$
which completes the proof.

Remark 2.4. If the assumption of the above lemma is violated then REGINN fails, as do other Newton schemes: under $\|P_{R(A_n)^\perp}\, b_n^\varepsilon\|_Y = \|b_n^\varepsilon\|_Y$ we have $A_n^* b_n^\varepsilon = 0$ and hence $s_{n,m} = 0$ for all $m$.

Now we provide a framework that guarantees termination of REGINN (Figure 1.1), that is, we prove existence of $x_{N(\delta)}$. For $x_0 \in D(F)$ such that $\|F(x_0) - y^\delta\|_Y > \delta$ define the level set
$$L(x_0) := \bigl\{ x \in D(F) : \|F(x) - y^\delta\|_Y \le \|F(x_0) - y^\delta\|_Y \bigr\}.$$
Note that $x^+ \in L(x_0)$. Further, we restrict the structure of the nonlinearity. Throughout we work with the following bound for the linearization error:
$$\|E(v,w)\|_Y \le L\, \|F'(w)(v-w)\|_Y \quad \text{for one } L < 1 \text{ and for all } v,w \in L(x_0) \text{ with } v - w \in N(F'(w))^\perp. \tag{2.5}$$

From (2.5) we derive that
$$\|E(v,w)\|_Y \le \omega\, \|F(w) - F(v)\|_Y \quad \text{where } \omega = \frac{L}{1-L} > L, \tag{2.6}$$
which is the tangential cone condition introduced by Scherzer [22]. In the convergence analysis of Newton methods for ill-posed problems, both (2.5) and (2.6) are adequate replacements for the Lipschitz continuity of the Fréchet derivative, which is typically used to bound the linearization error in the framework of well-posed problems; see, e.g., [13, Section 2.1] for a detailed explanation.

Remark 2.5. Actually, (2.5) and (2.6) are equivalent in the following sense: (2.6) for one $\omega < 1$ implies (2.5) with $L = \omega/(1-\omega)$.

Moreover, we assume the existence of a $\varrho \in [0,1[$ such that
$$\|P_{R(F'(u))^\perp}\bigl(F(x^+) - F(u)\bigr)\|_Y \le \varrho\, \|F(x^+) - F(u)\|_Y \quad \text{for all } u \in L(x_0). \tag{2.7}$$
Assumption (2.7) is quite natural as it characterizes those nonlinear problems which can be tackled by local linearization (compare Remark 2.4): as (2.7) is equivalent to
$$\sqrt{1 - \varrho^2}\, \|F(x^+) - F(u)\|_Y \le \|P_{\overline{R(F'(u))}}\bigl(F(x^+) - F(u)\bigr)\|_Y,$$
the right-hand side of the linearized system (1.6) has a component in the closure of the range of $A_n$, and the magnitude of this component is uniformly bounded from below by $\sqrt{1-\varrho^2}$.

We give an example of a nonlinear operator for which both (2.5) and (2.7) are satisfied globally in the domain of definition.

Example 2.6. Let $f\colon \mathbb R \to \mathbb R$ be a continuously differentiable function with a derivative bounded from below: $f'(t) \ge f_{\min} > 0$. We define the operator $F\colon X \to Y$ by
$$F(x) = \sum_{n=1}^\infty \frac{1}{n}\, f\bigl(\langle x, v_n\rangle_X\bigr)\, u_n$$
where $\{v_n\}$ and $\{u_n\}$ are orthonormal bases in the separable Hilbert spaces $X$ and $Y$, respectively. The Fréchet derivative of $F$ is the compact operator
$$F'(x)h = \sum_{n=1}^\infty \frac{1}{n}\, f'\bigl(\langle x, v_n\rangle_X\bigr)\, \langle h, v_n\rangle_X\, u_n$$
with range $R(F'(x)) = \bigl\{ y \in Y : \{ n\, \langle y, u_n\rangle_Y \}_n \in \ell^2 \bigr\}$. Clearly, $\overline{R(F'(x))} = Y$. Hence, (2.7) holds true with $\varrho = 0$.

Now we further restrict the nonlinearity by imposing a bound from above on the derivative of $f$: $f'(t) \le f_{\max}$ with $f_{\max} < 2 f_{\min}$. For instance, $f(t) = t + \arctan(t) + 1$ or $f(t) = 6t + \cos(t)$. By the mean value theorem there is a $\xi \in\ ]s,t[$ such that $f(t) - f(s) = f'(\xi)(t-s)$. Therefore, for all $s,t \in \mathbb R$,
$$|f(t) - f(s) - f'(s)(t-s)| = |f'(\xi) - f'(s)|\, |t-s| \le \frac{f_{\max} - f_{\min}}{f_{\min}}\, f'(s)\, |t-s|,$$
implying
$$\|E(v,w)\|_Y \le \underbrace{\frac{f_{\max} - f_{\min}}{f_{\min}}}_{=:\, L\, <\, 1}\, \|F'(w)(v-w)\|_Y \quad \text{for all } v,w \in X.$$
For $L$ small enough, (2.5) implies (2.7); a numerical sketch of this example follows Lemma 2.7 below.

Lemma 2.7. Assume (2.5) to hold with $L < 1/2$. Then (2.7) holds with $\varrho = \frac{L}{1-L} < 1$.

Proof. We have that
$$\|P_{R(F'(u))^\perp}\bigl(F(x^+) - F(u)\bigr)\|_Y = \|P_{R(F'(u))^\perp}\, E(x^+, u)\|_Y \le L\, \|F'(u)(x^+ - u)\|_Y.$$
Further,
$$\|F'(u)(x^+ - u)\|_Y \le \|E(x^+, u)\|_Y + \|F(x^+) - F(u)\|_Y \le L\, \|F'(u)(x^+ - u)\|_Y + \|F(x^+) - F(u)\|_Y,$$
yielding first
$$\|F'(u)(x^+ - u)\|_Y \le \frac{1}{1-L}\, \|F(x^+) - F(u)\|_Y$$
and then the assertion.
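For illustration, the linearization-error bound of Example 2.6 can be observed numerically in a truncated version of the operator. The following Python sketch (finite dimension, random test points, $f(t) = 6t + \cos t$ as in the example) is an assumption-laden illustration, not part of the analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200                                  # truncation level of Example 2.6

f  = lambda t: 6.0 * t + np.cos(t)       # f' = 6 - sin(t) in [5, 7]
df = lambda t: 6.0 - np.sin(t)
n = np.arange(1, N + 1)

def F(x):                                # coordinates of F(x) w.r.t. {u_n}
    return f(x) / n

def dF(x):                               # diagonal Frechet derivative
    return np.diag(df(x) / n)

# check ||E(v,w)|| <= L ||F'(w)(v-w)|| with L = (f_max - f_min)/f_min = 0.4
L_bound = (7.0 - 5.0) / 5.0
v, w = rng.normal(size=N), rng.normal(size=N)
E = F(v) - F(w) - dF(w) @ (v - w)
assert np.linalg.norm(E) <= L_bound * np.linalg.norm(dF(w) @ (v - w)) + 1e-12
```

The componentwise mean-value estimate above transfers directly to the norms, which is what the final assertion checks.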

Theorem 2.8. Let $D(F)$ be open and choose $x_0 \in D(F)$ such that $L(x_0) \subset D(F)$. Assume (2.1), (2.2), (2.3), (2.5), and (2.7) to hold true with $\Theta$, $L$, and $\varrho$ satisfying
$$\Theta L + \varrho < \Lambda \quad \text{for one } \Lambda < 1. \tag{2.8}$$
Further, choose
$$R > \frac{1 + \varrho}{\Lambda - \Theta L - \varrho}. \tag{2.9}$$
Finally, select all tolerances $\{\mu_n\}$ such that $\mu_n \in\ ]\mu_{\min,n}, \Lambda - \Theta L]$, with
$$\mu_{\min,n} := (1 + \varrho)\, \frac{\delta}{\|b_n^\varepsilon\|_Y} + \varrho.$$
Then there exists an $N(\delta)$ such that all iterates $\{x_1, \dots, x_{N(\delta)}\}$ of REGINN are well defined and stay in $L(x_0)$. Moreover, only the final iterate satisfies the discrepancy principle, that is,
$$\|y^\delta - F(x_{N(\delta)})\|_Y \le R\delta, \tag{2.10}$$
and the nonlinear residuals decrease linearly at an estimated rate:
$$\frac{\|y^\delta - F(x_{n+1})\|_Y}{\|y^\delta - F(x_n)\|_Y} < \mu_n + \theta_n L \le \Lambda, \quad n = 0, \dots, N(\delta)-1, \tag{2.11}$$
where $\theta_n = \|A_n s_n^{\mathrm N}\|_Y / \|b_n^\varepsilon\|_Y \le \Theta$.

Proof. Before we start with the proof let us discuss our assumptions on $L$, $\varrho$, $\Lambda$, and $R$. Condition (2.8) guarantees that the denominator of the lower bound (2.9) on $R$ is positive. The lower bound on $R$ is needed to have a well-defined nonempty interval for selecting $\mu_n$. Indeed, as long as $\|b_n^\varepsilon\|_Y > R\delta$ we get
$$\mu_{\min,n} = (1+\varrho)\, \frac{\delta}{\|b_n^\varepsilon\|_Y} + \varrho < \frac{1+\varrho}{R} + \varrho < \Lambda - \Theta L. \tag{2.12}$$
We will argue inductively and therefore assume the iterates $\{x_1, \dots, x_n\}$ to be well defined in $L(x_0)$. If $\|b_n^\varepsilon\|_Y \le R\delta$ then REGINN terminates with $N(\delta) = n$. Otherwise, $\|b_n^\varepsilon\|_Y > R\delta$ and any $\mu_n \in\ ]\mu_{\min,n}, \Lambda - \Theta L]$ provides a Newton step:
$$\frac{\|P_{R(A_n)^\perp}\, b_n^\varepsilon\|_Y}{\|b_n^\varepsilon\|_Y} \le \frac{\delta + \|P_{R(A_n)^\perp}\bigl(F(x^+) - F(x_n)\bigr)\|_Y}{\|b_n^\varepsilon\|_Y} \overset{(2.7)}{\le} \frac{\delta + \varrho\, \|F(x^+) - F(x_n)\|_Y}{\|b_n^\varepsilon\|_Y} \le \frac{\delta + \varrho\,(\delta + \|b_n^\varepsilon\|_Y)}{\|b_n^\varepsilon\|_Y} = \mu_{\min,n} < \mu_n. \tag{2.13}$$
By Lemma 2.3 the Newton step $s_n^{\mathrm N}$ and hence $x_{n+1} = x_n + s_n^{\mathrm N} \in X$ are well defined. We next show that $x_{n+1}$ is in $L(x_0)$. First, $s_n^{\mathrm N}$ is a descent direction: indeed, $A_n^* b_n^\varepsilon = 0$ would give $b_n^\varepsilon \in R(A_n)^\perp$, contradicting (2.13). Hence $A_n^* b_n^\varepsilon \ne 0$ and Lemma 2.2 applies. As $D(F)$ is assumed to be open, there exists a $\lambda > 0$ such that $x_{n,\lambda} := x_n + \lambda s_n^{\mathrm N}$ is in $D(F)$ and
$$\|y^\delta - F(x_{n,\lambda})\|_Y < \|y^\delta - F(x_n)\|_Y \le \|y^\delta - F(x_0)\|_Y.$$

Thus, $x_{n,\lambda} \in L(x_0)$. Further, $x_{n,\lambda} - x_n = \lambda s_n^{\mathrm N} \in \overline{R(A_n^*)} \subset N(A_n)^\perp$. Accordingly, we may proceed by estimating
$$\begin{aligned} \|y^\delta - F(x_{n,\lambda})\|_Y &\le \|y^\delta - F(x_n) - \lambda A_n s_n^{\mathrm N}\|_Y + \|F(x_{n,\lambda}) - F(x_n) - \lambda A_n s_n^{\mathrm N}\|_Y \\ &\overset{(2.5)}{\le} \|y^\delta - F(x_n) - \lambda A_n s_n^{\mathrm N}\|_Y + L\lambda\, \|A_n s_n^{\mathrm N}\|_Y \\ &\le (1-\lambda)\,\|b_n^\varepsilon\|_Y + \lambda\,\|b_n^\varepsilon - A_n s_n^{\mathrm N}\|_Y + L\lambda\theta_n\, \|b_n^\varepsilon\|_Y \\ &< \bigl(1 - \lambda + \mu_n\lambda + L\lambda\theta_n\bigr)\,\|b_n^\varepsilon\|_Y \le \bigl(1 - \lambda(1-\Lambda)\bigr)\,\|b_n^\varepsilon\|_Y. \end{aligned} \tag{2.14}$$
Define
$$\lambda_{\max} := \sup\bigl\{ \lambda \in [0,1] : x_{n,\lambda} \in L(x_0) \bigr\}.$$
Assume $\lambda_{\max} < 1$, that is, $x_{n,\lambda_{\max}} \in \partial L(x_0) \subset D(F)$. By continuity we obtain from (2.14) that
$$\|y^\delta - F(x_{n,\lambda_{\max}})\|_Y \le \bigl(1 - \lambda_{\max}(1-\Lambda)\bigr)\,\|b_n^\varepsilon\|_Y < \|b_n^\varepsilon\|_Y \le \|b_0^\varepsilon\|_Y,$$
contradicting $x_{n,\lambda_{\max}} \in \partial L(x_0)$. Hence $\lambda_{\max} = 1$ and $x_{n+1} = x_{n,1} \in L(x_0)$. Finally, $\|b_{n+1}^\varepsilon\|_Y < (\mu_n + \theta_n L)\,\|b_n^\varepsilon\|_Y$ follows by plugging $\lambda = 1$ into (2.14).

A few comments are in order.

Remark 2.9. Deuflhard, Engl, and Scherzer [5, formula (2.11)] have basically introduced the following Newton-Mysovskikh-like condition:
$$\bigl\|\bigl(F'(v) - F'(w)\bigr)\, F'(w)^+\bigr\| \le L \quad \text{for all } v,w \in L(x_0), \tag{2.15}$$
where $F'(w)^+$ denotes the Moore-Penrose inverse of $F'(w)$. They discovered interesting relations to other structural assumptions used in the convergence analysis of iterative methods for the solution of nonlinear ill-posed problems [5, Lemma 2.3]. If $L(x_0)$ is convex then (2.15) implies (2.5). Indeed, for $v,w \in L(x_0)$ with $v - w \in N(F'(w))^\perp$ we have
$$F'(w)^+ F'(w)(v-w) = P_{N(F'(w))^\perp}(v-w) = v - w,$$
resulting in
$$\|E(v,w)\|_Y = \Bigl\| \int_0^1 \bigl(F'(w + t(v-w)) - F'(w)\bigr)(v-w)\, dt \Bigr\|_Y \le \int_0^1 \bigl\|\bigl(F'(w + t(v-w)) - F'(w)\bigr)\, F'(w)^+\, F'(w)(v-w)\bigr\|_Y\, dt \le L\, \|F'(w)(v-w)\|_Y.$$

Remark 2.10. An assumption similar to (2.7) is
$$\|P_{R(F'(u))^\perp}(\eta - F(u))\|_Y \le \varrho\, \|\eta - F(u)\|_Y \quad \text{for one } \varrho < 1, \text{ for all } u \in L(x_0), \text{ and all } \eta \in Y \text{ with } \|\eta - F(x^+)\|_Y \le \delta_{\max}. \tag{2.16}$$
Under the above property the hypotheses of Theorem 2.8 can be relaxed: let $\delta \le \delta_{\max}$. Since then $\|P_{R(A_n)^\perp}\, b_n^\varepsilon\|_Y \le \varrho\, \|b_n^\varepsilon\|_Y$, the assertion of Theorem 2.8 remains true whenever $\varrho + \Theta L < \Lambda$, $\{\mu_n\} \subset\ ]\varrho, \Lambda - \Theta L]$, and $R > 0$ (no other restriction on $R$, compare (2.9)).

The mapping from Example 2.6 satisfies (2.16) with $\varrho = 0$ for any $\delta_{\max} \ge 0$. Nevertheless, (2.16) is quite restrictive: while (2.7) holds trivially for any linear mapping with $\varrho = 0$, (2.16) can only hold for a linear mapping with dense range. Indeed, let $F\colon X \to Y$ be a linear and bounded mapping with a non-closed range. Assume (2.16) as well as $\overline{R(F)} \ne Y$. Let $y^\delta \notin \overline{R(F)}$ (a natural assumption for noisy data). There is a sequence $\{u_n\} \subset X$ such that $\lim_{n\to\infty} \|Fu_n - P_{\overline{R(F)}}\, y^\delta\|_Y = 0$. Since
$$\lim_{n\to\infty} \|Fu_n - y^\delta\|_Y = \|P_{R(F)^\perp}\, y^\delta\|_Y = \|P_{R(F)^\perp}(Fx^+ - y^\delta)\|_Y \le \delta < \|Fx_0 - y^\delta\|_Y,$$
we may assume the whole sequence $\{u_n\}$ to lie in $L(x_0)$. Now,
$$\frac{\|P_{R(F)^\perp}(y^\delta - Fu_n)\|_Y}{\|y^\delta - Fu_n\|_Y} = \frac{\|P_{R(F)^\perp}\, y^\delta\|_Y}{\|y^\delta - Fu_n\|_Y} \xrightarrow{\ n\to\infty\ } 1$$
contradicts (2.16).

3. Local convergence. After establishing termination of REGINN the next question to answer is: does the family $\{x_{N(\delta)}\}_{0<\delta\le\delta_{\max}}$ converge to a solution of $F(x) = y$ as the noise level $\delta$ approaches $0$? Since, by (2.10),
$$\|y - F(x_{N(\delta)})\|_Y < (R+1)\,\delta, \tag{3.1}$$
the images of $\{x_{N(\delta)}\}$ under $F$ converge to $y$. This, however, by no means implies convergence of $\{x_{N(\delta)}\}$ itself. Indeed, $\{x_{N(\delta)}\}$ might explode as $\delta \to 0$: there is no reason to suppose compactness or boundedness of the level set $L(x_0)$. On the contrary, for an ill-posed problem $L(x_0)$ is expected to be unbounded. In this section we will show boundedness and then some kind of convergence of $\{x_{N(\delta)}\}$, provided the regularizing sequence $\{s_{n,m}\}_{0\le m\le m_n}$ exhibits a fourth property in addition to those from (2.1), (2.2), and (2.3).

We require the following: for any $m \in \{1, \dots, m_n\}$ there is a $v_{n,m-1} \in Y$ such that $s_{n,m} = s_{n,m-1} + A_n^*\, v_{n,m-1}$. Further, let there be a continuous and monotonically increasing function $\Psi\colon \mathbb R \to \mathbb R$ with $t \le \Psi(t)$ for $t \in [0,1]$ such that, if $\gamma_n = \|b_n^\varepsilon - A_n s_n^{\mathrm e}\|_Y / \|b_n^\varepsilon\|_Y < 1$, then
$$\|s_{n,m} - s_n^{\mathrm e}\|_X^2 - \|s_{n,m-1} - s_n^{\mathrm e}\|_X^2 < C_M\, \|b_n^\varepsilon\|_Y\, \|v_{n,m-1}\|_Y \Bigl( \Psi(\gamma_n) - \frac{\|b_n^\varepsilon - A_n s_{n,m-1}\|_Y}{\|b_n^\varepsilon\|_Y} \Bigr) \tag{3.2}$$
for $m = 1, \dots, m_n$, where $C_M > 0$ is a constant. A direct consequence of (3.2) is monotonicity, i.e.,
$$\frac{\|b_n^\varepsilon - A_n s_{n,m-1}\|_Y}{\|b_n^\varepsilon\|_Y} \ge \Psi(\gamma_n) \implies \|s_{n,m} - s_n^{\mathrm e}\|_X < \|s_{n,m-1} - s_n^{\mathrm e}\|_X. \tag{3.3}$$
Examples of methods with property (3.2) are

Landweber iteration and steepest descent: $\Psi(t) = 2t$,

implicit iteration: $\Psi(t) = 2\,\dfrac{\alpha_{\max} + s}{\alpha_{\min}}\, t$ where $s = \sup_n \|A_n\|^2$ and $\{\alpha_k\}_k \subset [\alpha_{\min}, \alpha_{\max}]$,

cg-method: $\Psi(t) = \sqrt{2t}$;

the respective proofs are given in Appendix B.
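The monotonicity (3.3) can be observed numerically. The following Python sketch does so for the Landweber iteration ($\Psi(t) = 2t$) on a random linear test problem; the matrix, noise level, and step size are assumptions of the sketch:

```python
import numpy as np

# Check (3.3) for Landweber: as long as the residual exceeds
# Psi(gamma)*||b^eps|| = 2*gamma*||b^eps||, the error to the
# exact step decreases strictly.
rng = np.random.default_rng(1)
T = rng.normal(size=(40, 30)) / 10.0      # stands in for A_n
f_ex = rng.normal(size=30)                # stands in for s_n^e
g = T @ f_ex                              # unperturbed right-hand side b_n
g_eps = g + 0.01 * rng.normal(size=40)    # perturbed right-hand side b_n^eps

gamma = np.linalg.norm(g_eps - g) / np.linalg.norm(g_eps)
omega = 0.9 / np.linalg.norm(T, 2) ** 2   # Landweber step size

f = np.zeros(30)
err = np.linalg.norm(f - f_ex)
for _ in range(100000):
    if np.linalg.norm(g_eps - T @ f) <= 2.0 * gamma * np.linalg.norm(g_eps):
        break                             # Psi(gamma) level reached
    f = f + omega * T.T @ (g_eps - T @ f)
    new_err = np.linalg.norm(f - f_ex)
    assert new_err < err                  # strict decrease as in (3.3)
    err = new_err
```

Once the residual drops below the $\Psi(\gamma)$ level, the error may start to grow again, which is precisely why (1.7) terminates the inner iteration early.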

3.1. Monotone error decay. Under (3.2) we formulate a version of Theorem 2.8 where all assumptions are related to a ball about $x^+$; that is, the implicitly defined, generally unbounded level set $L(x_0)$ is replaced by $B_r(x^+)$. Especially, (2.5) is replaced by
$$\|E(v,w)\|_Y \le L\, \|F'(w)(v-w)\|_Y \quad \text{for one } L < 1 \text{ and for all } v,w \in B_r(x^+) \subset D(F). \tag{3.4}$$

Theorem 3.1. Assume (2.1), (2.2), (2.3), (3.2). Additionally, let (2.7) hold true in $B_r(x^+)$ and assume (3.4) with $L$ satisfying
$$\Psi\Bigl(\frac{L}{1-L}\Bigr) + \varrho + \Theta L < \Lambda \quad \text{for one } \Lambda < 1. \tag{3.5}$$
As
$$\varrho + \frac{L}{1-L} + \Theta L \le \varrho + \Psi\Bigl(\frac{L}{1-L}\Bigr) + \Theta L < 1$$
(recall $t \le \Psi(t)$, $\Theta \ge 1$, $\varrho \ge 0$), we have the necessary condition $L < (3 - \sqrt 5)/2$. Further, define
$$\mu_{\min} := \Psi\Bigl(\frac{1/R + L}{1-L}\Bigr) + \varrho$$
and choose $R$ so large that
$$\mu_{\min} + \Theta L < \Lambda. \tag{3.6}$$
Restrict all tolerances $\{\mu_n\}$ to $\mu_n \in [\mu_{\min}, \Lambda - \Theta L]$ and start with $x_0 \in B_r(x^+)$. Then there exists an $N(\delta)$ such that all iterates $\{x_1, \dots, x_{N(\delta)}\}$ of REGINN are well defined and stay in $B_r(x^+)$. We even have a strictly monotone error reduction:
$$\|x^+ - x_n\|_X < \|x^+ - x_{n-1}\|_X, \quad n = 1, \dots, N(\delta). \tag{3.7}$$
Moreover, only the final iterate satisfies the discrepancy principle (2.10), and the nonlinear residuals decrease linearly at the estimated rate (2.11).

Proof. Let us first discuss our assumptions. If (3.5) applies then, by continuity of $\Psi$, there exists an $R$ such that $\mu_{\min}$ satisfies (3.6) and the interval for selecting the tolerances is nonempty. As before we use an inductive argument: assume the iterates $x_1, \dots, x_n$ to be well defined in $B_r(x^+)$. If $\|b_n^\varepsilon\|_Y \le R\delta$ then REGINN stops with $N(\delta) = n$. Otherwise, $\|b_n^\varepsilon\|_Y > R\delta$ and $\mu_n \in [\mu_{\min}, \Lambda - \Theta L]$ provides a new Newton step. Indeed, in view of (2.13) and (2.12) we have that
$$\frac{\|P_{R(A_n)^\perp}\, b_n^\varepsilon\|_Y}{\|b_n^\varepsilon\|_Y} < \frac{1+\varrho}{R} + \varrho \le \Psi\Bigl(\frac{1/R + L}{1-L}\Bigr) + \varrho = \mu_{\min} \le \mu_n,$$
where the latter estimate holds true due to $\varrho \le L/(1-L)$ (Lemma 2.7), $t \le \Psi(t)$, and the monotonicity of $\Psi$. By Lemma 2.3 the Newton step $s_n^{\mathrm N}$ and hence $x_{n+1} = x_n + s_n^{\mathrm N} \in X$ are well defined.

It remains to verify the strictly monotone error reduction (3.7). We will rely on (3.3). By (1.5) and (3.4) we have
$$\|b_n - b_n^\varepsilon\|_Y \le \delta + L\,\|b_n\|_Y \le \frac{1}{R}\,\|b_n^\varepsilon\|_Y + L\,\bigl(\|b_n - b_n^\varepsilon\|_Y + \|b_n^\varepsilon\|_Y\bigr),$$
yielding first
$$\|b_n - b_n^\varepsilon\|_Y \le \frac{1/R + L}{1-L}\, \|b_n^\varepsilon\|_Y$$
and then
$$\gamma_n = \frac{\|b_n - b_n^\varepsilon\|_Y}{\|b_n^\varepsilon\|_Y} \le \frac{1/R + L}{1-L}, \qquad\text{hence}\qquad \Psi(\gamma_n) \le \mu_{\min} \le \mu_n.$$
Accordingly, $\|b_n^\varepsilon - A_n s_{n,m-1}\|_Y \ge \mu_n\, \|b_n^\varepsilon\|_Y \ge \Psi(\gamma_n)\, \|b_n^\varepsilon\|_Y$ for $m = 1, \dots, m_n$, and we have, by repeatedly applying the monotonicity (3.3),
$$\|x^+ - x_{n+1}\|_X = \|s_n^{\mathrm e} - s_{n,m_n}\|_X < \|s_n^{\mathrm e} - s_{n,m_n-1}\|_X < \dots < \|s_n^{\mathrm e} - s_{n,0}\|_X = \|s_n^{\mathrm e}\|_X = \|x^+ - x_n\|_X \tag{3.8}$$
(recall $s_{n,0} = 0$), which is (3.7).

Remark 3.2. Some nonlinear ill-posed problems, such as a model in electrical impedance tomography (see [16]), satisfy a slightly stronger version of (3.4) where $L$ is replaced by $C\,\|v - w\|_X$. In view of (3.7) we expect in this situation the reduction rate (2.11) to approach $\mu_n$ as the Newton iteration progresses.

Remark 3.3. A stronger assumption than (3.4) is
$$\|E(v,w)\|_Y \le L\, \|F'(w)(v-w)\|_Y^{1+\kappa} \quad \text{for one } \kappa > 0 \text{ and for all } v,w \in B_r(x^+). \tag{3.9}$$
Here, $L$ is allowed to be arbitrarily large. If $r$ is sufficiently small we have (3.4) with
$$\widetilde L := 2^\kappa\, r^\kappa\, L\, \max_{u \in B_r(x^+)} \|F'(u)\|^\kappa < 1.$$
Now, let $r$ be so small that all assumptions of Theorem 3.1 apply with $\widetilde L$ as above. Additionally, choose $x_0 \in B_r(x^+)$ satisfying $\|y^\delta - F(x_0)\|_Y^\kappa \le \widetilde L/L$. (Note that this choice implicitly forces $\|y - y^\delta\|_Y^\kappa < \widetilde L/L$.) Then all assertions of Theorem 3.1 remain valid with the stronger rate
$$\frac{\|y^\delta - F(x_{n+1})\|_Y}{\|y^\delta - F(x_n)\|_Y} \le \mu_n + \theta_n^{1+\kappa}\, \Lambda^{\kappa n}\, \widetilde L \le \Lambda, \quad n = 0, \dots, N(\delta)-1. \tag{3.10}$$
We only need to verify the rate. We have
$$\|b_{n+1}^\varepsilon\|_Y = \|b_n^\varepsilon - A_n s_n^{\mathrm N} + E(x_{n+1}, x_n)\|_Y \overset{(3.9)}{\le} \mu_n\, \|b_n^\varepsilon\|_Y + L\, \|A_n s_n^{\mathrm N}\|_Y^{1+\kappa} = \bigl(\mu_n + L\,\theta_n^{1+\kappa}\, \|b_n^\varepsilon\|_Y^\kappa\bigr)\, \|b_n^\varepsilon\|_Y,$$
which inductively implies $\|b_n^\varepsilon\|_Y^\kappa \le \Lambda^{\kappa n}\, \|b_0^\varepsilon\|_Y^\kappa \le \Lambda^{\kappa n}\, \widetilde L/L$ and hence (3.10).

Remark 3.4. Both bounds (3.4) and (3.9) for the linearization error may be derived from the following affine contravariant Lipschitz condition:
$$\bigl\|\bigl(F'(v) - F'(w)\bigr)(v-w)\bigr\|_Y \le L_\kappa\, \|F'(w)(v-w)\|_Y^{1+\kappa} \quad \text{for one } \kappa \in [0,1] \text{ and for all } v,w \in B_r(x^+), \tag{3.11}$$
where $L_\kappa > 0$ and, in case $\kappa = 0$, we require $L_0 < 1$. Indeed,
$$\|E(v,w)\|_Y = \Bigl\| \int_0^1 \bigl(F'(w + t(v-w)) - F'(w)\bigr)(v-w)\, dt \Bigr\|_Y \le \frac{L_\kappa}{1+\kappa}\, \|F'(w)(v-w)\|_Y^{1+\kappa}.$$

For a general discussion of the importance of affine contravariance for Newton-like algorithms we refer to Deuflhard's book [4]. In particular, Section 4.2 of that book treats Gauß-Newton methods for well-posed finite-dimensional least squares problems under (3.11), globally in $D(F)$ and with $\kappa = 1$.

3.2. Convergence. We now turn to the convergence of $\{x_{N(\delta)}\}_{0<\delta\le\delta_{\max}}$ as $\delta \to 0$.

Corollary 3.5. Adopt all assumptions and notations of Theorem 3.1. Additionally, let $F$ be weakly sequentially closed and let $\{\delta_j\}_{j\in\mathbb N}$ be a positive zero sequence. Then any subsequence of $\{x_{N(\delta_j)}\}_{j\in\mathbb N}$ contains a subsequence which converges weakly to a solution of $F(x) = y$.

Proof. Any subsequence of the bounded family $\{x_{N(\delta_j)}\}_{j\in\mathbb N} \subset B_r(x^+)$ is bounded and, therefore, has a weakly convergent subsequence. Let $\xi$ be its weak limit. By (3.1) the images under $F$ of this weakly convergent subsequence converge weakly to $y$. Due to the weak closedness of $F$ we have that $y = F(\xi)$.

The whole family $\{x_{N(\delta_j)}\}_{j\in\mathbb N}$ converges weakly to $x^+$ if $x^+$ is the unique solution of $F(x) = y$ in $B_r(x^+)$. This follows, for instance, from a standard subsequence argument, see [24]. However, under the assumptions of Theorem 3.1, the latter uniqueness can only happen if $N(A)$, the null space of $A = F'(x^+)$, is trivial. In fact, if $0 \ne v \in N(A)$ then
$$\|F(x^+ + tv) - y\|_Y = \|F(x^+ + tv) - F(x^+)\|_Y \le (L + 1)\, t\, \|Av\|_Y = 0$$
for any $t \in [0, r/\|v\|_X]$, so that $x^+$ is not the unique solution in $B_r(x^+)$. On the other hand, if $N(A)$ is trivial we even have semi-norm convergence.

Corollary 3.6. Under the assumptions of Theorem 3.1 we have that
$$|x^+ - x_{N(\delta)}|_A < \frac{1+R}{1-L}\, \delta$$
where $|x|_A = \|Ax\|_Y$ is, in general, only a semi-norm.

Proof. From (3.4) we obtain that
$$|x^+ - x_{N(\delta)}|_A \le \frac{1}{1-L}\, \|y - F(x_{N(\delta)})\|_Y,$$
which, in view of (3.1), implies the assertion.

The above corollary yields norm convergence whenever $N(A) = \{0\}$. In general, this norm is weaker than the standard norm in $X$.

However, strong convergence in $X$ can be verified under an additional stability assumption on the regularization scheme applied to the locally linearized system (1.6). To formulate this assumption we introduce new notation: we subsequently need to distinguish clearly between the noisy ($\delta > 0$) and the noiseless ($\delta = 0$) situation. From now on in this section, quantities referring to the noisy setting (i.e., $y^\delta \ne y$) will be marked by a superscript $\delta$; quantities without superscript indicate exact data (i.e., $y = y^0$). We require the following stability of the regularizing sequence $\{s_{n,m}\}_{0\le m\le m_n}$:
$$\lim_{\delta\to 0} s_{n,m}^\delta = s_{n,m} \quad \text{for any fixed } m \le m_n. \tag{3.12}$$
All four examples satisfying (3.2) also share stability: Landweber and implicit iteration are linear iterative schemes, thus stability (3.12) can be shown straightforwardly by an inductive argument, which we briefly present for the implicit iteration. Let $T, T_\gamma \in \mathcal L(X,Y)$ with $\lim_{\gamma\to 0} \|T - T_\gamma\| = 0$. Further, let $g, g_\gamma \in Y$ with $\lim_{\gamma\to 0} \|g - g_\gamma\|_Y = 0$. Define the implicit iteration with respect to $(T,g)$ and $(T_\gamma, g_\gamma)$ by
$$f_{m+1} = (\alpha_m I + T^* T)^{-1}\bigl(\alpha_m f_m + T^* g\bigr)$$
and
$$f_{m+1}^\gamma = (\alpha_m I + T_\gamma^* T_\gamma)^{-1}\bigl(\alpha_m f_m^\gamma + T_\gamma^* g_\gamma\bigr),$$
respectively, where $f_0 = f_0^\gamma$. Since
$$\lim_{\gamma\to 0} \bigl\| (\alpha_m I + T_\gamma^* T_\gamma)^{-1} - (\alpha_m I + T^* T)^{-1} \bigr\| = 0,$$
convergence of $f_m^\gamma$ to $f_m$ as $\gamma \to 0$ follows inductively for any $m$ (a numerical illustration is sketched below). The proof of stability is more complicated for the nonlinear steepest descent and cg iterations. Fortunately, we only need to refer to previous work of Scherzer [23, Lemma 3.2] and Hanke [10, Lemma 3.4] for steepest descent and cg, respectively. Both lemmas apply to our setting because early termination of both iterations does not occur before the stopping index $m_n$ is reached.

Theorem 3.7. Assume (3.12) and adopt all assumptions and notations of Theorem 3.1; however, restrict all tolerances $\{\mu_n\}$ to $[\mu, \Lambda - \Theta L]$ for a $\mu > \mu_{\min}$. Additionally, let $x^+$ be the only solution of $F(x) = y$ in $B_r(x^+)$. Then,
$$\lim_{\delta\to 0} \|x^+ - x_{N(\delta)}^\delta\|_X = 0.$$
For the proof of the above theorem we basically generalize results of Hanke [10, Sections 4 and 5] and [9], who, in turn, generalized ideas of Hanke, Neubauer and Scherzer [11].
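The inductive stability argument for the implicit iteration can be observed numerically. The following Python sketch, with random test operators and perturbation sizes of our own choosing, illustrates (3.12): the $m$-th iterate for perturbed data converges to the unperturbed one.

```python
import numpy as np

rng = np.random.default_rng(5)
T = rng.normal(size=(20, 12))
g = rng.normal(size=20)
alphas = np.full(10, 0.3)                # {alpha_k} bounded away from zero

def implicit_iterates(T, g, alphas, m):
    """m steps of f_{k+1} = (alpha_k I + T^T T)^{-1}(alpha_k f_k + T^T g)."""
    f = np.zeros(T.shape[1])
    for alpha in alphas[:m]:
        f = np.linalg.solve(alpha * np.eye(T.shape[1]) + T.T @ T,
                            alpha * f + T.T @ g)
    return f

m = 7
for gamma in (1e-1, 1e-2, 1e-3, 1e-4):
    T_g = T + gamma * rng.normal(size=T.shape)   # T_gamma -> T
    g_g = g + gamma * rng.normal(size=g.shape)   # g_gamma -> g
    diff = np.linalg.norm(implicit_iterates(T_g, g_g, alphas, m)
                          - implicit_iterates(T, g, alphas, m))
    print(gamma, diff)                           # diff -> 0 as gamma -> 0
```

The printed differences shrink proportionally to the perturbation, in line with the inductive argument above.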

We start with some preparatory lemmas and first validate convergence of REGINN in the noiseless setting. In the remainder of this section we tacitly presume the assumptions of Theorem 3.7.

Lemma 3.8. In the noiseless situation, that is, $\delta = 0$, we have that $\lim_{n\to\infty} \|x^+ - x_n\|_X = 0$.

Proof. Note that REGINN is well defined for $\delta = 0$ under the hypotheses of Theorem 3.1; that is, if $x_{n-1} \in B_r(x^+)$ does not solve $F(x) = y$ then $x_n \in B_r(x^+)$ exists and satisfies (3.7) as well as (2.11). Therefore, if REGINN terminates early with $x_N$ then $F(x_N) = y$, implying $x_N = x^+$ due to the uniqueness of $x^+$ in $B_r(x^+)$. Otherwise, $\{x_n\}_{n\in\mathbb N_0}$ is a Cauchy sequence, which we show now. Let $l,p \in \mathbb N$ with $l > p$. We observe that
$$\|x_l - x_p\|_X^2 = \|s_l^{\mathrm e} - s_p^{\mathrm e}\|_X^2 = 2\,\langle s_l^{\mathrm e} - s_p^{\mathrm e}, s_l^{\mathrm e}\rangle_X + \|s_p^{\mathrm e}\|_X^2 - \|s_l^{\mathrm e}\|_X^2 \tag{3.13}$$
and
$$s_l^{\mathrm e} - s_p^{\mathrm e} = -\sum_{i=p}^{l-1} s_{i,m_i} \overset{(3.2)}{=} -\sum_{i=p}^{l-1} A_i^*\, \tilde v_i \quad \text{where } \tilde v_i := \sum_{k=1}^{m_i} v_{i,k-1}.$$
Hence,
$$\langle s_l^{\mathrm e} - s_p^{\mathrm e}, s_l^{\mathrm e}\rangle_X = -\sum_{i=p}^{l-1} \langle \tilde v_i, A_i s_l^{\mathrm e}\rangle_Y \le \sum_{i=p}^{l-1} \|\tilde v_i\|_Y\, \|A_i s_l^{\mathrm e}\|_Y.$$
We proceed with
$$\|A_i s_l^{\mathrm e}\|_Y = \|F'(x_i)(x^+ - x_l)\|_Y \le \|F'(x_i)(x^+ - x_i)\|_Y + \|F'(x_i)(x_l - x_i)\|_Y \le \frac{\|y - F(x_i)\|_Y + \|F(x_i) - F(x_l)\|_Y}{1-L} \le \frac{2\,\|y - F(x_i)\|_Y + \|y - F(x_l)\|_Y}{1-L} \le \frac{3}{1-L}\, \|y - F(x_i)\|_Y,$$
where the last estimate holds true due to the monotonicity of the residuals (2.11), and this yields
$$\langle s_l^{\mathrm e} - s_p^{\mathrm e}, s_l^{\mathrm e}\rangle_X \le \frac{3}{1-L} \sum_{i=p}^{l-1} \|\tilde v_i\|_Y\, \|y - F(x_i)\|_Y.$$

For bounding $\|\tilde v_i\|_Y\, \|y - F(x_i)\|_Y$ we apply (3.2). Setting $n = i$ in the inequality of (3.2) and summing both sides from $m = 1$ to $m_i$ results in
$$\|s_{i,m_i} - s_i^{\mathrm e}\|_X^2 - \|s_i^{\mathrm e}\|_X^2 < C_M\, \|b_i^\varepsilon\|_Y \Bigl(\sum_{m=1}^{m_i} \|v_{i,m-1}\|_Y\Bigr)\bigl(\Psi(\gamma_i) - \mu_i\bigr),$$
where we have taken into account that $\|b_i^\varepsilon - A_i s_{i,m-1}\|_Y \ge \mu_i\, \|b_i^\varepsilon\|_Y$. Recall that $b_i^\varepsilon = y - F(x_i)$, $s_{i+1}^{\mathrm e} = s_i^{\mathrm e} - s_{i,m_i}$, and $\mu_i \ge \mu > \mu_{\min} \ge \Psi(\gamma_i)$. Thus,
$$\|\tilde v_i\|_Y\, \|y - F(x_i)\|_Y \le \Bigl(\sum_{m=1}^{m_i} \|v_{i,m-1}\|_Y\Bigr) \|y - F(x_i)\|_Y \le \frac{\|s_i^{\mathrm e}\|_X^2 - \|s_{i+1}^{\mathrm e}\|_X^2}{C_M\,(\mu - \mu_{\min})}$$
and
$$\langle s_l^{\mathrm e} - s_p^{\mathrm e}, s_l^{\mathrm e}\rangle_X \le \frac{3}{(1-L)\, C_M\,(\mu - \mu_{\min})}\, \bigl(\|s_p^{\mathrm e}\|_X^2 - \|s_l^{\mathrm e}\|_X^2\bigr).$$
We plug the latter bound into (3.13) to obtain
$$\|x_l - x_p\|_X^2 \le \Bigl(\frac{6}{(1-L)\, C_M\,(\mu - \mu_{\min})} + 1\Bigr)\bigl(\|s_p^{\mathrm e}\|_X^2 - \|s_l^{\mathrm e}\|_X^2\bigr).$$
The monotonicity $\|s_{n+1}^{\mathrm e}\|_X < \|s_n^{\mathrm e}\|_X$ forces the convergence of $\|s_n^{\mathrm e}\|_X$ as $n \to \infty$. So $\|x_l - x_p\|_X$ can be made arbitrarily small by increasing $p$. Due to the uniqueness of $x^+$ in $B_r(x^+)$, the limit of $\{x_n\}_n$, which is in $B_r(x^+)$, has to be $x^+$.

Below we will prove a kind of stability of $x_n^\delta$, $n \le N(\delta)$, as $\delta \to 0$. To this end we introduce sets $\{X_n\}_{n\in\mathbb N_0}$ defined recursively from REGINN iterates for exact data $y$.

Definition 3.9. Set $X_0 := \{x_0\}$ and determine $X_{n+1}$ from $X_n$ in the following way: for any $\xi_n \in X_n$ compute the Newton step $s_n^{\mathrm N} = s_{n,m_n}$ as explained in (1.8) and (1.7) where, however, $A_n$ is replaced by $F'(\xi_n)$ and $b_n^\varepsilon$ by $y - F(\xi_n)$. Then $\xi_n + s_n^{\mathrm N}$ belongs to $X_{n+1}$. If
$$\|F'(\xi_n)\, s_{n,m_n-i} - (y - F(\xi_n))\|_Y = \mu_n\, \|y - F(\xi_n)\|_Y \tag{3.14}$$
for $i = 1, \dots, k_n < m_n$, then $\xi_n + s_{n,m_n-i}$, $i = 1, \dots, k_n$, are also elements of $X_{n+1}$. We call $\xi_n$ the predecessor of $\xi_n + s_{n,m_n-i}$, $i = 0, \dots, k_n$, and, in turn, call the latter successors of $\xi_n$.

Obviously, $x_n \in X_n$ and, generically, $X_n$ contains only this element. Assume that $X_j = \{x_j\}$ for $j = 0, \dots, n$ and that $s_{n,m_n-1}$ as well as $s_{n,m_n-2}$ satisfy (3.14) (where $\xi_n = x_n$, of course). Then $X_{n+1} = \{x_{n+1},\ x_n + s_{n,m_n-1},\ x_n + s_{n,m_n-2}\},$

and from now on all sets $X_j$ with $j \ge n+1$ will have at least three elements. By monotonicity (3.3) (see also (3.8)) we have that
$$\|x^+ - \xi_{n+1}\|_X < \|x^+ - \xi_n\|_X \quad \text{whenever } \xi_{n+1} \in X_{n+1} \text{ is a successor of } \xi_n \in X_n. \tag{3.15}$$

Remark 3.10. For the four iterative methods (Landweber, implicit iteration, steepest descent, and cg) the number $k_n$ in (3.14) cannot exceed 1, for the following reason: the residuals of these methods decrease strictly monotonically up to the iteration index $m_n$.

The need for introducing the sets $\{X_n\}_{n\in\mathbb N_0}$ becomes clear in the proof of the next lemma.

Lemma 3.11. Algorithm REGINN is stable in the following sense: for fixed $n \in \mathbb N_0$ with $n \le N(\delta)$ for all $\delta$ sufficiently small, the iterate $x_n^\delta$ converges to $X_n$ as $\delta \to 0$; that is, for any zero sequence $\{\delta_j\}_{j\in\mathbb N}$ the sequence $\{x_n^{\delta_j}\}_{j\in\mathbb N}$ splits into convergent subsequences, all of which converge to elements of $X_n$.

Proof. Before we start, let us emphasize the difference between $m_n$ and $m_n^\delta$. Both are defined by (1.7), however, the former with respect to exact data and the latter with respect to noisy data. We employ an inductive argument. For $n = 0$ the only element in $X_0$ is $x_0$, independent of $\delta \ge 0$; hence the statement is true for $n = 0$. Assume now that $x_n^\delta$ converges to $X_n$ as $\delta \to 0$ and that $n+1 \le N(\delta)$ for $\delta$ sufficiently small. Let $\{\delta_i\}$ be a subsequence with $\lim_{i\to\infty} x_n^{\delta_i} = \xi_n$ for one $\xi_n \in X_n$. By (3.12) and continuity the following limit holds true:
$$\lim_{i\to\infty} \|A_n^{\delta_i} s_{n,k}^{\delta_i} - b_n^{\varepsilon_i}\|_Y = \|F'(\xi_n)\, s_{n,k} - b_n\|_Y, \quad k \in \{0, \dots, m_n\},$$
where $b_n^{\varepsilon_i} = y^{\delta_i} - F(x_n^{\delta_i})$ and $b_n = y - F(\xi_n)$. Since $\|F'(\xi_n)\, s_{n,m_n} - b_n\|_Y < \mu_n\, \|b_n\|_Y$ we conclude, for $i$ sufficiently large, that
$$\|A_n^{\delta_i} s_{n,m_n}^{\delta_i} - b_n^{\varepsilon_i}\|_Y < \mu_n\, \|b_n^{\varepsilon_i}\|_Y,$$
yielding $m_n^{\delta_i} \le m_n$ for large enough $i$. In case $\mu_n\, \|b_n\|_Y < \|F'(\xi_n)\, s_{n,m_n-1} - b_n\|_Y$ we also have $\mu_n\, \|b_n^{\varepsilon_i}\|_Y < \|A_n^{\delta_i} s_{n,m_n-1}^{\delta_i} - b_n^{\varepsilon_i}\|_Y$ for large enough $i$. Thus $m_n - 1 < m_n^{\delta_i}$, and $m_n^{\delta_i} = m_n$ for sufficiently large $i$, i.e., $\lim_{i\to\infty} s^{\delta_i}_{n,m_n^{\delta_i}} = s_{n,m_n}$, which gives $x_n^{\delta_i} + s^{\delta_i}_{n,m_n^{\delta_i}} \to \xi_n + s_{n,m_n} \in X_{n+1}$ as $i \to \infty$. In case (3.14) applies, we have $\mu_n\, \|b_n\|_Y < \|F'(\xi_n)\, s_{n,m_n-k_n-1} - b_n\|_Y$. Arguing as above we obtain $m_n - k_n - 1 < m_n^{\delta_i}$, implying the inclusion $m_n - k_n \le m_n^{\delta_i} \le m_n$ for $i$ sufficiently large.

Accordingly, $\{m_n^{\delta_i}\}$ might have the $k_n + 1$ limit points $m_n - k_n, \dots, m_n$. In any case, all possible limit points of $x_n^{\delta_i} + s^{\delta_i}_{n,m_n^{\delta_i}}$ are in $X_{n+1}$ by construction of this set.

The sets $X_n$ converge uniformly to $x^+$.

Lemma 3.12. For any $\eta > 0$ there is an $M(\eta) \in \mathbb N_0$ such that $\|x^+ - \xi_n\|_X < \eta$ for all $n \ge M(\eta)$ and all $\xi_n \in X_n$.

Proof. Assume the contrary. Then there exist an $\eta > 0$ and a strictly increasing sequence $\{j_n\}_{n\in\mathbb N} \subset \mathbb N_0$ such that for any $j_n$ there is a $\zeta_{j_n} \in X_{j_n}$ with $\|x^+ - \zeta_{j_n}\|_X \ge \eta$. Without loss of generality we may assume that $j_n = n$. Indeed, if this is not true then there is one $j_n$ such that $j_{n-1} \ne j_n - 1$. However, the predecessor $\zeta_{j_n-1} \in X_{j_n-1}$ of $\zeta_{j_n}$ also satisfies $\|x^+ - \zeta_{j_n-1}\|_X > \|x^+ - \zeta_{j_n}\|_X \ge \eta$, see (3.15), and we can add $j_n - 1$ to $\{j_n\}_{n\in\mathbb N}$. So, for any $n \in \mathbb N_0$ we can assume the existence of $\zeta_n \in X_n$ with $\|x^+ - \zeta_n\|_X \ge \eta$. Without loss of generality we may moreover assume, for any $n$, that $\zeta_n$ is the predecessor of $\zeta_{n+1}$. Otherwise consider $\tilde\zeta_n$, the actual predecessor of $\zeta_{n+1}$. By (3.15), $\|x^+ - \tilde\zeta_n\|_X > \|x^+ - \zeta_{n+1}\|_X \ge \eta$, and we can replace $\zeta_n$ by $\tilde\zeta_n$ and even $\zeta_0, \dots, \zeta_{n-1}$ by the respective predecessors of $\tilde\zeta_n$. Thus the sequence $\{\zeta_n\}_n$ originates from a run of REGINN with a modified rule for picking $m_n$: in (1.7) replace the less-than sign by the less-than-or-equal sign. Since this modification of REGINN does not alter its convergence, Lemma 3.8 applies and $\{\zeta_n\}_n$ converges to $x^+$, contradicting $\|x^+ - \zeta_n\|_X \ge \eta$ for all $n$.

Now we are able to verify strong convergence.

Proof of Theorem 3.7: Let $\{\delta_j\}_{j\in\mathbb N}$ be a zero sequence. Assume first that $N(\delta_j) = n$ for all $j$. By Lemma 3.11 we may, without loss of generality, assume that $x_n^{\delta_j}$ converges to an element $\xi_n$ of $X_n$ as $j \to \infty$. In view of (3.1) we conclude that $F(\xi_n) = y$. Since $X_n \subset B_r(x^+)$ we have $\xi_n = x^+$ due to the uniqueness of $x^+$ in $B_r(x^+)$. Next, let $N(\delta_j)$ be bounded as $j \to \infty$. Then $\{N(\delta_j)\}_j$ splits into constant subsequences and we can argue as in the case of constant $N(\delta_j)$. Finally, we consider $N(\delta_j) \to \infty$ as $j \to \infty$. For any $\eta > 0$ there is an $n = n(\eta)$ such that for every $\xi_n \in X_n$ we have $\|x^+ - \xi_n\|_X \le \eta/2$ (Lemma 3.12). Further, according to Lemma 3.11 and due to the finiteness of $X_n$, there is a $J(\eta) \in \mathbb N$ such that for any $j \ge J(\eta)$ we can find a $\xi_n(j) \in X_n$ satisfying $\|\xi_n(j) - x_n^{\delta_j}\|_X < \eta/2$. Thus, for $j \ge J(\eta)$ so large that $N(\delta_j) > n(\eta)$, we obtain from monotonicity the bound
$$\|x^+ - x^{\delta_j}_{N(\delta_j)}\|_X < \|x^+ - x_n^{\delta_j}\|_X \le \|x^+ - \xi_n(j)\|_X + \|\xi_n(j) - x_n^{\delta_j}\|_X < \eta$$

for a suitably chosen $\xi_n(j) \in X_n$.

Remark 3.13. Without uniqueness of $x^+$ one can prove that $\{x_{N(\delta)}^\delta\}$ splits into convergent subsequences as $\delta \to 0$, each of which converges to a solution of $F(x) = y$. However, the arguments are more involved; compare [10].

Remark 3.14. Convergence of $\{x_{N(\delta)}^\delta\}$ with sub-optimal rates has been shown under the usual abstract smoothness (source) conditions and under restrictions on the nonlinearity stronger than (3.4); see [19] for some linear inner regularizations and [21] for the cg-method as inner regularization. The next challenge, of course, is to explore how far convergence rates results can be obtained in the general setting of this paper. To master this challenge fresh ideas are needed.

Appendix A. Proof of (2.1) and (2.2) for cg and steepest descent. Let $T \in \mathcal L(X,Y)$ and $0 \ne g \in Y$. The cg-method is an iteration for solving the normal equation $T^* T f = T^* g$. Starting with $f_0 \in X$ the cg-method produces a sequence $\{f_m\}_{m\in\mathbb N_0}$ with the following minimization property:
$$\|g - T f_m\|_Y = \min\bigl\{ \|g - T f\|_Y : f \in X,\ f - f_0 \in U_m \bigr\}, \quad m \ge 1,$$
where $U_m$ is the $m$-th Krylov space,
$$U_m := \mathrm{span}\bigl\{ T^* r_0,\ (T^* T)\, T^* r_0,\ (T^* T)^2\, T^* r_0, \dots, (T^* T)^{m-1}\, T^* r_0 \bigr\} \subset N(T)^\perp,$$
with $r_0 := g - T f_0$. Here, $N(T)^\perp$ denotes the orthogonal complement of the null space $N(T)$ of $T$. Since
$$\langle g - T f_m, T u\rangle_Y = 0 \quad \text{for all } u \in U_m, \tag{A.1}$$
see formula (5.19) in [20], we have that
$$\langle g - T f_m, T f_m\rangle_Y = 0 \quad \text{for all } m \in \mathbb N_0 \text{ provided } f_0 = 0. \tag{A.2}$$
Therefore,
$$\|T f_m\|_Y^2 \overset{\text{(A.2)}}{=} \langle g, T f_m\rangle_Y \le \|g\|_Y\, \|T f_m\|_Y,$$
that is, $\|T f_m\|_Y \le \|g\|_Y$, which is (2.3) with $\Theta = 1$. Further,
$$0 \le \|g - T f_m\|_Y^2 = \|g\|_Y^2 - 2\,\langle g, T f_m\rangle_Y + \|T f_m\|_Y^2 \overset{\text{(A.2)}}{=} \|g\|_Y^2 - \|T f_m\|_Y^2.$$
To establish (2.1) we validate that $T f_m \ne 0$ under $T^* g \ne 0$: then $\langle T f_m, g\rangle_Y = \|T f_m\|_Y^2 > 0$. Assume $T f_m = 0$. Then $g = r_m$ and, by the minimization property, $\|g\|_Y = \|r_m\|_Y \le \|r_k\|_Y \le \|g\|_Y$ for any $k = 0, \dots, m$; in view of (A.2) this forces $f_k = 0$, $k = 0, \dots, m$. But $f_1$ is a non-zero multiple of $T^* g$ and cannot be zero. Thus, (2.1) holds true for cg.
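The properties just established can be observed in a small Python implementation of cg applied to the normal equation (the CGLS/CGNR variant) started at $f_0 = 0$. The code and test data are assumptions of this sketch:

```python
import numpy as np

def cgne(T, g, m_max, tol=1e-14):
    """cg for the normal equation T^T T f = T^T g with f_0 = 0 (CGLS);
    returns the iterates f_1, f_2, ..."""
    f = np.zeros(T.shape[1])
    r = g.copy()                  # residual g - T f
    s = T.T @ r                   # gradient direction
    p = s.copy()
    s_norm2 = s @ s
    iterates = []
    for _ in range(m_max):
        if s_norm2 <= tol:        # ultimate termination: T^T r = 0
            break
        q = T @ p
        a = s_norm2 / (q @ q)
        f, r = f + a * p, r - a * q
        s = T.T @ r
        s_norm2_new = s @ s
        p = s + (s_norm2_new / s_norm2) * p
        s_norm2 = s_norm2_new
        iterates.append(f.copy())
    return iterates

rng = np.random.default_rng(2)
T, g = rng.normal(size=(25, 15)), rng.normal(size=25)
for f in cgne(T, g, 15):
    assert np.linalg.norm(T @ f) <= np.linalg.norm(g) + 1e-9  # (2.3), Theta = 1
    assert (T @ f) @ g > 0.0                                   # (2.1)
```

Both assertions reflect the orthogonality (A.2): $\|Tf_m\|^2 = \langle g, Tf_m\rangle \le \|g\|\,\|Tf_m\|$.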

It is a well-known property of the cg-iteration that
$$\lim_{m\to\infty} T f_m = P_{\overline{R(T)}}\, g$$
whenever $f_0 \in N(T)^\perp$; see, e.g., page 135 ff. in [20]. Hence, (2.2) holds for the cg-iteration.

Let us now consider steepest descent. Starting with $f_0 \in X$, steepest descent produces the sequence $\{f_m\}_{m\in\mathbb N_0}$ by
$$f_{m+1} = f_m + \lambda_m\, T^* r_m \quad \text{where } r_m = g - T f_m \text{ and } \lambda_m = \begin{cases} \dfrac{\|T^* r_m\|_X^2}{\|T T^* r_m\|_Y^2} & : T^* r_m \ne 0, \\[1ex] \|T\|^{-2} & : \text{otherwise.} \end{cases}$$
We first validate monotonicity of the residuals:
$$\|r_{m+1}\|_Y \le \|r_m\|_Y. \tag{A.3}$$
Define $f^{\mathrm L}_{m+1} := f_m + \omega\, T^* r_m$ with $0 < \omega < 2/\|T\|^2$ and observe $g - T f^{\mathrm L}_{m+1} = (I - \omega\, T T^*)\, r_m$. Due to the optimality of the step size $\lambda_m$ we have
$$\|r_{m+1}\|_Y \le \|g - T f^{\mathrm L}_{m+1}\|_Y = \|(I - \omega\, T T^*)\, r_m\|_Y \le \|r_m\|_Y,$$
whence (A.3) holds true. Let $f_0 = 0$. Then
$$\|T f_m\|_Y^2 - 2\,\langle T f_m, g\rangle_Y + \|g\|_Y^2 = \|r_m\|_Y^2 \overset{\text{(A.3)}}{\le} \|r_0\|_Y^2 = \|g\|_Y^2,$$
leading to
$$\|T f_m\|_Y^2 \le 2\,\langle T f_m, g\rangle_Y. \tag{A.4}$$
By the Cauchy-Schwarz inequality we deduce that $\|T f_m\|_Y \le 2\,\|g\|_Y$, which yields (2.3) with $\Theta \le 2$.
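A short Python sketch of the steepest descent iteration just defined lets one observe (A.3) and (A.4) numerically; the test data are assumptions of the sketch:

```python
import numpy as np

def steepest_descent(T, g, m_max):
    """f_{m+1} = f_m + lam_m T^T r_m with the optimal step size lam_m
    and f_0 = 0, as defined above."""
    f = np.zeros(T.shape[1])
    iterates = []
    for _ in range(m_max):
        r = g - T @ f
        s = T.T @ r
        Ts = T @ s
        if Ts @ Ts > 0:
            lam = (s @ s) / (Ts @ Ts)             # optimal step size
        else:
            lam = 1.0 / np.linalg.norm(T, 2) ** 2  # degenerate case
        f = f + lam * s
        iterates.append(f.copy())
    return iterates

rng = np.random.default_rng(3)
T, g = rng.normal(size=(25, 15)), rng.normal(size=25)
res_prev = np.linalg.norm(g)
for f in steepest_descent(T, g, 20):
    res = np.linalg.norm(g - T @ f)
    assert res <= res_prev + 1e-12                         # (A.3)
    assert np.linalg.norm(T @ f) ** 2 <= 2 * ((T @ f) @ g) + 1e-9  # (A.4)
    res_prev = res
```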

Remark A.1. We strongly suspect that $\Theta = 1$. Indeed,
$$\|T f_1\|_Y = \frac{\|T^* g\|_X^2}{\|T T^* g\|_Y^2}\, \|T T^* g\|_Y = \frac{\|T^* g\|_X^2}{\|T T^* g\|_Y} = \frac{\langle T T^* g, g\rangle_Y}{\|T T^* g\|_Y} \le \|g\|_Y.$$
Further, from (A.3) we obtain
$$\|T f_2\|_Y^2 \le 2\,\langle T^* g, \lambda_1 T^* r_1\rangle_X + \|T f_1\|_Y^2 = \|T f_1\|_Y^2,$$
since $\langle T^* g, T^* r_1\rangle_X = \langle T T^* g, g\rangle_Y - \lambda_0\, \|T T^* g\|_Y^2 = 0$. Thus, $\|T f_2\|_Y \le \|T f_1\|_Y \le \|g\|_Y$. However, we are not able to give a complete proof of our conjecture.

As we do not know an adequate reference for the convergence
$$\lim_{m\to\infty} T f_m = P_{\overline{R(T)}}\, g, \tag{A.5}$$
we give a short proof. First we replace $g$ by $P_{\overline{R(T)}}\, g$, which does not change the steepest descent method. The monotonicity (A.3) now reads
$$\|P_{\overline{R(T)}}\, g - T f_{m+1}\|_Y \le \|P_{\overline{R(T)}}\, g - T f_m\|_Y.$$
Thus,
$$\lim_{m\to\infty} \|P_{\overline{R(T)}}\, g - T f_m\|_Y = \varepsilon.$$
It remains to confirm that $\varepsilon = 0$. Assume the contrary: $\varepsilon > 0$. Then there exists an $f_\varepsilon \in X$ with
$$\|P_{\overline{R(T)}}\, g - T f_\varepsilon\|_Y < \frac{\varepsilon}{4}.$$
Straightforward calculations yield
$$\|f_{m+1} - f_\varepsilon\|_X^2 - \|f_m - f_\varepsilon\|_X^2 = 2\lambda_m\,\bigl\langle r_m, P_{\overline{R(T)}}\, g - T f_\varepsilon\bigr\rangle_Y - 2\lambda_m\, \|r_m\|_Y^2 + \lambda_m^2\, \|T^* r_m\|_X^2.$$
Let $T^* r_m \ne 0$. Then
$$\lambda_m\, \|T^* r_m\|_X^2 = \lambda_m\, \langle T T^* r_m, r_m\rangle_Y \le \lambda_m\, \|T T^* r_m\|_Y\, \|r_m\|_Y = \frac{\langle T T^* r_m, r_m\rangle_Y}{\|T T^* r_m\|_Y}\, \|r_m\|_Y \le \|r_m\|_Y^2.$$
The latter inequality remains true for $T^* r_m = 0$ and, hence, implies
$$\|f_{m+1} - f_\varepsilon\|_X^2 - \|f_m - f_\varepsilon\|_X^2 < 2\lambda_m\, \|r_m\|_Y\, \frac{\varepsilon}{4} - \lambda_m\, \|r_m\|_Y^2 = \lambda_m\, \|r_m\|_Y\Bigl(\frac{\varepsilon}{2} - \|r_m\|_Y\Bigr).$$
As $\|r_m\|_Y > \varepsilon$ for all $m$, we have
$$\|f_{m+1} - f_\varepsilon\|_X^2 - \|f_m - f_\varepsilon\|_X^2 < -\frac{\varepsilon}{2}\, \lambda_m\, \|r_m\|_Y.$$
Adding both sides of the above inequality from $m = 0$ to $m = k-1$ gives
$$\|f_k - f_\varepsilon\|_X^2 - \|f_\varepsilon\|_X^2 < -\frac{\varepsilon}{2} \sum_{m=0}^{k-1} \lambda_m\, \|r_m\|_Y.$$

Since $\lambda_m \ge \|T\|^{-2}$ we end up with
$$\sum_{m=0}^{k-1} \|r_m\|_Y < \frac{2\,\|T\|^2}{\varepsilon}\, \bigl(\|f_\varepsilon\|_X^2 - \|f_k - f_\varepsilon\|_X^2\bigr) \le \frac{2\,\|T\|^2}{\varepsilon}\, \|f_\varepsilon\|_X^2.$$
The upper bound does not depend on $k$, contradicting $\|r_m\|_Y > \varepsilon > 0$.

Property (2.1) follows from (A.4) as soon as we have verified that $T f_m \ne 0$, $m \ge 1$. Assume $T f_m = 0$. Then $r_m = g = r_0$ and thus $T f_{m+1} = T f_1$, which yields $T f_{m+i} = T f_i$ for all $i \in \mathbb N_0$. Especially, $T f_{km} = 0$ for all $k \in \mathbb N$, contradicting (A.5) under the requirement $T^* g \ne 0$.

Appendix B. Proof of (3.2) for Landweber, steepest descent, implicit iteration and cg. We profit from results of Hämarik and Tautenhahn [7]. Applied to the normal equation $T^* T f = T^* g$ (notation as in Appendix A), the four methods under consideration produce iterates $\{f_m\}_{m\in\mathbb N}$ by
$$f_{m+1} = f_m + T^* z_m, \quad f_0 = 0, \tag{B.1}$$
where

Landweber: $z_m = \omega\, r_m$, $\omega \in\ ]0, \|T\|^{-2}[$,

steepest descent: $z_m = \lambda_m\, r_m$,

implicit iteration: $z_m = (\alpha_m I + T T^*)^{-1}\, r_m$, and

cg: $z_m = w_{m+1}(T T^*)\, g$ for a polynomial $w_{m+1}$ of degree $m+1$, see Hanke [8, formula (2.7)].

Observe that (B.1) proves the first part of assumption (3.2). For any $\bar f \in X$ we have that
$$\|f_m - \bar f\|_X^2 - \|f_{m-1} - \bar f\|_X^2 = 2\,\langle g - T\bar f, z_{m-1}\rangle_Y - \langle r_{m-1} + r_m, z_{m-1}\rangle_Y, \tag{B.2}$$
see [7, formula (3.2)]. Let $\gamma = \|g - T\bar f\|_Y / \|g\|_Y$ denote the relative residual of $\bar f$.
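The unified update (B.1) can be sketched in a few lines of Python; the following illustration instantiates it for the implicit iteration (test data and parameter values are assumptions of the sketch):

```python
import numpy as np

# Sketch of (B.1), f_{m+1} = f_m + T^T z_m, with the implicit-iteration
# choice z_m = (alpha_m I + T T^T)^{-1} r_m.
rng = np.random.default_rng(4)
T, g = rng.normal(size=(25, 15)), rng.normal(size=25)
alphas = np.full(50, 0.5)            # {alpha_k} in [alpha_min, alpha_max]

f = np.zeros(15)                     # f_0 = 0
res = [np.linalg.norm(g)]
for alpha in alphas:
    r = g - T @ f
    z = np.linalg.solve(alpha * np.eye(25) + T @ T.T, r)
    f = f + T.T @ z                  # update (B.1)
    res.append(np.linalg.norm(g - T @ f))

# residual norms decrease monotonically towards ||P_{R(T)^perp} g||
assert all(b <= a + 1e-12 for a, b in zip(res, res[1:]))
```

The same loop with z = omega * r (Landweber) or the optimal step size (steepest descent) reproduces the other two linear-in-structure members of the family.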

B.1. Landweber and steepest descent. Plugging in $z_m = \beta_m\, r_m$ with $\beta_m \in \{\omega, \lambda_m\}$ we obtain from (B.2)
$$\|f_m - \bar f\|_X^2 - \|f_{m-1} - \bar f\|_X^2 = \beta_{m-1}\bigl( 2\,\langle g - T\bar f, r_{m-1}\rangle_Y - \|r_{m-1}\|_Y^2 - \langle r_m, r_{m-1}\rangle_Y \bigr).$$
By
$$\langle r_m, r_{m-1}\rangle_Y = \langle (I - \beta_{m-1}\, T T^*)\, r_{m-1}, r_{m-1}\rangle_Y > 0$$
we end up with
$$\|f_m - \bar f\|_X^2 - \|f_{m-1} - \bar f\|_X^2 < \beta_{m-1}\, \|r_{m-1}\|_Y\, \bigl(2\,\|g - T\bar f\|_Y - \|r_{m-1}\|_Y\bigr) = \|z_{m-1}\|_Y\, \|g\|_Y \Bigl(\Psi(\gamma) - \frac{\|r_{m-1}\|_Y}{\|g\|_Y}\Bigr)$$
where $\Psi(t) = 2t$. Thus, we have established (3.2) with $C_M = 1$ for Landweber as well as steepest descent.

B.2. Implicit iteration. Next we address the implicit iteration. Since $z_{m-1} = \alpha_{m-1}^{-1}\, r_m$ we deduce $\langle r_m, z_{m-1}\rangle_Y > 0$. Further, $\langle r_{m-1}, z_{m-1}\rangle_Y \ge \alpha_{\min}\, \|z_{m-1}\|_Y^2$. By (B.2),
$$\|f_m - \bar f\|_X^2 - \|f_{m-1} - \bar f\|_X^2 < \|z_{m-1}\|_Y\, \bigl(2\,\|g - T\bar f\|_Y - \alpha_{\min}\, \|z_{m-1}\|_Y\bigr).$$
The lower bound $\|z_{m-1}\|_Y \ge (\alpha_{\max} + \|T\|^2)^{-1}\, \|r_{m-1}\|_Y$ yields
$$\|f_m - \bar f\|_X^2 - \|f_{m-1} - \bar f\|_X^2 < \frac{\alpha_{\min}}{\alpha_{\max} + \|T\|^2}\, \|z_{m-1}\|_Y\, \|g\|_Y \Bigl(\Psi(\gamma) - \frac{\|r_{m-1}\|_Y}{\|g\|_Y}\Bigr)$$
with $\Psi(t) = 2\,\dfrac{\alpha_{\max} + \|T\|^2}{\alpha_{\min}}\, t$, and (3.2) with $C_M = \alpha_{\min}/(\alpha_{\max} + \|T\|^2)$ follows for the implicit iteration.

B.3. cg-method. We follow arguments by Hanke [10, Theorem 3.1]. Here (B.2) reads
$$\|f_m - \bar f\|_X^2 - \|f_{m-1} - \bar f\|_X^2 = 2\,\langle g - T\bar f, w_m(T T^*)\, g\rangle_Y - \langle r_{m-1} + r_m, w_m(T T^*)\, g\rangle_Y.$$
To proceed we rewrite $w_m$ as $w_m(t) = w_m(0) + t\, q(t)$ where $q \in \Pi_{m-1}$ and $w_m(0) > 0$. Hence, $w_m(T T^*)\, g = w_m(0)\, g + T u$ with $u = T^*\, q(T T^*)\, g \in U_{m-1}$. Applying (A.1) and (A.2) we obtain
$$\langle r_{m-1}, w_m(T T^*)\, g\rangle_Y = w_m(0)\,\langle g - T f_{m-1}, g\rangle_Y + \langle g - T f_{m-1}, T u\rangle_Y = w_m(0)\, \|r_{m-1}\|_Y^2.$$
Analogously,
$$\langle r_m, w_m(T T^*)\, g\rangle_Y = w_m(0)\, \|r_m\|_Y^2.$$
Thus,
$$\|f_m - \bar f\|_X^2 - \|f_{m-1} - \bar f\|_X^2 \le 2\,\|w_m(T T^*)\, g\|_Y\, \|g - T\bar f\|_Y - w_m(0)\, \|r_{m-1}\|_Y^2.$$
As $m$ is less than or equal to the ultimate stopping index of cg, see, e.g., [8, Sec. 2.1] or [20, Sec. 5.3], we have that $w_m(T T^*)\, g \ne 0$.

The normalized polynomial $w_m/w_m(0)$ is denoted $p_m^{[2]}$ by Hanke [8]. By his Theorem 3.2 we have
$$\frac{\|w_m(T T^*)\, g\|_Y}{w_m(0)} < \frac{\|w_0(T T^*)\, g\|_Y}{w_0(0)} = \|g\|_Y,$$
so that
$$\|f_m - \bar f\|_X^2 - \|f_{m-1} - \bar f\|_X^2 < \|w_m(T T^*)\, g\|_Y \Bigl(2\,\|g - T\bar f\|_Y - \frac{\|r_{m-1}\|_Y^2}{\|g\|_Y}\Bigr) = \|z_{m-1}\|_Y\, \|g\|_Y \Bigl(\Psi(\gamma)^2 - \frac{\|r_{m-1}\|_Y^2}{\|g\|_Y^2}\Bigr)$$
with $\Psi(t) = \sqrt{2t}$, and we have established (3.2) for the cg-method where $C_M = \Psi(\gamma) + \|r_{m-1}\|_Y/\|g\|_Y$.

REFERENCES

[1] A. B. Bakushinsky, The problem of the convergence of the iteratively regularized Gauss-Newton method, Comput. Math. Phys.

[2] A. B. Bakushinsky and M. Y. Kokurin, Iterative Methods for Approximate Solution of Inverse Problems, vol. 577 of Mathematics and Its Applications, Springer, Dordrecht, The Netherlands, 2004.

[3] H. Brakhage, On ill-posed problems and the method of conjugate gradients, in Inverse and Ill-Posed Problems, H. W. Engl and C. W. Groetsch, eds., vol. 4 of Notes and Reports in Mathematics in Science and Engineering, Academic Press, Boston, 1987.

[4] P. Deuflhard, Newton Methods for Nonlinear Problems, vol. 35 of Springer Series in Computational Mathematics, Springer, Berlin, 2004.

[5] P. Deuflhard, H. W. Engl, and O. Scherzer, A convergence analysis of iterative methods for the solution of nonlinear ill-posed problems under affinely invariant conditions, Inverse Problems, 14 (1998), pp. 1081-1106.

[6] H. W. Engl, M. Hanke, and A. Neubauer, Regularization of Inverse Problems, vol. 375 of Mathematics and Its Applications, Kluwer Academic Publishers, Dordrecht, 1996.

[7] U. Hämarik and U. Tautenhahn, On the monotone error rule for parameter choice in iterative and continuous regularization methods, BIT, 41 (2001), pp. 1029-1038.

[8] M. Hanke, Conjugate Gradient Type Methods for Ill-Posed Problems, vol. 327 of Pitman Research Notes in Mathematics, Longman Scientific & Technical, Harlow, UK, 1995.

[9] M. Hanke, A regularizing Levenberg-Marquardt scheme, with applications to inverse groundwater filtration problems, Inverse Problems, 13 (1997), pp. 79-95.

[10] M. Hanke, Regularizing properties of a truncated Newton-CG algorithm for nonlinear inverse problems, Numer. Funct. Anal. Optim., 18 (1997), pp. 971-993.

[11] M. Hanke, A. Neubauer, and O. Scherzer, A convergence analysis of the Landweber iteration for nonlinear ill-posed problems, Numer. Math., 72 (1995), pp. 21-37.

[12] Q. Jin and U. Tautenhahn, On the discrepancy principle for some Newton type methods for solving nonlinear ill-posed problems, Numer. Math.

[13] B. Kaltenbacher, A. Neubauer, and O. Scherzer, Iterative Regularization Methods for Nonlinear Ill-Posed Problems, Radon Series on Computational and Applied Mathematics, de Gruyter, Berlin, 2008.

[14] A. Kirsch, An Introduction to the Mathematical Theory of Inverse Problems, vol. 120 of Applied Mathematical Sciences, Springer-Verlag, New York, 1996.

[15] A. Lechleiter and A. Rieder, Newton regularizations for impedance tomography: a numerical study, Inverse Problems.

[16] A. Lechleiter and A. Rieder, Newton regularizations for impedance tomography: convergence by local injectivity, Inverse Problems.

[17] A. K. Louis, Inverse und schlecht gestellte Probleme, Studienbücher Mathematik, B. G. Teubner, Stuttgart, Germany, 1989.

[18] A. Rieder, On the regularization of nonlinear ill-posed problems via inexact Newton iterations, Inverse Problems, 15 (1999), pp. 309-327.

[19] A. Rieder, On convergence rates of inexact Newton regularizations, Numer. Math., 88 (2001), pp. 347-365.

[20] A. Rieder, Keine Probleme mit Inversen Problemen, Vieweg, Wiesbaden, Germany, 2003.

[21] A. Rieder, Inexact Newton regularization using conjugate gradients as inner iteration, SIAM J. Numer. Anal., 43 (2005), pp. 604-622.

[22] O. Scherzer, The use of Morozov's discrepancy principle for Tikhonov regularization for solving nonlinear ill-posed problems, Computing.

[23] O. Scherzer, A convergence analysis of a method of steepest descent and a two-step algorithm for nonlinear ill-posed problems, Numer. Funct. Anal. Optim.

[24] E. Zeidler, Nonlinear Functional Analysis and its Applications I: Fixed-Point Theorems, Springer-Verlag, New York, 1993.


More information

Convex Optimization Notes

Convex Optimization Notes Convex Optimization Notes Jonathan Siegel January 2017 1 Convex Analysis This section is devoted to the study of convex functions f : B R {+ } and convex sets U B, for B a Banach space. The case of B =

More information

Local semiconvexity of Kantorovich potentials on non-compact manifolds

Local semiconvexity of Kantorovich potentials on non-compact manifolds Local semiconvexity of Kantorovich potentials on non-compact manifolds Alessio Figalli, Nicola Gigli Abstract We prove that any Kantorovich potential for the cost function c = d / on a Riemannian manifold

More information

Iterative Regularization Methods for Inverse Problems: Lecture 3

Iterative Regularization Methods for Inverse Problems: Lecture 3 Iterative Regularization Methods for Inverse Problems: Lecture 3 Thorsten Hohage Institut für Numerische und Angewandte Mathematik Georg-August Universität Göttingen Madrid, April 11, 2011 Outline 1 regularization

More information

A range condition for polyconvex variational regularization

A range condition for polyconvex variational regularization www.oeaw.ac.at A range condition for polyconvex variational regularization C. Kirisits, O. Scherzer RICAM-Report 2018-04 www.ricam.oeaw.ac.at A range condition for polyconvex variational regularization

More information

ON ILL-POSEDNESS OF NONPARAMETRIC INSTRUMENTAL VARIABLE REGRESSION WITH CONVEXITY CONSTRAINTS

ON ILL-POSEDNESS OF NONPARAMETRIC INSTRUMENTAL VARIABLE REGRESSION WITH CONVEXITY CONSTRAINTS ON ILL-POSEDNESS OF NONPARAMETRIC INSTRUMENTAL VARIABLE REGRESSION WITH CONVEXITY CONSTRAINTS Olivier Scaillet a * This draft: July 2016. Abstract This note shows that adding monotonicity or convexity

More information

This article was published in an Elsevier journal. The attached copy is furnished to the author for non-commercial research and education use, including for instruction at the author s institution, sharing

More information

On Friedrichs inequality, Helmholtz decomposition, vector potentials, and the div-curl lemma. Ben Schweizer 1

On Friedrichs inequality, Helmholtz decomposition, vector potentials, and the div-curl lemma. Ben Schweizer 1 On Friedrichs inequality, Helmholtz decomposition, vector potentials, and the div-curl lemma Ben Schweizer 1 January 16, 2017 Abstract: We study connections between four different types of results that

More information

SPECTRAL PROPERTIES OF THE LAPLACIAN ON BOUNDED DOMAINS

SPECTRAL PROPERTIES OF THE LAPLACIAN ON BOUNDED DOMAINS SPECTRAL PROPERTIES OF THE LAPLACIAN ON BOUNDED DOMAINS TSOGTGEREL GANTUMUR Abstract. After establishing discrete spectra for a large class of elliptic operators, we present some fundamental spectral properties

More information

AW -Convergence and Well-Posedness of Non Convex Functions

AW -Convergence and Well-Posedness of Non Convex Functions Journal of Convex Analysis Volume 10 (2003), No. 2, 351 364 AW -Convergence Well-Posedness of Non Convex Functions Silvia Villa DIMA, Università di Genova, Via Dodecaneso 35, 16146 Genova, Italy villa@dima.unige.it

More information

Newton Method with Adaptive Step-Size for Under-Determined Systems of Equations

Newton Method with Adaptive Step-Size for Under-Determined Systems of Equations Newton Method with Adaptive Step-Size for Under-Determined Systems of Equations Boris T. Polyak Andrey A. Tremba V.A. Trapeznikov Institute of Control Sciences RAS, Moscow, Russia Profsoyuznaya, 65, 117997

More information

Parameter Identification

Parameter Identification Lecture Notes Parameter Identification Winter School Inverse Problems 25 Martin Burger 1 Contents 1 Introduction 3 2 Examples of Parameter Identification Problems 5 2.1 Differentiation of Data...............................

More information

Viscosity Iterative Approximating the Common Fixed Points of Non-expansive Semigroups in Banach Spaces

Viscosity Iterative Approximating the Common Fixed Points of Non-expansive Semigroups in Banach Spaces Viscosity Iterative Approximating the Common Fixed Points of Non-expansive Semigroups in Banach Spaces YUAN-HENG WANG Zhejiang Normal University Department of Mathematics Yingbing Road 688, 321004 Jinhua

More information

NONSMOOTH VARIANTS OF POWELL S BFGS CONVERGENCE THEOREM

NONSMOOTH VARIANTS OF POWELL S BFGS CONVERGENCE THEOREM NONSMOOTH VARIANTS OF POWELL S BFGS CONVERGENCE THEOREM JIAYI GUO AND A.S. LEWIS Abstract. The popular BFGS quasi-newton minimization algorithm under reasonable conditions converges globally on smooth

More information

A full-newton step infeasible interior-point algorithm for linear programming based on a kernel function

A full-newton step infeasible interior-point algorithm for linear programming based on a kernel function A full-newton step infeasible interior-point algorithm for linear programming based on a kernel function Zhongyi Liu, Wenyu Sun Abstract This paper proposes an infeasible interior-point algorithm with

More information

An Infeasible Interior-Point Algorithm with full-newton Step for Linear Optimization

An Infeasible Interior-Point Algorithm with full-newton Step for Linear Optimization An Infeasible Interior-Point Algorithm with full-newton Step for Linear Optimization H. Mansouri M. Zangiabadi Y. Bai C. Roos Department of Mathematical Science, Shahrekord University, P.O. Box 115, Shahrekord,

More information

Functional Analysis F3/F4/NVP (2005) Homework assignment 3

Functional Analysis F3/F4/NVP (2005) Homework assignment 3 Functional Analysis F3/F4/NVP (005 Homework assignment 3 All students should solve the following problems: 1. Section 4.8: Problem 8.. Section 4.9: Problem 4. 3. Let T : l l be the operator defined by

More information

Non-degeneracy of perturbed solutions of semilinear partial differential equations

Non-degeneracy of perturbed solutions of semilinear partial differential equations Non-degeneracy of perturbed solutions of semilinear partial differential equations Robert Magnus, Olivier Moschetta Abstract The equation u + FV εx, u = 0 is considered in R n. For small ε > 0 it is shown

More information

On the simplest expression of the perturbed Moore Penrose metric generalized inverse

On the simplest expression of the perturbed Moore Penrose metric generalized inverse Annals of the University of Bucharest (mathematical series) 4 (LXII) (2013), 433 446 On the simplest expression of the perturbed Moore Penrose metric generalized inverse Jianbing Cao and Yifeng Xue Communicated

More information

A convergence result for an Outer Approximation Scheme

A convergence result for an Outer Approximation Scheme A convergence result for an Outer Approximation Scheme R. S. Burachik Engenharia de Sistemas e Computação, COPPE-UFRJ, CP 68511, Rio de Janeiro, RJ, CEP 21941-972, Brazil regi@cos.ufrj.br J. O. Lopes Departamento

More information

SYMMETRY RESULTS FOR PERTURBED PROBLEMS AND RELATED QUESTIONS. Massimo Grosi Filomena Pacella S. L. Yadava. 1. Introduction

SYMMETRY RESULTS FOR PERTURBED PROBLEMS AND RELATED QUESTIONS. Massimo Grosi Filomena Pacella S. L. Yadava. 1. Introduction Topological Methods in Nonlinear Analysis Journal of the Juliusz Schauder Center Volume 21, 2003, 211 226 SYMMETRY RESULTS FOR PERTURBED PROBLEMS AND RELATED QUESTIONS Massimo Grosi Filomena Pacella S.

More information

Marlis Hochbruck 1, Michael Hönig 1 and Alexander Ostermann 2

Marlis Hochbruck 1, Michael Hönig 1 and Alexander Ostermann 2 ESAIM: M2AN 43 (29) 79 72 DOI:.5/m2an/292 ESAIM: Mathematical Modelling and Numerical Analysis www.esaim-m2an.org REGULARIZATION OF NONLINEAR ILL-POSED PROBLEMS BY EXPONENTIAL INTEGRATORS Marlis Hochbruck,

More information

FUNCTION BASES FOR TOPOLOGICAL VECTOR SPACES. Yılmaz Yılmaz

FUNCTION BASES FOR TOPOLOGICAL VECTOR SPACES. Yılmaz Yılmaz Topological Methods in Nonlinear Analysis Journal of the Juliusz Schauder Center Volume 33, 2009, 335 353 FUNCTION BASES FOR TOPOLOGICAL VECTOR SPACES Yılmaz Yılmaz Abstract. Our main interest in this

More information

The small ball property in Banach spaces (quantitative results)

The small ball property in Banach spaces (quantitative results) The small ball property in Banach spaces (quantitative results) Ehrhard Behrends Abstract A metric space (M, d) is said to have the small ball property (sbp) if for every ε 0 > 0 there exists a sequence

More information

Applied Analysis (APPM 5440): Final exam 1:30pm 4:00pm, Dec. 14, Closed books.

Applied Analysis (APPM 5440): Final exam 1:30pm 4:00pm, Dec. 14, Closed books. Applied Analysis APPM 44: Final exam 1:3pm 4:pm, Dec. 14, 29. Closed books. Problem 1: 2p Set I = [, 1]. Prove that there is a continuous function u on I such that 1 ux 1 x sin ut 2 dt = cosx, x I. Define

More information

SOME STABILITY RESULTS FOR THE SEMI-AFFINE VARIATIONAL INEQUALITY PROBLEM. 1. Introduction

SOME STABILITY RESULTS FOR THE SEMI-AFFINE VARIATIONAL INEQUALITY PROBLEM. 1. Introduction ACTA MATHEMATICA VIETNAMICA 271 Volume 29, Number 3, 2004, pp. 271-280 SOME STABILITY RESULTS FOR THE SEMI-AFFINE VARIATIONAL INEQUALITY PROBLEM NGUYEN NANG TAM Abstract. This paper establishes two theorems

More information

A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE

A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE Journal of Applied Analysis Vol. 6, No. 1 (2000), pp. 139 148 A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE A. W. A. TAHA Received

More information

A NEW ITERATIVE METHOD FOR THE SPLIT COMMON FIXED POINT PROBLEM IN HILBERT SPACES. Fenghui Wang

A NEW ITERATIVE METHOD FOR THE SPLIT COMMON FIXED POINT PROBLEM IN HILBERT SPACES. Fenghui Wang A NEW ITERATIVE METHOD FOR THE SPLIT COMMON FIXED POINT PROBLEM IN HILBERT SPACES Fenghui Wang Department of Mathematics, Luoyang Normal University, Luoyang 470, P.R. China E-mail: wfenghui@63.com ABSTRACT.

More information

On nonexpansive and accretive operators in Banach spaces

On nonexpansive and accretive operators in Banach spaces Available online at www.isr-publications.com/jnsa J. Nonlinear Sci. Appl., 10 (2017), 3437 3446 Research Article Journal Homepage: www.tjnsa.com - www.isr-publications.com/jnsa On nonexpansive and accretive

More information

Rolle s Theorem for Polynomials of Degree Four in a Hilbert Space 1

Rolle s Theorem for Polynomials of Degree Four in a Hilbert Space 1 Journal of Mathematical Analysis and Applications 265, 322 33 (2002) doi:0.006/jmaa.200.7708, available online at http://www.idealibrary.com on Rolle s Theorem for Polynomials of Degree Four in a Hilbert

More information

I teach myself... Hilbert spaces

I teach myself... Hilbert spaces I teach myself... Hilbert spaces by F.J.Sayas, for MATH 806 November 4, 2015 This document will be growing with the semester. Every in red is for you to justify. Even if we start with the basic definition

More information

Continuous Sets and Non-Attaining Functionals in Reflexive Banach Spaces

Continuous Sets and Non-Attaining Functionals in Reflexive Banach Spaces Laboratoire d Arithmétique, Calcul formel et d Optimisation UMR CNRS 6090 Continuous Sets and Non-Attaining Functionals in Reflexive Banach Spaces Emil Ernst Michel Théra Rapport de recherche n 2004-04

More information

On Total Convexity, Bregman Projections and Stability in Banach Spaces

On Total Convexity, Bregman Projections and Stability in Banach Spaces Journal of Convex Analysis Volume 11 (2004), No. 1, 1 16 On Total Convexity, Bregman Projections and Stability in Banach Spaces Elena Resmerita Department of Mathematics, University of Haifa, 31905 Haifa,

More information

Approximate source conditions in Tikhonov-Phillips regularization and consequences for inverse problems with multiplication operators

Approximate source conditions in Tikhonov-Phillips regularization and consequences for inverse problems with multiplication operators Approximate source conditions in Tikhonov-Phillips regularization and consequences for inverse problems with multiplication operators Bernd Hofmann Abstract The object of this paper is threefold. First,

More information

An Accelerated Hybrid Proximal Extragradient Method for Convex Optimization and its Implications to Second-Order Methods

An Accelerated Hybrid Proximal Extragradient Method for Convex Optimization and its Implications to Second-Order Methods An Accelerated Hybrid Proximal Extragradient Method for Convex Optimization and its Implications to Second-Order Methods Renato D.C. Monteiro B. F. Svaiter May 10, 011 Revised: May 4, 01) Abstract This

More information

Due Giorni di Algebra Lineare Numerica (2GALN) Febbraio 2016, Como. Iterative regularization in variable exponent Lebesgue spaces

Due Giorni di Algebra Lineare Numerica (2GALN) Febbraio 2016, Como. Iterative regularization in variable exponent Lebesgue spaces Due Giorni di Algebra Lineare Numerica (2GALN) 16 17 Febbraio 2016, Como Iterative regularization in variable exponent Lebesgue spaces Claudio Estatico 1 Joint work with: Brigida Bonino 1, Fabio Di Benedetto

More information

Some lecture notes for Math 6050E: PDEs, Fall 2016

Some lecture notes for Math 6050E: PDEs, Fall 2016 Some lecture notes for Math 65E: PDEs, Fall 216 Tianling Jin December 1, 216 1 Variational methods We discuss an example of the use of variational methods in obtaining existence of solutions. Theorem 1.1.

More information

Non-degeneracy of perturbed solutions of semilinear partial differential equations

Non-degeneracy of perturbed solutions of semilinear partial differential equations Non-degeneracy of perturbed solutions of semilinear partial differential equations Robert Magnus, Olivier Moschetta Abstract The equation u + F(V (εx, u = 0 is considered in R n. For small ε > 0 it is

More information

SYMMETRY OF POSITIVE SOLUTIONS OF SOME NONLINEAR EQUATIONS. M. Grossi S. Kesavan F. Pacella M. Ramaswamy. 1. Introduction

SYMMETRY OF POSITIVE SOLUTIONS OF SOME NONLINEAR EQUATIONS. M. Grossi S. Kesavan F. Pacella M. Ramaswamy. 1. Introduction Topological Methods in Nonlinear Analysis Journal of the Juliusz Schauder Center Volume 12, 1998, 47 59 SYMMETRY OF POSITIVE SOLUTIONS OF SOME NONLINEAR EQUATIONS M. Grossi S. Kesavan F. Pacella M. Ramaswamy

More information

1 Directional Derivatives and Differentiability

1 Directional Derivatives and Differentiability Wednesday, January 18, 2012 1 Directional Derivatives and Differentiability Let E R N, let f : E R and let x 0 E. Given a direction v R N, let L be the line through x 0 in the direction v, that is, L :=

More information

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability...

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability... Functional Analysis Franck Sueur 2018-2019 Contents 1 Metric spaces 1 1.1 Definitions........................................ 1 1.2 Completeness...................................... 3 1.3 Compactness......................................

More information

PERTURBATION THEORY FOR NONLINEAR DIRICHLET PROBLEMS

PERTURBATION THEORY FOR NONLINEAR DIRICHLET PROBLEMS Annales Academiæ Scientiarum Fennicæ Mathematica Volumen 28, 2003, 207 222 PERTURBATION THEORY FOR NONLINEAR DIRICHLET PROBLEMS Fumi-Yuki Maeda and Takayori Ono Hiroshima Institute of Technology, Miyake,

More information

An improved convergence theorem for the Newton method under relaxed continuity assumptions

An improved convergence theorem for the Newton method under relaxed continuity assumptions An improved convergence theorem for the Newton method under relaxed continuity assumptions Andrei Dubin ITEP, 117218, BCheremushinsaya 25, Moscow, Russia Abstract In the framewor of the majorization technique,

More information

Semi-discrete equations in Banach spaces: The approximate inverse approach

Semi-discrete equations in Banach spaces: The approximate inverse approach Semi-discrete equations in Banach spaces: The approximate inverse approach Andreas Rieder Thomas Schuster Saarbrücken Frank Schöpfer Oldenburg FAKULTÄT FÜR MATHEMATIK INSTITUT FÜR ANGEWANDTE UND NUMERISCHE

More information

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product Chapter 4 Hilbert Spaces 4.1 Inner Product Spaces Inner Product Space. A complex vector space E is called an inner product space (or a pre-hilbert space, or a unitary space) if there is a mapping (, )

More information

A PROJECTED HESSIAN GAUSS-NEWTON ALGORITHM FOR SOLVING SYSTEMS OF NONLINEAR EQUATIONS AND INEQUALITIES

A PROJECTED HESSIAN GAUSS-NEWTON ALGORITHM FOR SOLVING SYSTEMS OF NONLINEAR EQUATIONS AND INEQUALITIES IJMMS 25:6 2001) 397 409 PII. S0161171201002290 http://ijmms.hindawi.com Hindawi Publishing Corp. A PROJECTED HESSIAN GAUSS-NEWTON ALGORITHM FOR SOLVING SYSTEMS OF NONLINEAR EQUATIONS AND INEQUALITIES

More information

SOME PROPERTIES ON THE CLOSED SUBSETS IN BANACH SPACES

SOME PROPERTIES ON THE CLOSED SUBSETS IN BANACH SPACES ARCHIVUM MATHEMATICUM (BRNO) Tomus 42 (2006), 167 174 SOME PROPERTIES ON THE CLOSED SUBSETS IN BANACH SPACES ABDELHAKIM MAADEN AND ABDELKADER STOUTI Abstract. It is shown that under natural assumptions,

More information

Fourier Series. 1. Review of Linear Algebra

Fourier Series. 1. Review of Linear Algebra Fourier Series In this section we give a short introduction to Fourier Analysis. If you are interested in Fourier analysis and would like to know more detail, I highly recommend the following book: Fourier

More information

Strong Convergence Theorem by a Hybrid Extragradient-like Approximation Method for Variational Inequalities and Fixed Point Problems

Strong Convergence Theorem by a Hybrid Extragradient-like Approximation Method for Variational Inequalities and Fixed Point Problems Strong Convergence Theorem by a Hybrid Extragradient-like Approximation Method for Variational Inequalities and Fixed Point Problems Lu-Chuan Ceng 1, Nicolas Hadjisavvas 2 and Ngai-Ching Wong 3 Abstract.

More information

Introduction and Preliminaries

Introduction and Preliminaries Chapter 1 Introduction and Preliminaries This chapter serves two purposes. The first purpose is to prepare the readers for the more systematic development in later chapters of methods of real analysis

More information

Chapter 3: Baire category and open mapping theorems

Chapter 3: Baire category and open mapping theorems MA3421 2016 17 Chapter 3: Baire category and open mapping theorems A number of the major results rely on completeness via the Baire category theorem. 3.1 The Baire category theorem 3.1.1 Definition. A

More information

Lecture 4 Lebesgue spaces and inequalities

Lecture 4 Lebesgue spaces and inequalities Lecture 4: Lebesgue spaces and inequalities 1 of 10 Course: Theory of Probability I Term: Fall 2013 Instructor: Gordan Zitkovic Lecture 4 Lebesgue spaces and inequalities Lebesgue spaces We have seen how

More information

Centre d Economie de la Sorbonne UMR 8174

Centre d Economie de la Sorbonne UMR 8174 Centre d Economie de la Sorbonne UMR 8174 On alternative theorems and necessary conditions for efficiency Do Van LUU Manh Hung NGUYEN 2006.19 Maison des Sciences Économiques, 106-112 boulevard de L'Hôpital,

More information

David Hilbert was old and partly deaf in the nineteen thirties. Yet being a diligent

David Hilbert was old and partly deaf in the nineteen thirties. Yet being a diligent Chapter 5 ddddd dddddd dddddddd ddddddd dddddddd ddddddd Hilbert Space The Euclidean norm is special among all norms defined in R n for being induced by the Euclidean inner product (the dot product). A

More information

Modified Landweber iteration in Banach spaces convergence and convergence rates

Modified Landweber iteration in Banach spaces convergence and convergence rates Modified Landweber iteration in Banach spaces convergence and convergence rates Torsten Hein, Kamil S. Kazimierski August 4, 009 Abstract Abstract. We introduce and discuss an iterative method of relaxed

More information

Part 3: Trust-region methods for unconstrained optimization. Nick Gould (RAL)

Part 3: Trust-region methods for unconstrained optimization. Nick Gould (RAL) Part 3: Trust-region methods for unconstrained optimization Nick Gould (RAL) minimize x IR n f(x) MSc course on nonlinear optimization UNCONSTRAINED MINIMIZATION minimize x IR n f(x) where the objective

More information

C 1 regularity of solutions of the Monge-Ampère equation for optimal transport in dimension two

C 1 regularity of solutions of the Monge-Ampère equation for optimal transport in dimension two C 1 regularity of solutions of the Monge-Ampère equation for optimal transport in dimension two Alessio Figalli, Grégoire Loeper Abstract We prove C 1 regularity of c-convex weak Alexandrov solutions of

More information

Optimization and Optimal Control in Banach Spaces

Optimization and Optimal Control in Banach Spaces Optimization and Optimal Control in Banach Spaces Bernhard Schmitzer October 19, 2017 1 Convex non-smooth optimization with proximal operators Remark 1.1 (Motivation). Convex optimization: easier to solve,

More information