Modified Landweber iteration in Banach spaces: convergence and convergence rates


Torsten Hein (Technische Universität Chemnitz, Fakultät für Mathematik, 09107 Chemnitz, Deutschland; Torsten.Hein@mathematik.tu-chemnitz.de), Kamil S. Kazimierski (Center for Industrial Mathematics ZeTeM, in collaboration with SFB 747, University of Bremen, 28334 Bremen, Germany; kamilk@math.uni-bremen.de)

August 4, 2009

Abstract. We introduce and discuss an iterative method of relaxed Landweber type for the regularization of the solution of the operator equation $F(x) = y$, where $X$ and $Y$ are Banach spaces and $F$ is a non-linear, continuous operator mapping between them. We assume that the Banach space $X$ is smooth and convex of power type. We will show that under so-called approximate source conditions convergence rates may be achieved. We close our discussion with the presentation of a numerical example.

Key words: iterative regularization, Landweber iteration, Banach spaces, smooth of power type, convex of power type, Bregman distance

AMS subject classifications: 47A52, 65F10, 46E15, 46B20

Running title: Regularized Landweber in Banach spaces

1 Introduction

Let $X$ and $Y$ both be Banach spaces. We consider the non-linear operator equation
$$F(x) = y, \qquad x \in D(F), \tag{1}$$
where $F : D(F) \subseteq X \to Y$ describes a continuous non-linear mapping from its domain $D(F)$ into the space $Y$. In many applications (1) turns out to be ill-posed. In particular, a solution $x \in D(F)$ satisfying $F(x) = y$ need not exist nor be unique, and if it exists it does not necessarily depend continuously on the data $y$. This means that small perturbations of the data may cause large perturbations of the solution. Usually $F$ models some measurement system. Therefore it can be assumed that only a noise-cluttered version $y^\delta$ of $y$ is available, where only the estimate $\|y^\delta - y\| \le \delta$ is known.

This raises the need for the application of a regularization method. In the case that both spaces $X$ and $Y$ are Hilbert spaces, many regularization methods have been established [11, 25, 3]; see also the references therein. The most prominent classes of regularization strategies are regularization by minimizers of functionals, i.e. Tikhonov regularization, and iterative regularization methods, like the Landweber iteration. Motivated by their successful application in image restoration and sparsity reconstruction, the development and investigation of regularization methods for inverse problems in Banach spaces have become a field of modern research, cf. e.g. [8, 6, 20]. In particular, Tikhonov regularization in Banach spaces has been established theoretically in e.g. [20], whereas its numerical treatment for linear problems was considered in e.g. [8, 5]. As described in [20] and the references therein, many convergence rate results for Tikhonov regularization are available for linear and non-linear operators. We also refer to [15] for some newer results.

For the case that $F$ in (1) is replaced by a linear operator $A$ with adjoint $A^*$, an iterative regularization method given via
$$x_{n+1}^* := x_n^* - \mu_n\,A^* J_p(Ax_n - y^\delta), \qquad x_{n+1} := J_q^*(x_{n+1}^*) \tag{2}$$
was recently introduced in [27]. Strong convergence of the iterates to the minimum-norm solution of (1) was proven under mild assumptions on the space $X$. However, from the theoretical point of view the lack of quantitative results, i.e. convergence rates, is a major drawback.

In this paper we introduce an alternative iterative method for the regularization of linear and non-linear operator equations in Banach spaces. We assume $F$ to be Gâteaux-differentiable in $D(F)$ with derivative $F'(x)$. The method then reads as
$$x_{n+1}^* := x_n^* - \mu\,F'(x_n)^* J_p\big(F(x_n) - y^\delta\big) - \beta_n x_n^*, \qquad x_{n+1} := J_q^*(x_{n+1}^*), \tag{3}$$
where $J_p : Y \to Y^*$ and $J_q^* : X^* \to X$ denote duality mappings of the spaces under consideration. In Hilbert spaces this method is also known as the modified Landweber iteration, which was introduced and investigated in [26]. However, as opposed to the approach there, we suggest here an a-posteriori choice of the parameter $\beta_n$. We will show that for properly chosen parameters $\mu$ and $\{\beta_n\}$, and under assumptions on the space $X$ similar to those of [27], not only strong convergence but also convergence rates can be obtained. We point out that, to the best of our knowledge, this is the first time that convergence rates for Landweber-like algorithms could be proven in Banach spaces. Moreover, we can give an extension of the convergence rates results of [26] in Hilbert spaces. The iteration of [27] can be understood as a steepest descent approach along the functional $\frac{1}{p}\|Ax - y^\delta\|^p$, whereas the new iteration can be understood as steepest descent along the Tikhonov functional
$$\frac{1}{p}\,\|F(x) - y^\delta\|^p + \frac{\beta_n}{\mu\,p}\,\|x\|^p.$$

Therefore one can interpret our iteration as a regularized version of the iteration of [27] for non-linear problems.

Since we also aim to prove convergence rates, we need source conditions describing the smoothness of a minimum-norm solution $x^\dagger \in D(F)$ of equation (1). Here we deal with so-called approximate source conditions of the form
$$J_p(x^\dagger) = F'(x^\dagger)^*\,\omega + \upsilon, \qquad \|\omega\| \le S, \quad \|\upsilon\| \le d(S),$$
for $S > 0$, where $d : [0,\infty) \to [0,\infty)$ denotes the non-negative, decreasing distance function of the element $J_p(x^\dagger)$ with respect to the set $R(F'(x^\dagger)^*)$. The idea of using the decay rate of these distance functions for proving convergence rates was developed in [19, 18] in Hilbert spaces. Since this method does not depend on Hilbert space techniques such as spectral calculus, it can also be applied in Banach spaces, see also [16].

Consequently, the paper is organized as follows. In the next section we introduce basic notions of the geometry of Banach spaces. In Section 3 we recapitulate the idea and definitions of approximate source conditions and distance functions and derive a corresponding variational inequality, which will be applied in the proofs of our main convergence rates result. In Section 4 the iteration (3) is introduced more precisely; as the main result we will show that for this iteration, together with an approximate source condition, convergence rate results can be obtained. In Section 5 we show that a slightly altered version of the iteration can be used if no approximate source conditions are available. We close our discussion with a theoretical and numerical analysis of a non-linear example which arises in option pricing theory.

2 Preliminaries

Throughout the paper let $1 < p < \infty$ and $1 < q < \infty$ be conjugate exponents, i.e. $\frac{1}{p} + \frac{1}{q} = 1$. Let $X$ and $Y$ be Banach spaces with duals $X^*$ and $Y^*$. Their norms will be denoted by $\|\cdot\|$; we omit indices indicating the space, since it will be clear from the context which one is meant. For $x \in X$ and $x^* \in X^*$ we denote by $\langle x, x^*\rangle$ or $\langle x^*, x\rangle$ the duality pairing, i.e. $\langle x, x^*\rangle = \langle x^*, x\rangle = x^*(x)$. For $p > 1$ the, in general set-valued, mapping $J_p : X \to 2^{X^*}$ defined by
$$J_p(x) = \big\{x^* \in X^* \,:\, \langle x, x^*\rangle = \|x\|\,\|x^*\|,\ \|x^*\| = \|x\|^{p-1}\big\}$$
is called the duality mapping of $X$ with gauge function $t \mapsto t^{p-1}$. We remark that $J_p$ is in general non-linear. One can show [7, Th. I.4.8] that $J_2$ is linear if and only if $X$ is a Hilbert space; in the case of a Hilbert space, $J_2$ is just the identity mapping. The space $X$ is called smooth if $J_p(x)$ is single-valued for all $x \in X$. In that case, for the sake of convenience, we identify $J_p$ with its single-valued selection. In the convergence analysis we deal with Bregman distances. The definition is stated below.

Definition 2.1. Let $X$ be a smooth Banach space and $p > 1$. The Bregman distance $\Delta_p(\tilde{x}, x)$ of the convex functional $x \mapsto \frac{1}{p}\|x\|^p$ at $x \in X$ is defined as
$$\Delta_p(\tilde{x}, x) := \tfrac{1}{p}\,\|\tilde{x}\|^p - \tfrac{1}{p}\,\|x\|^p - \langle J_p(x), \tilde{x} - x\rangle, \qquad \tilde{x} \in X,$$
where $J_p : X \to X^*$ denotes the duality mapping of $X$ with gauge function $t \mapsto t^{p-1}$.

Due to the definition of duality mappings we can also rewrite the definition as
$$\Delta_p(\tilde{x}, x) = \tfrac{1}{p}\,\|\tilde{x}\|^p - \tfrac{1}{p}\,\|x\|^p - \langle J_p(x), \tilde{x}\rangle + \|x\|^p = \tfrac{1}{p}\,\|\tilde{x}\|^p + \tfrac{1}{q}\,\|x\|^p - \langle J_p(x), \tilde{x}\rangle.$$
By the convexity of the functional $x \mapsto \frac{1}{p}\|x\|^p$ we have $\Delta_p(\tilde{x}, x) \ge 0$ for all $x, \tilde{x} \in X$. One can further show that the Bregman distance is continuous in both arguments [7].

The presented convergence rate analysis is based essentially on the following assumptions.

(A1) There exists a solution $x^\dagger \in D(F)$ of (1), i.e. $F(x^\dagger) = y$ holds.

(A2) The Banach space $X$ is smooth and $p$-convex for some $p \ge 2$.

(A3) There exists a ball $B_\varrho(x^\dagger)$ around $x^\dagger$ with radius $\varrho > 0$ and a constant $0 \le \eta < 1$ such that for each $x \in D(F) \cap B_\varrho(x^\dagger)$ we can find a linear bounded operator $F'(x)$ with $\|F'(x)\| \le K$ for some constant $K > 0$ and
$$\|F(\tilde{x}) - F(x) - F'(x)(\tilde{x} - x)\| \le \eta\,\|F(\tilde{x}) - F(x)\| \tag{4}$$
for all $x, \tilde{x} \in D(F) \cap B_\varrho(x^\dagger)$.

We shortly discuss the assumptions (A2) and (A3). The space $X$ is called convex of power type $p$, or $p$-convex, if there exists a constant $C > 0$ such that $\delta_X(\varepsilon) \ge C\,\varepsilon^p$, where the modulus of convexity $\delta_X : [0, 2] \to [0, 1]$ is defined via
$$\delta_X(\varepsilon) := \inf\big\{1 - \tfrac{1}{2}\|x + \tilde{x}\| \,:\, \|x\| = \|\tilde{x}\| = 1,\ \|x - \tilde{x}\| \ge \varepsilon\big\}.$$
Moreover, the space $X$ is said to be smooth of power type $q$, or $q$-smooth, for $q > 1$ if there exists a constant $G > 0$ such that $\rho_X(\tau) \le G\,\tau^q$, where the modulus of smoothness $\rho_X : [0, \infty) \to [0, \infty)$ is defined via
$$\rho_X(\tau) := \sup\big\{\tfrac{1}{2}\big(\|x + \tilde{x}\| + \|x - \tilde{x}\|\big) - 1 \,:\, \|x\| = 1,\ \|\tilde{x}\| \le \tau\big\}.$$
As a consequence of the Lindenstrauss duality formula, $X$ is $p$-convex if and only if $X^*$ is $q$-smooth, and $X$ is $q$-convex if and only if $X^*$ is $p$-smooth, cf. [9]. Due to the polarization identity every Hilbert space is 2-smooth and 2-convex. Further, for $1 < p < \infty$ the sequence spaces $l^p$, the Lebesgue spaces $L^p$ and the Sobolev spaces $W_p^m$ are $\min\{2, p\}$-smooth and $\max\{2, p\}$-convex [13, 30, 7].
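In the model spaces just mentioned the duality mapping has an explicit form, which also makes the Bregman distance computable. The following is a minimal numerical sketch only, assuming $X = \ell^r$ realized as a finite-dimensional vector; the formula $J_p(x) = \|x\|_r^{p-r}\,|x|^{r-1}\operatorname{sgn}(x)$ is the standard single-valued selection there, and all function names are illustrative.

```python
import numpy as np

def j_p(x, p, r):
    """Duality mapping J_p of X = l^r (finite-dimensional sample),
    gauge t -> t^(p-1): J_p(x) = ||x||_r^(p-r) * |x|^(r-1) * sign(x)."""
    nrm = np.linalg.norm(x, ord=r)
    if nrm == 0.0:
        return np.zeros_like(x)
    return nrm ** (p - r) * np.abs(x) ** (r - 1) * np.sign(x)

def bregman(x_tilde, x, p, r):
    """Bregman distance of x -> ||x||_r^p / p at x, as in Definition 2.1."""
    nt = np.linalg.norm(x_tilde, ord=r)
    nx = np.linalg.norm(x, ord=r)
    return nt ** p / p - nx ** p / p - np.dot(j_p(x, p, r), x_tilde - x)

# sanity checks: <x, J_p(x)> = ||x||^p and Delta_p >= 0
x = np.array([1.0, -2.0, 0.5]); xt = np.array([0.3, 1.0, -1.0])
p, r = 2.0, 1.2   # a 2-convex non-Hilbert space, as in Section 7
assert abs(np.dot(j_p(x, p, r), x) - np.linalg.norm(x, r) ** p) < 1e-12
assert bregman(xt, x, p, r) >= 0.0
```

One checks directly that this selection satisfies both defining relations, $\langle x, J_p(x)\rangle = \|x\|^p$ and $\|J_p(x)\| = \|x\|^{p-1}$ in the dual norm.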

By (A2) and due to the Xu-Roach inequalities [30], we conclude from the $p$-convexity of $X$ that there exists a constant $C_p > 0$ such that the estimate
$$\Delta_p(\tilde{x}, x) = \tfrac{1}{p}\,\|\tilde{x}\|^p - \tfrac{1}{p}\,\|x\|^p - \langle J_p(x), \tilde{x} - x\rangle \ \ge\ \frac{C_p}{p}\,\|x - \tilde{x}\|^p \tag{5}$$
holds for all $x, \tilde{x} \in X$. From the $q$-smoothness of the dual space $X^*$ we derive the existence of a constant $G_q > 0$ such that
$$\tfrac{1}{q}\,\|\tilde{x}^*\|^q - \tfrac{1}{q}\,\|x^*\|^q - \langle J_q^*(x^*), \tilde{x}^* - x^*\rangle \ \le\ \frac{G_q}{q}\,\|x^* - \tilde{x}^*\|^q \tag{6}$$
for all $x^*, \tilde{x}^* \in X^*$. We will use the relations (5) and (6) in our convergence analysis. In particular, the constants $G_q$ and $C_p$ will play the role of fixed parameters in the algorithm under consideration.

The condition (4) is also known as the $\eta$-condition; it is in fact a restriction on the non-linearity of the operator $F$. It was originally introduced in [12] for proving convergence and convergence rates of the Landweber iteration for non-linear ill-posed problems in Hilbert spaces. There, the authors supposed $\eta < \frac{1}{2}$ for their analysis; for our considerations the weaker requirement $\eta < 1$ is sufficient. The same condition was applied in [26] for proving convergence of the modified Landweber iteration. In order to prove convergence rates for the Landweber iteration, the additional structural condition
$$F'(\tilde{x}) = R(\tilde{x}, x)\,F'(x), \qquad \|R(\tilde{x}, x) - I\| \le C\,\|\tilde{x} - x\|, \qquad R(\tilde{x}, x) : Y \to Y,$$
with $\|\tilde{x} - x\|$ sufficiently small and a constant $C > 0$, was introduced in [12]. On the other hand, in [26] only Lipschitz continuity of the Fréchet derivative was supposed, which is a weaker restriction than (A3).

In order to understand the necessity of the different strengths of the applied assumptions we briefly recall convergence rates results for iteratively regularized Gauss-Newton methods, see e.g. [1] and [4], as well as [2] and [23] for a further discussion of conditions for iterative regularization methods for non-linear problems in Hilbert spaces. There, for an a-priori guess $x_0 \in X$, the power-type source condition
$$x^\dagger - x_0 \in R\big((F'(x^\dagger)^* F'(x^\dagger))^\nu\big)$$
for some exponent $0 < \nu \le \frac{1}{2}$ is considered. Then it is essential to distinguish between $\nu \ge \frac{1}{2}$, see [1], and $0 < \nu < \frac{1}{2}$, see [4]. For $\nu \ge \frac{1}{2}$ the Lipschitz continuity of the Fréchet derivative $F'(x)$, $x \in D(F)$, is sufficient to prove the corresponding convergence rates. But it is well-established that this condition is not sufficient in the case $0 < \nu < \frac{1}{2}$. As a consequence, [4] dealt with the more general structural condition
$$F'(\tilde{x}) = R(\tilde{x}, x)\,F'(x) + Q(\tilde{x}, x), \qquad \|I - R(\tilde{x}, x)\| \le C_R, \qquad \|Q(\tilde{x}, x)\| \le C_Q\,\|F'(x^\dagger)(\tilde{x} - x)\|,$$
with sufficiently small parameters $C_R$ and $C_Q$ and operators $R(\cdot,\cdot)$, $Q(\cdot,\cdot)$ depending on $x, \tilde{x} \in X$. In particular, $C_R + C_Q\,\|\tilde{x} - x\| < 1$ automatically implies condition (A3). Since in [26] only the case $\nu = \frac{1}{2}$ was considered, the Lipschitz continuity of the Fréchet derivative was sufficient in the corresponding convergence rates analysis. The convergence rates analysis presented here deals with the situation $0 < \nu \le \frac{1}{2}$ and shows that condition (A3) is sufficient to prove the corresponding convergence results. Moreover, the presented convergence rates analysis also includes logarithmic source conditions, which were treated separately in [22] for iteratively regularized Gauss-Newton methods.
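Since the constant $C_p$ of (5) enters the algorithm below as a fixed parameter, it is worth noting that for the model spaces it can at least be probed numerically. The following sketch, reusing the functions from the snippet above, samples random pairs in $\ell^{1.2}$ (which is 2-convex) and records the smallest observed ratio $p\,\Delta_p(\tilde{x}, x)/\|x - \tilde{x}\|^p$; this is only a heuristic upper estimate for a valid $C_p$, not a proof.

```python
import numpy as np

rng = np.random.default_rng(0)

def j_p(x, p, r):
    # duality mapping of l^r with gauge t -> t^(p-1), as above
    nrm = np.linalg.norm(x, ord=r)
    return np.zeros_like(x) if nrm == 0 else nrm ** (p - r) * np.abs(x) ** (r - 1) * np.sign(x)

def bregman(xt, x, p, r):
    return (np.linalg.norm(xt, r) ** p - np.linalg.norm(x, r) ** p) / p \
        - np.dot(j_p(x, p, r), xt - x)

p, r, n = 2.0, 1.2, 20          # l^1.2 is max{2, 1.2} = 2-convex
ratios = []
for _ in range(10_000):
    x, xt = rng.standard_normal(n), rng.standard_normal(n)
    ratios.append(p * bregman(xt, x, p, r) / np.linalg.norm(x - xt, r) ** p)
print("empirical bound for C_p:", min(ratios))   # (5): Delta_p >= (C_p/p) ||x - xt||^p
```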

For linear operators we can set $\eta = 0$, so linear inverse problems can be considered as a special case of the subsequent analysis. With $F'(x)^* : Y^* \to X^*$ we denote the adjoint operator of $F'(x)$, which is defined by
$$\langle y^*, F'(x)\,\tilde{x}\rangle = \langle F'(x)^*\,y^*, \tilde{x}\rangle, \qquad \tilde{x} \in X,\ y^* \in Y^*.$$

3 Approximate source conditions

Since our aim is also to prove convergence rates, we need a source condition. Therefore we introduce the notions of approximate source conditions and distance functions. Originally, approximate source conditions were established in [3] for proving error estimates for linear regularization methods in Hilbert spaces. In combination with distance functions, a concept for proving convergence rates based on approximate source conditions was developed in [18]. Since no Hilbert space tools such as spectral calculus are needed, the idea can be transferred to regularization approaches in Banach spaces, see e.g. [14] for some initial convergence rates results. We define distance functions.

Definition 3.1. The distance function $d(\cdot\,; \xi) : [0, \infty) \to \mathbb{R}$ of an element $\xi \in X^*$ with respect to the set $R(F'(x^\dagger)^*)$ is defined as
$$d(S; \xi) := \inf\big\{\|\xi - F'(x^\dagger)^*\,\omega\| \,:\, \|\omega\| \le S\big\}, \qquad S \ge 0. \tag{7}$$

Since the space $X$ is supposed to be smooth and convex of power type, the dual space $X^*$ is in particular strictly convex, and we can replace the infimum by a minimum. In particular, for each $S \ge 0$ there exist elements $\omega = \omega(S)$ and $\upsilon = \upsilon(S)$ such that
$$\xi = F'(x^\dagger)^*\,\omega + \upsilon, \qquad \|\omega\| \le S, \qquad \|\upsilon\| = d(S; \xi) \tag{8}$$
holds. We refer to conditions of the form (8) as approximate source conditions. We emphasize the following properties of distance functions: $d(\cdot\,; \xi)$ is non-negative and non-increasing, and it is strictly positive if $\xi \notin R(F'(x^\dagger)^*)$. If $\xi \in R(F'(x^\dagger)^*)$, in particular if the classical source condition
$$\xi = F'(x^\dagger)^*\,\omega, \qquad \omega \in Y^*, \tag{9}$$
holds, then $d(S; \xi) = 0$ for all $S \ge \|\omega\|$. On the other hand, if $\xi \notin R(F'(x^\dagger)^*)$ but $\xi \in \overline{R(F'(x^\dagger)^*)}$, then $d(S; \xi)$ tends to zero as $S \to \infty$.
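In a Hilbert space surrogate of Definition 3.1 (everything finite-dimensional and Euclidean, $F'(x^\dagger)^*$ a matrix transpose), the value $d(S; \xi)$ is a norm-constrained least-squares residual and can be computed exactly. A minimal sketch, using only the standard Lagrange-multiplier characterization of the constrained minimizer (all names illustrative):

```python
import numpy as np

def distance_function(A, xi, S, tol=1e-10):
    """d(S; xi) = min { ||xi - A.T @ w|| : ||w|| <= S } for a matrix A,
    a Euclidean surrogate of (7).  Solved via the normal equations with
    a multiplier lam >= 0 chosen so that ||w(lam)|| <= S."""
    m = A.shape[0]
    w = np.linalg.lstsq(A.T, xi, rcond=None)[0]      # unconstrained minimizer
    if np.linalg.norm(w) <= S:
        return np.linalg.norm(xi - A.T @ w)
    G, b = A @ A.T, A @ xi
    lo, hi = 0.0, 1.0
    while np.linalg.norm(np.linalg.solve(G + hi * np.eye(m), b)) > S:
        hi *= 2.0                                    # bracket the multiplier
    while hi - lo > tol * (1.0 + hi):
        mid = 0.5 * (lo + hi)
        w = np.linalg.solve(G + mid * np.eye(m), b)
        lo, hi = (mid, hi) if np.linalg.norm(w) > S else (lo, mid)
    w = np.linalg.solve(G + hi * np.eye(m), b)
    return np.linalg.norm(xi - A.T @ w)
```

Plotting $S \mapsto d(S; \xi)$ for a discretized operator gives a direct impression of the decay rate that the theory below turns into a convergence rate.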

Of interest for our considerations is the following result.

Lemma 3.2. Assume (A1)-(A3) and let $J_p(x^\dagger)$ have the distance function $d(S) = d(S; J_p(x^\dagger))$, $S \ge 0$. Then the estimate
$$\langle J_p(x^\dagger), x^\dagger - x\rangle \ \le\ S\,(1+\eta)\,\|F(x) - y^\delta\| + S\,(1+\eta)\,\delta + \tfrac{1}{q}\,\Delta_p(x, x^\dagger) + c_p\,d(S)^q, \qquad c_p := \tfrac{1}{q}\Big(\tfrac{q}{C_p}\Big)^{q/p}, \tag{10}$$
holds for all $S \ge 0$ and all $x \in X$. The same estimate holds if the Bregman distance $\Delta_p(x, x^\dagger)$ is replaced by $\Delta_p(x^\dagger, x)$.

Proof. We set $T(x, x^\dagger) := F(x) - F(x^\dagger) - F'(x^\dagger)(x - x^\dagger)$. By assumption the approximate source condition (8) holds for all $S \ge 0$ with $\xi = J_p(x^\dagger)$. We derive
$$\begin{aligned}
\langle J_p(x^\dagger), x^\dagger - x\rangle &= \langle \omega(S), F'(x^\dagger)(x^\dagger - x)\rangle + \langle \upsilon(S), x^\dagger - x\rangle\\
&= \big\langle \omega(S),\ -(F(x) - y^\delta) - (y^\delta - y) + T(x, x^\dagger)\big\rangle + \langle \upsilon(S), x^\dagger - x\rangle\\
&\le S\,\|F(x) - y^\delta\| + S\,\delta + S\,\|T(x, x^\dagger)\| + d(S)\,\|x^\dagger - x\|\\
&\le S\,(1+\eta)\,\|F(x) - y^\delta\| + S\,(1+\eta)\,\delta + d(S)\,\Big(\tfrac{p}{C_p}\Big)^{1/p}\Delta_p(x, x^\dagger)^{1/p}
\end{aligned}$$
and finally (10) by applying Young's inequality to the last term. $\square$

The concept of introducing source conditions as variational inequalities was introduced in [20]; see also the discussion of some consequences therein. The variational inequality (10) also holds in more general situations than (8), so approximate source conditions can be considered as a kind of motivation for formulating condition (10). In order to achieve convergence of the algorithm we further have to suppose that $d(S; J_p(x^\dagger))$ tends to zero as $S \to \infty$, i.e.
$$J_p(x^\dagger) \in \overline{R(F'(x^\dagger)^*)} \tag{11}$$
holds. At first glance this looks like only a small extension of the convergence rates results assuming the source condition (9). On the other hand, if $F'(x^\dagger)$ is injective then $\overline{R(F'(x^\dagger)^*)} = X^*$, see e.g. [29, Satz II.4.5]. Moreover, from [27] we observe that, at least for linear operator equations, the minimum-norm solution $x^\dagger$ always satisfies (11).

4 Convergence and Convergence Rates Results

We consider the following algorithm.

Algorithm 4.1. Let $y^\delta$ be the given data with $\|y^\delta - y\| \le \delta$, let $d_0 : [0, \infty) \to [0, \infty)$ be a non-negative, non-increasing function with $d_0(S) \to 0$ as $S \to \infty$, and let $\eta$, $\varrho$ and $K$ be the constants of (A3).

(S0) Init. Choose $\mu > 0$ such that
$$-\frac{(1-\eta)\,\mu}{2} + \frac{2^{q-1} G_q}{q}\,K^q\,\mu^q \le 0, \tag{12}$$
and a starting point $x_0$, $x_0^* := J_p(x_0)$, such that $\Delta_p(x_0, x^\dagger) \le R_0$ and $\Delta_p(x_0, x^\dagger) \le \frac{C_p}{p}\,\varrho^p$. Choose $S = S(\delta)$ so large that $d_0(S)^q \le \delta\,S$, and further choose $\tau > 0$. Set $C := C(\mu, \eta, \tau, p, q, C_p) > 0$, the constant which collects the terms generated by the applications of Young's inequality in the proof of Theorem 4.2 below.

(S1) STOP if $\tau\,\beta_n\,S \le \delta^p$ and $\beta_n < \frac{1}{2q}$.

(S2) Compute the new iterate via
$$\gamma_n := \Big(\frac{2^{q-1} G_q}{q}\,\|x_n^*\|^q + C\,S^q\Big)^{1/q}, \qquad \beta_n := \min\Big\{\frac{1}{2q},\ \Big(\frac{R_n}{2q\,\gamma_n^q}\Big)^{p-1}\Big\},$$
$$R_{n+1} := \Big(1 - \frac{\beta_n}{p}\Big)\,R_n + \gamma_n^q\,\beta_n^q,$$
$$x_{n+1}^* := x_n^* - \mu\,F'(x_n)^* J_p\big(F(x_n) - y^\delta\big) - \beta_n x_n^*, \qquad x_{n+1} := J_q^*(x_{n+1}^*).$$
Set $n \leftarrow n+1$ and go to step (S1).

Then the following convergence results hold.

Theorem 4.2. Assume (A1)-(A3), $\delta > 0$, and let $J_p(x^\dagger)$ have the distance function $d(S; J_p(x^\dagger))$, $S \ge 0$. We further assume that the iterates $\{x_n\}$ remain in $D(F)$. If the function $d_0$ is chosen such that $d(S; J_p(x^\dagger)) \le d_0(S)$ for all $S \ge 0$, then the following hold:

(i) The iterates $\{x_n\}$ remain in $B_\varrho(x^\dagger)$.

(ii) The iteration terminates after a finite number of iterations, i.e. the stopping index $k(\delta) = k(\delta, y^\delta)$ is well-defined.

(iii) There exists a constant $C = C(\delta_{\max}) > 0$ such that for all $\delta \le \delta_{\max}$ the estimate
$$\Delta_p\big(x_{k(\delta)}, x^\dagger\big) \le C\,\delta\,S(\delta) \tag{13}$$
holds. If in particular $S(\delta)$ is chosen such that $\delta\,S(\delta) \to 0$ as $\delta \to 0$, then we have convergence $x_{k(\delta)} \to x^\dagger$ as $\delta \to 0$.
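Before turning to the proof, it may help to see Algorithm 4.1 assembled in code. The following is only a sketch under simplifying assumptions: $X = \ell^r$ and $Y = \ell^2$ in finite dimensions (so that the duality mappings take the explicit form used earlier), the smoothness constant of $X^*$ is set to a placeholder value, and all helper names and parameter handling are illustrative, not from the paper.

```python
import numpy as np

def jdual(x, p, r):
    """duality mapping of l^r with gauge t -> t^(p-1); the same formula with
    conjugate exponents gives the inverse mapping J_q of the dual space."""
    nrm = np.linalg.norm(x, ord=r)
    return np.zeros_like(x) if nrm == 0 else nrm ** (p - r) * np.abs(x) ** (r - 1) * np.sign(x)

def modified_landweber(F, dF_adj, y_delta, delta, x0, *,
                       p, r, q, mu, tau, C, R0, S, n_max=100_000):
    """Sketch of Algorithm 4.1:
    x*_{n+1} = x*_n - mu F'(x_n)^* J_p(F(x_n) - y^delta) - beta_n x*_n."""
    rp = r / (r - 1.0)                      # exponent of the dual sequence space
    Gq = 1.0                                # placeholder smoothness constant of X*
    x, R = x0.copy(), R0
    xs = jdual(x, p, r)                     # dual iterate x* = J_p(x)
    for n in range(n_max):
        gamma_q = 2 ** (q - 1) * Gq / q * np.linalg.norm(xs, rp) ** q + C * S ** q
        beta = min(1.0 / (2 * q), (R / (2 * q * gamma_q)) ** (p - 1))
        if tau * beta * S <= delta ** p and beta < 1.0 / (2 * q):
            return x, n                     # stopping rule (S1)
        residual = F(x) - y_delta
        xs = xs - mu * dF_adj(x, jdual(residual, p, 2)) - beta * xs
        x = jdual(xs, q, rp)                # back to X via J_q of the dual space
        R = (1.0 - beta / p) * R + gamma_q * beta ** q
    return x, n_max
```

The triple $(\gamma_n, \beta_n, R_{n+1})$ mirrors step (S2); note that the a-priori choices $\mu$, $C$, $R_0$ and $S$ must be supplied by the user according to (S0).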

Proof. To shorten the notation we set $\Delta_n := \Delta_p(x^\dagger, x_n)$, $A_n := F'(x_n)$ and $A := F'(x^\dagger)$. By the definition of the Bregman distance we have
$$\Delta_{n+1} = \tfrac{1}{q}\,\|x_{n+1}^*\|^q - \langle x_{n+1}^*, x^\dagger\rangle + \tfrac{1}{p}\,\|x^\dagger\|^p.$$
By the $q$-smoothness of $X^*$, cf. (6), we have
$$\tfrac{1}{q}\,\|x_{n+1}^*\|^q \le \tfrac{1}{q}\,\|x_n^*\|^q + \big\langle x_n,\ -\mu A_n^* J_p(F(x_n) - y^\delta) - \beta_n x_n^*\big\rangle + \frac{G_q}{q}\,\big\|\mu A_n^* J_p(F(x_n) - y^\delta) + \beta_n x_n^*\big\|^q.$$
This gives
$$\Delta_{n+1} \le \Delta_n + \big\langle x_n - x^\dagger,\ -\mu A_n^* J_p(F(x_n) - y^\delta) - \beta_n x_n^*\big\rangle + \frac{G_q}{q}\,\big\|\mu A_n^* J_p(F(x_n) - y^\delta) + \beta_n x_n^*\big\|^q.$$
Furthermore, with $T(x^\dagger, x_n) := F(x^\dagger) - F(x_n) - A_n(x^\dagger - x_n)$ we have
$$\begin{aligned}
\big\langle x_n - x^\dagger,\ -\mu A_n^* J_p(F(x_n) - y^\delta) - \beta_n x_n^*\big\rangle
&= -\mu\,\big\langle F(x_n) - y^\delta, J_p(F(x_n) - y^\delta)\big\rangle + \mu\,\big\langle T(x^\dagger, x_n), J_p(F(x_n) - y^\delta)\big\rangle\\
&\quad - \mu\,\big\langle y^\delta - y, J_p(F(x_n) - y^\delta)\big\rangle - \beta_n\big\langle x_n - x^\dagger, x_n^* - J_p(x^\dagger)\big\rangle - \beta_n\big\langle x_n - x^\dagger, J_p(x^\dagger)\big\rangle.
\end{aligned}$$
The first term generates a negative term, since
$$-\mu\,\big\langle F(x_n) - y^\delta, J_p(F(x_n) - y^\delta)\big\rangle = -\mu\,\|F(x_n) - y^\delta\|^p.$$
The second term provides, via the $\eta$-condition (4),
$$\mu\,\big\langle T(x^\dagger, x_n), J_p(F(x_n) - y^\delta)\big\rangle \le \mu\eta\,\|F(x_n) - y\|\,\|F(x_n) - y^\delta\|^{p-1} \le \mu\eta\,\|F(x_n) - y^\delta\|^p + \mu\eta\,\delta\,\|F(x_n) - y^\delta\|^{p-1}.$$
The third term can be estimated as
$$-\mu\,\big\langle y^\delta - y, J_p(F(x_n) - y^\delta)\big\rangle \le \mu\,\delta\,\|F(x_n) - y^\delta\|^{p-1}.$$
The fourth term allows us to generate a second negative term, since
$$-\beta_n\big\langle x_n - x^\dagger, x_n^* - J_p(x^\dagger)\big\rangle = -\beta_n\big(\Delta_p(x^\dagger, x_n) + \Delta_p(x_n, x^\dagger)\big) \le -\beta_n\,\Delta_n.$$
Finally, for the last term we use the variational inequality (10) to get
$$-\beta_n\big\langle x_n - x^\dagger, J_p(x^\dagger)\big\rangle \le S\,(1+\eta)\,\|F(x_n) - y^\delta\|\,\beta_n + S\,(1+\eta)\,\delta\,\beta_n + \tfrac{1}{q}\,\Delta_n\,\beta_n + c_p\,d_0(S)^q\,\beta_n.$$
With the estimates above we have
$$\begin{aligned}
\Delta_{n+1} &\le \Big(1 - \beta_n + \tfrac{\beta_n}{q}\Big)\Delta_n - \mu(1-\eta)\,\|F(x_n) - y^\delta\|^p + \mu(1+\eta)\,\delta\,\|F(x_n) - y^\delta\|^{p-1}\\
&\quad + S(1+\eta)\,\|F(x_n) - y^\delta\|\,\beta_n + S(1+\eta)\,\delta\,\beta_n + c_p\,d_0(S)^q\,\beta_n + \frac{G_q}{q}\,\big\|\mu A_n^* J_p(F(x_n) - y^\delta) + \beta_n x_n^*\big\|^q.
\end{aligned}$$
Assume now that the iteration did not stop in the last step. Since the stopping criterion is not fulfilled for $x_n$, we have $\delta^p < \tau\,\beta_n\,S$, i.e. $\delta \le (\tau\beta_n S)^{1/p}$. Moreover, $S$ was chosen such that $d_0(S)^q \le \delta\,S$; therefore the terms $S(1+\eta)\,\delta\,\beta_n$ and $c_p\,d_0(S)^q\,\beta_n$ are together bounded by $(1 + \eta + c_p)\,S\,\beta_n\,\delta \le (1 + \eta + c_p)\,\tau^{1/p}\,(S\beta_n)^{1 + 1/p}$.

Due to Young's inequality, and using $\delta \le (\tau\beta_n S)^{1/p}$ once more, the two remaining mixed terms
$$\mu(1+\eta)\,\delta\,\|F(x_n) - y^\delta\|^{p-1} \qquad\text{and}\qquad S(1+\eta)\,\|F(x_n) - y^\delta\|\,\beta_n$$
can be absorbed: they are bounded by $\frac{\mu(1-\eta)}{2q}\,\|F(x_n) - y^\delta\|^p$ and $\frac{\mu(1-\eta)}{2p}\,\|F(x_n) - y^\delta\|^p$, respectively, plus constant multiples of $(S\beta_n)^q$, where the constants depend only on $\mu, \eta, \tau, p$ and $q$. The last term we estimate by Jensen's inequality,
$$\frac{G_q}{q}\,\big\|\mu A_n^* J_p(F(x_n) - y^\delta) + \beta_n x_n^*\big\|^q \le \frac{2^{q-1} G_q}{q}\,\Big(\mu^q\,\|A_n\|^q\,\|F(x_n) - y^\delta\|^p + \beta_n^q\,\|x_n^*\|^q\Big),$$
where we used $(p-1)q = p$. We arrive at
$$\Delta_{n+1} \le \Big(1 - \frac{\beta_n}{p}\Big)\Delta_n + \Big[-\frac{\mu(1-\eta)}{2} + \frac{2^{q-1} G_q}{q}\,\mu^q\,\|A_n\|^q\Big]\,\|F(x_n) - y^\delta\|^p + \Big[\frac{2^{q-1} G_q}{q}\,\|x_n^*\|^q + C\,S^q\Big]\,\beta_n^q.$$
Now the term in the first bracket is negative or zero by the initial assumption (12) on $\mu$, since $\|A_n\| \le K$. Therefore the last estimate simplifies to
$$\Delta_{n+1} \le \Big(1 - \frac{\beta_n}{p}\Big)\Delta_n + \gamma_n^q\,\beta_n^q.$$
What we have proven may be seen as part of an induction showing that $\Delta_n \le R_n$ for all $n$: we have $\Delta_0 \le R_0$ by assumption, and assuming $\Delta_n \le R_n$ we have just proven $\Delta_{n+1} \le R_{n+1}$. We also have $R_{n+1} \le R_n$ by the choice of $\beta_n$. Hence the sequence $\{R_n\}$ is monotonically decreasing and bounded, therefore convergent. We show next that $\{R_n\}$ is a zero sequence; the same technique may also be found in e.g. [10, 26]. We define
$$\Gamma := \sup\Big\{\Big(\frac{2^{q-1} G_q}{q}\,\|J_p(x)\|^q + C\,S^q\Big)^{1/q} \,:\, \Delta_p(x^\dagger, x) \le R_0\Big\}.$$
By the coercivity of the Bregman distance we have $\Gamma < \infty$, and hence $\gamma_n \le \Gamma$ for all $n$. Inserting the choice of $\beta_n$ into the recursion for $R_{n+1}$, a short computation shows that the differences $R_{n+1}^{1-p} - R_n^{1-p}$ are bounded below by a positive constant depending only on $p$, $q$, $R_0$ and $\Gamma$; summing up these differences yields
$$R_n \le C(R_0, \Gamma, p, q)\; n^{-\frac{1}{p-1}}.$$

We define another auxiliary number $\gamma := (C\,S^q)^{1/q}$. It is clear that $0 < \gamma \le \gamma_n \le \Gamma < \infty$ for all $n$. Together with the decay of $R_n$ established above this gives
$$\beta_n \le \Big(\frac{R_n}{2q\,\gamma^q}\Big)^{p-1} \le C(R_0, \Gamma, \gamma, p, q)\; n^{-1}.$$
Hence $\{\beta_n\}$ is a zero sequence and the iteration is well-defined, i.e. the stopping criterion (S1) is met after finitely many steps. Let $k(\delta) = k(\delta, y^\delta)$ be the index at which the iteration terminates. Then $\beta_{k(\delta)} = \big(R_{k(\delta)}/(2q\,\gamma_{k(\delta)}^q)\big)^{p-1}$ and $\tau\,\beta_{k(\delta)}\,S \le \delta^p$. Further, there exists a constant $c = c(\delta_{\max}) > 0$ such that $S(\delta) \ge c$ for all $\delta \le \delta_{\max}$; hence there exists a constant $\tilde{C}$ with $\Gamma^q \le \tilde{C}\,S(\delta)^q$ for all $\delta \le \delta_{\max}$. Combining these relations yields
$$\Delta_p\big(x^\dagger, x_{k(\delta)}\big) \le R_{k(\delta)} \le C\,\delta\,S(\delta),$$
which proves the theorem. $\square$

We assume now that the distance function is known. Based on the convexity estimate (5) we are able to present the following conclusion, which can be considered as an a-priori convergence rates result.

Corollary 4.3. Under the conditions of Theorem 4.2 assume $d_0(S) = d(S; J_p(x^\dagger))$. Then the following holds:

(i) If $J_p(x^\dagger) \in R(F'(x^\dagger)^*)$, i.e. $J_p(x^\dagger) = F'(x^\dagger)^*\,\omega$ for some $\omega \in Y^*$, then the choice of $S$ as $S \ge \|\omega\|$, independent of the noise level $\delta$, leads to the convergence rate
$$\|x_{k(\delta)} - x^\dagger\| \le C\,\delta^{1/p}. \tag{14}$$

(ii) If $J_p(x^\dagger) \in \overline{R(F'(x^\dagger)^*)} \setminus R(F'(x^\dagger)^*)$, then the choice $S(\delta) := \Phi^{-1}(\delta)$ with the function
$$\Phi(S) := \frac{d_0(S)^q}{S}$$
leads to the convergence rate
$$\|x_{k(\delta)} - x^\dagger\| \le C\,\big(\delta\,\Phi^{-1}(\delta)\big)^{1/p} = C\,d_0\big(\Phi^{-1}(\delta)\big)^{q/p}. \tag{15}$$

Remark 4.4. Notice that $\Phi^{-1}(\delta) \to \infty$ as $\delta \to 0$ if the source condition (9) is violated. Therefore the best rate that can be achieved with this iteration is $O(\delta^{1/p})$, and it is achieved in the case that condition (9) is not violated.
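In practice $\Phi^{-1}(\delta)$ is rarely available in closed form; since $\Phi$ is decreasing and continuous for any reasonable majorant $d_0$, it can be inverted by bisection. A small sketch (the power-type majorant in the usage example is an illustrative assumption, anticipating Example 4.6):

```python
def phi_inverse(d0, delta, s_lo=1e-12, n_iter=200, q=2.0):
    """Solve Phi(S) = d0(S)**q / S = delta for S by bisection;
    Phi is strictly decreasing, so the root is unique."""
    phi = lambda s: d0(s) ** q / s
    s_hi = max(1.0, 2 * s_lo)
    while phi(s_hi) > delta:          # enlarge until Phi(s_hi) <= delta
        s_hi *= 2.0
    for _ in range(n_iter):
        mid = 0.5 * (s_lo + s_hi)
        s_lo, s_hi = (mid, s_hi) if phi(mid) > delta else (s_lo, mid)
    return 0.5 * (s_lo + s_hi)

# power-type distance function d0(S) = kappa * S^(-2 nu/(1-2 nu)), nu = 1/4:
nu, kappa = 0.25, 1.0
d0 = lambda s: kappa * s ** (-2 * nu / (1 - 2 * nu))
for delta in (1e-2, 1e-3, 1e-4):
    print(delta, phi_inverse(d0, delta), delta ** (-1 / 3))  # expect S ~ delta^(-1/3)
```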

In order to understand these convergence rates we compare them with known results for Tikhonov regularization in Banach spaces.

Example 4.5. Instead of the iterative procedure we consider Tikhonov regularization,
$$\frac{1}{p}\,\|F(x) - y^\delta\|^p + \frac{\alpha}{p}\,\|x\|^p \to \min \qquad\text{subject to } x \in D(F).$$
We again assume that there exists a linear bounded operator $F'(x^\dagger)$ which satisfies the $\eta$-condition (4) for arbitrary $\eta > 0$. We then have the following convergence rates results [16]: if $J_p(x^\dagger) \in R(F'(x^\dagger)^*)$, then an a-priori parameter choice $\alpha \sim \delta^{p-1}$ leads to the convergence rate (14). On the other hand, for $J_p(x^\dagger) \in \overline{R(F'(x^\dagger)^*)} \setminus R(F'(x^\dagger)^*)$ with distance function $d(S) = d(S; J_p(x^\dagger))$ we have to introduce the auxiliary functions $\Theta$ and $\Psi$ from [16] together with
$$\Phi(S) := \frac{d(S)^{p/(p-1)}}{S} = \frac{d(S)^q}{S}.$$
Then the a-priori choice $\alpha := \Psi^{-1}(\delta)$ gives the convergence rate (15). We remark that, due to the monotonicity of the function $d(S)$, the inverse functions $\Theta^{-1}(\alpha)$, $\Psi^{-1}(\delta)$ and $\Phi^{-1}(\delta)$ are well-defined. In particular, Tikhonov regularization and the suggested iterative regularization approach provide the same convergence rates for properly chosen parameters. Moreover, it also turns out that these convergence rates can be considered as optimal if $X$ and $Y$ are assumed to be Hilbert spaces, at least if we can suppose power-type distance functions.

Example 4.6. Let $X$ and $Y$ be Hilbert spaces, i.e. $p = q = 2$. Moreover, we assume $x^\dagger \notin R(F'(x^\dagger)^*) = R\big((F'(x^\dagger)^* F'(x^\dagger))^{1/2}\big)$, but that the element $x^\dagger$ satisfies a weaker source condition, i.e.
$$x^\dagger = f\big(F'(x^\dagger)^* F'(x^\dagger)\big)\,\omega$$
for some index function $f(t)$, $t \ge 0$, and some $\omega \in X$. Then we can rewrite this source condition as an approximate source condition with respect to the stronger, violated source condition, with corresponding distance function $d(S; x^\dagger)$, $S \ge 0$. According to [21, Theorem 5.9] we can then give an upper bound $d(S)$, $S \ge 0$, for the distance function, which allows us to apply the results of Corollary 4.3(ii). We present two examples.

(a) If $f(t) = t^\nu$ for some $0 < \nu < \frac{1}{2}$, then we have the estimate
$$d(S) \le \kappa\,S^{-\frac{2\nu}{1-2\nu}}$$
with a constant $\kappa$ depending on $\|\omega\|$; see also [19] for the compact case. This and $p = 2$ imply the choice
$$S(\delta) \sim \delta^{-\frac{1-2\nu}{2\nu+1}}$$
for the parameter $S$. Then we get from (15) that
$$\|x_{k(\delta, y^\delta)} - x^\dagger\| \le C\,\delta^{\frac{2\nu}{2\nu+1}},$$
which is optimal in this context.

(b) We assume a logarithmic source condition, i.e. $f(t) = \big(\ln\frac{1}{t}\big)^{-\mu}$ for some $\mu > 0$ and $\|F'(x^\dagger)\|$ suitably normalized. Then we cannot state the upper bound $d(S)$ for the distance function explicitly. However, from the considerations in [21] we know the following property: introducing a regularization parameter $\alpha$, the well-defined choice $S = S(\alpha)$ such that $\alpha\,S(\alpha) = d(S(\alpha))$ gives the equality
$$\alpha\,S(\alpha) = d(S(\alpha)) = \kappa\,\big(\ln\tfrac{1}{\alpha}\big)^{-\mu}.$$
For logarithmic source conditions the choice $\alpha \sim \delta$ for the regularization parameter is suggested. We set $\alpha := \delta$ and get the choice
$$S \sim \delta^{-1}\big(\ln\tfrac{1}{\delta}\big)^{-\mu},$$
which ends up at the known convergence rate
$$\|x_{k(\delta)} - x^\dagger\| \le C\,\big(\ln\tfrac{1}{\delta}\big)^{-\mu}.$$
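For later reference (the numerical example in Section 7 uses exactly this setting), the exponent bookkeeping in case (a) with $\nu = \frac{1}{4}$ and $p = q = 2$ reads:
$$d(S) \le \kappa\,S^{-\frac{2\nu}{1-2\nu}} = \kappa\,S^{-1}, \qquad \Phi(S) = \frac{d(S)^2}{S} \sim S^{-3}, \qquad S(\delta) = \Phi^{-1}(\delta) \sim \delta^{-1/3},$$
$$\|x_{k(\delta)} - x^\dagger\| \le C\,\big(\delta\,S(\delta)\big)^{1/2} \sim \delta^{\frac{1}{2}(1 - \frac{1}{3})} = \delta^{1/3} = \delta^{\frac{2\nu}{2\nu+1}}.$$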

We also want to point out that the right choice of $d_0(S)$, and hence of $S$, is important for the convergence and the speed of convergence of the algorithm. In particular, if $S$ is chosen too small then the algorithm might not work. Therefore we present the following indicator, which helps to decide whether the parameter $S$ was chosen properly.

Lemma 4.7. Under the assumptions of Theorem 4.2, and with
$$C_\mu := \frac{\mu(1-\eta)}{2} - \frac{2^{q-1} G_q}{q}\,\mu^q K^q \ge 0,$$
we have
$$\frac{C_p}{2^{p-1}\,p}\,\|x_{n+1} - x_n\|^p + C_\mu\,\|F(x_n) - y^\delta\|^p \le \frac{2p+1}{p}\,R_n. \tag{16}$$

Proof. By assumption $\Delta_n \le R_n$ holds. Following the proof of Theorem 4.2 we start with
$$\frac{C_p}{2^{p-1}\,p}\,\|x_{n+1} - x_n\|^p \le \frac{C_p}{p}\,\big(\|x_{n+1} - x^\dagger\|^p + \|x_n - x^\dagger\|^p\big) \le \Delta_{n+1} + \Delta_n,$$
and estimate $\Delta_{n+1}$ by the recursion established there, which also produces the term $-C_\mu\,\|F(x_n) - y^\delta\|^p$ on the right-hand side. By construction we have $\beta_n^{q-1} \le \frac{R_n}{2q\,\gamma_n^q}$ and $\beta_n \le \frac{1}{2q}$; hence all remaining terms can be bounded by multiples of $R_n$, which proves the estimate. $\square$

The estimate (16) gives a necessary condition for the validity of the assumptions of Theorem 4.2, and hence for the proper choice of the function $d_0(S)$ and the parameter $S$. A violation of (16) shows that the decay rate of $d_0(S)$ is too high and hence $S$ is chosen too small. On the other hand, if the right-hand side of (16) is much larger than the left-hand side, this may indicate that the decay rate of $d_0(S)$ is too low and consequently $S$ is chosen too large, which causes a lower speed of convergence. So checking the inequality (16) during the iteration, and readjusting the parameter $S$ in a proper way, can help to ensure convergence and improve the speed of convergence; a sketch of such a monitor is given below. It is still a topic of ongoing research to develop an a-posteriori choice of the function $d_0(S)$ leading to the optimal convergence rates (14) and (15), respectively.
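The check (16) is cheap to evaluate along the iteration. A minimal monitoring helper one might call once per step of Algorithm 4.1 (the function name and the lumped right-hand-side constant c are illustrative assumptions):

```python
import numpy as np

def check_indicator(x_next, x_curr, residual_norm, R_n, *,
                    p, r, C_p, C_mu, c=3.0):
    """Necessary-condition check (16): if it fails, the assumed decay rate
    of d0 was too optimistic and S should be enlarged; if it holds with
    large slack, S may be chosen smaller.  c stands for the p-dependent
    factor on the right-hand side of (16)."""
    step = np.linalg.norm(x_next - x_curr, ord=r)
    lhs = C_p / (2 ** (p - 1) * p) * step ** p + C_mu * residual_norm ** p
    rhs = c * R_n
    return lhs <= rhs, lhs / rhs      # pass/fail flag and tightness ratio
```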

5 A convergence analysis without source conditions

The convergence rates results essentially rely on the proper choice of the function $d_0(S)$, $S \ge 0$. We therefore present here a convergence analysis which does not depend on such an assumption. Basically, it combines the convergence analyses of [26] and [27]. In order to prove convergence we need the following slight modifications of the algorithm:

(i) Choose $\mu$ such that $C_\mu := \frac{(1-\eta)\mu}{2} - \frac{2^{q-1} G_q}{q}\,K^q \mu^q > 0$.

(ii) Choose $C_s > 0$ sufficiently large such that $C_s \ge d_0(0) = \|J_p(x^\dagger)\|$ (note that $\|J_p(x^\dagger)\|^q = \|x^\dagger\|^p$), and choose $S$ as $S := C_s/\delta$.

(iii) Choose a decreasing sequence $\{\bar{\beta}_j\}$ such that $\bar{\beta}_j > 0$ for all $j \ge 0$ and $\sum_{j=0}^\infty \bar{\beta}_j < \infty$. Calculate
$$\gamma_n := \max\Big\{\Big(\frac{2^{q-1} G_q}{q}\,\|x_n^*\|^q + C\,S^q\Big)^{1/q},\ \Big(\frac{R_n}{2q\,\bar{\beta}_n^{\,q-1}}\Big)^{1/q}\Big\}, \qquad \beta_n := \min\Big\{\frac{1}{2q},\ \Big(\frac{R_n}{2q\,\gamma_n^q}\Big)^{p-1}\Big\}$$
and
$$R_{n+1} := \Big(1 - \frac{\beta_n}{p}\Big)\,R_n - C_\mu\,\|F(x_n) - y^\delta\|^p + \gamma_n^q\,\beta_n^q.$$

The suggested modifications do not annihilate the results of Theorem 4.2. Moreover, the modified calculation of the parameters has the following consequence: we have $\beta_n \le \bar{\beta}_n$ for all $n \ge 0$, and $\beta_n = \bar{\beta}_n$ if and only if $\gamma_n = \big(R_n/(2q\,\bar{\beta}_n^{\,q-1})\big)^{1/q}$. In particular, we bound the parameter $\beta_n$ from above in each iteration step by $\bar{\beta}_n$. Since by (ii) we have $S(\delta) = C_s/\delta$, the estimate (13) is only a boundedness estimate. To establish convergence $x_{k(\delta)} \to x^\dagger$ as $\delta \to 0$ we have to carry out a completely new analysis. First we prove the following.

Lemma 5.1. Under the above modifications we have
$$\sum_{n=0}^{k(\delta)} \|F(x_n) - y^\delta\|^p \le \frac{R_0}{C_\mu} \qquad\text{and}\qquad k(\delta) + 1 \le \bar{C}\,\delta^{-p-1}$$
for some constant $\bar{C} > 0$.

Proof. By construction we have
$$R_{n+1} \le R_n - C_\mu\,\|F(x_n) - y^\delta\|^p$$
for all $n \le k(\delta)$, which proves the first part by a telescoping sum. From the stopping criterion we derive for all $n < k(\delta)$ that $\delta^p < \tau\,\beta_n\,S = \tau\,\beta_n\,C_s/\delta$, i.e. $\beta_n > \delta^{p+1}/(\tau C_s)$. Since $\beta_n \le \bar{\beta}_n$, summing over $n < k(\delta)$ and using $\sum_j \bar{\beta}_j < \infty$ proves the second part. $\square$

We now discuss the noiseless case $\delta = 0$. For $\delta \to 0$ we have $\gamma_n \to \infty$ and hence $\beta_n \to 0$; we therefore set $\beta_n \equiv 0$ for given noiseless data.

Theorem 5.2. Assume $\delta = 0$. Then either the algorithm stops after a finite number $k^*$ of iterations with $\bar{x} := x_{k^*}$, or we have convergence $x_n \to \bar{x}$ as $n \to \infty$. In both cases the element $\bar{x}$ denotes a solution of equation (1) in $B_\varrho(x^\dagger)$, i.e. we have $F(\bar{x}) = y$.

Proof. Assume the iteration process does not stop after a finite number of steps. From the proof of Theorem 4.2 with $\beta_n = 0$ we derive
$$\Delta_{n+1} \le \Delta_n - C_\mu\,\|F(x_n) - y\|^p$$
for all $n \ge 0$. Hence $\sum_{n=0}^\infty \|F(x_n) - y\|^p < \infty$ and consequently $\|F(x_n) - y\| \to 0$ as $n \to \infty$. Now we can choose indices $k > l$ such that $\|F(x_n) - y\| \ge \|F(x_k) - y\|$ for all $l \le n \le k$. Then we derive
$$\begin{aligned}
\big|\big\langle J_p(x_k) - J_p(x_l), x_k - \bar{x}\big\rangle\big| &= \Big|\sum_{n=l}^{k-1}\big\langle J_p(x_{n+1}) - J_p(x_n), x_k - \bar{x}\big\rangle\Big|\\
&= \mu\,\Big|\sum_{n=l}^{k-1}\big\langle J_p(F(x_n) - y),\ F'(x_n)\big((x_k - x_n) + (x_n - \bar{x})\big)\big\rangle\Big|\\
&\le \mu\sum_{n=l}^{k-1}\|F(x_n) - y\|^{p-1}\,(1+\eta)\,\big(\|F(x_n) - F(x_k)\| + \|F(x_n) - y\|\big)\\
&\le 3\,\mu\,(1+\eta)\sum_{n=l}^{k-1}\|F(x_n) - y\|^p,
\end{aligned}$$
where we used $\|F(x_n) - F(x_k)\| \le \|F(x_n) - y\| + \|F(x_k) - y\| \le 2\,\|F(x_n) - y\|$. For $l \to \infty$ the right-hand side goes to zero. Following the argumentation in [27], $\{x_n\}$ is a Cauchy sequence, and hence $x_n \to \bar{x} \in X$. By the continuity of $F$ and $\|F(x_n) - y\| \to 0$ we have $F(\bar{x}) = y$, which shows that $\bar{x}$ is a solution. $\square$

We can now prove convergence.

Theorem 5.3. We have $x_{k(\delta)} \to \bar{x}$ as $\delta \to 0$, where $\bar{x}$ denotes a solution of equation (1) in $B_\varrho(x^\dagger)$, i.e. we have $F(\bar{x}) = y$.

Proof. Let $\{\delta_j\}$ be a sequence of noise levels with $\delta_j \to 0$ as $j \to \infty$. Without loss of generality we suppose $k(\delta_j) \to \infty$ as $j \to \infty$; otherwise we see that the iterates depend continuously on $y^{\delta_j}$. From Lemma 5.1 we conclude $\|F(x_{k(\delta_j)}) - y^{\delta_j}\| \to 0$ as $j \to \infty$ and hence $F(x_{k(\delta_j)}) \to y$ as $j \to \infty$. Let $j$ be fixed and $\delta = \delta_j$. Then we can find $l < k \le k(\delta)$

with $\|F(x_n) - y^\delta\| \ge \|F(x_k) - y^\delta\|$ for all $l \le n \le k$. This gives
$$\begin{aligned}
\big\langle J_p(x_k) - J_p(x_l), x_k - \bar{x}\big\rangle &= \sum_{n=l}^{k-1}\big\langle J_p(x_{n+1}) - J_p(x_n), x_k - \bar{x}\big\rangle\\
&= \sum_{n=l}^{k-1}\big\langle -\mu\,F'(x_n)^* J_p(F(x_n) - y^\delta) - \beta_n J_p(x_n),\ x_k - \bar{x}\big\rangle\\
&\le \mu\sum_{n=l}^{k-1}\big|\big\langle J_p(F(x_n) - y^\delta), F'(x_n)(x_k - \bar{x})\big\rangle\big| + \sum_{n=l}^{k-1}\beta_n\,\big|\big\langle J_p(x_n), x_k - \bar{x}\big\rangle\big|.
\end{aligned}$$
We show that the right-hand side goes to zero as $k(\delta) \to \infty$ and $l \to \infty$. As in the proof of Theorem 5.2 we estimate
$$\mu\sum_{n=l}^{k-1}\big|\big\langle J_p(F(x_n) - y^\delta), F'(x_n)(x_k - \bar{x})\big\rangle\big| \le 3\,\mu\,(1+\eta)\sum_{n=l}^{k-1}\Big(\|F(x_n) - y^\delta\|^p + \|F(x_n) - y^\delta\|^{p-1}\,\delta\Big).$$
The first sum tends to zero as $l \to \infty$. For the second sum we apply Young's inequality to derive
$$\sum_{n=l}^{k-1}\|F(x_n) - y^\delta\|^{p-1}\,\delta \le \frac{C_l^q}{q}\sum_{n=l}^{k-1}\|F(x_n) - y^\delta\|^p + \frac{1}{p\,C_l^p}\,k\,\delta^p$$
with an arbitrary constant $C_l > 0$; here we additionally used $k \le k(\delta) \le \bar{C}\,\delta^{-p-1}$ from Lemma 5.1. Choosing $C_l$ in dependence of the tail sum $\sum_{n=l}^{k}\|F(x_n) - y^\delta\|^p$, both summands on the right-hand side of the last inequality vanish as $l \to \infty$.

On the other hand, we derive with some generic constant $C > 0$ that
$$\sum_{n=l}^{k-1}\beta_n\,\big|\big\langle J_p(x_n), x_k - \bar{x}\big\rangle\big| \le \sum_{n=l}^{k-1}\beta_n\,\|J_p(x_n)\|\,\|x_k - \bar{x}\| \le C\sum_{n=l}^{k-1}\bar{\beta}_n \longrightarrow 0 \quad\text{as } l \to \infty,$$
since the iterates remain bounded and $\sum_j \bar{\beta}_j < \infty$. We can again apply the argumentation of [27] to prove the assertion. $\square$

6 A non-linear example

We start with the following example, which arises in option pricing theory, see e.g. [24]. The corresponding inverse problem was studied in depth in [17]; see also the references therein for an overview of further aspects of the mathematical foundation of inverse option pricing. We also refer to [20] for some newer results. A European call option on a traded asset is a contract which gives the holder the right to buy the asset at maturity $t > 0$ for a fixed strike price $K > 0$, independent of the actual asset price at time $t > 0$. For a fixed current asset price $X > 0$ and time $t_0 = 0$ we denote by $c(t)$ the fair price of such a call option with maturity $t \ge 0$. Following the generalization of the classical Black-Scholes analysis with a time-dependent volatility function $\sigma(t)$, $t \ge 0$, and constant riskless short-term interest rate $r \ge 0$, we introduce the Black-Scholes function $U_{BS}$ for the variables $X > 0$, $K > 0$, $r \ge 0$, $t \ge 0$ and $s \ge 0$ as
$$U_{BS}(X, K, r, t, s) := \begin{cases} X\,\Phi(d_1) - K\,e^{-rt}\,\Phi(d_2), & s > 0,\\[2pt] \max\{X - K\,e^{-rt},\ 0\}, & s = 0,\end{cases}$$
with
$$d_1 := \frac{\ln\frac{X}{K} + r\,t + \frac{s}{2}}{\sqrt{s}}, \qquad d_2 := d_1 - \sqrt{s},$$
where $\Phi(\xi)$, $\xi \in \mathbb{R}$, denotes the cumulative distribution function of the standard normal distribution. Then the price of the option as a function of the maturity $t \in [0, T]$ is given by the formula
$$c(t) := U_{BS}\Big(X, K, r, t, \int_0^t \sigma^2(\tau)\,d\tau\Big), \qquad t \in [0, T],$$
where $T > 0$ denotes the maximal time horizon of interest. From the investigations in [17] we know that for $t \to 0$ some additional effects occur which need a separate treatment. In order to keep the considerations here simple, we introduce a small time $t_\varepsilon > 0$ and assume the volatility to be known and constant on the interval $[0, t_\varepsilon]$, i.e. $\sigma(t) \equiv \sigma_0 > 0$, $t \in [0, t_\varepsilon]$.
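For the numerical experiments below it is convenient to have $U_{BS}$ available as code. The following is a direct transcription of the formula above, using the error function for $\Phi$; the argument s is the cumulative variance, and the function names are illustrative.

```python
import math

def Phi(xi):
    """standard normal cumulative distribution function"""
    return 0.5 * (1.0 + math.erf(xi / math.sqrt(2.0)))

def U_BS(X, K, r, t, s):
    """Black-Scholes function; s >= 0 is the cumulative variance up to maturity t."""
    if s <= 0.0:
        return max(X - K * math.exp(-r * t), 0.0)
    d1 = (math.log(X / K) + r * t + 0.5 * s) / math.sqrt(s)
    d2 = d1 - math.sqrt(s)
    return X * Phi(d1) - K * math.exp(-r * t) * Phi(d2)
```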

Then, for given $1 < a, b < \infty$, we define the non-linear operator $F : D(F) \subset L^a(t_\varepsilon, T) \to L^b(t_\varepsilon, T)$ as
$$[F(x)](t) := U_{BS}\Big(X, K, r, t, \sigma_0^2\,t_\varepsilon + \int_{t_\varepsilon}^t x(\tau)\,d\tau\Big), \qquad t \in [t_\varepsilon, T],$$
with domain
$$D(F) := \big\{x \in L^a(t_\varepsilon, T) \,:\, x(t) \ge \underline{c} > 0 \ \text{a.e. on } [t_\varepsilon, T]\big\}.$$
We further use the notation $k(t, s) := U_{BS}(X, K, r, t, s)$ and assume $x_0 \in D(F)$. For some radius $\varrho > 0$ and arbitrary given $x \in D(F) \cap B_\varrho(x_0)$ we set $\Delta x := x - x_0$ and
$$[\Delta s](t) := \int_{t_\varepsilon}^t \Delta x(\tau)\,d\tau, \qquad t \in [t_\varepsilon, T].$$
Then we have for $t \in [t_\varepsilon, T]$, by Hölder's inequality, that
$$|[\Delta s](t)| \le (t - t_\varepsilon)^{\frac{a-1}{a}}\,\|\Delta x\|_{L^a} \le (t - t_\varepsilon)^{\frac{a-1}{a}}\,\varrho.$$
Moreover we set
$$s_0(t) := \sigma_0^2\,t_\varepsilon + \int_{t_\varepsilon}^t x_0(\tau)\,d\tau, \qquad t \in [t_\varepsilon, T].$$
Hence we observe that for $t \in [t_\varepsilon, T]$ we can estimate
$$\underline{c}(t) := \max\big\{\sigma_0^2\,t_\varepsilon + \underline{c}\,(t - t_\varepsilon),\ s_0(t) - (t - t_\varepsilon)^{\frac{a-1}{a}}\varrho\big\} \le [s_0 + \tau\,\Delta s](t) \le s_0(t) + (t - t_\varepsilon)^{\frac{a-1}{a}}\varrho =: \overline{c}(t)$$
for all $\tau \in [0, 1]$. Furthermore we see, with $\nu_1 := \ln\frac{X}{K}$, that
$$k_s(t, s) = \frac{X}{2\sqrt{2\pi s}}\,\exp\Big(-\frac{(\nu_1 + r t)^2}{2 s} - \frac{\nu_1 + r t}{2} - \frac{s}{8}\Big) > 0$$
and
$$k_{ss}(t, s) = \frac{X}{4\sqrt{2\pi s}}\,\Big(\frac{(\nu_1 + r t)^2}{s^2} - \frac{1}{s} - \frac{1}{4}\Big)\,\exp\Big(-\frac{(\nu_1 + r t)^2}{2 s} - \frac{\nu_1 + r t}{2} - \frac{s}{8}\Big).$$
Using the bounds $\underline{c}(t) \le [s_0 + \tau\,\Delta s](t) \le \overline{c}(t)$ we can estimate
$$|k_{ss}(t, [s_0 + \tau\,\Delta s](t))| \le \frac{X}{4\sqrt{2\pi\,\underline{c}(t)}}\,\Big(\frac{(\nu_1 + r t)^2}{\underline{c}(t)^2} + \frac{1}{\underline{c}(t)} + \frac{1}{4}\Big)\exp\Big(-\frac{(\nu_1 + r t)^2}{2\,\overline{c}(t)} - \frac{\nu_1 + r t}{2}\Big) =: C_2(t)$$
and
$$k_s(t, [s_0 + \tau\,\Delta s](t)) \ge \frac{X}{2\sqrt{2\pi\,\overline{c}(t)}}\,\exp\Big(-\frac{(\nu_1 + r t)^2}{2\,\underline{c}(t)} - \frac{\nu_1 + r t}{2} - \frac{\overline{c}(t)}{8}\Big) =: C_1(t).$$
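A direct discretization of the forward operator, as used in Section 7 (piecewise constant x on a grid, reusing U_BS from the sketch above; the left-endpoint quadrature is an implementation choice, not from the paper):

```python
import numpy as np

def forward_operator(x, t, t_eps, sigma0, X, K, r):
    """[F(x)](t_j) = U_BS(X, K, r, t_j, sigma0^2 * t_eps + int_{t_eps}^{t_j} x),
    with x piecewise constant on the grid t; U_BS as defined above."""
    dt = np.diff(t, prepend=t_eps)
    s = sigma0 ** 2 * t_eps + np.cumsum(x * dt)      # cumulative variance
    return np.array([U_BS(X, K, r, tj, sj) for tj, sj in zip(t, s)])
```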

We consider the quotient $C_2(t)/C_1(t)$. Inserting the definitions of $C_1$, $C_2$, $\underline{c}$ and $\overline{c}$, one obtains a representation $\frac{C_2(t)}{C_1(t)} = I_1(t)\,\exp(I_2(t))$ with functions $I_1$, $I_2$ composed of $s_0(t) \pm (t - t_\varepsilon)^{\frac{a-1}{a}}\varrho$, $(\nu_1 + rt)^2$ and $\sigma_0^2 t_\varepsilon$, which remain bounded on $[t_\varepsilon, T]$ for $\varrho$ sufficiently small, since $\underline{c}(t) \ge \sigma_0^2\,t_\varepsilon > 0$. This gives
$$L := \max_{t \in [t_\varepsilon, T]}\,\frac{C_2(t)}{C_1(t)} = \max_{t \in [t_\varepsilon, T]}\,I_1(t)\,\exp\big(I_2(t)\big) < \infty.$$
Due to the mean value theorem and the positivity of $k_s$ we have, for some $\tau = \tau(t) \in [0, 1]$,
$$\big|[F(x_0 + \Delta x) - F(x_0)](t)\big| = \big|k(t, [s_0 + \Delta s](t)) - k(t, s_0(t))\big| = k_s\big(t, [s_0 + \tau\,\Delta s](t)\big)\,|[\Delta s](t)| \ge C_1(t)\,|[\Delta s](t)|$$
a.e. on $[t_\varepsilon, T]$. This leads, with $T_{x_0}(\Delta x) := F(x_0 + \Delta x) - F(x_0) - F'(x_0)\Delta x$, to
$$\begin{aligned}
\big|[T_{x_0}(\Delta x)](t)\big| &= \Big|\int_0^1\Big(k_s\big(t, [s_0 + \tau\,\Delta s](t)\big) - k_s(t, s_0(t))\Big)\,[\Delta s](t)\,d\tau\Big|\\
&\le C_2(t)\,|[\Delta s](t)|^2 \le C_2(t)\,(t - t_\varepsilon)^{\frac{a-1}{a}}\,\|\Delta x\|_{L^a}\,|[\Delta s](t)|\\
&\le \frac{C_2(t)}{C_1(t)}\,(t - t_\varepsilon)^{\frac{a-1}{a}}\,\|\Delta x\|_{L^a}\,\big|[F(x_0 + \Delta x) - F(x_0)](t)\big| \quad\text{a.e. on } [t_\varepsilon, T].
\end{aligned}$$
This proves
$$\|T_{x_0}(\Delta x)\|_{L^b} \le (T - t_\varepsilon)^{\frac{a-1}{a}}\,L\,\|\Delta x\|_{L^a}\,\|F(x_0 + \Delta x) - F(x_0)\|_{L^b}.$$
In particular, condition (A3) holds for $\varrho > 0$ sufficiently small.
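The Fréchet derivative entering (A3) acts as $[F'(x)h](t) = k_s(t, s(t))\,\int_{t_\varepsilon}^t h(\tau)\,d\tau$ (cf. Section 7). A matching discrete sketch of the derivative action and of a Euclidean surrogate of its adjoint follows; the helper names are hypothetical, and the adjoint is taken with respect to the grid inner product rather than the $L^a$/$L^b$ duality of the paper.

```python
import numpy as np

def k_s(X, K, r, t, s):
    """partial derivative of k(t, s) = U_BS(X, K, r, t, s) with respect to s"""
    nu1 = np.log(X / K)
    return X / (2.0 * np.sqrt(2.0 * np.pi * s)) * np.exp(
        -(nu1 + r * t) ** 2 / (2.0 * s) - (nu1 + r * t) / 2.0 - s / 8.0)

def derivative_action(x, h, t, t_eps, sigma0, X, K, r):
    """[F'(x) h](t_j) = k_s(t_j, s(t_j)) * int_{t_eps}^{t_j} h"""
    dt = np.diff(t, prepend=t_eps)
    s = sigma0 ** 2 * t_eps + np.cumsum(x * dt)
    return k_s(X, K, r, t, s) * np.cumsum(h * dt)

def derivative_adjoint(x, w, t, t_eps, sigma0, X, K, r):
    """Euclidean adjoint of h -> F'(x) h on the grid:
    (A^T w)_i = dt_i * sum_{j >= i} k_s(t_j, s_j) w_j."""
    dt = np.diff(t, prepend=t_eps)
    s = sigma0 ** 2 * t_eps + np.cumsum(x * dt)
    v = k_s(X, K, r, t, s) * w
    return dt * np.cumsum(v[::-1])[::-1]      # reversed cumulative sum
```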

7 Numerical results

In this section we present numerical results for the non-linear operator considered in the last section. We choose the parameters $X = 1$, $K = 0.85$, $r = 0.05$, $t_\varepsilon = 0.1$ and $T = 1$. Moreover, we assume that the exact solution is given by
$$x^\dagger(t) = t + \tfrac{1}{2}, \qquad t \in [0.1, 1],$$
and finally we set $\sigma_0^2 := x^\dagger(0.1) = 0.6$. As spaces we take $X = L^{1.2}(t_\varepsilon, T)$ and $Y = L^2(t_\varepsilon, T)$, where $X = L^{1.2}$ was chosen arbitrarily as an example of a non-Hilbert space. For the discretization we introduce $t_j := 0.1 + (j/N)\,0.9$, $0 \le j \le N = 500$, and use the values $y_j := [F(x^\dagger)](t_j)$, $1 \le j \le N$, as given discretized data $y = (y_1, \ldots, y_N)^T \in \mathbb{R}^N$. The solutions are discretized as piecewise constant functions on the subintervals $(t_{j-1}, t_j)$, $1 \le j \le N$. The initial guess $x_0 \in D(F)$ was chosen such that we can set $\eta := 0.5$. For our experiments we have chosen relative noise levels in the range shown in Table 1.

In a first variant we used the algorithm without assuming an approximate source condition; hence we set $S \sim \delta^{-1}$. In order to ensure convergence we additionally introduced the bounds $\bar{\beta}_n := c^n$ with $c := 0.99$ for the parameters $\beta_n$, $n \ge 0$. For the second set of calculations we applied some knowledge about the approximate source condition and the corresponding distance function. We set $S \sim \delta^{-1/3}$ due to the following arguments: the operator $F$ is differentiable in $x^\dagger$, and the derivative $F'(x^\dagger) : X \to Y$ is given via
$$[F'(x^\dagger)h](t) := k_s\big(t, [A x^\dagger](t)\big)\,[A h](t), \qquad [A h](t) := \int_{t_\varepsilon}^t h(\tau)\,d\tau, \quad t \in [0.1, 1],$$
for all $h \in X$. We also observe that by the specific choice of the exact solution we have $x^\dagger, J_p(x^\dagger) \in L^2(0.1, 1)$, and the structure of the derivatives implies that all iterates $x_n$ belong to $L^2(0.1, 1)$; in particular $x_n^* \in L^2(0.1, 1)$ holds. In that case we can apply Lemma 3.2 also in the Hilbert space setting $X = Y = L^2(0.1, 1)$. In this Hilbert space setting, with $A : L^2(0.1, 1) \to L^2(0.1, 1)$, we know from [19] that $x^\dagger \in R\big((A^* A)^\nu\big)$ for all $\nu < \frac{1}{4}$. Since $k_s(\cdot, [A x^\dagger](\cdot))$ is bounded and bounded away from zero, the mapping $z \mapsto k_s(\cdot, [A x^\dagger](\cdot))\,z$, $z \in L^2(0.1, 1)$, is continuously invertible. Therefore we can assume that the source condition is determined by the operator $A$. Keeping in mind Example 4.6(a) with $\nu = \frac{1}{4}$, we obtain the already mentioned choice $S \sim \delta^{-1/3}$. Moreover, from the theoretical considerations we expect the convergence rate $\|x_{k(\delta)} - x^\dagger\| \sim \delta^{1/3}$.

The numerical results are presented in Figure 1 and Table 1. Comparing both calculations we see that the assumed approximate source condition in fact improves the achieved reconstruction of the exact solution $x^\dagger$, and the convergence rate in that case is close to the expected rate $\delta^{1/3}$. On the other hand, by introducing the upper bounds $\bar{\beta}_n$, $n \ge 0$, we can expect from the results of [26] a similar convergence rate in that situation, at least in a Hilbert space setting.
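Putting the sketches of the previous sections together, one run of this experiment might look as follows; all function names refer to the earlier sketches, and the noise model, step size and remaining constants are illustrative assumptions, not the values used for the tables below.

```python
import numpy as np

rng = np.random.default_rng(1)
X_, K_, r_, t_eps, T_ = 1.0, 0.85, 0.05, 0.1, 1.0
N = 500
t = t_eps + (np.arange(1, N + 1) / N) * (T_ - t_eps)
x_exact = t + 0.5                                  # exact solution x*(t) = t + 1/2
sigma0 = np.sqrt(0.6)                              # sigma0^2 = x*(t_eps) = 0.6

y = forward_operator(x_exact, t, t_eps, sigma0, X_, K_, r_)
delta_rel = 1e-3
noise = rng.standard_normal(N)
y_delta = y + delta_rel * np.linalg.norm(y) * noise / np.linalg.norm(noise)
delta = delta_rel * np.linalg.norm(y)

S = delta ** (-1.0 / 3.0)                          # a-priori choice, Example 4.6(a)
x_rec, k = modified_landweber(
    lambda x: forward_operator(x, t, t_eps, sigma0, X_, K_, r_),
    lambda x, w: derivative_adjoint(x, w, t, t_eps, sigma0, X_, K_, r_),
    y_delta, delta, x0=np.full(N, 0.6),
    p=2.0, r=1.2, q=2.0, mu=1e-2, tau=1.0, C=1.0, R0=1.0, S=S)
print(k, np.linalg.norm(x_rec - x_exact) / np.linalg.norm(x_exact))
```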

Figure 1: Relative reconstruction error $\|x_{k(\delta)} - x^\dagger\|/\|x^\dagger\|$ depending on the noise level $\delta$, for the parameter choices $S \sim \delta^{-1}$ and $S \sim \delta^{-1/3}$.

Table 1: Relative reconstruction error depending on the noise level $\delta$, for the parameter choices $S \sim \delta^{-1}$ and $S \sim \delta^{-1/3}$.

8 Conclusions

We point out that the main results of this paper are the theoretical results contained in Sections 4 and 5; nevertheless, the iteration is computable. If we have the information that the exact source condition $J_p(x^\dagger) = F'(x^\dagger)^*\,\omega$ for some $\omega \in Y^*$ is fulfilled, and we can give some reasonable bound on $\|\omega\|$, then the method of this paper should be used, to ensure that this information really translates into convergence rates. If only the weaker generalized source conditions are fulfilled, but at the same time some more information, i.e. information on $d(S)$, is given, then again our method may be used for regularization, since again the information on the source condition is translated into convergence rates. However, since the method of this paper can be considered as a gradient method for minimizing Tikhonov functionals with fixed step width but varying regularization parameter, the convergence of the algorithm turns out to be rather slow in some examples. In particular, the method introduced in [27] and the newly presented accelerated versions of [28] (SESOP and RESESOP) promise faster convergence; however, no convergence rates for $\delta \to 0$ are presented therein. So it is part of ongoing research to improve the speed of convergence of the suggested algorithm.

References

[1] A. Bakushinskii, The problem of the convergence of the iteratively regularized Gauss-Newton method, Comput. Math. Math. Phys., 32 (1992).

[2] A. Bakushinskii and M. Kokurin, Iterative Methods for Approximate Solutions of Inverse Problems, Springer, Dordrecht, 2004.

[3] J. Baumeister, Stable Solutions of Inverse Problems, Vieweg, Braunschweig, 1987.

[4] B. Blaschke, A. Neubauer, and O. Scherzer, On convergence rates for the iteratively regularized Gauss-Newton method, IMA J. Numer. Anal., 17 (1997).

[5] T. Bonesky, K. Kazimierski, P. Maass, F. Schöpfer, and T. Schuster, Minimization of Tikhonov functionals in Banach spaces, to appear in Abstract and Applied Analysis, 2007.

[6] K. Bredies and D. A. Lorenz, Iterated hard shrinkage for minimization problems with sparsity constraints, to appear in SIAM Journal on Scientific Computing, 2007.

[7] I. Cioranescu, Geometry of Banach Spaces, Duality Mappings and Nonlinear Problems, Kluwer, Dordrecht, 1990.

[8] I. Daubechies, M. Defrise, and C. De Mol, An iterative thresholding algorithm for linear inverse problems with a sparsity constraint, Communications on Pure and Applied Mathematics, 57 (2004), pp. 1413-1457.

[9] R. Deville, G. Godefroy, and V. Zizler, Smoothness and Renormings in Banach Spaces, Pitman Monographs and Surveys in Pure and Applied Mathematics, Longman Scientific & Technical, Harlow, UK, 1993.

[10] J. C. Dunn, Global and asymptotic convergence rate estimates for a class of projected gradient processes, SIAM Journal on Control and Optimization, 19 (1981).

[11] H. W. Engl, M. Hanke, and A. Neubauer, Regularization of Inverse Problems, vol. 375 of Mathematics and its Applications, Kluwer Academic Publishers Group, Dordrecht, 2000.

[12] M. Hanke, A. Neubauer, and O. Scherzer, A convergence analysis of the Landweber iteration for nonlinear ill-posed problems, Numerische Mathematik, 72 (1995), pp. 21-37.

[13] O. Hanner, On the uniform convexity of L^p and l^p, Ark. Mat., 3 (1956).

[14] T. Hein, Convergence rates for regularization of ill-posed problems in Banach spaces by approximative source conditions, Inverse Problems, 24 (2008).

[15] T. Hein, On Tikhonov regularization in Banach spaces - optimal convergence rates, Applicable Analysis, to appear.

[16] T. Hein, Regularization in Banach spaces - convergence rates by approximative source conditions, J. Inv. Ill-Posed Problems, 17 (2009).

[17] T. Hein and B. Hofmann, On the nature of ill-posedness of an inverse problem arising in option pricing, Inverse Problems, 19 (2003).

[18] B. Hofmann, Approximate source conditions in Tikhonov-Phillips regularization and consequences for inverse problems with multiplication operators, Mathematical Methods in the Applied Sciences, 29 (2006).

[19] B. Hofmann, D. Düvelmeyer, and K. Krumbiegel, Approximate source conditions in regularization - new analytical results and numerical studies, Journal Mathematical Modelling and Analysis, 11 (2006).

[20] B. Hofmann, B. Kaltenbacher, C. Pöschl, and O. Scherzer, A convergence rates result for Tikhonov regularization in Banach spaces with non-smooth operators, Inverse Problems, 23 (2007).

[21] B. Hofmann and P. Mathé, Analysis of profile functions for general linear regularization methods, SIAM J. Numer. Anal., 45 (2007).

[22] T. Hohage, Logarithmic convergence rates of the iteratively regularized Gauss-Newton method for an inverse potential and an inverse scattering problem, Inverse Problems, 13 (1997).

[23] B. Kaltenbacher, A. Neubauer, and O. Scherzer, Iterative Regularization Methods for Nonlinear Ill-Posed Problems, de Gruyter, Berlin, 2008.

[24] Y. Kwok, Mathematical Models of Financial Derivatives, Springer, Singapore, 1998.

[25] A. K. Louis, Inverse und schlecht gestellte Probleme, B. G. Teubner, Stuttgart, 1989.

[26] O. Scherzer, A modified Landweber iteration for solving parameter estimation problems, Appl. Math. Optim., 38 (1998).

[27] F. Schöpfer, A. K. Louis, and T. Schuster, Nonlinear iterative methods for linear ill-posed problems in Banach spaces, Inverse Problems, 22 (2006).

[28] F. Schöpfer, T. Schuster, and A. K. Louis, Metric and Bregman projections onto affine subspaces and their computation via sequential subspace optimization methods, Journal of Inverse and Ill-Posed Problems, 16 (2008).

[29] D. Werner, Funktionalanalysis (in German), Springer, Berlin, 1997.

[30] Z.-B. Xu and G. F. Roach, Characteristic inequalities of uniformly convex and uniformly smooth Banach spaces, J. Math. Anal. Appl., 157 (1991).


More information

LARGE DEVIATIONS OF TYPICAL LINEAR FUNCTIONALS ON A CONVEX BODY WITH UNCONDITIONAL BASIS. S. G. Bobkov and F. L. Nazarov. September 25, 2011

LARGE DEVIATIONS OF TYPICAL LINEAR FUNCTIONALS ON A CONVEX BODY WITH UNCONDITIONAL BASIS. S. G. Bobkov and F. L. Nazarov. September 25, 2011 LARGE DEVIATIONS OF TYPICAL LINEAR FUNCTIONALS ON A CONVEX BODY WITH UNCONDITIONAL BASIS S. G. Bobkov and F. L. Nazarov September 25, 20 Abstract We study large deviations of linear functionals on an isotropic

More information

On the split equality common fixed point problem for quasi-nonexpansive multi-valued mappings in Banach spaces

On the split equality common fixed point problem for quasi-nonexpansive multi-valued mappings in Banach spaces Available online at www.tjnsa.com J. Nonlinear Sci. Appl. 9 (06), 5536 5543 Research Article On the split equality common fixed point problem for quasi-nonexpansive multi-valued mappings in Banach spaces

More information

Regularization for a Common Solution of a System of Ill-Posed Equations Involving Linear Bounded Mappings 1

Regularization for a Common Solution of a System of Ill-Posed Equations Involving Linear Bounded Mappings 1 Applied Mathematical Sciences, Vol. 5, 2011, no. 76, 3781-3788 Regularization for a Common Solution of a System of Ill-Posed Equations Involving Linear Bounded Mappings 1 Nguyen Buong and Nguyen Dinh Dung

More information

Synchronal Algorithm For a Countable Family of Strict Pseudocontractions in q-uniformly Smooth Banach Spaces

Synchronal Algorithm For a Countable Family of Strict Pseudocontractions in q-uniformly Smooth Banach Spaces Int. Journal of Math. Analysis, Vol. 8, 2014, no. 15, 727-745 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/10.12988/ijma.2014.212287 Synchronal Algorithm For a Countable Family of Strict Pseudocontractions

More information

c 2011 International Press Vol. 18, No. 1, pp , March DENNIS TREDE

c 2011 International Press Vol. 18, No. 1, pp , March DENNIS TREDE METHODS AND APPLICATIONS OF ANALYSIS. c 2011 International Press Vol. 18, No. 1, pp. 105 110, March 2011 007 EXACT SUPPORT RECOVERY FOR LINEAR INVERSE PROBLEMS WITH SPARSITY CONSTRAINTS DENNIS TREDE Abstract.

More information

CONVERGENCE OF APPROXIMATING FIXED POINTS FOR MULTIVALUED NONSELF-MAPPINGS IN BANACH SPACES. Jong Soo Jung. 1. Introduction

CONVERGENCE OF APPROXIMATING FIXED POINTS FOR MULTIVALUED NONSELF-MAPPINGS IN BANACH SPACES. Jong Soo Jung. 1. Introduction Korean J. Math. 16 (2008), No. 2, pp. 215 231 CONVERGENCE OF APPROXIMATING FIXED POINTS FOR MULTIVALUED NONSELF-MAPPINGS IN BANACH SPACES Jong Soo Jung Abstract. Let E be a uniformly convex Banach space

More information

NONTRIVIAL SOLUTIONS FOR SUPERQUADRATIC NONAUTONOMOUS PERIODIC SYSTEMS. Shouchuan Hu Nikolas S. Papageorgiou. 1. Introduction

NONTRIVIAL SOLUTIONS FOR SUPERQUADRATIC NONAUTONOMOUS PERIODIC SYSTEMS. Shouchuan Hu Nikolas S. Papageorgiou. 1. Introduction Topological Methods in Nonlinear Analysis Journal of the Juliusz Schauder Center Volume 34, 29, 327 338 NONTRIVIAL SOLUTIONS FOR SUPERQUADRATIC NONAUTONOMOUS PERIODIC SYSTEMS Shouchuan Hu Nikolas S. Papageorgiou

More information

THEOREMS, ETC., FOR MATH 515

THEOREMS, ETC., FOR MATH 515 THEOREMS, ETC., FOR MATH 515 Proposition 1 (=comment on page 17). If A is an algebra, then any finite union or finite intersection of sets in A is also in A. Proposition 2 (=Proposition 1.1). For every

More information

STAT 200C: High-dimensional Statistics

STAT 200C: High-dimensional Statistics STAT 200C: High-dimensional Statistics Arash A. Amini May 30, 2018 1 / 57 Table of Contents 1 Sparse linear models Basis Pursuit and restricted null space property Sufficient conditions for RNS 2 / 57

More information

ON THE DYNAMICAL SYSTEMS METHOD FOR SOLVING NONLINEAR EQUATIONS WITH MONOTONE OPERATORS

ON THE DYNAMICAL SYSTEMS METHOD FOR SOLVING NONLINEAR EQUATIONS WITH MONOTONE OPERATORS PROCEEDINGS OF THE AMERICAN MATHEMATICAL SOCIETY Volume, Number, Pages S -9939(XX- ON THE DYNAMICAL SYSTEMS METHOD FOR SOLVING NONLINEAR EQUATIONS WITH MONOTONE OPERATORS N. S. HOANG AND A. G. RAMM (Communicated

More information

Unbiased Risk Estimation as Parameter Choice Rule for Filter-based Regularization Methods

Unbiased Risk Estimation as Parameter Choice Rule for Filter-based Regularization Methods Unbiased Risk Estimation as Parameter Choice Rule for Filter-based Regularization Methods Frank Werner 1 Statistical Inverse Problems in Biophysics Group Max Planck Institute for Biophysical Chemistry,

More information

CONTROLLABILITY OF NONLINEAR DISCRETE SYSTEMS

CONTROLLABILITY OF NONLINEAR DISCRETE SYSTEMS Int. J. Appl. Math. Comput. Sci., 2002, Vol.2, No.2, 73 80 CONTROLLABILITY OF NONLINEAR DISCRETE SYSTEMS JERZY KLAMKA Institute of Automatic Control, Silesian University of Technology ul. Akademicka 6,

More information

WEAK CONVERGENCE OF RESOLVENTS OF MAXIMAL MONOTONE OPERATORS AND MOSCO CONVERGENCE

WEAK CONVERGENCE OF RESOLVENTS OF MAXIMAL MONOTONE OPERATORS AND MOSCO CONVERGENCE Fixed Point Theory, Volume 6, No. 1, 2005, 59-69 http://www.math.ubbcluj.ro/ nodeacj/sfptcj.htm WEAK CONVERGENCE OF RESOLVENTS OF MAXIMAL MONOTONE OPERATORS AND MOSCO CONVERGENCE YASUNORI KIMURA Department

More information

Convergence Theorems of Approximate Proximal Point Algorithm for Zeroes of Maximal Monotone Operators in Hilbert Spaces 1

Convergence Theorems of Approximate Proximal Point Algorithm for Zeroes of Maximal Monotone Operators in Hilbert Spaces 1 Int. Journal of Math. Analysis, Vol. 1, 2007, no. 4, 175-186 Convergence Theorems of Approximate Proximal Point Algorithm for Zeroes of Maximal Monotone Operators in Hilbert Spaces 1 Haiyun Zhou Institute

More information

Preconditioned Newton methods for ill-posed problems

Preconditioned Newton methods for ill-posed problems Preconditioned Newton methods for ill-posed problems Dissertation zur Erlangung des Doktorgrades der Mathematisch-Naturwissenschaftlichen Fakultäten der Georg-August-Universität zu Göttingen vorgelegt

More information

Fourier Transform & Sobolev Spaces

Fourier Transform & Sobolev Spaces Fourier Transform & Sobolev Spaces Michael Reiter, Arthur Schuster Summer Term 2008 Abstract We introduce the concept of weak derivative that allows us to define new interesting Hilbert spaces the Sobolev

More information

Numerical Methods for Large-Scale Nonlinear Systems

Numerical Methods for Large-Scale Nonlinear Systems Numerical Methods for Large-Scale Nonlinear Systems Handouts by Ronald H.W. Hoppe following the monograph P. Deuflhard Newton Methods for Nonlinear Problems Springer, Berlin-Heidelberg-New York, 2004 Num.

More information

Regularity of the density for the stochastic heat equation

Regularity of the density for the stochastic heat equation Regularity of the density for the stochastic heat equation Carl Mueller 1 Department of Mathematics University of Rochester Rochester, NY 15627 USA email: cmlr@math.rochester.edu David Nualart 2 Department

More information

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability...

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability... Functional Analysis Franck Sueur 2018-2019 Contents 1 Metric spaces 1 1.1 Definitions........................................ 1 1.2 Completeness...................................... 3 1.3 Compactness......................................

More information

A derivative-free nonmonotone line search and its application to the spectral residual method

A derivative-free nonmonotone line search and its application to the spectral residual method IMA Journal of Numerical Analysis (2009) 29, 814 825 doi:10.1093/imanum/drn019 Advance Access publication on November 14, 2008 A derivative-free nonmonotone line search and its application to the spectral

More information

INEXACT NEWTON REGULARIZATION USING CONJUGATE GRADIENTS AS INNER ITERATION

INEXACT NEWTON REGULARIZATION USING CONJUGATE GRADIENTS AS INNER ITERATION INEXACT NEWTON REGULARIZATION USING CONJUGATE GRADIENTS AS INNER ITERATION ANDREAS RIEDER August 2004 Abstract. In our papers [Inverse Problems, 15, 309-327,1999] and [Numer. Math., 88, 347-365, 2001]

More information

Newton Method with Adaptive Step-Size for Under-Determined Systems of Equations

Newton Method with Adaptive Step-Size for Under-Determined Systems of Equations Newton Method with Adaptive Step-Size for Under-Determined Systems of Equations Boris T. Polyak Andrey A. Tremba V.A. Trapeznikov Institute of Control Sciences RAS, Moscow, Russia Profsoyuznaya, 65, 117997

More information

Accelerated Projected Steepest Descent Method for Nonlinear Inverse Problems with Sparsity Constraints

Accelerated Projected Steepest Descent Method for Nonlinear Inverse Problems with Sparsity Constraints Accelerated Projected Steepest Descent Method for Nonlinear Inverse Problems with Sparsity Constraints Gerd Teschke Claudia Borries July 3, 2009 Abstract This paper is concerned with the construction of

More information

A Note on the Class of Superreflexive Almost Transitive Banach Spaces

A Note on the Class of Superreflexive Almost Transitive Banach Spaces E extracta mathematicae Vol. 23, Núm. 1, 1 6 (2008) A Note on the Class of Superreflexive Almost Transitive Banach Spaces Jarno Talponen University of Helsinki, Department of Mathematics and Statistics,

More information

HARNACK S INEQUALITY FOR GENERAL SOLUTIONS WITH NONSTANDARD GROWTH

HARNACK S INEQUALITY FOR GENERAL SOLUTIONS WITH NONSTANDARD GROWTH Annales Academiæ Scientiarum Fennicæ Mathematica Volumen 37, 2012, 571 577 HARNACK S INEQUALITY FOR GENERAL SOLUTIONS WITH NONSTANDARD GROWTH Olli Toivanen University of Eastern Finland, Department of

More information

1 Math 241A-B Homework Problem List for F2015 and W2016

1 Math 241A-B Homework Problem List for F2015 and W2016 1 Math 241A-B Homework Problem List for F2015 W2016 1.1 Homework 1. Due Wednesday, October 7, 2015 Notation 1.1 Let U be any set, g be a positive function on U, Y be a normed space. For any f : U Y let

More information

Institut für Numerische und Angewandte Mathematik

Institut für Numerische und Angewandte Mathematik Institut für Numerische und Angewandte Mathematik Iteratively regularized Newton-type methods with general data mist functionals and applications to Poisson data T. Hohage, F. Werner Nr. 20- Preprint-Serie

More information

THROUGHOUT this paper, we let C be a nonempty

THROUGHOUT this paper, we let C be a nonempty Strong Convergence Theorems of Multivalued Nonexpansive Mappings and Maximal Monotone Operators in Banach Spaces Kriengsak Wattanawitoon, Uamporn Witthayarat and Poom Kumam Abstract In this paper, we prove

More information

ORACLE INEQUALITY FOR A STATISTICAL RAUS GFRERER TYPE RULE

ORACLE INEQUALITY FOR A STATISTICAL RAUS GFRERER TYPE RULE ORACLE INEQUALITY FOR A STATISTICAL RAUS GFRERER TYPE RULE QINIAN JIN AND PETER MATHÉ Abstract. We consider statistical linear inverse problems in Hilbert spaces. Approximate solutions are sought within

More information

A Concise Course on Stochastic Partial Differential Equations

A Concise Course on Stochastic Partial Differential Equations A Concise Course on Stochastic Partial Differential Equations Michael Röckner Reference: C. Prevot, M. Röckner: Springer LN in Math. 1905, Berlin (2007) And see the references therein for the original

More information

Regularization and Inverse Problems

Regularization and Inverse Problems Regularization and Inverse Problems Caroline Sieger Host Institution: Universität Bremen Home Institution: Clemson University August 5, 2009 Caroline Sieger (Bremen and Clemson) Regularization and Inverse

More information

A convergence result for an Outer Approximation Scheme

A convergence result for an Outer Approximation Scheme A convergence result for an Outer Approximation Scheme R. S. Burachik Engenharia de Sistemas e Computação, COPPE-UFRJ, CP 68511, Rio de Janeiro, RJ, CEP 21941-972, Brazil regi@cos.ufrj.br J. O. Lopes Departamento

More information

ITERATIVE SCHEMES FOR APPROXIMATING SOLUTIONS OF ACCRETIVE OPERATORS IN BANACH SPACES SHOJI KAMIMURA AND WATARU TAKAHASHI. Received December 14, 1999

ITERATIVE SCHEMES FOR APPROXIMATING SOLUTIONS OF ACCRETIVE OPERATORS IN BANACH SPACES SHOJI KAMIMURA AND WATARU TAKAHASHI. Received December 14, 1999 Scientiae Mathematicae Vol. 3, No. 1(2000), 107 115 107 ITERATIVE SCHEMES FOR APPROXIMATING SOLUTIONS OF ACCRETIVE OPERATORS IN BANACH SPACES SHOJI KAMIMURA AND WATARU TAKAHASHI Received December 14, 1999

More information

GENERAL NONCONVEX SPLIT VARIATIONAL INEQUALITY PROBLEMS. Jong Kyu Kim, Salahuddin, and Won Hee Lim

GENERAL NONCONVEX SPLIT VARIATIONAL INEQUALITY PROBLEMS. Jong Kyu Kim, Salahuddin, and Won Hee Lim Korean J. Math. 25 (2017), No. 4, pp. 469 481 https://doi.org/10.11568/kjm.2017.25.4.469 GENERAL NONCONVEX SPLIT VARIATIONAL INEQUALITY PROBLEMS Jong Kyu Kim, Salahuddin, and Won Hee Lim Abstract. In this

More information

Research Article Strong Convergence of a Projected Gradient Method

Research Article Strong Convergence of a Projected Gradient Method Applied Mathematics Volume 2012, Article ID 410137, 10 pages doi:10.1155/2012/410137 Research Article Strong Convergence of a Projected Gradient Method Shunhou Fan and Yonghong Yao Department of Mathematics,

More information

Institut für Mathematik

Institut für Mathematik U n i v e r s i t ä t A u g s b u r g Institut für Mathematik Martin Rasmussen, Peter Giesl A Note on Almost Periodic Variational Equations Preprint Nr. 13/28 14. März 28 Institut für Mathematik, Universitätsstraße,

More information

ON ILL-POSEDNESS OF NONPARAMETRIC INSTRUMENTAL VARIABLE REGRESSION WITH CONVEXITY CONSTRAINTS

ON ILL-POSEDNESS OF NONPARAMETRIC INSTRUMENTAL VARIABLE REGRESSION WITH CONVEXITY CONSTRAINTS ON ILL-POSEDNESS OF NONPARAMETRIC INSTRUMENTAL VARIABLE REGRESSION WITH CONVEXITY CONSTRAINTS Olivier Scaillet a * This draft: July 2016. Abstract This note shows that adding monotonicity or convexity

More information

On boundary value problems for fractional integro-differential equations in Banach spaces

On boundary value problems for fractional integro-differential equations in Banach spaces Malaya J. Mat. 3425 54 553 On boundary value problems for fractional integro-differential equations in Banach spaces Sabri T. M. Thabet a, and Machindra B. Dhakne b a,b Department of Mathematics, Dr. Babasaheb

More information

On the Midpoint Method for Solving Generalized Equations

On the Midpoint Method for Solving Generalized Equations Punjab University Journal of Mathematics (ISSN 1016-56) Vol. 40 (008) pp. 63-70 On the Midpoint Method for Solving Generalized Equations Ioannis K. Argyros Cameron University Department of Mathematics

More information

An Iteratively Regularized Projection Method for Nonlinear Ill-posed Problems

An Iteratively Regularized Projection Method for Nonlinear Ill-posed Problems Int. J. Contemp. Math. Sciences, Vol. 5, 2010, no. 52, 2547-2565 An Iteratively Regularized Projection Method for Nonlinear Ill-posed Problems Santhosh George Department of Mathematical and Computational

More information

Part A: An introduction with four exemplary problems and some discussion about ill-posedness concepts

Part A: An introduction with four exemplary problems and some discussion about ill-posedness concepts Part A: An introduction with four exemplary problems and some discussion about ill-posedness concepts B ERND H OFMANN TU Chemnitz Faculty of Mathematics D-97 Chemnitz, GERMANY Thematic Program Parameter

More information

Convergence rate estimates for the gradient differential inclusion

Convergence rate estimates for the gradient differential inclusion Convergence rate estimates for the gradient differential inclusion Osman Güler November 23 Abstract Let f : H R { } be a proper, lower semi continuous, convex function in a Hilbert space H. The gradient

More information

BREGMAN DISTANCES, TOTALLY

BREGMAN DISTANCES, TOTALLY BREGMAN DISTANCES, TOTALLY CONVEX FUNCTIONS AND A METHOD FOR SOLVING OPERATOR EQUATIONS IN BANACH SPACES DAN BUTNARIU AND ELENA RESMERITA January 18, 2005 Abstract The aim of this paper is twofold. First,

More information

Examples of Dual Spaces from Measure Theory

Examples of Dual Spaces from Measure Theory Chapter 9 Examples of Dual Spaces from Measure Theory We have seen that L (, A, µ) is a Banach space for any measure space (, A, µ). We will extend that concept in the following section to identify an

More information

b i (µ, x, s) ei ϕ(x) µ s (dx) ds (2) i=1

b i (µ, x, s) ei ϕ(x) µ s (dx) ds (2) i=1 NONLINEAR EVOLTION EQATIONS FOR MEASRES ON INFINITE DIMENSIONAL SPACES V.I. Bogachev 1, G. Da Prato 2, M. Röckner 3, S.V. Shaposhnikov 1 The goal of this work is to prove the existence of a solution to

More information

A nonlinear equation is any equation of the form. f(x) = 0. A nonlinear equation can have any number of solutions (finite, countable, uncountable)

A nonlinear equation is any equation of the form. f(x) = 0. A nonlinear equation can have any number of solutions (finite, countable, uncountable) Nonlinear equations Definition A nonlinear equation is any equation of the form where f is a nonlinear function. Nonlinear equations x 2 + x + 1 = 0 (f : R R) f(x) = 0 (x cos y, 2y sin x) = (0, 0) (f :

More information

LECTURE 1: SOURCES OF ERRORS MATHEMATICAL TOOLS A PRIORI ERROR ESTIMATES. Sergey Korotov,

LECTURE 1: SOURCES OF ERRORS MATHEMATICAL TOOLS A PRIORI ERROR ESTIMATES. Sergey Korotov, LECTURE 1: SOURCES OF ERRORS MATHEMATICAL TOOLS A PRIORI ERROR ESTIMATES Sergey Korotov, Institute of Mathematics Helsinki University of Technology, Finland Academy of Finland 1 Main Problem in Mathematical

More information

CONVERGENCE OF THE STEEPEST DESCENT METHOD FOR ACCRETIVE OPERATORS

CONVERGENCE OF THE STEEPEST DESCENT METHOD FOR ACCRETIVE OPERATORS PROCEEDINGS OF THE AMERICAN MATHEMATICAL SOCIETY Volume 127, Number 12, Pages 3677 3683 S 0002-9939(99)04975-8 Article electronically published on May 11, 1999 CONVERGENCE OF THE STEEPEST DESCENT METHOD

More information

Parameter Identification

Parameter Identification Lecture Notes Parameter Identification Winter School Inverse Problems 25 Martin Burger 1 Contents 1 Introduction 3 2 Examples of Parameter Identification Problems 5 2.1 Differentiation of Data...............................

More information

Progetto di Ricerca GNCS 2016 PING Problemi Inversi in Geofisica Firenze, 6 aprile Regularized nonconvex minimization for image restoration

Progetto di Ricerca GNCS 2016 PING Problemi Inversi in Geofisica Firenze, 6 aprile Regularized nonconvex minimization for image restoration Progetto di Ricerca GNCS 2016 PING Problemi Inversi in Geofisica Firenze, 6 aprile 2016 Regularized nonconvex minimization for image restoration Claudio Estatico Joint work with: Fabio Di Benedetto, Flavia

More information

Subdifferential representation of convex functions: refinements and applications

Subdifferential representation of convex functions: refinements and applications Subdifferential representation of convex functions: refinements and applications Joël Benoist & Aris Daniilidis Abstract Every lower semicontinuous convex function can be represented through its subdifferential

More information