Endogenous strategies for choosing the Lagrange multipliers in nonstationary iterated Tikhonov method for ill-posed problems


R. Boiger, A. Leitão, B. F. Svaiter

May 20, 2016

Abstract

In this article we propose a novel nonstationary iterated Tikhonov (nit) type method for obtaining stable approximate solutions to ill-posed operator equations modeled by linear operators acting between Hilbert spaces. Geometrical properties of the problem are used to derive an endogenous strategy for choosing a sequence of regularization parameters (Lagrange multipliers) for the nit iteration. Convergence analysis for this new method is provided. Numerical experiments are presented for two distinct applications: I) a 2D elliptic parameter identification problem (inverse potential problem); II) an image deblurring problem. The obtained results validate the efficiency of our method compared with standard implementations of the nit method (where a geometrical choice is typically used for the sequence of Lagrange multipliers).

Keywords. Ill-posed problems; Linear operators; Iterated Tikhonov method; Nonstationary methods.

AMS Classification: 65J20, 47J06.

1 Introduction

In this article we propose a new nonstationary iterated Tikhonov (nit) type method [5, Sec. 1.2] for obtaining stable approximations of linear ill-posed problems. The novelty of our approach consists in adopting an endogenous strategy for the choice of the Lagrange multipliers. With this strategy the new iterate is obtained as the projection of the current one onto a levelset of the residual function, whose level belongs to a range defined by the current residual and by the noise level. As a consequence, the admissible Lagrange multipliers (in each iteration) shall belong to a non-degenerate interval instead of being a single value, reducing the computational burden of evaluating the multipliers. Moreover, the choice of the above mentioned range enforces geometrical decay of the residual. The resulting method proves, in our preliminary numerical experiments, to be more efficient than the classical geometrical choice of the Lagrange multipliers [14], typically used in implementations of nit type methods.

Affiliations: Institut für Mathematik, Alpen-Adria-Universität Klagenfurt, Universitätsstrasse 65-67, 9020 Klagenfurt, Austria, romana.boiger@aau.at. Department of Mathematics, Federal University of Santa Catarina, P.O. Box 476, Florianópolis, Brazil, acgleitao@gmail.com. IMPA, Estr. Dona Castorina 110, Rio de Janeiro, Brazil, benar@impa.br.

The inverse problem we are interested in consists of determining an unknown quantity $x \in X$ from the data $y \in Y$, where $X$, $Y$ are Hilbert spaces. In practical situations, one does not know the data exactly; instead, only approximate measured data $y^\delta \in Y$ are available with

  $\|y^\delta - y\|_Y \leq \delta$,   (1)

where $\delta > 0$ is the (known) noise level. The available data $y^\delta$ are obtained by indirect measurements of the parameter $x$, this process being described by the ill-posed operator equation

  $A x = y^\delta$,   (2)

where $A : X \to Y$ is a bounded linear operator, whose inverse $A^{-1} : R(A) \to X$ either does not exist, or is not continuous. Consequently, approximate solutions are extremely sensitive to noise in the data. Linear ill-posed problems are commonly found in applications ranging from image analysis to parameter identification in mathematical models. There is a vast literature on iterative methods for the stable solution of (2); we refer the reader to the text books [13, 17, 2, 23, 24, 1, 21, 10, 25, 19] and the references therein.

Iterated Tikhonov (IT) type methods for solving the ill-posed problem (2) are defined by the iteration formula $x_k^\delta = \arg\min_x \{ \lambda_k \|Ax - y^\delta\|^2 + \|x - x_{k-1}^\delta\|^2 \}$, which corresponds to

  $x_k^\delta = x_{k-1}^\delta - (I + \lambda_k A^*A)^{-1} \lambda_k A^*(A x_{k-1}^\delta - y^\delta)$,   (3)

where $A^* : Y \to X$ is the adjoint operator of $A$. The parameter $\lambda_k > 0$ can be viewed as the Lagrange multiplier of the problem of projecting $x_{k-1}$ onto a levelset of $\|Ax - y^\delta\|^2$. If the sequence $\{\lambda_k = \lambda\}$ is constant, iteration (3) is called stationary IT (or sit); otherwise it is denominated nonstationary IT (or nit). In the literature, iterated Tikhonov type methods are also defined by $x_k^\delta = \arg\min_x \{ \|Ax - y^\delta\|^2 + \alpha_k \|x - x_{k-1}^\delta\|^2 \}$, which corresponds to $x_k^\delta = (\alpha_k I + A^*A)^{-1}(\alpha_k x_{k-1}^\delta + A^* y^\delta)$, this formulation being equivalent to the previous one with $\alpha_k = \lambda_k^{-1}$.

The sit method for solving (2) was considered in [23, 13], where a well developed convergence analysis can be found (see also [22], where Lardy analyzed the particular choice $\lambda_k = 1$). It is worth distinguishing between the sit method and the iterated Tikhonov methods of order $n$ [20, 9, 26], where the number of iterations (namely $n$) is fixed. In that case $\lambda_k = \lambda > 0$, $k = 0, \dots, n-1$, and $\lambda$ plays the role of the regularization parameter.

The nit method was addressed by many authors, e.g. [11, 14, 5]. In numerical implementations of this method, the geometrical choice $\lambda_k = q^k$, $q > 1$, is a commonly adopted strategy and we shall refer to the resulting method as gnit (geometrical nonstationary IT method). The numerical performance of nit type methods is superior to that of sit type methods. This fact is illustrated by the next example.

Example 1.1. A linear system modeled by the Hilbert matrix $H$ is considered in Figure 1 (random noise of level $\delta = 0.001\%$ is used). In this benchmark problem the sit method is implemented for $\lambda_k = 2$ (RED), while the gnit method is implemented for $\lambda_k = 2^k$ (BLACK) and $\lambda_k = 3^k$ (BLUE).
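Both the sit and gnit variants compared in Example 1.1, as well as the enit method proposed below, are instances of the update (3). The following minimal sketch (not the authors' code; a dense-matrix numpy illustration with hypothetical names) evaluates one such step by solving the well-posed system with matrix $I + \lambda_k A^*A$ rather than forming its inverse.

import numpy as np

def it_step(A, x_prev, y_delta, lam):
    """One IT step (3): x = x_prev - (I + lam A^T A)^{-1} lam A^T (A x_prev - y_delta)."""
    n = A.shape[1]
    g = A.T @ (A @ x_prev - y_delta)      # A^*(A x_{k-1} - y^delta)
    M = np.eye(n) + lam * (A.T @ A)       # I + lam A^*A
    return x_prev - lam * np.linalg.solve(M, g)

The sit method corresponds to calling it_step with one fixed value of lam in every iteration, and gnit to lam = q**k.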

Figure 1: Comparison between sit and nit type methods: $A = H$ is the square Hilbert matrix; artificial random noise ($\delta = 10^{-3}\,\%$) is added to the data. The stopping criterion (discrepancy principle with $\tau = 2$) is reached after 18 steps (BLACK), 12 steps (BLUE) and [...] steps (RED).

Figure 2: Unstable behavior of gnit type methods: Matrix $A \in \mathbb{R}^{25 \times 25}$ as in Figure 1; artificial random noise ($\delta = 10^{-5}\,\%$) is added to the data. The stopping criterion (discrepancy principle with $\tau = 2$) is reached after 28 steps (BLACK) and 18 steps (BLUE). In the last implementation (ORANGE), the gnit method becomes unstable before the stopping criterion is reached.

The computation of $\lambda_k = q^k$ in the gnit method is straightforward; however, the choice of $q > 1$ is exogenous to (2), (1) and it is not clear which are the good values for $q$. Indeed, as shown in the next example, increasing the constant $q$ may lead either to faster convergence or to failure to converge:

Example 1.2. We set $A = H \in \mathbb{R}^{25 \times 25}$, $X = Y = \mathbb{R}^{25}$ and random noisy data with $\delta = 10^{-5}\,\%$. In Figure 2 the gnit method is implemented for $\lambda_k = 2^k$ (BLACK), $\lambda_k = 3^k$ (BLUE) and $\lambda_k = q^k$ with a larger value of $q$ (RED); the last implementation does not converge.

The above described issues motivated us to investigate novel strategies for choosing $\lambda_k$ which are endogenous to problem (2), (1), i.e., only information about $A$, $y^\delta$, $\delta$, $x_k^\delta$ and $\lambda_{k-1}$ is used to determine $\lambda_k$.
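The Hilbert-matrix benchmark of Examples 1.1 and 1.2 can be reproduced along the following lines. This is a sketch under assumed choices not stated in the paper (exact solution of ones, zero initial guess, Gaussian noise rescaled to the quoted relative level); it reuses the it_step helper from the previous sketch.

import numpy as np
from scipy.linalg import hilbert

n, tau = 25, 2.0
noise_rel = 1e-5                  # 0.001% relative noise, as in Example 1.1
A = hilbert(n)
x_true = np.ones(n)               # assumed exact solution
y = A @ x_true
e = np.random.randn(n)
y_delta = y + noise_rel * np.linalg.norm(y) * e / np.linalg.norm(e)
delta = noise_rel * np.linalg.norm(y)

def run_it(lam_of_k, x0, max_it=100):
    """Run iteration (3) with multipliers lam_of_k(k); stop by the discrepancy principle."""
    x = x0.copy()
    for k in range(1, max_it + 1):
        if np.linalg.norm(A @ x - y_delta) <= tau * delta:
            return x, k - 1
        x = it_step(A, x, y_delta, lam_of_k(k))
    return x, max_it

x_sit,  k_sit  = run_it(lambda k: 2.0,    np.zeros(n))   # sit:  lambda_k = 2
x_gnit, k_gnit = run_it(lambda k: 2.0**k, np.zeros(n))   # gnit: lambda_k = 2^k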

Next we briefly review some relevant convergence results for IT type methods. In [5] Brill and Schock proved that, in the exact data case, the nit method (3) converges to a solution of $Ax = y$ if and only if $\sum_k \lambda_k = \infty$. Moreover, a convergence rate result was established under the additional assumption $\sum_k \lambda_k^2 < \infty$. The assumptions needed in [5] in order to derive convergence rates are satisfied neither for the sit, nor for the nit with the geometrical choice $\lambda_k = q^k$, $q > 1$. In [14] rates of convergence are established for the stationary Lardy's method [22] as well as for the nit with geometrical choice of $\lambda_k$. Under the source condition $x^\dagger \in \mathrm{Rg}((A^*A)^\nu)$, where $x^\dagger = A^\dagger y$ is the normal solution of $Ax = y$ (i.e., the unique vector $x^\dagger \in N(A)^\perp$ satisfying $Ax^\dagger = y$), the linear rate of convergence $\|x_k - x^\dagger\| = O(q^{-k\nu})$ is proven.

The article is outlined as follows: In Section 2 we present a conceptual version of the enit method, which is proposed and analyzed in this manuscript. A detailed formulation of this method is given in Section 3, where some preliminary estimates are also derived. In Section 4 a convergence analysis of the enit method is presented. In Section 5 we discuss the algorithmic implementation of the enit method; in particular, we address the challenging numerical issue of efficiently computing the Lagrange multipliers $\lambda_k$. Section 6 is devoted to numerical experiments: the image deblurring problem and the inverse potential problem are considered in Subsections 6.1 and 6.2 respectively. Section 7 is dedicated to final remarks and conclusions.

2 Endogenous nit method

In this section an endogenous strategy for the choice of the Lagrange multipliers in (3) is proposed. Our main goal is to derive a conceptual version of the enit method considered in this manuscript. A detailed presentation of this method is left to Section 3.

For the remainder of this article we suppose that the following assumptions hold true:

(A1) There exists an element $x^\star \in X$ such that $Ax^\star = y$, where $y \in \mathrm{Rg}(A)$ are the exact data.

(A2) The operator $A : X \to Y$ is linear, bounded and ill-posed, i.e., even if the operator $A^{-1} : R(A) \to X$ (the left inverse of $A$) exists, it is not continuous.

We use the notation $\Omega_\mu$, for $\mu \geq 0$, to denote the $\mu$-levelset of the residual functional $\|Ax - y^\delta\|$, that is,

  $\Omega_\mu := \{x \in X : \|Ax - y^\delta\|^2 \leq \mu^2\}$.   (4)

The basic geometric properties of the levelsets $\Omega_\mu$, described next, are instrumental in the forthcoming analysis.

Proposition 2.1. For each $\mu \geq 0$, the set $\Omega_\mu$ in (4) is closed and convex. If $0 \leq \mu \leq \mu'$ then $\Omega_\mu \subset \Omega_{\mu'}$. If $\mu \geq \delta$ then $A^{-1}(y) \subset \Omega_\mu$.

From the last assertion in Proposition 2.1 we conclude that $\Omega_\mu \neq \emptyset$ for $\mu \geq \delta$. On the other hand, for $0 \leq \mu < \delta$ the levelset $\Omega_\mu$ may be empty. Notice that all available information about the solution set $A^{-1}(y)$ is contained in (1), (2). Thus, in the absence of additional information, $\Omega_\delta$ is the set of best possible approximate solutions for the inverse problem under consideration (indeed, given two elements in $\Omega_\delta$, it is not possible to distinguish which of them better approximates $x^\star$).

The levelset $\Omega_\delta$ is, in general, unbounded and it is desirable to exclude those best approximate solutions with too large norms. Besides that, very often only a crude estimate $x_0$ of the solution of (2) is available. In view of these reasonings, it is natural to consider the projection problem

  $\min \|x - \hat{x}\|^2$  s.t.  $\|Ax - y^\delta\|^2 \leq \mu^2$,   (5)

where $\hat{x}$ is $x_k$ and $\mu \geq 0$. Let us briefly discuss the conditioning of this problem with respect to the parameter $\mu$: (i) for $0 \leq \mu < \delta$ the projection problem (5) may be infeasible, that is, it may become the problem of projecting $\hat{x}$ onto an empty set; (ii) for $\mu = \delta$, in view of (i), the projection problem (5) is in general ill-posed with respect to the parameter $\mu$; (iii) for $\mu > \delta$ the projection problem (5) is well posed, and it is natural to expect that the larger the $\mu$, the better conditioned it becomes.

The analysis of the solution of the projection problem (5) by means of Lagrange multipliers provides additional understanding of the relation between $\mu$ and the conditioning of this problem. A compromise between reducing the residual norm $\|Ax - y^\delta\|$ and preventing ill-posedness of the projection problem would be to choose $\mu = p\,\|A\hat{x} - y^\delta\| + (1-p)\,\delta$, where $0 < p < 1$ quantifies this compromise. Nevertheless, sticking to the projection of $\hat{x}$ onto a pre-defined levelset of $\|Ax - y^\delta\|$ entails an additional numerical difficulty: the projection is computed by solving a linear system where the Lagrange multiplier is implicitly defined by an algebraic equation, as we will revise in Section 3.1. Moreover, the projection of $\hat{x}$ onto any levelset $\Omega_{\mu'}$ with $\delta \leq \mu' \leq \mu$ is as good as (or even better than) the projection of $\hat{x}$ onto $\Omega_\mu$ itself.

We shall generate $x_k^\delta$ from $\hat{x} = x_{k-1}^\delta$ by solving a range-relaxed projection problem, namely

  $\min \|x - \hat{x}\|^2$  s.t.  $\|Ax - y^\delta\|^2 \leq \mu^2$,  $\delta \leq \mu \leq p\,\|A\hat{x} - y^\delta\| + (1-p)\,\delta$,   (6)

whenever $x_{k-1}^\delta \notin \Omega_\delta$. This strategy leads us to the following conceptual algorithm:

[1] choose an initial guess $x_0 \in X$;
[2] choose $p \in (0,1)$ and $\tau > 1$;
[3] for $k \geq 1$ do
  [3.1] compute $x_k^\delta$ as a solution of $\min \|x - x_{k-1}^\delta\|^2$ s.t. $\|Ax - y^\delta\|^2 \leq \mu_k^2$, with $\delta \leq \mu_k \leq p\,\|Ax_{k-1}^\delta - y^\delta\| + (1-p)\,\delta$;
  [3.2] stop the iteration at the first step $k \geq 1$ such that $\|Ax_k^\delta - y^\delta\| < \tau\delta$.

Algorithm 1: Conceptual algorithm for the enit method.

Since each $\Omega_{\mu_k}$ is closed, convex and contains $\Omega_\delta$, this algorithm can be regarded as a method of successive orthogonal projections onto a family of shrinking, separating convex sets, which trivially gives us boundedness of the sequence $(x_k^\delta)$ as well as monotonicity of the iteration error $\|x^\star - x_k^\delta\|$. An estimate of the number of iterations required to reach the stopping criterion in [3.2], as well as a proof of strong convergence in the noiseless case (i.e., $\delta = 0$), are presented in Sections 3 and 4.

In what follows we show how to develop an implementable algorithm from the conceptual one above. The computational burden of each iteration rests on the computation of $x_k$, to be accomplished by means of a Lagrange multiplier that shall satisfy an algebraic inclusion, instead of an algebraic equation. This algebraic inclusion is to be solved by means of a dynamically over-relaxed Newton method in $\mathbb{R}_+$ with an economic choice of the starting point, as discussed in Section 5.

3 Endogenous nit method: Detailed description

Next we discuss the implementation details of the enit method. In Subsection 3.1 the projection problem (5) is revisited. In Subsection 3.2 an implementable algorithm for the enit method is presented; moreover, some preliminary estimates are derived.

3.1 The projection problem as a scalar rational equation

The goal of this paragraph is to reformulate problem (5) as a scalar rational equation for the corresponding Lagrange multiplier. For $\mu > \delta$, we address the problem of projecting $\hat{x} \in X$ onto the levelset $\Omega_\mu$ defined in (4). A generic instance of this problem corresponds to the projection problem (5), i.e.

  $\min \|x - \hat{x}\|^2$  s.t.  $\|Ax - y^\delta\|^2 \leq \mu^2$,

whose associated Lagrangian is

  $L(x, \lambda) = \tfrac{\lambda}{2}\,\big( \|Ax - y^\delta\|^2 - \mu^2 \big) + \tfrac{1}{2}\,\|x - \hat{x}\|^2$.

Notice that, for each $\lambda > 0$, $L(\cdot, \lambda) : X \to \mathbb{R}$ is quadratic, strongly convex, and

  $\nabla_x L(x, \lambda) = \lambda A^*(Ax - y^\delta) + x - \hat{x} = (I + \lambda A^*A)(x - \hat{x}) + \lambda A^*(A\hat{x} - y^\delta)$.

For each $\lambda > 0$, let $\pi(\hat{x}, \lambda) \in X$ be the minimizer of $L(\cdot, \lambda)$ and let $G_{\hat{x}}(\lambda)$ be the squared residual at this point, that is,

  $\pi(\hat{x}, \lambda) = \hat{x} - \lambda (I + \lambda A^*A)^{-1} A^*(A\hat{x} - y^\delta)$,   $G_{\hat{x}}(\lambda) = \|A\pi(\hat{x}, \lambda) - y^\delta\|^2$.   (7)
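The quantities in (7) can be evaluated with a single linear solve per value of $\lambda$. The sketch below (dense-matrix numpy, illustrative names, not the authors' code) also returns the derivative of $G_{\hat{x}}$, whose closed form is established in Lemma 3.1 below.

import numpy as np

def G_and_dG(A, xhat, y_delta, lam):
    """Return G_xhat(lam), its derivative, and the minimizer pi(xhat, lam) from (7)."""
    n = A.shape[1]
    M = np.linalg.inv(np.eye(n) + lam * (A.T @ A))        # (I + lam A^*A)^{-1}
    pi = xhat - lam * (M @ (A.T @ (A @ xhat - y_delta)))  # pi(xhat, lam)
    r = A @ pi - y_delta
    s = A.T @ r                                           # A^*(A pi - y^delta)
    G = float(r @ r)                                      # G_xhat(lam) = ||A pi - y^delta||^2
    dG = -2.0 * float(s @ (M @ s))                        # always negative (G is decreasing)
    return G, dG, pi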

In what follows we use the notation $P_\Omega$ to denote the orthogonal projection onto $\Omega$, for $\emptyset \neq \Omega \subset X$ closed and convex. The next lemma summarizes the solution theory for the projection problem (5) by means of classical Lagrange multiplier results [27, Sec. 5.7].

Lemma 3.1. Suppose that $\|A\hat{x} - y^\delta\| > \mu > \delta$. The following assertions are equivalent:
1. $\bar{x} = P_{\Omega_\mu}(\hat{x})$;
2. $\bar{x}$ is a solution of (5);
3. $\bar{x} = \pi(\hat{x}, \bar{\lambda})$, $\bar{\lambda} \geq 0$ and $G_{\hat{x}}(\bar{\lambda}) = \mu^2$.
Moreover, $\bar{\lambda}$ as in item 3 is the unique solution of the equation $G_{\hat{x}}(\lambda) = \mu^2$, the real function $G_{\hat{x}}(\cdot)$ is strictly monotonically decreasing, and (with $\pi := \pi(\hat{x}, \lambda)$)

  $\dfrac{d}{d\lambda} G_{\hat{x}}(\lambda) = -2\,\big\langle A^*(A\pi - y^\delta),\, (I + \lambda A^*A)^{-1} A^*(A\pi - y^\delta) \big\rangle$.

Proof. Equivalence between items 1 and 2 follows from the definition of orthogonal projections onto closed convex sets. Equivalence between items 2 and 3 is a classical Lagrange multipliers result (see, e.g., [27, Theorem 5.15]). The formula for $dG_{\hat{x}}(\lambda)/d\lambda$ follows from the computation of $\partial\pi(\hat{x}, \lambda)/\partial\lambda$ by implicit differentiation of the identity $(I + \lambda A^*A)(\pi - \hat{x}) + \lambda A^*(A\hat{x} - y^\delta) = 0$, which follows from (7).

From Lemma 3.1 it follows that solving (5) amounts to solving for $\lambda \geq 0$ the scalar equation

  $G_{\hat{x}}(\lambda) = \mu^2$.   (8)

3.2 Towards an implementable algorithm

In this paragraph we devise, based on the equivalence between problem (5) and equation (8), a concrete formulation of the conceptual algorithm introduced in the previous section. Arguing as in Paragraph 3.1 we conclude that solving the range-relaxed projection problem in (6) boils down to solving the inequalities

  $\delta^2 \;\leq\; G_{\hat{x}}(\lambda) \;\leq\; \big( p\,\|A\hat{x} - y^\delta\| + (1-p)\,\delta \big)^2$.   (9)

Consequently, we may restate the conceptual algorithm (Algorithm 1) in the following form:

[1] choose an initial guess $x_0 \in X$;
[2] choose $p \in (0,1)$, $\tau > 1$ and set $k := 0$;
[3] while ($\|Ax_k^\delta - y^\delta\| > \tau\delta$) do
  [3.1] $k := k + 1$;
  [3.2] compute $\lambda_k$, $x_k^\delta$ such that $x_k^\delta = x_{k-1}^\delta - \lambda_k (I + \lambda_k A^*A)^{-1} A^*(Ax_{k-1}^\delta - y^\delta)$ and $\delta^2 \leq G_{x_{k-1}^\delta}(\lambda_k) \leq \big( p\,\|Ax_{k-1}^\delta - y^\delta\| + (1-p)\,\delta \big)^2$;

Algorithm 2: Implementable algorithm for the enit method.

As in Algorithm 1, the stopping index of the above algorithm is defined by the discrepancy principle

  $k_* := \min\{ k \geq 1 :\ \|Ax_j - y^\delta\| > \tau\delta,\ j = 0, \dots, k-1,\ \text{and}\ \|Ax_k - y^\delta\| \leq \tau\delta \}$.   (10)

The most difficult task in the above algorithm resides in the efficient computation of step [3.2], which is to say, how to solve (9) in each iteration $k$. Notice that, according to Lemma 3.1, the derivative of $G_{\hat{x}}(\lambda)$ is readily available. Thus, a scalar Newton-type method provides an efficient tool for computing $\lambda_k$ (a solution of (9) with $\hat{x} = x_{k-1}$). Efficient performance of Newton type methods depends on good initial guesses. The next result furnishes an estimate which can be used as initial guess for such methods.

Lemma 3.2. Under the assumptions of Lemma 3.1,

  $\bar{\lambda} \;\geq\; \dfrac{\big( \|A\hat{x} - y^\delta\| - \mu \big)\,\|A\hat{x} - y^\delta\|}{\|A^*(A\hat{x} - y^\delta)\|^2}$.

Proof. Since $\|A\hat{x} - y^\delta\| > \mu > \delta$, then $\hat{x} \in X \setminus \Omega_\delta$. Thus, for any $z \in X$ with $\|Az - y^\delta\| \leq \mu$ we have

  $\|A\hat{x} - y^\delta\| - \mu \;\leq\; \|A(z - \hat{x})\|$.   (11)

Indeed, the assumption on $z \in X$ implies $\|A(z - \hat{x}) + A\hat{x} - y^\delta\| \leq \mu$, from which (11) follows. Take $z := P_{\Omega_\mu}\hat{x}$. Thus $\|Az - y^\delta\| = \mu$ and we obtain

  $\mu^2 = \|A(z - \hat{x}) + b\|^2 = \|A(z - \hat{x})\|^2 + \|b\|^2 + 2\,\langle A(z - \hat{x}), b \rangle$,

where $b = A\hat{x} - y^\delta$. Consequently,

  $-2\,\langle A(z - \hat{x}), b \rangle = \|A(z - \hat{x})\|^2 + \big( \|b\|^2 - \mu^2 \big) \;\geq\; \big( \|b\| - \mu \big)^2 + \big( \|b\|^2 - \mu^2 \big) = 2\,\|b\|\,\big( \|b\| - \mu \big)$

(the inequality follows from (11)). Now, using the fact $z = \pi(\hat{x}, \bar{\lambda})$, we obtain

  $\big( \|b\| - \mu \big)\,\|b\| \;\leq\; \langle \hat{x} - z, A^*b \rangle = \bar{\lambda}\,\big\langle (I + \bar{\lambda}A^*A)^{-1}A^*b,\, A^*b \big\rangle \;\leq\; \bar{\lambda}\,\|(I + \bar{\lambda}A^*A)^{-1}\|\,\|A^*b\|^2 \;\leq\; \bar{\lambda}\,\|A^*b\|^2$,

proving the lemma.

Corollary 3.3. Let the sequences $(x_k^\delta)$ and $(\lambda_k)$ be defined by the enit method (Algorithm 2), with $\delta \geq 0$ and $y^\delta \in Y$ as in (1). Moreover, define $\mu_k := \big( G_{x_{k-1}^\delta}(\lambda_k) \big)^{1/2}$. Then

  $\lambda_k \;\geq\; \dfrac{\big( \|Ax_{k-1}^\delta - y^\delta\| - \mu_k \big)\,\|Ax_{k-1}^\delta - y^\delta\|}{\|A^*(Ax_{k-1}^\delta - y^\delta)\|^2}$,   $k = 1, \dots, k_*$.   (12)

In the exact data case (i.e., $\delta = 0$) the above estimate simplifies to $\lambda_k \geq (1-p)\,\|A\|^{-2}$.

Proof. From (9) and the definition of $\mu_k$ it follows that $\delta < \mu_k < \|Ax_{k-1}^\delta - y^\delta\|$. Thus, (12) follows from Lemma 3.2 with $\hat{x} = x_{k-1}^\delta$, $\bar{\lambda} = \lambda_k$ and $\mu = \mu_k$ (in the proof of that lemma it holds $z = x_k^\delta$). In the exact data case, it follows from (12), together with Assumption (A2), that $\lambda_k \geq \big( \|Ax_{k-1} - y\| - \mu_k \big)\,\|A\|^{-2}\,\|Ax_{k-1} - y\|^{-1}$. Moreover, since $\delta = 0$ we have $\mu_k \leq p\,\|Ax_{k-1} - y\|$. Combining these two facts, the second assertion follows.

In our numerical experiments, we observed that the sequence $\lambda_k$ is increasing and that the estimate in (12) for $\lambda_k$ was, in general, much lower than $\lambda_{k-1}$. So, for $k \geq 2$, the initial guess (of the Newton method) used to determine $\lambda_k$ may be either $\lambda_{k-1}$ or the one provided by (12). The use of $\lambda_{k-1}$ as the initial guess for $\lambda_k$ entails an additional advantage: it does not require a new factorization (or inversion) of a linear operator for the computation of the first scalar Newton step (see Section 5 for further discussion).
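The lower bound (12) is cheap to evaluate and is the initial guess used in the first enit step (cf. step [3.3] of Algorithm 3 below). A small sketch (illustrative names, not from the paper), with mu standing for the target level, e.g. $\theta_k = p\,\|A\hat{x} - y^\delta\| + (1-p)\,\delta$:

import numpy as np

def multiplier_lower_bound(A, xhat, y_delta, mu):
    """Lower bound (12) for the Lagrange multiplier, cf. Lemma 3.2."""
    b = A @ xhat - y_delta
    g = A.T @ b                       # A^*(A xhat - y^delta)
    nb = np.linalg.norm(b)
    return (nb - mu) * nb / (np.linalg.norm(g) ** 2)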

4 Convergence Analysis

We begin this section by establishing an estimate for the decay of the residual $\|Ax_k^\delta - y^\delta\|$.

Proposition 4.1. Let $(x_k^\delta)$ be the sequence defined by the enit method (Algorithm 2), with $\delta \geq 0$ and $y^\delta \in Y$ as in (1). Then

  $\|Ax_k^\delta - y^\delta\| - \delta \;\leq\; p\,\big[ \|Ax_{k-1}^\delta - y^\delta\| - \delta \big] \;\leq\; p^k\,\big[ \|Ax_0 - y^\delta\| - \delta \big]$,   $k = 1, \dots, k_*$,

where $k_* \in \mathbb{N}$ is defined by (10).

Proof. It is enough to verify the first inequality. Recall that $x_k^\delta \in \Omega_{\mu_k}$, where $\delta \leq \mu_k \leq p\,\|Ax_{k-1}^\delta - y^\delta\| + (1-p)\,\delta$. Consequently, $\|Ax_k^\delta - y^\delta\| \leq p\,\|Ax_{k-1}^\delta - y^\delta\| + (1-p)\,\delta$, and the first inequality follows.

An immediate consequence of Proposition 4.1 is the following estimate for the stopping index defined in (10). This result guarantees the finiteness of $k_*$ whenever $\delta > 0$.

Corollary 4.2. Let $(x_k^\delta)$ be the sequence defined by the enit method (Algorithm 2), with $\delta > 0$ and $y^\delta \in Y$ as in (1). Then the stopping index $k_*$ defined in (10) satisfies

  $k_* \;\leq\; \big[ \ln p^{-1} \big]^{-1} \ln\!\left( \dfrac{\|Ax_0 - y^\delta\| - \delta}{(\tau - 1)\,\delta} \right) + 1$.

Proof. We may assume $\|Ax_0 - y^\delta\| > \tau\delta$ (otherwise the iteration does not start, i.e., $k_* = 0$). From (10) it follows that $\tau\delta < \|Ax_k^\delta - y^\delta\|$, $k = 0, \dots, k_* - 1$. This inequality (for $k = k_* - 1$), together with Proposition 4.1, implies that

  $(\tau - 1)\,\delta \;<\; \|Ax_{k_*-1}^\delta - y^\delta\| - \delta \;\leq\; p^{\,k_*-1}\,\big[ \|Ax_0 - y^\delta\| - \delta \big]$,

completing the proof (recall that $p \in (0,1)$).
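As a numeric illustration of Corollary 4.2 (with made-up values, not from the paper): for $p = 0.1$, $\tau = 1.1$, $\delta = 10^{-3}$ and $\|Ax_0 - y^\delta\| = 1$, the bound evaluates to roughly 5, so at most 4 iterations are needed for this data.

import math

p, tau, delta, r0 = 0.1, 1.1, 1e-3, 1.0
bound = math.log((r0 - delta) / ((tau - 1.0) * delta)) / math.log(1.0 / p) + 1.0
print(bound)   # ~ 4.9996; since k_* is an integer, k_* <= 4 here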

Monotonicity of the iteration error $\|x^\star - x_k^\delta\|$ was already established in Section 2. In the next proposition we estimate the gain $\|x^\star - x_{k-1}^\delta\|^2 - \|x^\star - x_k^\delta\|^2$ in the enit method.

Proposition 4.3. Let $(x_k^\delta)$ be the sequence defined by the enit method (Algorithm 2), with $\delta > 0$ and $y^\delta \in Y$ as in (1). Then

  $\|x^\star - x_{k-1}^\delta\|^2 - \|x^\star - x_k^\delta\|^2 \;=\; \|x_k^\delta - x_{k-1}^\delta\|^2 + \lambda_k\,\|A(x^\star - x_k^\delta)\|^2 + \lambda_k\,\big[ r(x_k^\delta) - r(x^\star) \big]$,   (13)

for $k = 1, \dots, k_*$, where $r(x) := \|Ax - y^\delta\|^2$. Consequently,

  $\|x^\star - x_{k-1}^\delta\|^2 - \|x^\star - x_k^\delta\|^2 \;\geq\; \lambda_k^2\,\|A^*(Ax_k^\delta - y^\delta)\|^2 + \lambda_k\,\big[ \|A(x^\star - x_k^\delta)\|^2 + (\tau^2 - 1)\,\delta^2 \big]$,   (14)

for $k = 1, \dots, k_* - 1$; moreover,

  $\|x^\star - x_{k-1}^\delta\|^2 - \|x^\star - x_k^\delta\|^2 \;\geq\; \lambda_k^2\,\|A^*(Ax_k^\delta - y^\delta)\|^2 + \lambda_k\,\|A(x^\star - x_k^\delta)\|^2$.   (15)

Proof. First we derive (13). Due to the definition of $(x_k^\delta, \lambda_k)$ in Algorithm 2, the Lagrangian functional defined in Subsection 3.1 (with $\mu = \mu_k$ and $\hat{x} = x_{k-1}$) satisfies

  $L(x, \lambda_k) = L(x_k^\delta, \lambda_k) + L_x(x_k^\delta, \lambda_k)(x - x_k^\delta) + \tfrac{1}{2}\,\big\langle x - x_k^\delta,\, H(x_k^\delta, \lambda_k)(x - x_k^\delta) \big\rangle$,

where $H(x_k^\delta, \lambda_k) = (I + \lambda_k A^*A)$ is the Hessian of $L$ at $(x_k^\delta, \lambda_k)$. Since $L_x(x_k^\delta, \lambda_k) = 0$, we have

  $L(x, \lambda_k) = L(x_k^\delta, \lambda_k) + \big\langle x - x_k^\delta,\, \tfrac{1}{2}(I + \lambda_k A^*A)(x - x_k^\delta) \big\rangle$,

i.e.,

  $\|x - x_{k-1}^\delta\|^2 + \lambda_k\,\big[ r(x) - \mu_k^2 \big] \;=\; \|x_k^\delta - x_{k-1}^\delta\|^2 + \lambda_k\,\big[ r(x_k^\delta) - \mu_k^2 \big] + \|x - x_k^\delta\|^2 + \lambda_k\,\|A(x - x_k^\delta)\|^2$.

Now, choosing $x = x^\star$, one establishes (13). Inequality (14), on the other hand, follows from (13) together with $r(x^\star) \leq \delta^2$ and $r(x_k^\delta) > \tau^2\delta^2$, for $k = 1, \dots, k_* - 1$. Analogously, (15) follows from (13) together with $r(x_k^\delta) \geq \delta^2$ (see Algorithm 2).

Estimate (13) allows us to conclude that, in the exact data case, the operator describing the enit iteration is a reasonable wanderer (in the sense of [6]).

Corollary 4.4. In the exact data case, i.e., $\delta = 0$ and $y^\delta = y \in R(A)$, (13) becomes

  $\|x^\star - x_{k-1}\|^2 - \|x^\star - x_k\|^2 \;=\; \|x_k - x_{k-1}\|^2 + 2\lambda_k\,\|Ax_k - y\|^2$,

from which it follows that $\sum_{k \geq 1} \lambda_k\,\|Ax_k - y\|^2 < \infty$ and $\sum_{k \geq 1} \|x_k - x_{k-1}\|^2 < \infty$ (the last inequality means that the operator describing the enit iteration is a reasonable wanderer).

Next we prove strong convergence of the enit method (in the exact data case) to a solution of the inverse problem (2). The estimate in Lemma 3.2 plays a key role in this proof.

Theorem 4.5. Let $(x_k)$ and $(\lambda_k)$ be the sequences defined by the enit method (Algorithm 2), with $\delta = 0$ and $y^\delta = y \in R(A)$. Then $(x_k)$ converges strongly to some $\bar{x} \in X$. Moreover, $A\bar{x} = y$.

Proof. From the second assertion in Corollary 3.3 it follows that $\sum_{k \geq 1} \lambda_k = \infty$. The proof now follows from [5, Theor. 1.4].

5 Numerical Implementation

In this section we address the numerical implementation of the enit method introduced in Sections 2 and 3. In particular, the challenging issue of efficiently computing the Lagrange multipliers $\lambda_k$ in step [3.2] of Algorithm 2 is addressed. As discussed in Section 3, $\lambda_k \geq 0$ is to be obtained as a solution of the scalar rational inclusion

  $G_{x_{k-1}^\delta}(\lambda) := \big\| A\big[ x_{k-1}^\delta - \lambda (I + \lambda A^*A)^{-1} A^*(Ax_{k-1}^\delta - y^\delta) \big] - y^\delta \big\|^2 \;\in\; \Big[ \delta^2,\ \big( p\,\|Ax_{k-1}^\delta - y^\delta\| + (1-p)\,\delta \big)^2 \Big]$   (16)

and its computation can be performed using a scalar Newton-type method. These are iterative methods, which require the evaluation of $\pi(x_{k-1}^\delta, \lambda) = x_{k-1}^\delta - \lambda (I + \lambda A^*A)^{-1} A^*(Ax_{k-1}^\delta - y^\delta)$ in each step (i.e., the solution of a well-posed linear system modeled by the operator $(I + \lambda A^*A)$).

On the other hand, in the gnit method, the computation of $\lambda_k = q^k$ (for some a priori chosen $q > 1$) is straightforward. Consequently, one needs to solve one linear system (modeled by $(I + \lambda_k A^*A)$) in each step of the gnit method. The numerical effort needed to compute each single step of a Newton-type method for solving (16) is comparable with the numerical effort needed to compute a whole step of the gnit method. Due to this fact we shall use as few Newton-type steps as possible for solving (16). Our main goal is to design an efficient way of (approximately) solving (16), such that the resulting enit method is competitive with gnit, not only from the point of view of the total number of iterations, but also from the point of view of the numerical effort required in each iteration step. Therefore, we propose a truncated scalar Newton-type method as follows:

General setup: To obtain $\lambda_k$, one single Newton step, starting from an (educated) initial guess $\xi_{k,0} \geq 0$, is computed. This is chosen to be a greedy step, in the sense that it aims to determine a solution of $G_{x_{k-1}^\delta}(\lambda) = 0$. Summarizing, we set $\lambda_k := \xi_{k,1}$ with

  $\xi_{k,1} = \xi_{k,0} - G_{x_{k-1}^\delta}(\xi_{k,0}) \,/\, G'_{x_{k-1}^\delta}(\xi_{k,0})$.   (17)

Here $G_x(\cdot)$ is the real function in (16) and $G'_x(\cdot)$ is computed in Lemma 3.1.

Initial guess: According to Corollary 3.3, a natural choice for the initial guess $\xi_{k,0}$ in (17) is given by estimate (12). However, in our numerical experiments, we observed that the sequence $\lambda_k$ increases exponentially and that the estimate in (12) for $\lambda_k$ was much lower than $\lambda_{k-1}$. These remarks suggest the following strategy: in the first step of the enit method ($k = 1$), choose $\xi_{k,0}$ according to estimate (12); for $k \geq 2$, use the initial guess $\xi_{k,0} = \lambda_{k-1}$. In the next example, the acceleration effect caused by this choice of initial guess is investigated.

Example 5.1. The benchmark problem presented in Example 1.1 is revisited and solved using the enit method based on the Newton step (17). In Figure 3 we compare: the enit method with $\xi_{k,0}$ defined by (12) (BLUE); the enit method with $\xi_{k,0} = \lambda_{k-1}$ (RED); the gnit method (BLACK). Notice that the use of $\lambda_{k-1}$ as the initial guess in (17) entails an additional advantage, namely no new factorization (or inversion) of the operator $(I + \xi_{k,0} A^*A)$ is needed for the computation of the scalar Newton step.
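The remark on reusing the factorization can be made concrete as follows (a sketch under the assumption of a dense matrix and Cholesky solves; not the authors' implementation): the operator $I + \lambda_{k-1}A^*A$ factorized at the end of step $k-1$ serves unchanged for the first Newton evaluation of step $k$ whenever $\xi_{k,0} = \lambda_{k-1}$.

import numpy as np
from scipy.linalg import cho_factor, cho_solve

def factor(A, lam):
    """Cholesky factorization of the SPD operator I + lam A^*A."""
    n = A.shape[1]
    return cho_factor(np.eye(n) + lam * (A.T @ A))

def apply_M(fac, v):
    """Apply (I + lam A^*A)^{-1} to v using a cached factorization."""
    return cho_solve(fac, v)

# inside the enit loop (illustrative):
#   fac_prev = factor(A, lam_prev)    # already available from step k-1
#   xi0 = lam_prev                    # warm start -> no new factorization needed
#   pi0 = x_prev - xi0 * apply_M(fac_prev, A.T @ (A @ x_prev - y_delta))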

Figure 3: Efficient implementation of the enit method: acceleration effect caused by the choice of the initial guess $\xi_{k,0}$ in (17). (BLUE) $\xi_{k,0}$ chosen as in (12); (RED) $\xi_{k,0} = \lambda_{k-1}$; (BLACK) gnit method.

Figure 4: Efficient implementation of the enit method: acceleration effect caused by over-relaxation. (RED) Newton step (17); (GREEN) Newton step (18); (BLACK) gnit method.

Over-relaxation: In order to obtain a better approximate solution $\lambda_k$ of (16) and, consequently, to further accelerate the enit method, over-relaxation is introduced in the Newton step (17); namely, we define

  $\xi_{k,1} := \xi_{k,0} - \omega_k\, G_{x_{k-1}^\delta}(\xi_{k,0}) \,/\, G'_{x_{k-1}^\delta}(\xi_{k,0})$,   (18)

where the over-relaxation factor $\omega_k$ satisfies: for $k = 1$ take $\omega_1 = 1$; for $k \geq 1$, after computing $\lambda_k$, $x_k^\delta$ and $\mu_k := \big( G_{x_{k-1}^\delta}(\lambda_k) \big)^{1/2}$, the next factor $\omega_{k+1}$ is set as follows:

  if $\mu_k > 2\,\big( p\,\|Ax_{k-1}^\delta - y^\delta\| + (1-p)\,\delta \big)$ then $\omega_{k+1} = 2\,\omega_k$;
  else if $\mu_k < \big( p\,\|Ax_{k-1}^\delta - y^\delta\| + (1-p)\,\delta \big)$ then $\omega_{k+1} = 1$.

This strategy is based on the following heuristic: if the computed Lagrange multiplier $\lambda_k := \xi_{k,1}$ fails to satisfy (16) by a factor larger than two, i.e., if $G_{x_{k-1}^\delta}(\lambda_k)^{1/2} > 2\,\big( p\,\|Ax_{k-1}^\delta - y^\delta\| + (1-p)\,\delta \big)$, then the next over-relaxation factor $\omega_{k+1}$ is increased for the computation of $\lambda_{k+1}$; if $\lambda_k$ fails to satisfy (16) by a factor smaller than two, the over-relaxation factor is kept unchanged; if $\lambda_k$ does satisfy (16), no over-relaxation is triggered in the next enit step (i.e., $\omega_{k+1} = 1$). In the next example, the acceleration effect caused by over-relaxation is investigated.

Example 5.2. The benchmark problem considered in Example 1.1 is once again revisited. In Figure 4 we compare: the enit method with Newton step (17) (RED); the enit method with over-relaxed Newton step (18) (GREEN); the gnit method (BLACK).

Numerical algorithm: Algorithm 3 is written in a tutorial way (indeed, the inversion of $(I + \xi A^*A)$ is not always possible). Presented in this form, one recognizes that the major computational effort in each iteration consists in the computation of the $M_\xi$ operators.

[1] choose an initial guess $x_0 \in X$;
[2] choose $p \in (0,1)$, $\tau > 1$ and set $k := 0$;
[3] while ($\|Ax_k^\delta - y^\delta\|_Y > \tau\delta$) do
  [3.1] $k := k + 1$;
  [3.2] $\theta_k := p\,\|Ax_{k-1}^\delta - y^\delta\|_Y + (1-p)\,\delta$;
  [3.3] if ($k = 1$) then
      $\xi_{k,0} := \|Ax_{k-1}^\delta - y^\delta\|\,\big( \|Ax_{k-1}^\delta - y^\delta\| - \theta_k \big) \,/\, \|A^*(Ax_{k-1}^\delta - y^\delta)\|^2$;
      $M_{\xi_{k,0}} := (I + \xi_{k,0} A^*A)^{-1}$;  $\omega_k := 1$;
    else
      $\xi_{k,0} := \lambda_{k-1}$;  $M_{\xi_{k,0}} := M_{\xi_{k-1,1}}$;
    endif
  [3.4] $x_{\xi_{k,0}} := x_{k-1}^\delta - \xi_{k,0}\, M_{\xi_{k,0}}\, A^*(Ax_{k-1}^\delta - y^\delta)$;
  [3.5] $G_k(\xi_{k,0}) := \|Ax_{\xi_{k,0}} - y^\delta\|^2$;
  [3.6] $DG_k(\xi_{k,0}) := -2\,\big\langle A^*(Ax_{\xi_{k,0}} - y^\delta),\, M_{\xi_{k,0}}\, A^*(Ax_{\xi_{k,0}} - y^\delta) \big\rangle$;
  [3.7] Newton step:
      $\xi_{k,1} := \xi_{k,0} - \omega_k\, G_k(\xi_{k,0}) \,/\, DG_k(\xi_{k,0})$;
      $M_{\xi_{k,1}} := (I + \xi_{k,1} A^*A)^{-1}$;
      $x_{\xi_{k,1}} := x_{k-1}^\delta - \xi_{k,1}\, M_{\xi_{k,1}}\, A^*(Ax_{k-1}^\delta - y^\delta)$;
      $x_k^\delta := x_{\xi_{k,1}}$;  $\lambda_k := \xi_{k,1}$;
  [3.8] Dynamic over-relaxation:
      $\mu_k := \|Ax_k^\delta - y^\delta\|$;
      if ($\mu_k > 2\,\theta_k$) then $\omega_{k+1} := 2\,\omega_k$;
      else if ($\mu_k < \theta_k$) then $\omega_{k+1} := 1$;
      endif
end while

Algorithm 3: Numerical algorithm for the enit method with over-relaxed Newton step.
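A compact Python transcription of Algorithm 3 is sketched below (dense-matrix numpy; not the authors' MATLAB code). Assumptions: the $M_\xi$ operators are formed explicitly as in the listing, step [3.6] uses the derivative formula of Lemma 3.1, and in step [3.8] the factor $\omega$ is left unchanged when $\theta_k \leq \mu_k \leq 2\theta_k$.

import numpy as np

def enit(A, y_delta, delta, x0, p=0.1, tau=1.1, max_it=100):
    n = A.shape[1]
    AtA = A.T @ A
    x = x0.copy()
    lam_prev, M_prev, omega = None, None, 1.0
    lambdas = []
    for k in range(1, max_it + 1):
        b = A @ x - y_delta
        if np.linalg.norm(b) <= tau * delta:                       # stopping rule (10)
            break
        theta = p * np.linalg.norm(b) + (1.0 - p) * delta          # step [3.2]
        g = A.T @ b                                                # A^*(A x_{k-1} - y^delta)
        if k == 1:                                                 # step [3.3]
            xi0 = np.linalg.norm(b) * (np.linalg.norm(b) - theta) / float(g @ g)
            M0 = np.linalg.inv(np.eye(n) + xi0 * AtA)
        else:                                                      # warm start, reuse M
            xi0, M0 = lam_prev, M_prev
        x_xi0 = x - xi0 * (M0 @ g)                                 # step [3.4]
        r0 = A @ x_xi0 - y_delta
        s0 = A.T @ r0
        G0 = float(r0 @ r0)                                        # step [3.5]
        dG0 = -2.0 * float(s0 @ (M0 @ s0))                         # step [3.6] (Lemma 3.1)
        xi1 = xi0 - omega * G0 / dG0                               # step [3.7], Newton step
        M1 = np.linalg.inv(np.eye(n) + xi1 * AtA)
        x = x - xi1 * (M1 @ g)
        lam_prev, M_prev = xi1, M1
        lambdas.append(xi1)
        mu = np.linalg.norm(A @ x - y_delta)                       # step [3.8]
        if mu > 2.0 * theta:
            omega = 2.0 * omega
        elif mu < theta:
            omega = 1.0
    return x, lambdas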

This task is solved twice in the first iteration (steps [3.3] and [3.7]) and only once in the subsequent iterations (step [3.7]). The above discussed choice of the initial guess $\xi_{k,0}$ for the Newton step (18) is evaluated in step [3.3]. Moreover, the choice of the over-relaxation parameter is implemented in step [3.3] (where $\omega_1$ is initialized) and step [3.8] (where $\omega_k$, for $k \geq 2$, is updated). The efficiency of the enit method in Algorithm 3 is discussed in the next example.

Example 5.3. The benchmark system in Example 1.1 is revisited once again. In Figure 5 this inverse problem is solved using: (MAGENTA) the enit method with over-relaxed Newton step (18); (GREEN) the enit method with $\lambda_k$ satisfying (16), where the Lagrange multipliers are computed by the classical Newton method (with $\lambda_{k-1}$ as initial guess); (BLACK) the gnit method. The number of iterations needed to reach the stopping criterion reads: 7 iterations (MAGENTA), 5 iterations (GREEN), 19 iterations (BLACK). The numerical effort is depicted in the last picture (right hand side), where the number of linear systems solved per iteration is plotted: the enit method with over-relaxed Newton step (Algorithm 3) solves a total of 8 systems; the enit method with $\lambda_k$ computed by the Newton method solves a total of 16 systems; the gnit method with $\lambda_k = 2^k$ solves one system in each iteration, a total of 19 systems.

Algorithm 3 converges slightly slower than the enit method with the Newton method as inner iteration (this is due to the fact that the Lagrange multipliers $\lambda_k$ computed by Algorithm 3 do not satisfy (16) in every iteration). However, the number of linear systems per iteration in Algorithm 3 is smaller, effecting a much smaller (total) computational cost.

In the next section, Algorithm 3 is implemented for solving two well known linear ill-posed problems. In Subsection 6.1 the image deblurring problem [4, 3] is considered, while Subsection 6.2 is devoted to a 2D elliptic parameter identification problem, namely the inverse potential problem [15, 18].

6 Numerical experiments

6.1 Image deblurring problem

The first application considered in this article is the image deblurring problem [4]. These are finite dimensional problems modeled, in general, by high dimensional linear systems of the form (2). The vector $x \in X = \mathbb{R}^n$ represents the pixel values of an unknown true image, while the vector $y \in Y = X$ contains the pixel values of the observed (blurred) image. In practice, only noisy blurred data $y^\delta \in Y$ satisfying (1) are available.

Figure 5: Efficient implementation of the enit method: comparison between Algorithm 3 (MAGENTA), the enit method with $\lambda_k$ satisfying (16) (GREEN) and the gnit method (BLACK).

Figure 6: Image deblurring problem: setup of the inverse problem. (a) Original image $x^\star$; (b) Point spread function; (c) Blurred image $y$; (d) Artificial noise added to the blurred image (1% relative noise level $\|y - y^\delta\|/\|y\|$).

The matrix $A$ describes the blurring phenomenon [3, 4]. We consider the simple situation where the blur of the image is modeled by a space invariant point spread function (PSF). In the continuous model, the blurring process is represented by an integral operator of convolution type and (2) corresponds to an integral equation of the first kind [10]. In our discrete setting, after incorporating appropriate boundary conditions into the model, the discrete convolution is evaluated by means of the FFT algorithm. The computation of our deblurring experiment was conducted using MATLAB 2012a. The corresponding setup is shown in Figure 6: (a) true image $x^\star \in \mathbb{R}^n$, $n = 256^2$ ("Cameraman"); (b) the PSF is the rotationally symmetric Gaussian low-pass filter of size [...] and standard deviation $\sigma = 4$ (command fspecial('gaussian', [...], 4.0)); (c) exact data $y = Ax^\star \in \mathbb{R}^n$ (blurred image); (d) artificial noise added to the blurred image, 1% relative noise level $\|y - y^\delta\|/\|y\|$.
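For the discrete blurring model, a sketch of a space-invariant Gaussian blur applied via the 2-D FFT is given below. The boundary conditions, the PSF support (here 9x9) and the normalization are assumptions made for illustration; the paper's MATLAB setup (fspecial plus its chosen boundary conditions) may differ in these details.

import numpy as np

def gaussian_psf(size, sigma):
    """Normalized rotationally symmetric Gaussian PSF of given support and standard deviation."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    h = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return h / h.sum()

def blur_operator(psf, shape):
    """Return matvec/adjoint for periodic convolution with psf on images of the given shape."""
    big = np.zeros(shape)
    k = psf.shape[0]
    big[:k, :k] = psf
    big = np.roll(big, (-(k // 2), -(k // 2)), axis=(0, 1))   # center the PSF at the origin
    H = np.fft.fft2(big)                                      # spectrum of the blur
    A  = lambda x: np.real(np.fft.ifft2(H * np.fft.fft2(x)))
    At = lambda r: np.real(np.fft.ifft2(np.conj(H) * np.fft.fft2(r)))
    return A, At

A_blur, At_blur = blur_operator(gaussian_psf(9, 4.0), (256, 256))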

Figure 7: Image deblurring problem: exact data case. (TOP) Iteration error $\|x^\star - x_k\|$; (CENTER) Residual $\|Ax_k - y\|$; (BOTTOM) Lagrange multiplier $\lambda_k$.

In what follows, two distinct scenarios are discussed. In Figure 7 the exact data case is considered. The following methods are compared: (GRAY) sit method with $\lambda = 2$; (BLACK) gnit with $\lambda_k = 2^k$; (BLUE) enit method with one Newton step (17); (PINK) enit method with over-relaxed Newton step (18); (RED) enit method with one Secant step. All methods are stopped according to the discrepancy principle with $\tau = 1.1$ and $\delta = [\dots]$ (in this case, the noise level is defined by MATLAB's double-precision accuracy). As initial guess we choose $x_0 = y$ (the blurred image itself). Moreover, we take $p = 0.1$ in (16).

In Figure 8 the noisy data case is considered. The five above described methods are presented using the same color scheme. All methods are once again stopped according to the discrepancy principle: $\tau = 1.1$ and $\delta > 0$ as shown in Figure 6(d). We choose the initial guess $x_0 = y^\delta$ (the noisy blurred image) and $p = 0.1$ in (16).

The plots in Figures 7 and 8 show: (TOP) evolution of the iteration error $\|x^\star - x_k^\delta\|$; (CENTER) evolution of the residual $\|Ax_k^\delta - y^\delta\|$; (BOTTOM) the Lagrange multipliers $\lambda_k$ corresponding to each method. Notice the exponential decay of the residual, and the exponential growth of the Lagrange multipliers.

6.2 Inverse potential problem

In this subsection we address the inverse potential problem [12, 7, 16, 28]. Generalizations of this inverse problem appear in many relevant applications, including inverse gravimetry [18, 28], EEG [8], and EMG [29]. The forward problem considered here consists of solving, on a Lipschitz domain $\Omega \subset \mathbb{R}^d$, for a given source function $x \in L^2(\Omega)$, the boundary value problem

  $\Delta u = x$  in $\Omega$,   $u = 0$  on $\partial\Omega$.   (19)

Figure 8: Image deblurring problem: noisy data case. (TOP) Iteration error $\|x^\star - x_k^\delta\|$; (CENTER) Residual $\|Ax_k^\delta - y^\delta\|$; (BOTTOM) Lagrange multiplier $\lambda_k$.

The corresponding inverse problem is the so-called inverse potential problem (IPP), which consists of recovering an $L^2$ function $x$ from measurements of the Neumann trace of its corresponding potential on the boundary of $\Omega$, i.e., $y := u_\nu|_{\partial\Omega} \in L^2(\partial\Omega)$. This problem is modeled by the linear operator $A : L^2(\Omega) \to L^2(\partial\Omega)$ defined by $Ax := u_\nu|_{\partial\Omega}$, where $u \in H_0^1(\Omega)$ is the unique solution of (19) [16]. Using this notation, the IPP can be written in the abbreviated form (2), where the available noisy data $y^\delta \in L^2(\partial\Omega)$ satisfy (1).

In our experiments we follow [7] in the experimental setup, selecting $\Omega = (0,1) \times (0,1)$ and assuming that the unknown parameter $x$ is an $H^1$-function with sharp gradients. Problem (19) is solved for $x = x^\star$ shown in Figure 9(a). In our numerical implementations we set $\tau = 1.1$ and $p = 0.1$ (same as in Subsection 6.1). Artificial random noise of level 0.1% is introduced to the data. The enit method with over-relaxed Newton step reaches the stopping criterion (10) after 5 steps. The corresponding iterate $x_5^\delta$ and iteration error $x^\star - x_5^\delta$ are shown in Figure 9(b) and (c) respectively.

In our numerical experiment (see Figure 10) random noise of level 0.1% was added to the exact data; the following methods were implemented for solving the corresponding IPP: (GRAY) sit method with $\lambda = 2$; (BLACK) gnit with $\lambda_k = 2^k$; (BLUE) enit method with Newton step (17); (PINK) enit method with over-relaxed Newton step (18); (RED) enit method with one Secant step. All methods are stopped according to the discrepancy principle with $\tau = 1.1$. As initial guess we choose $x_0$ to be a constant function in $\Omega$. Moreover, we take $p = 0.1$ in (16). Evolution of the iteration error (TOP), residual (CENTER) and Lagrange multiplier (BOTTOM) are shown. Notice once again the exponential decay of the residual, and the exponential growth of the Lagrange multipliers.
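For completeness, a sketch of a finite-difference realization of the forward operator $A$ of the IPP is given below (an assumed discretization, not the authors'): a 5-point solve of (19) on the unit square with homogeneous Dirichlet data, followed by one-sided approximations of the outward normal derivative on the four edges.

import numpy as np

def ipp_forward(x_src, m):
    """Map an (m, m) array of interior source values to the Neumann trace of the potential."""
    h = 1.0 / (m + 1)
    T = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2   # 1-D second-difference
    L = np.kron(np.eye(m), T) + np.kron(T, np.eye(m))                  # discrete -Laplacian (SPD)
    u = np.linalg.solve(L, -x_src.reshape(-1)).reshape(m, m)           # Delta u = x, u = 0 on boundary
    # outward normal derivative u_nu on the four edges (u vanishes on the boundary itself)
    return np.concatenate([-u[:, 0], -u[:, -1], -u[0, :], -u[-1, :]]) / h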

Figure 9: Inverse potential problem, enit method with over-relaxed Newton step. (a) Exact solution $x^\star$; (b) Approximate solution $x_k^\delta$, for $k = 5$; (c) Iteration error $\|x^\star - x_5^\delta\|$.

7 Conclusions

We investigate nit type methods for computing stable approximate solutions to ill-posed linear operator equations. The main contributions of this article are twofold. First, a novel strategy for choosing a sequence of Lagrange multipliers for the nit iteration is derived. Second, an efficient numerical algorithm based on this strategy is developed.

In practice, the geometrical sequence $\lambda_k = q^k$ (for $q > 1$) is the most commonly adopted choice for the Lagrange multipliers in nit type methods (here denoted by gnit). The choice of $q > 1$ is exogenous to (2), (1) and it is not clear which are the good values for $q$; wrong choices of $q$ may lead either to slow convergence or to failure to converge. This critical issue motivated us to investigate novel strategies for choosing $\lambda_k$ which are endogenous to problem (2), (1), i.e., only information about $A$, $y^\delta$, $\delta$, $x_k^\delta$, $\lambda_{k-1}$ is used to determine $\lambda_k$.

We prove monotonicity of the proposed enit method, and exponential decay of the residual $\|Ax_k^\delta - y^\delta\|^2$. Moreover, we provide estimates for the gain $\|x^\star - x_{k-1}^\delta\|^2 - \|x^\star - x_k^\delta\|^2$, and for the Lagrange multipliers $\lambda_k$. A convergence proof in the case of exact data is provided.

An algorithmic implementation of the enit method is proposed, where approximations to the Lagrange multipliers are computed using a dynamically over-relaxed Newton step, with an appropriate choice of the initial guess. The resulting enit method is competitive with gnit, not only from the point of view of the total number of iterations, but also from the point of view of the numerical effort required in each iteration step.

Our algorithm is tested on two well known applications: the inverse potential problem and the image deblurring problem. The obtained results validate the efficiency of our method compared with standard implementations of the gnit method.

Acknowledgments

The work of R.B. is supported by the research council of the Alpen-Adria-Universität Klagenfurt (AAU) and by the Karl Popper Kolleg "Modeling-Simulation-Optimization", funded by the AAU and by the Carinthian Economic Promotion Fund (KWF). A.L. acknowledges support from the research agencies CAPES, CNPq (grant .../2013-0), and from the AvH Foundation. The work of B.F.S. was partially supported by CNPq (grants .../2013-1 and .../2011-5) and FAPERJ (grant E-26/.../2011).

Figure 10: Inverse potential problem: noisy data case. (TOP) Iteration error $\|x^\star - x_k^\delta\|$; (CENTER) Residual $\|Ax_k^\delta - y^\delta\|$; (BOTTOM) Lagrange multiplier $\lambda_k$.

References

[1] A.B. Bakushinsky and M.Y. Kokurin, Iterative methods for approximate solution of inverse problems, Mathematics and Its Applications, vol. 577, Springer, Dordrecht.
[2] J. Baumeister, Stable solution of inverse problems, Advanced Lectures in Mathematics, Friedr. Vieweg & Sohn, Braunschweig.
[3] M. Bertero, Image deblurring with Poisson data: from cells to galaxies, Inverse Problems 25 (2009), no. 12.
[4] M. Bertero and P. Boccacci, Introduction to inverse problems in imaging, Advanced Lectures in Mathematics, IOP Publishing, Bristol.

[5] M. Brill and E. Schock, Iterative solution of ill-posed problems: a survey, in Model Optimization in Exploration Geophysics (A. Vogel, ed.), Vieweg, Braunschweig.
[6] F.E. Browder and W.V. Petryshyn, Construction of fixed points of nonlinear mappings in Hilbert space, Journal of Mathematical Analysis and Applications 20 (1967), no. 2.
[7] A. De Cezaro, A. Leitão, and X.-C. Tai, On multiple level-set regularization methods for inverse problems, Inverse Problems 25 (2009).
[8] A. El Badia and M. Farah, Identification of dipole sources in an elliptic equation from boundary measurements: application to the inverse EEG problem, J. Inverse Ill-Posed Probl. 14 (2006), no. 4.
[9] H.W. Engl, On the choice of the regularization parameter for iterated Tikhonov regularization of ill-posed problems, J. Approx. Theory 49 (1987), no. 1.
[10] H.W. Engl, M. Hanke, and A. Neubauer, Regularization of inverse problems, Kluwer Academic Publishers, Dordrecht.
[11] A.G. Fakeev, A class of iterative processes for solving degenerate systems of linear algebraic equations, U.S.S.R. Comput. Math. Math. Phys. 21 (1981), no. 3.
[12] F. Frühauf, O. Scherzer, and A. Leitão, Analysis of regularization methods for the solution of ill-posed problems involving discontinuous operators, SIAM J. Numer. Anal. 43 (2005).
[13] C.W. Groetsch, The theory of Tikhonov regularization for Fredholm equations of the first kind, Research Notes in Mathematics, vol. 105, Pitman (Advanced Publishing Program), Boston, MA.
[14] M. Hanke and C.W. Groetsch, Nonstationary iterated Tikhonov regularization, J. Optim. Theory Appl. 98 (1998), no. 1.
[15] F. Hettlich and W. Rundell, Iterative methods for the reconstruction of an inverse potential problem, Inverse Problems 12 (1996).
[16] ———, Iterative methods for the reconstruction of an inverse potential problem, Inverse Problems 12 (1996), no. 3.
[17] B. Hofmann, Regularization for applied inverse and ill-posed problems, Teubner-Texte zur Mathematik [Teubner Texts in Mathematics], vol. 85, BSB B.G. Teubner Verlagsgesellschaft, Leipzig, 1986. A numerical approach, with German, French and Russian summaries.
[18] V. Isakov, Inverse problems for partial differential equations, second ed., Applied Mathematical Sciences, vol. 127, Springer, New York.
[19] B. Kaltenbacher, A. Neubauer, and O. Scherzer, Iterative regularization methods for nonlinear ill-posed problems, Radon Series on Computational and Applied Mathematics, vol. 6, Walter de Gruyter GmbH & Co. KG, Berlin.

[20] J.T. King and D. Chillingworth, Approximation of generalized inverses by iterated regularization, Numer. Funct. Anal. Optim. 1 (1979).
[21] A. Kirsch, An introduction to the mathematical theory of inverse problems, Applied Mathematical Sciences, vol. 120, Springer-Verlag, New York.
[22] L.J. Lardy, A series representation for the generalized inverse of a closed linear operator, Atti della Accademia Nazionale dei Lincei, Rendiconti della Classe di Scienze Fisiche, Matematiche e Naturali, Serie VIII 58 (1975).
[23] A. Louis, Inverse und schlecht gestellte Probleme, B.G. Teubner, Stuttgart.
[24] V.A. Morozov, Regularization methods for ill-posed problems, CRC Press, Boca Raton.
[25] F. Natterer and F. Wübbeling, Mathematical Methods in Image Reconstruction, SIAM, Philadelphia.
[26] O. Scherzer, Convergence rates of iterated Tikhonov regularized solutions of nonlinear ill-posed problems, Numer. Math. 66 (1993), no. 2.
[27] J.L. Troutman, Variational calculus and optimal control: optimization with elementary convexity, second ed., Undergraduate Texts in Mathematics, Springer-Verlag, New York, 1996.
[28] K. van den Doel, U.M. Ascher, and A. Leitão, Multiple level sets for piecewise constant surface reconstruction in highly ill-posed problems, Journal of Scientific Computing 43 (2010), no. 1.
[29] K. van den Doel, U.M. Ascher, and D.K. Pai, Computed myography: three-dimensional reconstruction of motor functions from surface EMG data, Inverse Problems 24 (2008), no. 6.


More information

Nonlinear regularization methods for ill-posed problems with piecewise constant or strongly varying solutions

Nonlinear regularization methods for ill-posed problems with piecewise constant or strongly varying solutions IOP PUBLISHING (19pp) INVERSE PROBLEMS doi:10.1088/0266-5611/25/11/115014 Nonlinear regularization methods for ill-posed problems with piecewise constant or strongly varying solutions H Egger 1 and A Leitão

More information

Parameter Identification in Partial Differential Equations

Parameter Identification in Partial Differential Equations Parameter Identification in Partial Differential Equations Differentiation of data Not strictly a parameter identification problem, but good motivation. Appears often as a subproblem. Given noisy observation

More information

On the convergence properties of the projected gradient method for convex optimization

On the convergence properties of the projected gradient method for convex optimization Computational and Applied Mathematics Vol. 22, N. 1, pp. 37 52, 2003 Copyright 2003 SBMAC On the convergence properties of the projected gradient method for convex optimization A. N. IUSEM* Instituto de

More information

Optimization with nonnegativity constraints

Optimization with nonnegativity constraints Optimization with nonnegativity constraints Arie Verhoeven averhoev@win.tue.nl CASA Seminar, May 30, 2007 Seminar: Inverse problems 1 Introduction Yves van Gennip February 21 2 Regularization strategies

More information

A Family of Preconditioned Iteratively Regularized Methods For Nonlinear Minimization

A Family of Preconditioned Iteratively Regularized Methods For Nonlinear Minimization A Family of Preconditioned Iteratively Regularized Methods For Nonlinear Minimization Alexandra Smirnova Rosemary A Renaut March 27, 2008 Abstract The preconditioned iteratively regularized Gauss-Newton

More information

Due Giorni di Algebra Lineare Numerica (2GALN) Febbraio 2016, Como. Iterative regularization in variable exponent Lebesgue spaces

Due Giorni di Algebra Lineare Numerica (2GALN) Febbraio 2016, Como. Iterative regularization in variable exponent Lebesgue spaces Due Giorni di Algebra Lineare Numerica (2GALN) 16 17 Febbraio 2016, Como Iterative regularization in variable exponent Lebesgue spaces Claudio Estatico 1 Joint work with: Brigida Bonino 1, Fabio Di Benedetto

More information

Stochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions

Stochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions International Journal of Control Vol. 00, No. 00, January 2007, 1 10 Stochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions I-JENG WANG and JAMES C.

More information

arxiv: v1 [math.fa] 16 Jun 2011

arxiv: v1 [math.fa] 16 Jun 2011 arxiv:1106.3342v1 [math.fa] 16 Jun 2011 Gauge functions for convex cones B. F. Svaiter August 20, 2018 Abstract We analyze a class of sublinear functionals which characterize the interior and the exterior

More information

A PROJECTED HESSIAN GAUSS-NEWTON ALGORITHM FOR SOLVING SYSTEMS OF NONLINEAR EQUATIONS AND INEQUALITIES

A PROJECTED HESSIAN GAUSS-NEWTON ALGORITHM FOR SOLVING SYSTEMS OF NONLINEAR EQUATIONS AND INEQUALITIES IJMMS 25:6 2001) 397 409 PII. S0161171201002290 http://ijmms.hindawi.com Hindawi Publishing Corp. A PROJECTED HESSIAN GAUSS-NEWTON ALGORITHM FOR SOLVING SYSTEMS OF NONLINEAR EQUATIONS AND INEQUALITIES

More information

Author(s) Huang, Feimin; Matsumura, Akitaka; Citation Osaka Journal of Mathematics. 41(1)

Author(s) Huang, Feimin; Matsumura, Akitaka; Citation Osaka Journal of Mathematics. 41(1) Title On the stability of contact Navier-Stokes equations with discont free b Authors Huang, Feimin; Matsumura, Akitaka; Citation Osaka Journal of Mathematics. 4 Issue 4-3 Date Text Version publisher URL

More information

INEXACT NEWTON REGULARIZATION USING CONJUGATE GRADIENTS AS INNER ITERATION

INEXACT NEWTON REGULARIZATION USING CONJUGATE GRADIENTS AS INNER ITERATION INEXACT NEWTON REGULARIZATION USING CONJUGATE GRADIENTS AS INNER ITERATION ANDREAS RIEDER August 2004 Abstract. In our papers [Inverse Problems, 15, 309-327,1999] and [Numer. Math., 88, 347-365, 2001]

More information

Adaptive and multilevel methods for parameter identification in partial differential equations

Adaptive and multilevel methods for parameter identification in partial differential equations Adaptive and multilevel methods for parameter identification in partial differential equations Barbara Kaltenbacher, University of Stuttgart joint work with Hend Ben Ameur, Université de Tunis Anke Griesbaum,

More information

Convergence rates of the continuous regularized Gauss Newton method

Convergence rates of the continuous regularized Gauss Newton method J. Inv. Ill-Posed Problems, Vol. 1, No. 3, pp. 261 28 (22) c VSP 22 Convergence rates of the continuous regularized Gauss Newton method B. KALTENBACHER, A. NEUBAUER, and A. G. RAMM Abstract In this paper

More information

10. Multi-objective least squares

10. Multi-objective least squares L Vandenberghe ECE133A (Winter 2018) 10 Multi-objective least squares multi-objective least squares regularized data fitting control estimation and inversion 10-1 Multi-objective least squares we have

More information

On the complexity of the hybrid proximal extragradient method for the iterates and the ergodic mean

On the complexity of the hybrid proximal extragradient method for the iterates and the ergodic mean On the complexity of the hybrid proximal extragradient method for the iterates and the ergodic mean Renato D.C. Monteiro B. F. Svaiter March 17, 2009 Abstract In this paper we analyze the iteration-complexity

More information

Tikhonov Regularization of Large Symmetric Problems

Tikhonov Regularization of Large Symmetric Problems NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS Numer. Linear Algebra Appl. 2000; 00:1 11 [Version: 2000/03/22 v1.0] Tihonov Regularization of Large Symmetric Problems D. Calvetti 1, L. Reichel 2 and A. Shuibi

More information

Nonlinear Flows for Displacement Correction and Applications in Tomography

Nonlinear Flows for Displacement Correction and Applications in Tomography Nonlinear Flows for Displacement Correction and Applications in Tomography Guozhi Dong 1 and Otmar Scherzer 1,2 1 Computational Science Center, University of Vienna, Oskar-Morgenstern-Platz 1, 1090 Wien,

More information

Pseudoinverse and Adjoint Operators

Pseudoinverse and Adjoint Operators ECE 275AB Lecture 5 Fall 2008 V1.1 c K. Kreutz-Delgado, UC San Diego p. 1/1 Lecture 5 ECE 275A Pseudoinverse and Adjoint Operators ECE 275AB Lecture 5 Fall 2008 V1.1 c K. Kreutz-Delgado, UC San Diego p.

More information

Morozov s discrepancy principle for Tikhonov-type functionals with non-linear operators

Morozov s discrepancy principle for Tikhonov-type functionals with non-linear operators Morozov s discrepancy principle for Tikhonov-type functionals with non-linear operators Stephan W Anzengruber 1 and Ronny Ramlau 1,2 1 Johann Radon Institute for Computational and Applied Mathematics,

More information

A MODIFIED TSVD METHOD FOR DISCRETE ILL-POSED PROBLEMS

A MODIFIED TSVD METHOD FOR DISCRETE ILL-POSED PROBLEMS A MODIFIED TSVD METHOD FOR DISCRETE ILL-POSED PROBLEMS SILVIA NOSCHESE AND LOTHAR REICHEL Abstract. Truncated singular value decomposition (TSVD) is a popular method for solving linear discrete ill-posed

More information

Convergence rates of convex variational regularization

Convergence rates of convex variational regularization INSTITUTE OF PHYSICS PUBLISHING Inverse Problems 20 (2004) 1411 1421 INVERSE PROBLEMS PII: S0266-5611(04)77358-X Convergence rates of convex variational regularization Martin Burger 1 and Stanley Osher

More information

min f(x). (2.1) Objectives consisting of a smooth convex term plus a nonconvex regularization term;

min f(x). (2.1) Objectives consisting of a smooth convex term plus a nonconvex regularization term; Chapter 2 Gradient Methods The gradient method forms the foundation of all of the schemes studied in this book. We will provide several complementary perspectives on this algorithm that highlight the many

More information

LAVRENTIEV-TYPE REGULARIZATION METHODS FOR HERMITIAN PROBLEMS

LAVRENTIEV-TYPE REGULARIZATION METHODS FOR HERMITIAN PROBLEMS LAVRENTIEV-TYPE REGULARIZATION METHODS FOR HERMITIAN PROBLEMS SILVIA NOSCHESE AND LOTHAR REICHEL Abstract. Lavrentiev regularization is a popular approach to the solution of linear discrete illposed problems

More information

arxiv: v1 [math.na] 25 Sep 2012

arxiv: v1 [math.na] 25 Sep 2012 Kantorovich s Theorem on Newton s Method arxiv:1209.5704v1 [math.na] 25 Sep 2012 O. P. Ferreira B. F. Svaiter March 09, 2007 Abstract In this work we present a simplifyed proof of Kantorovich s Theorem

More information

Convergence Rates in Regularization for Nonlinear Ill-Posed Equations Involving m-accretive Mappings in Banach Spaces

Convergence Rates in Regularization for Nonlinear Ill-Posed Equations Involving m-accretive Mappings in Banach Spaces Applied Mathematical Sciences, Vol. 6, 212, no. 63, 319-3117 Convergence Rates in Regularization for Nonlinear Ill-Posed Equations Involving m-accretive Mappings in Banach Spaces Nguyen Buong Vietnamese

More information

Optimization methods

Optimization methods Lecture notes 3 February 8, 016 1 Introduction Optimization methods In these notes we provide an overview of a selection of optimization methods. We focus on methods which rely on first-order information,

More information

Nonstationary lterated

Nonstationary lterated Nonstationary lterated Tikhon~v Regularization Martin Hanke and C.W. Groetsch Preprint Nr. 277 ISSN 0943-887 4 April 1996 Nonstationary Iterated Tikhonov Regularization Martin Hanke and C.W. Groetsch Abstract

More information

c 1999 Society for Industrial and Applied Mathematics

c 1999 Society for Industrial and Applied Mathematics SIAM J. MATRIX ANAL. APPL. Vol. 21, No. 1, pp. 185 194 c 1999 Society for Industrial and Applied Mathematics TIKHONOV REGULARIZATION AND TOTAL LEAST SQUARES GENE H. GOLUB, PER CHRISTIAN HANSEN, AND DIANNE

More information

Numerical differentiation by means of Legendre polynomials in the presence of square summable noise

Numerical differentiation by means of Legendre polynomials in the presence of square summable noise www.oeaw.ac.at Numerical differentiation by means of Legendre polynomials in the presence of square summable noise S. Lu, V. Naumova, S. Pereverzyev RICAM-Report 2012-15 www.ricam.oeaw.ac.at Numerical

More information

Summary Notes on Maximization

Summary Notes on Maximization Division of the Humanities and Social Sciences Summary Notes on Maximization KC Border Fall 2005 1 Classical Lagrange Multiplier Theorem 1 Definition A point x is a constrained local maximizer of f subject

More information

A Theoretical Framework for the Regularization of Poisson Likelihood Estimation Problems

A Theoretical Framework for the Regularization of Poisson Likelihood Estimation Problems c de Gruyter 2007 J. Inv. Ill-Posed Problems 15 (2007), 12 8 DOI 10.1515 / JIP.2007.002 A Theoretical Framework for the Regularization of Poisson Likelihood Estimation Problems Johnathan M. Bardsley Communicated

More information

arxiv: v1 [math.oc] 5 Feb 2018

arxiv: v1 [math.oc] 5 Feb 2018 A Bilevel Approach for Parameter Learning in Inverse Problems Gernot Holler Karl Kunisch Richard C. Barnard February 5, 28 arxiv:82.365v [math.oc] 5 Feb 28 Abstract A learning approach to selecting regularization

More information

Some Properties of the Augmented Lagrangian in Cone Constrained Optimization

Some Properties of the Augmented Lagrangian in Cone Constrained Optimization MATHEMATICS OF OPERATIONS RESEARCH Vol. 29, No. 3, August 2004, pp. 479 491 issn 0364-765X eissn 1526-5471 04 2903 0479 informs doi 10.1287/moor.1040.0103 2004 INFORMS Some Properties of the Augmented

More information

Unconstrained minimization of smooth functions

Unconstrained minimization of smooth functions Unconstrained minimization of smooth functions We want to solve min x R N f(x), where f is convex. In this section, we will assume that f is differentiable (so its gradient exists at every point), and

More information

1.4 The Jacobian of a map

1.4 The Jacobian of a map 1.4 The Jacobian of a map Derivative of a differentiable map Let F : M n N m be a differentiable map between two C 1 manifolds. Given a point p M we define the derivative of F at p by df p df (p) : T p

More information

Complexity of gradient descent for multiobjective optimization

Complexity of gradient descent for multiobjective optimization Complexity of gradient descent for multiobjective optimization J. Fliege A. I. F. Vaz L. N. Vicente July 18, 2018 Abstract A number of first-order methods have been proposed for smooth multiobjective optimization

More information

Convergence rates for Morozov s Discrepancy Principle using Variational Inequalities

Convergence rates for Morozov s Discrepancy Principle using Variational Inequalities Convergence rates for Morozov s Discrepancy Principle using Variational Inequalities Stephan W Anzengruber Ronny Ramlau Abstract We derive convergence rates for Tikhonov-type regularization with conve

More information

AN ALGORITHM FOR DETERMINATION OF AGE-SPECIFIC FERTILITY RATE FROM INITIAL AGE STRUCTURE AND TOTAL POPULATION

AN ALGORITHM FOR DETERMINATION OF AGE-SPECIFIC FERTILITY RATE FROM INITIAL AGE STRUCTURE AND TOTAL POPULATION J Syst Sci Complex (212) 25: 833 844 AN ALGORITHM FOR DETERMINATION OF AGE-SPECIFIC FERTILITY RATE FROM INITIAL AGE STRUCTURE AND TOTAL POPULATION Zhixue ZHAO Baozhu GUO DOI: 117/s11424-12-139-8 Received:

More information

Accelerated Gradient Methods for Constrained Image Deblurring

Accelerated Gradient Methods for Constrained Image Deblurring Accelerated Gradient Methods for Constrained Image Deblurring S Bonettini 1, R Zanella 2, L Zanni 2, M Bertero 3 1 Dipartimento di Matematica, Università di Ferrara, Via Saragat 1, Building B, I-44100

More information

Contraction Methods for Convex Optimization and monotone variational inequalities No.12

Contraction Methods for Convex Optimization and monotone variational inequalities No.12 XII - 1 Contraction Methods for Convex Optimization and monotone variational inequalities No.12 Linearized alternating direction methods of multipliers for separable convex programming Bingsheng He Department

More information

ITERATIVE METHODS FOR SOLVING A NONLINEAR BOUNDARY INVERSE PROBLEM IN GLACIOLOGY S. AVDONIN, V. KOZLOV, D. MAXWELL, AND M. TRUFFER

ITERATIVE METHODS FOR SOLVING A NONLINEAR BOUNDARY INVERSE PROBLEM IN GLACIOLOGY S. AVDONIN, V. KOZLOV, D. MAXWELL, AND M. TRUFFER ITERATIVE METHODS FOR SOLVING A NONLINEAR BOUNDARY INVERSE PROBLEM IN GLACIOLOGY S. AVDONIN, V. KOZLOV, D. MAXWELL, AND M. TRUFFER Abstract. We address a Cauchy problem for a nonlinear elliptic PDE arising

More information

On the acceleration of the double smoothing technique for unconstrained convex optimization problems

On the acceleration of the double smoothing technique for unconstrained convex optimization problems On the acceleration of the double smoothing technique for unconstrained convex optimization problems Radu Ioan Boţ Christopher Hendrich October 10, 01 Abstract. In this article we investigate the possibilities

More information

Algorithms for Constrained Optimization

Algorithms for Constrained Optimization 1 / 42 Algorithms for Constrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University April 19, 2015 2 / 42 Outline 1. Convergence 2. Sequential quadratic

More information

TOWARDS A GENERAL CONVERGENCE THEORY FOR INEXACT NEWTON REGULARIZATIONS

TOWARDS A GENERAL CONVERGENCE THEORY FOR INEXACT NEWTON REGULARIZATIONS TOWARDS A GENERAL CONVERGENCE THEOR FOR INEXACT NEWTON REGULARIZATIONS ARMIN LECHLEITER AND ANDREAS RIEDER July 13, 2009 Abstract. We develop a general convergence analysis for a class of inexact Newtontype

More information

Tuning of Fuzzy Systems as an Ill-Posed Problem

Tuning of Fuzzy Systems as an Ill-Posed Problem Tuning of Fuzzy Systems as an Ill-Posed Problem Martin Burger 1, Josef Haslinger 2, and Ulrich Bodenhofer 2 1 SFB F 13 Numerical and Symbolic Scientific Computing and Industrial Mathematics Institute,

More information

On well definedness of the Central Path

On well definedness of the Central Path On well definedness of the Central Path L.M.Graña Drummond B. F. Svaiter IMPA-Instituto de Matemática Pura e Aplicada Estrada Dona Castorina 110, Jardim Botânico, Rio de Janeiro-RJ CEP 22460-320 Brasil

More information

EE 367 / CS 448I Computational Imaging and Display Notes: Image Deconvolution (lecture 6)

EE 367 / CS 448I Computational Imaging and Display Notes: Image Deconvolution (lecture 6) EE 367 / CS 448I Computational Imaging and Display Notes: Image Deconvolution (lecture 6) Gordon Wetzstein gordon.wetzstein@stanford.edu This document serves as a supplement to the material discussed in

More information

An Iteratively Regularized Projection Method for Nonlinear Ill-posed Problems

An Iteratively Regularized Projection Method for Nonlinear Ill-posed Problems Int. J. Contemp. Math. Sciences, Vol. 5, 2010, no. 52, 2547-2565 An Iteratively Regularized Projection Method for Nonlinear Ill-posed Problems Santhosh George Department of Mathematical and Computational

More information

Normed & Inner Product Vector Spaces

Normed & Inner Product Vector Spaces Normed & Inner Product Vector Spaces ECE 174 Introduction to Linear & Nonlinear Optimization Ken Kreutz-Delgado ECE Department, UC San Diego Ken Kreutz-Delgado (UC San Diego) ECE 174 Fall 2016 1 / 27 Normed

More information

Parameter identification in medical imaging

Parameter identification in medical imaging Trabalho apresentado no XXXVII CNMAC, S.J. dos Campos - SP, 017. Proceeding Series of the Brazilian Society of Computational and Applied Mathematics Parameter identification in medical imaging Louise Reips

More information

An inexact subgradient algorithm for Equilibrium Problems

An inexact subgradient algorithm for Equilibrium Problems Volume 30, N. 1, pp. 91 107, 2011 Copyright 2011 SBMAC ISSN 0101-8205 www.scielo.br/cam An inexact subgradient algorithm for Equilibrium Problems PAULO SANTOS 1 and SUSANA SCHEIMBERG 2 1 DM, UFPI, Teresina,

More information

A derivative-free nonmonotone line search and its application to the spectral residual method

A derivative-free nonmonotone line search and its application to the spectral residual method IMA Journal of Numerical Analysis (2009) 29, 814 825 doi:10.1093/imanum/drn019 Advance Access publication on November 14, 2008 A derivative-free nonmonotone line search and its application to the spectral

More information

A CONVERGENCE ANALYSIS OF THE NEWTON-TYPE REGULARIZATION CG-REGINN WITH APPLICATION TO IMPEDANCE TOMOGRAPHY

A CONVERGENCE ANALYSIS OF THE NEWTON-TYPE REGULARIZATION CG-REGINN WITH APPLICATION TO IMPEDANCE TOMOGRAPHY A CONVERGENCE ANALYSIS OF THE NEWTON-TYPE REGULARIZATION CG-REGINN WITH APPLICATION TO IMPEDANCE TOMOGRAPHY ARMIN LECHLEITER AND ANDREAS RIEDER January 8, 2007 Abstract. The Newton-type regularization

More information

A Note on Inverse Iteration

A Note on Inverse Iteration A Note on Inverse Iteration Klaus Neymeyr Universität Rostock, Fachbereich Mathematik, Universitätsplatz 1, 18051 Rostock, Germany; SUMMARY Inverse iteration, if applied to a symmetric positive definite

More information