A Family of Preconditioned Iteratively Regularized Methods For Nonlinear Minimization


A Family of Preconditioned Iteratively Regularized Methods For Nonlinear Minimization

Alexandra Smirnova, Rosemary A. Renaut

March 27, 2008

Abstract. The preconditioned iteratively regularized Gauss-Newton algorithm for the minimization of general nonlinear functionals was introduced by Smirnova, Renaut and Khan (2007). In this paper, we establish theoretical convergence results for an extended stabilized family of Generalized Preconditioned Iterative methods, which includes $M$-times iterated Tikhonov regularization with line search. Numerical schemes illustrating the theoretical results are also presented.

Keywords: Gauss-Newton method, stopping rule, ill-posed problem, regularization.

AMS Subject Classification: 47A52, 65F22, 65J15, 65N21.

1 Introduction

Consider a general ill-posed problem of minimizing a nonlinear functional

$J(q) := \|F_\delta(q)\|^2_{H_1}$ (1.1)

with a noisy operator $F_\delta$ mapping between Hilbert spaces $H$ and $H_1$, i.e., $F_\delta : D(F) \subset H \to H_1$. Notice that this form of the nonlinear functional allows for error both in the measurement data and in the forward nonlinear operator used for estimating the measured data. In particular, suppose that $g_\delta$ are measured data and $C(q)$ is a forward operator for obtaining estimates of $g_\delta$; then the nonlinear operator in which $C$ is assumed error free is given by $F_\delta = C - g_\delta$. Here we make the more general assumption that $C$ is also noise contaminated, possibly due to a discretization process; the subscript $\delta$ is omitted in the noise free case.

When $F_\delta$ is Fréchet differentiable in a neighborhood of a minimizer, one of the most widely used stabilizing numerical algorithms for solving (1.1) is the Iteratively Regularized Gauss-Newton (IRGN) method [BA93]

$q^{(k+1)} = \bar q - \big[F_\delta'(q^{(k)})^* F_\delta'(q^{(k)}) + \tau^{(k)} I\big]^{-1} F_\delta'(q^{(k)})^* \{F_\delta(q^{(k)}) - F_\delta'(q^{(k)})(q^{(k)} - \bar q)\}, \quad \bar q \in H,$ (1.2)

where $\{\tau^{(k)}\}$ is a sequence of positive regularization parameters and $\bar q$ is a fixed a priori element.

Georgia State University, Department of Mathematics and Statistics, Atlanta, GA. Supported by NSF grants DMS and DMS.
Arizona State University, Department of Mathematics and Statistics, Tempe, AZ.
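As a concrete finite-dimensional illustration of one step of (1.2), the following minimal NumPy sketch may help; the toy diagonal operator, the choice $\bar q = 0$, and the geometric schedule $\tau^{(k)} = 2^{-k}$ are illustrative assumptions of this sketch, not part of the paper.

```python
import numpy as np

def irgn_step(q, q_bar, F, J, tau):
    # One IRGN step (1.2) in R^n:
    # q_new = q_bar - (J^T J + tau I)^{-1} J^T r,
    # with r = F(q) - J (q - q_bar) the linearized residual about q.
    Jq = J(q)
    r = F(q) - Jq @ (q - q_bar)
    A = Jq.T @ Jq + tau * np.eye(q.size)
    return q_bar - np.linalg.solve(A, Jq.T @ r)

# Toy ill-conditioned linear problem (Gauss-Newton is exact here)
Jmat = np.diag([1.0, 1e-2])
q_true = np.array([1.0, 2.0])
F = lambda q: Jmat @ (q - q_true)
J = lambda q: Jmat

q = np.zeros(2)
for k in range(20):
    q = irgn_step(q, np.zeros(2), F, J, tau=0.5 ** k)  # tau^(k) -> 0
```

With a geometrically decaying regularization sequence, the well-conditioned component is recovered almost immediately, while the poorly conditioned one is recovered only as $\tau^{(k)} \to 0$; controlling that trade-off is exactly what the stopping rules of Section 2 address.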

The element $q^{(k+1)}$ in (1.2) has the variational characterization [DES98]: it is the minimizer over $q \in H$ of

$\|F_\delta(q^{(k)}) - F_\delta'(q^{(k)})(q^{(k)} - q)\|^2_{H_1} + \tau^{(k)} \|q - \bar q\|^2_H.$

For some inverse problems it proves to be extremely beneficial to impose the regularization in an alternative space. This suggests the use of a preconditioned IRGN method [SRK07]

$q^{(k+1)} = \bar q - \big[F_\delta'(q^{(k)})^* F_\delta'(q^{(k)}) + \tau^{(k)} L^*L\big]^{-1} F_\delta'(q^{(k)})^* \{F_\delta(q^{(k)}) - F_\delta'(q^{(k)})(q^{(k)} - \bar q)\},$ (1.3)

with $L \in \mathcal{L}(H, H_2)$ and $F_\delta'(\cdot)^* F_\delta'(\cdot) + \tau L^*L$ invertible for $\tau > 0$. Here $q^{(k+1)}$ has the variational characterization: it minimizes

$\|F_\delta(q^{(k)}) - F_\delta'(q^{(k)})(q^{(k)} - q)\|^2_{H_1} + \tau^{(k)} \|L(q - \bar q)\|^2_{H_2},$

and the equivalence with the IRGN presented in [SRK07] for the noise free case in $C$ follows by taking $F_\delta = C - g_\delta$. It was shown in [SRK07] that method (1.3) is very effective for the diffusion optical tomography inverse problem, in which the operator $L$ moves regularization from a B-spline coefficient space directly to the physical space. It may also be applied to allow an appropriate weighting on the elements of $q^{(k)}$ to reflect the differing sensitivities of the operator with respect to different physical components. Assuming the necessary invertibility conditions on $L^*L$, and introducing the notation $T := (L^*L)^{-1/2}$, for which the self-adjointness of $T$ gives $(F_\delta'(q)T)^* = T F_\delta'(q)^*$, (1.3) is rewritten as follows:

$q^{(k+1)} = \bar q - T \big[A_\delta^*(q^{(k)}) A_\delta(q^{(k)}) + \tau^{(k)} I\big]^{-1} A_\delta^*(q^{(k)}) \{F_\delta(q^{(k)}) - F_\delta'(q^{(k)})(q^{(k)} - \bar q)\},$ (1.4)

where

$A_\delta(q^{(k)}) := F_\delta'(q^{(k)})\, T.$ (1.5)

While scheme (1.4) is, in fact, preconditioned Tikhonov regularization combined with the Gauss-Newton algorithm, one can use other regularization methods in order to stabilize the Newton step [EHN96]. In particular, replacing $[A_\delta^*(q^{(k)}) A_\delta(q^{(k)}) + \tau^{(k)} I]^{-1} A_\delta^*(q^{(k)})$ in (1.4) by the more general operator $\Phi_{\tau^{(k)}}(A_\delta(q^{(k)}))$ yields

$q^{(k+1)} = \bar q - T\, \Phi_{\tau^{(k)}}(A_\delta(q^{(k)})) \{F_\delta(q^{(k)}) - F_\delta'(q^{(k)})(q^{(k)} - \bar q)\}.$ (1.6)

The class of methods described by (1.6) includes, amongst others, the $M$-times iterated preconditioned Tikhonov method, which is discussed in Section 3.
For $T = I$, it has been shown in [BA95] that, through their incorporation in the Gauss-Newton process, some of these regularization techniques result in better convergence rates for sufficiently smooth solutions. Algorithm (1.6) is even more robust and efficient when we introduce a line search procedure with variable step size $\alpha^{(k)}$, $0 < \alpha \le \alpha^{(k)} \le 1$:

$q^{(k+1)} = q^{(k)} + \alpha^{(k)} \big[\, \bar q - q^{(k)} - T\, \Phi_{\tau^{(k)}}(A_\delta(q^{(k)})) \{F_\delta(q^{(k)}) - F_\delta'(q^{(k)})(q^{(k)} - \bar q)\} \big].$ (1.7)
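A step of (1.7), with $\Phi$ taken as the Tikhonov regularizer of (1.4), can be sketched in finite dimensions as follows; the diagonal toy operator, the weighting matrix $T$, and the schedules for $\tau^{(k)}$ and $\alpha^{(k)}$ are illustrative assumptions.

```python
import numpy as np

def pirgn_ls_step(q, q_bar, F, J, T, tau, alpha):
    # One step of (1.7): q_new = q + alpha * (q_bar - q - T @ phi), where
    # phi = (A^T A + tau I)^{-1} A^T rhs is the Tikhonov regularizer applied
    # to rhs = F(q) - J(q)(q - q_bar), and A = J(q) T as in (1.5).
    A = J(q) @ T
    rhs = F(q) - J(q) @ (q - q_bar)
    phi = np.linalg.solve(A.T @ A + tau * np.eye(T.shape[1]), A.T @ rhs)
    return q + alpha * (q_bar - q - T @ phi)

# Illustrative linear problem; T re-weights the badly scaled second component
Jmat = np.diag([1.0, 1e-2])
q_true = np.array([1.0, 2.0])
F = lambda q: Jmat @ (q - q_true)
J = lambda q: Jmat
T = np.diag([1.0, 10.0])

q = np.zeros(2)
for k in range(30):
    q = pirgn_ls_step(q, np.zeros(2), F, J, T, tau=0.5 ** k, alpha=0.8)
```

Here the preconditioner raises the effective singular value of the badly scaled component, so the damped iteration reaches both components quickly despite the step size $\alpha^{(k)} = 0.8 < 1$.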

The family of methods (1.7) with $T = I$ (no preconditioning) and $\alpha^{(k)} = 1$ (no line search) was suggested by A. Bakushinsky in [BA95], and investigated further in [K97], [H97], [BS05], [BKA06], [KN06]. Additional savings may also be realized by reuse of the Jacobian for $M_k$, $M_k \le \bar M$, inner iteration steps at the $k$-th outer iteration step as follows:

For $m = 0$ to $M_k - 1$ Do
$q^{(k,m+1)} = q^{(k,m)} + \alpha^{(k,m)} \big[\, \bar q - q^{(k,m)} - T\, \Phi_{\tau^{(k,m)}}(A_\delta(q^{(k,0)})) \{F_\delta(q^{(k,m)}) - F_\delta'(q^{(k,0)})(q^{(k,m)} - \bar q)\} \big]$ (1.8)
End For

Here the inner steps are initialized with $q^{(k,0)} = q^{(k-1, M_{k-1})}$, and the line search parameter is still constrained by $0 < \alpha \le \alpha^{(k,m)} \le 1$. An alternative algorithm which uses inner iterations to limit the computational cost uses regularized Landweber iterations for the inner steps, hence avoiding matrix inversion in the inner steps [K97].

A key to the convergence analysis of the methods presented in this paper, along with the local Lipschitz continuity of $F_\delta'$, is the modified source condition

$(L^*L)(\hat q - \bar q) = F'(\hat q)^* v \quad \text{for some } v \in H_1.$ (1.9)

This source condition was introduced in [KR93] and used in [SRK07] for the analysis of schemes with preconditioning operator $L$. Here $\hat q$ is a, possibly nonunique, solution to the noise free equation $F(q) = 0$. Clearly, for $T = I$, (1.9) takes the form $\hat q - \bar q = F'(\hat q)^* v$, $v \in H_1$, which is equivalent to the Hölder source condition $\hat q - \bar q = (F'(\hat q)^* F'(\hat q))^{1/2} w$, $w \in H$. A discussion of the advantages of (1.9) for convergence, and the associated stopping rule, as compared to other adopted convergence conditions and the Lepskij-type a posteriori stopping rule [L90], [BH05], was presented in [SRK07]. Here, we emphasize that our results extend the methods in [SRK07] to both the use of the more general operator $\Phi$, as well as introducing the use of the inner iterations (1.8) for both $T = I$ and the more general $T = (L^*L)^{-1/2}$.

The paper is organized as follows: theorems on the convergence of the iterations (1.7) and (1.8) are presented in Section 2. The resulting numerical schemes are discussed in Section 3.
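The structure of the inner loop (1.8), in the simplest setting $T = I$, can be sketched as follows; the mildly nonlinear toy operator and the parameter schedules are assumptions made for illustration only.

```python
import numpy as np

def inner_cycle(q0, q_bar, F, J_frozen, taus, alphas):
    # Inner loop of (1.8) with T = I: the Jacobian is evaluated once, at
    # q^{(k,0)}, and reused for all M_k linearized Tikhonov steps.
    q = q0.copy()
    for tau, alpha in zip(taus, alphas):
        rhs = F(q) - J_frozen @ (q - q_bar)
        step = np.linalg.solve(J_frozen.T @ J_frozen + tau * np.eye(q.size),
                               J_frozen.T @ rhs)
        q = q + alpha * (q_bar - q - step)
    return q

# Mildly nonlinear toy operator with root q = (1, 2)
F = lambda q: np.array([q[0] + 0.1 * q[0] ** 2 - 1.1, q[1] - 2.0])
J = lambda q: np.array([[1.0 + 0.2 * q[0], 0.0], [0.0, 1.0]])

q = np.zeros(2)
for k in range(6):                       # outer iterations
    taus = [0.1 * 0.5 ** k] * 3          # M_k = 3 inner steps per outer step
    q = inner_cycle(q, np.zeros(2), F, J(q), taus, alphas=[1.0] * 3)
```

The point of the scheme is visible in the code: only one Jacobian evaluation (and one factorization-ready system matrix per inner parameter) is needed per outer step, at the cost of possibly more total iterations.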
2 Convergence Analysis

We present the details of the basic convergence result for (1.7), followed by the result for (1.8), highlighting only the crucial aspects in which the proofs differ.

Theorem 1. Assume that:

1. $F : D(F) \subset H \to H_1$, with $H$ and $H_1$ being Hilbert spaces. The equation $F(q) = 0$ is solvable (maybe nonuniquely) and $\hat q \in D(F)$ is a solution.

2. The operators $F$ and $F_\delta$ are Fréchet differentiable in $U_\eta(\hat q)$, where $U_\eta(\hat q) = \{q \in H : \|\hat q - q\| \le \eta\} \subset D(F)$, and $\eta = l\sqrt{\tau^{(0)}}$ with $l$ defined in (2.10) below. There is a positive constant $M$ such that

$\|F_\delta'(q_1) - F_\delta'(q_2)\| \le M \|q_1 - q_2\| \quad \text{for any } q_1, q_2 \in U_\eta(\hat q).$ (2.1)

3. $T \in \mathcal{L}(H, H)$ is a linear self-adjoint operator, $\|T\| =: t$, and for some element $v \in H_1$, $\|v\| =: \varepsilon$, the source condition holds:

$\hat q - \bar q = T A^*(\hat q)\, v, \quad \text{where } A(\hat q) := F'(\hat q)\, T, \quad \bar q \in H.$ (2.2)

4. The operator $F_\delta$ approximates $F$ to the following level of accuracy:

$\|F_\delta(\hat q)\| \le \delta_1, \qquad \|(F'(\hat q) - F_\delta'(\hat q))^* v\| \le \delta_2.$ (2.3)

5. The regularization sequence $\{\tau^{(k)}\}$ and the step size sequence $\{\alpha^{(k)}\}$ satisfy the conditions

$\tau^{(k)} > 0, \quad \tau^{(k)} \searrow 0, \quad \sup_{k \in \mathbb{N} \cup \{0\}} \sqrt{\tau^{(k)}/\tau^{(k+1)}} =: d < \infty, \quad 0 < \alpha \le \alpha^{(k+1)} \le \alpha^{(k)} \le 1.$ (2.4)

6. For all $G \in \mathcal{L}(H, H_1)$ and $0 < \tau \le \tau^{(0)}$, the regularizer $\Phi$ in (1.7) is constrained by

$\|\Phi_\tau(G) G - I\| \le C_1,$ (2.5)
$\|(\Phi_\tau(G) G - I) G\| \le C_2 \sqrt{\tau},$ (2.6)
$\|\Phi_\tau(G)\| \le C_3/\sqrt{\tau}, \quad \tau > 0.$ (2.7)

To simplify the presentation we take $C = \max\{C_1, C_2, C_3\}$ in the analysis.

7. Iterative process (1.7) is terminated according to the a priori stopping rule

$\frac{t\delta_2}{\sqrt{\tau^{(k)}}} + \frac{\delta_1}{\tau^{(k)}} \le \rho < \frac{t\delta_2}{\sqrt{\tau^{(K)}}} + \frac{\delta_1}{\tau^{(K)}}, \quad 0 \le k < K = K(\delta_1, \delta_2), \quad \rho > 0.$ (2.8)

8. For the constants associated with the operator $F$ and iterations (1.7) the following conditions are fulfilled:

$d\big[1 - \alpha(1 - t^2 C M \varepsilon)\big] < 1 \quad \text{and} \quad 2\alpha\, t^2 C^2 M d^2 (\rho + \varepsilon) \le (2 - \alpha)\big(1 - d[1 - \alpha(1 - t^2 C M \varepsilon)]\big)^2,$ (2.9)

$\frac{\|q^{(0)} - \hat q\|}{\sqrt{\tau^{(0)}}} \le \frac{2tCd(\rho + \varepsilon)}{1 - d[1 - \alpha(1 - t^2 C M \varepsilon)]} =: l.$ (2.10)

Then the preconditioned iteratively regularized Gauss-Newton iterations with line search (1.7) satisfy

$\|q^{(k)} - \hat q\| \le \sqrt{\tau^{(k)}}\, l, \quad k = 0, 1, \ldots, K(\delta_1, \delta_2),$ (2.11)

and

$\|q^{(K)} - \hat q\| = O(\sqrt{\delta}),$ (2.12)

where $\delta = \max\{\delta_1, \delta_2\}$.

Proof. Take arbitrary $k < K(\delta_1, \delta_2)$ and suppose that for any $j$ such that $0 \le j \le k$ the induction assumption holds:

$\sigma^{(j)} := \frac{\|q^{(j)} - \hat q\|}{\sqrt{\tau^{(j)}}} \le l.$ (2.13)

Then one has

$q^{(k+1)} - \hat q = \alpha^{(k)}(\bar q - \hat q) + (1 - \alpha^{(k)})(q^{(k)} - \hat q) - \alpha^{(k)} T \Phi_{\tau^{(k)}}(A_\delta(q^{(k)})) \{F_\delta(q^{(k)}) - F_\delta(\hat q) - F_\delta'(q^{(k)})(q^{(k)} - \hat q) - F_\delta'(q^{(k)})(\hat q - \bar q) + F_\delta(\hat q)\} = J_1 + J_2 + J_3,$ (2.14)

where

$J_1 = -\alpha^{(k)} T \Phi_{\tau^{(k)}}(A_\delta(q^{(k)})) \{F_\delta(q^{(k)}) - F_\delta(\hat q) - F_\delta'(q^{(k)})(q^{(k)} - \hat q)\},$
$J_2 = -\alpha^{(k)} \{(\hat q - \bar q) - T \Phi_{\tau^{(k)}}(A_\delta(q^{(k)})) F_\delta'(q^{(k)})(\hat q - \bar q)\},$
$J_3 = (1 - \alpha^{(k)})(q^{(k)} - \hat q) - \alpha^{(k)} T \Phi_{\tau^{(k)}}(A_\delta(q^{(k)})) F_\delta(\hat q).$ (2.15)

We now estimate $J_i$, $i = 1, 2, 3$.

1. By assumption (2.1) (see for example [EHN96]),

$\|F_\delta(q^{(k)}) - F_\delta(\hat q) - F_\delta'(q^{(k)})(q^{(k)} - \hat q)\| \le \frac{M}{2}\|q^{(k)} - \hat q\|^2.$

Thus, from (2.7) one derives

$\|J_1\| \le \frac{\alpha^{(k)} t C M}{2\sqrt{\tau^{(k)}}}\, \|q^{(k)} - \hat q\|^2.$ (2.16)

2. Source condition (2.2) yields

$(\hat q - \bar q) - T \Phi_{\tau^{(k)}}(A_\delta(q^{(k)})) F_\delta'(q^{(k)})(\hat q - \bar q) = \big[I - T \Phi_{\tau^{(k)}}(A_\delta(q^{(k)})) F_\delta'(q^{(k)})\big] T A^*(\hat q)\, v.$

Adding and subtracting relevant terms, as well as applying the definition (1.5) of $A_\delta$, gives

$\big[I - T \Phi_{\tau^{(k)}}(A_\delta(q^{(k)})) F_\delta'(q^{(k)})\big] T A^*(\hat q)\, v = T \big[I - \Phi_{\tau^{(k)}}(A_\delta(q^{(k)})) A_\delta(q^{(k)})\big] \{(A^*(\hat q) - A_\delta^*(\hat q)) + (A_\delta^*(\hat q) - A_\delta^*(q^{(k)})) + A_\delta^*(q^{(k)})\}\, v,$

which has the two terms

$T\big[I - \Phi_{\tau^{(k)}}(A_\delta(q^{(k)})) A_\delta(q^{(k)})\big]\{T(F'(\hat q) - F_\delta'(\hat q))^* + T(F_\delta'(\hat q) - F_\delta'(q^{(k)}))^*\}\, v$

and

$T\big[I - \Phi_{\tau^{(k)}}(A_\delta(q^{(k)})) A_\delta(q^{(k)})\big] A_\delta^*(q^{(k)})\, v.$

Hence, assumptions (2.5) and (2.6), with (2.1) and (2.3), imply

$\|J_2\| \le \alpha^{(k)} t^2 C \delta_2 + \alpha^{(k)} t^2 C M \varepsilon \|q^{(k)} - \hat q\| + \alpha^{(k)} t C \sqrt{\tau^{(k)}}\, \varepsilon.$ (2.17)

3. Finally, by (2.7),

$\|J_3\| \le (1 - \alpha^{(k)})\|q^{(k)} - \hat q\| + \frac{\alpha^{(k)} t C \delta_1}{\sqrt{\tau^{(k)}}}.$ (2.18)

Summarizing (2.16)-(2.18) one concludes

$\|q^{(k+1)} - \hat q\| \le \frac{\alpha^{(k)} t C M}{2\sqrt{\tau^{(k)}}}\|q^{(k)} - \hat q\|^2 + \big[1 - \alpha^{(k)} + \alpha^{(k)} t^2 C M \varepsilon\big]\|q^{(k)} - \hat q\| + \alpha^{(k)} t C \Big(\frac{\delta_1}{\sqrt{\tau^{(k)}}} + t\delta_2 + \sqrt{\tau^{(k)}}\, \varepsilon\Big).$ (2.19)

Because $k < K(\delta_1, \delta_2)$, combining (2.8), (2.19), (2.4) and (2.13),

$\sigma^{(k+1)} \le \frac{\alpha^{(k)} t C M d\, l^2}{2} + \big[1 - \alpha^{(k)} + \alpha^{(k)} t^2 C M \varepsilon\big] d\, l + \alpha^{(k)} t C d (\rho + \varepsilon).$ (2.20)

Assumptions (2.9) and (2.10), together with (2.20), yield $\sigma^{(k+1)} \le l$, hence proving inequality (2.11). (2.12) follows from stopping rule (2.8).

Corollary 1. Assume

$\|F_\delta'(q)\| \le N \quad \text{for any } q \in U_\eta(\hat q),$ (2.21)

and all conditions of Theorem 1, except that:

1. stopping rule (2.8) is replaced by the a posteriori stopping rule

$\|F_\delta(q^{(K)})\| \le \mu\sqrt{\delta} < \|F_\delta(q^{(k)})\|, \quad 0 \le k < K(\delta), \quad \mu > 1, \quad \delta = \max\{\delta_1, \delta_2\};$ (2.22)

2. (2.9) and (2.10) are respectively replaced by

$4\alpha\, t^2 C^2 d^2 \varepsilon \Big[\frac{(t\sqrt{\tau^{(0)}} + 1) N^2}{(\mu - 1)^2} + \frac{M}{2}\Big] \le (2 - \alpha)\big(1 - d[1 - \alpha(1 - t^2 C M \varepsilon)]\big)^2$ (2.23)

and

$\frac{\|q^{(0)} - \hat q\|}{\sqrt{\tau^{(0)}}} \le \frac{2tCd\,\varepsilon}{1 - d[1 - \alpha(1 - t^2 C M \varepsilon)]} =: l.$ (2.24)

Then the iterations (1.7) satisfy

$\|q^{(k)} - \hat q\| \le \sqrt{\tau^{(k)}}\, l, \quad k = 0, 1, \ldots, K(\delta),$ (2.25)

and the sequence $\{K(\delta)\}$ is admissible.

Proof. From (2.22) it is immediate that for any $k < K(\delta)$ one has

$\mu\sqrt{\delta} \le \|F_\delta(q^{(k)})\| \le \|F_\delta(q^{(k)}) - F_\delta(\hat q)\| + \|F_\delta(\hat q)\| \le N\|q^{(k)} - \hat q\| + \delta,$

and therefore $\mu\sqrt{\delta} - \delta \le N\|q^{(k)} - \hat q\|$. Without loss of generality, one can set $\delta < 1$. Thus

$\delta \le \frac{N^2 \|q^{(k)} - \hat q\|^2}{(\mu - 1)^2}$ (2.26)

and from (2.19) one obtains

$\|q^{(k+1)} - \hat q\| \le \frac{\alpha^{(k)} t C (t\sqrt{\tau^{(0)}} + 1) N^2}{(\mu - 1)^2 \sqrt{\tau^{(k)}}}\|q^{(k)} - \hat q\|^2 + \frac{\alpha^{(k)} t C M}{2\sqrt{\tau^{(k)}}}\|q^{(k)} - \hat q\|^2 + \big[1 - \alpha^{(k)} + \alpha^{(k)} t^2 C M \varepsilon\big]\|q^{(k)} - \hat q\| + \alpha^{(k)} t C \sqrt{\tau^{(k)}}\, \varepsilon.$ (2.27)

Using (2.23) and (2.24), result (2.25) now follows. While estimate (2.12) does not follow from stopping rule (2.22), the sequence $K = K(\delta)$ is nondecreasing as $\delta \to 0$. Two cases are possible:

1. $K(\delta) = K_0$ for any $\delta \le \delta_0$. Then by (2.3) and (2.22), $q^{(K_0)}(F_\delta)$ converges to a solution of the equation $F(q) = 0$ in the norm of $H$ as $\delta \to 0$.

2. $K(\delta) \to \infty$ as $\delta \to 0$. Then $\|q^{(K(\delta))} - \hat q\| \le l\sqrt{\tau^{(K(\delta))}} \to 0$ as $\delta \to 0$.

The sequence $\{K(\delta)\}$ is, therefore, admissible.

Remark 1. As opposed to the convergence results for iteratively regularized methods (1.6) with $T = I$, see [BA95], [K97], [BS05], [BKA06], [KN06], in our convergence theorem $\varepsilon$, the norm of $v$ in the source condition, does not have to be small for inequality (2.9) to be satisfied. Instead, $t^2\varepsilon$ with $t := \|T\|$ must be small. While the element $\bar q$ in (1.7) does not need to be close to the solution $\hat q$, condition (2.10) must hold for an appropriate choice of $\tau^{(0)}$.

Remark 2. When the forward operator $F_\delta$ is not contaminated by noise, the constant $\delta_2$ in (2.3) is zero, and the a priori stopping condition (2.8) simplifies accordingly.

Scheme (1.8), which is just (1.7) when $M_k = 1$, permits reuse of the operator and has the potential to reduce the overall iteration cost, while possibly increasing the total number of iterations. In the following, note that the $(k, m)$ element of the sequence corresponds to the $i$-th element, where $i = \sum_{p=0}^{k-1} M_p + m$.
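A small self-contained sketch of an a posteriori (discrepancy-type) stopping loop in the spirit of rule (2.22) may clarify how the rule is used in practice; the iteration, the noise model, and all constants below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def run_with_discrepancy(q0, F_delta, step, mu, delta, max_iter=100):
    # Iterate until the a posteriori rule fires: stop at the first index K
    # with ||F_delta(q^(K))|| <= mu * sqrt(delta), mu > 1, as in (2.22).
    q = q0
    for k in range(max_iter):
        if np.linalg.norm(F_delta(q)) <= mu * np.sqrt(delta):
            return q, k
        q = step(q, k)
    return q, max_iter

# Linear toy problem with a fixed data perturbation of norm delta
Jmat = np.diag([1.0, 0.5])
q_true = np.array([1.0, 2.0])
noise = np.array([3e-3, 4e-3])                  # ||noise|| = 5e-3
delta = float(np.linalg.norm(noise))
F_delta = lambda q: Jmat @ (q - q_true) + noise

def step(q, k):
    # One IRGN step (1.2) with q_bar = 0 and tau^(k) = 0.5^k
    r = F_delta(q) - Jmat @ q
    A = Jmat.T @ Jmat + 0.5 ** k * np.eye(2)
    return -np.linalg.solve(A, Jmat.T @ r)

q_K, K = run_with_discrepancy(np.zeros(2), F_delta, step, mu=1.5, delta=delta)
```

The loop stops after finitely many steps with a residual below the threshold and an iterate near the exact solution, which is the qualitative content of the admissibility statement in Corollary 1.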

Theorem 2. Assume all conditions of Theorem 1, but with the definitions (2.9) and (2.10) replaced by

$d\big[1 - \alpha(1 - t^2 C M \varepsilon d_M)\big] < 1 \quad \text{and} \quad 10\alpha\, t^2 C^2 M d_M d (\rho + \varepsilon) \le (2 - \alpha)\big(1 - d[1 - \alpha(1 - t^2 C M \varepsilon d_M)]\big)^2,$ (2.28)

$\frac{\|q^{(0,0)} - \hat q\|}{\sqrt{\tau^{(0,0)}}} \le \frac{2tCd(\rho + \varepsilon)}{1 - d[1 - \alpha(1 - t^2 C M \varepsilon d_M)]} =: \hat l,$ (2.29)

where $d_M := d^{\bar M}$, so that $\sqrt{\tau^{(k,0)}/\tau^{(k,m)}} \le d_M$ for $0 \le m \le M_k \le \bar M$ and $d \le d_M$. Then the iterations (1.8) satisfy

$\|q^{(k,m)} - \hat q\| \le \sqrt{\tau^{(k,m)}}\, \hat l, \quad \sum_{p=0}^{k-1} M_p + m = 0, 1, \ldots, K(\delta_1, \delta_2).$ (2.30)

Proof. The result follows very similarly to the proof of Theorem 1. Assume that for the first $i$ elements of the sequence

$\sigma^{(k,m)} := \frac{\|q^{(k,m)} - \hat q\|}{\sqrt{\tau^{(k,m)}}} \le \hat l, \quad 0 \le \sum_{p=0}^{k-1} M_p + m \le i < K(\delta_1, \delta_2).$ (2.31)

The $(i+1)$-st element of the sequence is $q^{(k,m+1)}$ if $m < M_k$, or $q^{(k+1,0)}$ if $m = M_k$. Let $m < M_k$. Expression (2.14) is replaced by

$q^{(k,m+1)} - \hat q = \alpha^{(k,m)}(\bar q - \hat q) + (1 - \alpha^{(k,m)})(q^{(k,m)} - \hat q) - \alpha^{(k,m)} T \Phi_{\tau^{(k,m)}}(A_\delta(q^{(k,0)})) \{F_\delta(q^{(k,m)}) - F_\delta(\hat q) - F_\delta'(q^{(k,0)})(q^{(k,m)} - \hat q) - F_\delta'(q^{(k,0)})(\hat q - \bar q) + F_\delta(\hat q)\}.$ (2.32)

Notice now that

$J_1 = -\alpha^{(k,m)} T \Phi_{\tau^{(k,m)}}(A_\delta(q^{(k,0)})) \{F_\delta(q^{(k,m)}) - F_\delta(\hat q) - F_\delta'(q^{(k,m)})(q^{(k,m)} - \hat q) + (F_\delta'(q^{(k,m)}) - F_\delta'(q^{(k,0)}))(q^{(k,m)} - \hat q)\}.$ (2.33)

Thus

$\|J_1\| \le \frac{\alpha^{(k,m)} t C M}{2\sqrt{\tau^{(k,m)}}}\|q^{(k,m)} - \hat q\|^2 + \frac{\alpha^{(k,m)} t C M}{\sqrt{\tau^{(k,m)}}}\|q^{(k,m)} - \hat q\|\, \|q^{(k,m)} - q^{(k,0)}\| \le \frac{3\alpha^{(k,m)} t C M}{2\sqrt{\tau^{(k,m)}}}\|q^{(k,m)} - \hat q\|^2 + \frac{\alpha^{(k,m)} t C M}{\sqrt{\tau^{(k,m)}}}\|q^{(k,0)} - \hat q\|\, \|q^{(k,m)} - \hat q\|.$ (2.34)

The bound of $J_2$ is obtained as for (2.17), yielding

$\|J_2\| \le \alpha^{(k,m)} t^2 C \delta_2 + \alpha^{(k,m)} t^2 C M \varepsilon \|q^{(k,0)} - \hat q\| + \alpha^{(k,m)} t C \sqrt{\tau^{(k,m)}}\, \varepsilon,$ (2.35)

and we also immediately obtain

$\|J_3\| \le (1 - \alpha^{(k,m)})\|q^{(k,m)} - \hat q\| + \frac{\alpha^{(k,m)} t C \delta_1}{\sqrt{\tau^{(k,m)}}}.$ (2.36)

Therefore

$\|q^{(k,m+1)} - \hat q\| \le \frac{3\alpha^{(k,m)} t C M}{2\sqrt{\tau^{(k,m)}}}\|q^{(k,m)} - \hat q\|^2 + \big[1 - \alpha^{(k,m)}\big]\|q^{(k,m)} - \hat q\| + \alpha^{(k,m)} t^2 C M \varepsilon \|q^{(k,0)} - \hat q\| + \alpha^{(k,m)} t C \Big(\frac{\delta_1}{\sqrt{\tau^{(k,m)}}} + t\delta_2 + \sqrt{\tau^{(k,m)}}\, \varepsilon\Big) + \frac{\alpha^{(k,m)} t C M}{\sqrt{\tau^{(k,m)}}}\|q^{(k,m)} - \hat q\|\, \|q^{(k,0)} - \hat q\|.$ (2.37)

By the induction, and using $d \le d_M$ and $\sqrt{\tau^{(k,0)}/\tau^{(k,m+1)}} \le d_M$ in order to combine the first and fifth terms of (2.37), one arrives at

$\sigma^{(k,m+1)} \le \frac{5\alpha^{(k,m)} t C M d_M\, \hat l^2}{2} + \big[1 - \alpha^{(k,m)} + \alpha^{(k,m)} t^2 C M \varepsilon d_M\big]\hat l + \alpha^{(k,m)} t C d (\rho + \varepsilon),$ (2.38)

and the result follows. Finally, if $m = M_k$, the result holds by Theorem 1.

3 Discussion of the Numerical Schemes

In this section we consider examples of generating operators $\Phi_\tau(G)$, $G \in \mathcal{L}(H, H_1)$, motivated by different regularization techniques merged with Gauss-Newton iterations.

Example 1. From (1.3) and (1.4) it is clear that for

$\Phi_\tau(G) := \big[G^*G + \tau I\big]^{-1} G^*$ (3.1)

algorithm (1.7) takes the form

$q^{(k+1)} = q^{(k)} + \alpha^{(k)} p^{(k)},$ (3.2)

where the search direction $p^{(k)}$ is the solution of

$\big[F_\delta'(q^{(k)})^* F_\delta'(q^{(k)}) + \tau^{(k)} L^*L\big]\, p^{(k)} = -\big[F_\delta'(q^{(k)})^* F_\delta(q^{(k)}) + \tau^{(k)} L^*L\, (q^{(k)} - \bar q)\big].$ (3.3)

This line search algorithm with search direction obtained from (3.3) was introduced and analyzed in [SRK07]. Here, we extend its use by adopting the appropriately initialized inner iterations (1.8):

For $m = 0$ to $M_k - 1$ Do
$q^{(k,m+1)} = q^{(k,m)} - \alpha^{(k,m)} \big[F_\delta'(q^{(k,0)})^* F_\delta'(q^{(k,0)}) + \tau^{(k,m)} L^*L\big]^{-1} \big[F_\delta'(q^{(k,0)})^* F_\delta(q^{(k,m)}) + \tau^{(k,m)} L^*L\, (q^{(k,m)} - \bar q)\big]$ (3.4)
End For

Each inner update requires a system solve with the system matrix $F_\delta'(q^{(k,0)})^* F_\delta'(q^{(k,0)}) + \tau^{(k,m)} L^*L$, which can be obtained cheaply provided that a method such as preconditioned conjugate gradients is adopted at every step and, for $M_k > 1$, the solution is initialized with a good starting value, presumably the last value from the previous iteration.

Example 2. In order to further stabilize the Gauss-Newton step, we may make use of $M$-times repeated Tikhonov regularization ([BA95], Section 5.1 of [EHN96]). The search direction $p^{(k)}$ in (3.2) is computed as the result of inner iterations in which the solution is prevented from stepping too far from the previous mapped value of $q^{(k)}$:

Calculate $p^{(k,1)}$ by (3.3)
For $m = 1$ to $M - 1$ Do
$p^{(k,m+1)} = -\big[F_\delta'(q^{(k)})^* F_\delta'(q^{(k)}) + \tau^{(k)} L^*L\big]^{-1} \big[F_\delta'(q^{(k)})^* F_\delta(q^{(k)}) - \tau^{(k)} L^*L\, p^{(k,m)}\big]$ (3.5)
End For
$q^{(k+1)} = q^{(k)} + \alpha^{(k)} p^{(k,M)}$

Practically the algorithm is implemented as given, but to see how it differs from (3.4), notice that the inner updates, for $m < M - 1$, can be written in the equivalent form

$q^{(k+1,m+1)} = q^{(k)} - \big[F_\delta'(q^{(k)})^* F_\delta'(q^{(k)}) + \tau^{(k)} L^*L\big]^{-1} \big[F_\delta'(q^{(k)})^* F_\delta(q^{(k)}) + \tau^{(k)} L^*L\, (q^{(k)} - q^{(k+1,m)})\big]$ (3.6)

with the line search parameter $\alpha^{(k)}$ introduced for $m = M - 1$. We now show that this iteration is equivalent to (1.7) for the operator

$\Phi_\tau(G) := \sum_{i=0}^{M-1} \tau^i \big[G^*G + \tau I\big]^{-(i+1)} G^*.$ (3.7)

Introduce the notation

$D := F_\delta'(q^{(k)})^* F_\delta'(q^{(k)}) + \tau^{(k)} B, \quad g := F_\delta'(q^{(k)})^* F_\delta(q^{(k)}), \quad B := L^*L;$ (3.8)

then

$p^{(k,m+1)} = \tau^{(k)} D^{-1} B\, p^{(k,m)} - D^{-1} g, \quad m \ge 1,$ (3.9)

and

$D^{-1}B = \big[F_\delta'(q^{(k)})^* F_\delta'(q^{(k)}) + \tau^{(k)} B\big]^{-1} B = T \big[A_\delta^*(q^{(k)}) A_\delta(q^{(k)}) + \tau^{(k)} I\big]^{-1} T^{-1} =: Y.$

Noticing from (3.3) that we can define $p^{(k,0)} = \bar q - q^{(k)}$, one obtains

$p^{(k,M)} = (\tau^{(k)})^M Y^M (\bar q - q^{(k)}) - \sum_{i=0}^{M-1} (\tau^{(k)})^i\, Y^i D^{-1} g.$ (3.10)

With the identity

$Y^M = T \Lambda^{-M} T^{-1}, \quad \text{where } \Lambda := A_\delta^*(q^{(k)}) A_\delta(q^{(k)}) + \tau^{(k)} I,$ (3.11)

identity (3.11) yields

$p^{(k,M)} = (\tau^{(k)})^M T \Lambda^{-M} T^{-1}(\bar q - q^{(k)}) - \sum_{i=0}^{M-1} (\tau^{(k)})^i\, T \Lambda^{-(i+1)} A_\delta^*(q^{(k)}) F_\delta'(q^{(k)})(q^{(k)} - \bar q) - \sum_{i=0}^{M-1} (\tau^{(k)})^i\, T \Lambda^{-(i+1)} A_\delta^*(q^{(k)}) \{F_\delta(q^{(k)}) - F_\delta'(q^{(k)})(q^{(k)} - \bar q)\}.$

But now, by induction (the sum telescopes, since $A_\delta^* F_\delta' = (\Lambda - \tau^{(k)} I)T^{-1}$), the first two terms simplify to $\bar q - q^{(k)}$, and (3.10) becomes

$p^{(k,M)} = \bar q - q^{(k)} - T \sum_{i=0}^{M-1} (\tau^{(k)})^i\, \Lambda^{-(i+1)} A_\delta^*(q^{(k)}) \{F_\delta(q^{(k)}) - F_\delta'(q^{(k)})(q^{(k)} - \bar q)\},$ (3.12)

from which (3.7) follows. It remains to verify that (3.7) meets conditions (2.5)-(2.7) of Theorem 1.

1. $\|\Phi_\tau(G)G - I\| = \Big\|\sum_{i=0}^{M-1} \tau^i \big[G^*G + \tau I\big]^{-(i+1)} G^*G - I\Big\| \le \sup_{\lambda \in [0,\infty)} \Big|\sum_{i=0}^{M-1} \frac{\tau^i \lambda}{(\lambda + \tau)^{i+1}} - 1\Big|$

$= \sup_{\lambda \in [0,\infty)} \Big(\frac{\tau}{\lambda + \tau}\Big)^M \le 1,$

so (2.5) holds with $C_1 = 1$.

2. $\|(\Phi_\tau(G)G - I)G\| \le \sup_{\lambda \in [0,\infty)} \Big(\frac{\tau}{\lambda+\tau}\Big)^M \sqrt{\lambda} = \sqrt{\tau}\,\frac{(2M-1)^{M-1/2}}{(2M)^M} \le \frac{\sqrt{\tau}}{\sqrt{2M}},$

so (2.6) holds with $C_2 = 1/\sqrt{2M}$.

3. Finally,

$\|\Phi_\tau(G)\| = \Big\|\sum_{i=0}^{M-1} \tau^i \big[G^*G + \tau I\big]^{-(i+1)} G^*\Big\| \le \sup_{\lambda \in [0,\infty)} \frac{\sqrt{\lambda}}{\lambda + \tau} \sum_{i=0}^{M-1} \Big(\frac{\tau}{\lambda+\tau}\Big)^i.$

To complete the estimate, we apply Lemma 3.1 of [BS05]: with $a := \lambda/(\lambda+\tau) \in (0,1]$ and $P = M - 1$ it gives $\sup_{a \in (0,1]} \sqrt{a} \sum_{i=0}^{P} (1-a)^i \le \sqrt{2P+1}$, and since $\sqrt{\lambda}/(\lambda+\tau) = \sqrt{a}/\sqrt{\lambda+\tau} \le \sqrt{a}/\sqrt{\tau}$, one obtains

$\|\Phi_\tau(G)\| \le \frac{\sqrt{2M-1}}{\sqrt{\tau}} \le \frac{\sqrt{2M}}{\sqrt{\tau}},$

so (2.7) holds with $C_3 = \sqrt{2M}$.

Example 3. A modification of (3.5) replaces the dependence of the iterative step in (3.5) on $\tau^{(k)}$ by the constant parameter $\sigma > 0$:

$\big[F_\delta'(q^{(k)})^* F_\delta'(q^{(k)}) + \sigma L^*L\big]\, p^{(k,m+1)} = -\big[F_\delta'(q^{(k)})^* F_\delta(q^{(k)}) - \sigma L^*L\, p^{(k,m)}\big].$

It is immediate by the derivation of (3.12) that in this case

$p^{(k,M_k)} = \bar q - q^{(k)} - T \sum_{i=0}^{M_k-1} \sigma^i \big[A_\delta^*(q^{(k)}) A_\delta(q^{(k)}) + \sigma I\big]^{-(i+1)} A_\delta^*(q^{(k)}) \{F_\delta(q^{(k)}) - F_\delta'(q^{(k)})(q^{(k)} - \bar q)\}.$ (3.13)

Now taking $M_k$ so as to satisfy the condition $\tau^{(k)} \to 0$ for $\tau^{(k)} := 1/M_k$ gives (1.7) with the operator

$\Phi_{\tau^{(k)}}(G) := \sum_{i=0}^{M_k-1} \sigma^i \big[G^*G + \sigma I\big]^{-(i+1)} G^*.$ (3.14)

Again, it remains to verify conditions (2.5)-(2.7):

1. For (2.5): $\|\Phi_{\tau^{(k)}}(G)G - I\| \le \sup_{\lambda \in [0,\infty)} \big(\sigma/(\lambda+\sigma)\big)^{M_k} \le 1$.

2. For (2.6): $\|(\Phi_{\tau^{(k)}}(G)G - I)G\| \le \sup_{\lambda \in [0,\infty)} \big(\sigma/(\lambda+\sigma)\big)^{M_k} \sqrt{\lambda} \le \sqrt{\sigma/(2M_k)} = \sqrt{\sigma/2}\,\sqrt{\tau^{(k)}}$, so that $C_2 = \sqrt{\sigma/2} \le 1$ provided $\sigma \le 2$.

3. To obtain (2.7), apply again Lemma 3.1 of [BS05] with $a = \lambda/(\lambda+\sigma)$ and $P = M_k - 1$, yielding $\|\Phi_{\tau^{(k)}}(G)\| \le \sqrt{2M_k}/\sqrt{\sigma} = \sqrt{2/\sigma}\,/\sqrt{\tau^{(k)}}$, i.e. $C_3 = \sqrt{2/\sigma}$.
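The equivalence between the inner recursion (3.5) and the generating operator (3.7) can be checked numerically in the simplest setting $L^*L = I$ and $\bar q = q^{(k)}$ (so that $p^{(k,0)} = 0$); the per-singular-value spectral form used for comparison below is obtained by summing the geometric series in (3.7), and the concrete matrices are illustrative assumptions of this sketch.

```python
import numpy as np

def iterated_tikhonov_direction(J, f, tau, M):
    # M-times iterated Tikhonov (3.5) with B = L^T L = I and p^(0) = 0:
    # p^(m+1) = -(J^T J + tau I)^{-1} (J^T f - tau p^(m)).
    D = J.T @ J + tau * np.eye(J.shape[1])
    p = np.zeros(J.shape[1])
    for _ in range(M):
        p = -np.linalg.solve(D, J.T @ f - tau * p)
    return p

# For diagonal J, the generating operator (3.7) acts per singular value
# sigma as f_i -> -(1/sigma) * (1 - (tau/(sigma^2 + tau))^M) * f_i.
J = np.diag([1.0, 0.3])
f = np.array([0.5, -0.2])
tau, M = 0.1, 4
p = iterated_tikhonov_direction(J, f, tau, M)
sig = np.diag(J)
p_spectral = -(1.0 - (tau / (sig ** 2 + tau)) ** M) * f / sig
assert np.allclose(p, p_spectral)
```

The recursion and the filtered form agree to machine precision, mirroring identity (3.12); the filter $1 - (\tau/(\lambda+\tau))^M$ also makes visible how the $M$ inner sweeps sharpen the plain Tikhonov filter $\lambda/(\lambda+\tau)$.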

Remark 3. From Example 3, it is now evident that introducing a line search parameter in (3.6),

$q^{(k+1,m)} = q^{(k)} + \alpha^{(k,m)} p^{(k,m)},$

replaces $\sigma$ on the right-hand side in (3.5) by $\sigma^{(k,m)} = \sigma/\alpha^{(k,m)}$. An equivalent modification occurs in (3.6), and the distinction between (3.12) and (3.13) is lost. Moreover, it is not possible to reformulate the new parameter-dependent scheme in terms of (1.7), and we conclude that line searching in the inner iterations is not feasible unless we utilize (1.8).

4 Conclusions

Theoretical convergence results and numerical implementation of a family of preconditioned iteratively regularized Gauss-Newton schemes for the solution of general ill-posed nonlinear minimization problems have been presented. These extend the initial theory of [SRK07] for application of preconditioning to a larger class of iterative schemes with line search, and have potential application in a wide variety of ill-posed problems.

References

[BA93] Bakushinsky, A. B., Iterative methods for nonlinear operator equations without regularity, new approach, Dokl. Russian Acad. Sci. (1993).
[BA95] Bakushinsky, A. B., Iterative methods without saturation for solving degenerate nonlinear operator equations, Dokl. Russian Acad. Sci. (1995).
[BS05] Bakushinsky, A. B. and Smirnova, A., On application of generalized discrepancy principle to iterative methods for nonlinear ill-posed problems, Numerical Functional Analysis and Optimization, 26, N1, 35-48 (2005).
[BH05] Bauer, F. and Hohage, T., A Lepskij-type stopping rule for regularized Newton methods, Inverse Problems, 21 (2005).
[BKA06] Burger, M. and Kaltenbacher, B., Regularizing Newton-Kaczmarz methods for nonlinear ill-posed problems, SIAM J. Numer. Anal., 44, N1 (2006).
[DES98] Deuflhard, P., Engl, H. W. and Scherzer, O., A convergence analysis of iterative methods for the solution of nonlinear ill-posed problems under affinely invariant conditions, Inv. Probl., 14 (1998).
[EHN96] Engl, H., Hanke, M. and Neubauer, A., Regularization of Inverse Problems, Kluwer Academic Publishers, Dordrecht, Boston, London (1996).
[H97] Hohage, T., Logarithmic convergence rates of the iteratively regularized Gauss-Newton method for an inverse potential and inverse scattering problem, Inverse Problems, 13 (1997).

[K97] Kaltenbacher, B., Some Newton-type methods for the regularization of nonlinear ill-posed problems, Inverse Problems, 13 (1997).
[KN06] Kaltenbacher, B. and Neubauer, A., Convergence of projected iterative regularization methods for nonlinear problems with smooth solutions, Inverse Problems, 22, N3 (2006).
[KR93] Kunisch, K. and Ring, W., Regularization of nonlinear ill-posed problems with closed operators, Numerical Functional Analysis and Optimization, 14 (1993).
[L90] Lepskij, O. V., On a problem of adaptive estimation in Gaussian white noise, Theory Probab. Appl., 35 (1990).
[SRK07] Smirnova, A. B., Renaut, R. A., and Khan, T., Convergence and application of a modified iteratively regularized Gauss-Newton algorithm, Inverse Problems, 23, N4 (2007).


More information

Institut für Numerische und Angewandte Mathematik

Institut für Numerische und Angewandte Mathematik Institut für Numerische und Angewandte Mathematik Iteratively regularized Newton-type methods with general data mist functionals and applications to Poisson data T. Hohage, F. Werner Nr. 20- Preprint-Serie

More information

A NEW ITERATIVE METHOD FOR THE SPLIT COMMON FIXED POINT PROBLEM IN HILBERT SPACES. Fenghui Wang

A NEW ITERATIVE METHOD FOR THE SPLIT COMMON FIXED POINT PROBLEM IN HILBERT SPACES. Fenghui Wang A NEW ITERATIVE METHOD FOR THE SPLIT COMMON FIXED POINT PROBLEM IN HILBERT SPACES Fenghui Wang Department of Mathematics, Luoyang Normal University, Luoyang 470, P.R. China E-mail: wfenghui@63.com ABSTRACT.

More information

Morozov s discrepancy principle for Tikhonov-type functionals with non-linear operators

Morozov s discrepancy principle for Tikhonov-type functionals with non-linear operators Morozov s discrepancy principle for Tikhonov-type functionals with non-linear operators Stephan W Anzengruber 1 and Ronny Ramlau 1,2 1 Johann Radon Institute for Computational and Applied Mathematics,

More information

Affine covariant Semi-smooth Newton in function space

Affine covariant Semi-smooth Newton in function space Affine covariant Semi-smooth Newton in function space Anton Schiela March 14, 2018 These are lecture notes of my talks given for the Winter School Modern Methods in Nonsmooth Optimization that was held

More information

An Iteratively Regularized Projection Method for Nonlinear Ill-posed Problems

An Iteratively Regularized Projection Method for Nonlinear Ill-posed Problems Int. J. Contemp. Math. Sciences, Vol. 5, 2010, no. 52, 2547-2565 An Iteratively Regularized Projection Method for Nonlinear Ill-posed Problems Santhosh George Department of Mathematical and Computational

More information

Nonlinear Analysis 71 (2009) Contents lists available at ScienceDirect. Nonlinear Analysis. journal homepage:

Nonlinear Analysis 71 (2009) Contents lists available at ScienceDirect. Nonlinear Analysis. journal homepage: Nonlinear Analysis 71 2009 2744 2752 Contents lists available at ScienceDirect Nonlinear Analysis journal homepage: www.elsevier.com/locate/na A nonlinear inequality and applications N.S. Hoang A.G. Ramm

More information

Accelerated Landweber iteration in Banach spaces. T. Hein, K.S. Kazimierski. Preprint Fakultät für Mathematik

Accelerated Landweber iteration in Banach spaces. T. Hein, K.S. Kazimierski. Preprint Fakultät für Mathematik Accelerated Landweber iteration in Banach spaces T. Hein, K.S. Kazimierski Preprint 2009-17 Fakultät für Mathematik Impressum: Herausgeber: Der Dekan der Fakultät für Mathematik an der Technischen Universität

More information

ITERATIVE METHODS FOR SOLVING A NONLINEAR BOUNDARY INVERSE PROBLEM IN GLACIOLOGY S. AVDONIN, V. KOZLOV, D. MAXWELL, AND M. TRUFFER

ITERATIVE METHODS FOR SOLVING A NONLINEAR BOUNDARY INVERSE PROBLEM IN GLACIOLOGY S. AVDONIN, V. KOZLOV, D. MAXWELL, AND M. TRUFFER ITERATIVE METHODS FOR SOLVING A NONLINEAR BOUNDARY INVERSE PROBLEM IN GLACIOLOGY S. AVDONIN, V. KOZLOV, D. MAXWELL, AND M. TRUFFER Abstract. We address a Cauchy problem for a nonlinear elliptic PDE arising

More information

Convergence rates of spectral methods for statistical inverse learning problems

Convergence rates of spectral methods for statistical inverse learning problems Convergence rates of spectral methods for statistical inverse learning problems G. Blanchard Universtität Potsdam UCL/Gatsby unit, 04/11/2015 Joint work with N. Mücke (U. Potsdam); N. Krämer (U. München)

More information

Levenberg-Marquardt method in Banach spaces with general convex regularization terms

Levenberg-Marquardt method in Banach spaces with general convex regularization terms Levenberg-Marquardt method in Banach spaces with general convex regularization terms Qinian Jin Hongqi Yang Abstract We propose a Levenberg-Marquardt method with general uniformly convex regularization

More information

Marlis Hochbruck 1, Michael Hönig 1 and Alexander Ostermann 2

Marlis Hochbruck 1, Michael Hönig 1 and Alexander Ostermann 2 Mathematical Modelling and Numerical Analysis Modélisation Mathématique et Analyse Numérique Will be set by the publisher REGULARIZATION OF NONLINEAR ILL-POSED PROBLEMS BY EXPONENTIAL INTEGRATORS Marlis

More information

A New Modified Gradient-Projection Algorithm for Solution of Constrained Convex Minimization Problem in Hilbert Spaces

A New Modified Gradient-Projection Algorithm for Solution of Constrained Convex Minimization Problem in Hilbert Spaces A New Modified Gradient-Projection Algorithm for Solution of Constrained Convex Minimization Problem in Hilbert Spaces Cyril Dennis Enyi and Mukiawa Edwin Soh Abstract In this paper, we present a new iterative

More information

A globally and R-linearly convergent hybrid HS and PRP method and its inexact version with applications

A globally and R-linearly convergent hybrid HS and PRP method and its inexact version with applications A globally and R-linearly convergent hybrid HS and PRP method and its inexact version with applications Weijun Zhou 28 October 20 Abstract A hybrid HS and PRP type conjugate gradient method for smooth

More information

Viscosity approximation methods for the implicit midpoint rule of asymptotically nonexpansive mappings in Hilbert spaces

Viscosity approximation methods for the implicit midpoint rule of asymptotically nonexpansive mappings in Hilbert spaces Available online at www.tjnsa.com J. Nonlinear Sci. Appl. 9 016, 4478 4488 Research Article Viscosity approximation methods for the implicit midpoint rule of asymptotically nonexpansive mappings in Hilbert

More information

1. Introduction. In this paper we derive an algorithm for solving the nonlinear system

1. Introduction. In this paper we derive an algorithm for solving the nonlinear system GLOBAL APPROXIMATE NEWTON METHODS RANDOLPH E. BANK AND DONALD J. ROSE Abstract. We derive a class of globally convergent and quadratically converging algorithms for a system of nonlinear equations g(u)

More information

Modified Landweber iteration in Banach spaces convergence and convergence rates

Modified Landweber iteration in Banach spaces convergence and convergence rates Modified Landweber iteration in Banach spaces convergence and convergence rates Torsten Hein, Kamil S. Kazimierski August 4, 009 Abstract Abstract. We introduce and discuss an iterative method of relaxed

More information

Penalty and Barrier Methods General classical constrained minimization problem minimize f(x) subject to g(x) 0 h(x) =0 Penalty methods are motivated by the desire to use unconstrained optimization techniques

More information

A Theoretical Framework for the Regularization of Poisson Likelihood Estimation Problems

A Theoretical Framework for the Regularization of Poisson Likelihood Estimation Problems c de Gruyter 2007 J. Inv. Ill-Posed Problems 15 (2007), 12 8 DOI 10.1515 / JIP.2007.002 A Theoretical Framework for the Regularization of Poisson Likelihood Estimation Problems Johnathan M. Bardsley Communicated

More information

CONVERGENCE PROPERTIES OF COMBINED RELAXATION METHODS

CONVERGENCE PROPERTIES OF COMBINED RELAXATION METHODS CONVERGENCE PROPERTIES OF COMBINED RELAXATION METHODS Igor V. Konnov Department of Applied Mathematics, Kazan University Kazan 420008, Russia Preprint, March 2002 ISBN 951-42-6687-0 AMS classification:

More information

A CONVERGENCE ANALYSIS OF THE NEWTON-TYPE REGULARIZATION CG-REGINN WITH APPLICATION TO IMPEDANCE TOMOGRAPHY

A CONVERGENCE ANALYSIS OF THE NEWTON-TYPE REGULARIZATION CG-REGINN WITH APPLICATION TO IMPEDANCE TOMOGRAPHY A CONVERGENCE ANALYSIS OF THE NEWTON-TYPE REGULARIZATION CG-REGINN WITH APPLICATION TO IMPEDANCE TOMOGRAPHY ARMIN LECHLEITER AND ANDREAS RIEDER January 8, 2007 Abstract. The Newton-type regularization

More information

RECONSTRUCTION OF NUMERICAL DERIVATIVES FROM SCATTERED NOISY DATA. 1. Introduction

RECONSTRUCTION OF NUMERICAL DERIVATIVES FROM SCATTERED NOISY DATA. 1. Introduction RECONSTRUCTION OF NUMERICAL DERIVATIVES FROM SCATTERED NOISY DATA T. WEI, Y. C. HON, AND Y. B. WANG Abstract. Based on the thin plate spline approximation theory, we propose in this paper an efficient

More information

Optimal Control for Radiative Heat Transfer Model with Monotonic Cost Functionals

Optimal Control for Radiative Heat Transfer Model with Monotonic Cost Functionals Optimal Control for Radiative Heat Transfer Model with Monotonic Cost Functionals Gleb Grenkin 1,2 and Alexander Chebotarev 1,2 1 Far Eastern Federal University, Sukhanova st. 8, 6995 Vladivostok, Russia,

More information

Chapter 2 Smooth Spaces

Chapter 2 Smooth Spaces Chapter Smooth Spaces.1 Introduction In this chapter, we introduce the class of smooth spaces. We remark immediately that there is a duality relationship between uniform smoothness and uniform convexity.

More information

Motion Estimation (I) Ce Liu Microsoft Research New England

Motion Estimation (I) Ce Liu Microsoft Research New England Motion Estimation (I) Ce Liu celiu@microsoft.com Microsoft Research New England We live in a moving world Perceiving, understanding and predicting motion is an important part of our daily lives Motion

More information

Novel tomography techniques and parameter identification problems

Novel tomography techniques and parameter identification problems Novel tomography techniques and parameter identification problems Bastian von Harrach harrach@ma.tum.de Department of Mathematics - M1, Technische Universität München Colloquium of the Institute of Biomathematics

More information

Dynamical systems method (DSM) for selfadjoint operators

Dynamical systems method (DSM) for selfadjoint operators Dynamical systems method (DSM) for selfadjoint operators A.G. Ramm Mathematics Department, Kansas State University, Manhattan, KS 6656-262, USA ramm@math.ksu.edu http://www.math.ksu.edu/ ramm Abstract

More information

arxiv: v1 [math.na] 26 Nov 2009

arxiv: v1 [math.na] 26 Nov 2009 Non-convexly constrained linear inverse problems arxiv:0911.5098v1 [math.na] 26 Nov 2009 Thomas Blumensath Applied Mathematics, School of Mathematics, University of Southampton, University Road, Southampton,

More information

A Bound-Constrained Levenburg-Marquardt Algorithm for a Parameter Identification Problem in Electromagnetics

A Bound-Constrained Levenburg-Marquardt Algorithm for a Parameter Identification Problem in Electromagnetics A Bound-Constrained Levenburg-Marquardt Algorithm for a Parameter Identification Problem in Electromagnetics Johnathan M. Bardsley February 19, 2004 Abstract Our objective is to solve a parameter identification

More information

Convergence rates for Morozov s Discrepancy Principle using Variational Inequalities

Convergence rates for Morozov s Discrepancy Principle using Variational Inequalities Convergence rates for Morozov s Discrepancy Principle using Variational Inequalities Stephan W Anzengruber Ronny Ramlau Abstract We derive convergence rates for Tikhonov-type regularization with conve

More information

Bulletin of the Transilvania University of Braşov Vol 10(59), No Series III: Mathematics, Informatics, Physics, 63-76

Bulletin of the Transilvania University of Braşov Vol 10(59), No Series III: Mathematics, Informatics, Physics, 63-76 Bulletin of the Transilvania University of Braşov Vol 1(59), No. 2-217 Series III: Mathematics, Informatics, Physics, 63-76 A CONVERGENCE ANALYSIS OF THREE-STEP NEWTON-LIKE METHOD UNDER WEAK CONDITIONS

More information

INEXACT NEWTON REGULARIZATION USING CONJUGATE GRADIENTS AS INNER ITERATION

INEXACT NEWTON REGULARIZATION USING CONJUGATE GRADIENTS AS INNER ITERATION INEXACT NEWTON REGULARIZATION USING CONJUGATE GRADIENTS AS INNER ITERATION ANDREAS RIEDER August 2004 Abstract. In our papers [Inverse Problems, 15, 309-327,1999] and [Numer. Math., 88, 347-365, 2001]

More information

Conditional stability versus ill-posedness for operator equations with monotone operators in Hilbert space

Conditional stability versus ill-posedness for operator equations with monotone operators in Hilbert space Conditional stability versus ill-posedness for operator equations with monotone operators in Hilbert space Radu Ioan Boț and Bernd Hofmann September 16, 2016 Abstract In the literature on singular perturbation

More information

Adaptive and multilevel methods for parameter identification in partial differential equations

Adaptive and multilevel methods for parameter identification in partial differential equations Adaptive and multilevel methods for parameter identification in partial differential equations Barbara Kaltenbacher, University of Stuttgart joint work with Hend Ben Ameur, Université de Tunis Anke Griesbaum,

More information

Iterative Regularization Methods for a Discrete Inverse Problem in MRI

Iterative Regularization Methods for a Discrete Inverse Problem in MRI CUBO A Mathematical Journal Vol.10, N ō 02, (137 146). July 2008 Iterative Regularization Methods for a Discrete Inverse Problem in MRI A. Leitão Universidade Federal de Santa Catarina, Departamento de

More information

The Levenberg-Marquardt Iteration for Numerical Inversion of the Power Density Operator

The Levenberg-Marquardt Iteration for Numerical Inversion of the Power Density Operator The Levenberg-Marquardt Iteration for Numerical Inversion of the Power Density Operator G. Bal (gb2030@columbia.edu) 1 W. Naetar (wolf.naetar@univie.ac.at) 2 O. Scherzer (otmar.scherzer@univie.ac.at) 2,3

More information

5 Handling Constraints

5 Handling Constraints 5 Handling Constraints Engineering design optimization problems are very rarely unconstrained. Moreover, the constraints that appear in these problems are typically nonlinear. This motivates our interest

More information

(2m)-TH MEAN BEHAVIOR OF SOLUTIONS OF STOCHASTIC DIFFERENTIAL EQUATIONS UNDER PARAMETRIC PERTURBATIONS

(2m)-TH MEAN BEHAVIOR OF SOLUTIONS OF STOCHASTIC DIFFERENTIAL EQUATIONS UNDER PARAMETRIC PERTURBATIONS (2m)-TH MEAN BEHAVIOR OF SOLUTIONS OF STOCHASTIC DIFFERENTIAL EQUATIONS UNDER PARAMETRIC PERTURBATIONS Svetlana Janković and Miljana Jovanović Faculty of Science, Department of Mathematics, University

More information

Parameter Identification

Parameter Identification Lecture Notes Parameter Identification Winter School Inverse Problems 25 Martin Burger 1 Contents 1 Introduction 3 2 Examples of Parameter Identification Problems 5 2.1 Differentiation of Data...............................

More information

GEOPHYSICAL INVERSE THEORY AND REGULARIZATION PROBLEMS

GEOPHYSICAL INVERSE THEORY AND REGULARIZATION PROBLEMS Methods in Geochemistry and Geophysics, 36 GEOPHYSICAL INVERSE THEORY AND REGULARIZATION PROBLEMS Michael S. ZHDANOV University of Utah Salt Lake City UTAH, U.S.A. 2OO2 ELSEVIER Amsterdam - Boston - London

More information

On the acceleration of augmented Lagrangian method for linearly constrained optimization

On the acceleration of augmented Lagrangian method for linearly constrained optimization On the acceleration of augmented Lagrangian method for linearly constrained optimization Bingsheng He and Xiaoming Yuan October, 2 Abstract. The classical augmented Lagrangian method (ALM plays a fundamental

More information

A G Ramm, Implicit Function Theorem via the DSM, Nonlinear Analysis: Theory, Methods and Appl., 72, N3-4, (2010),

A G Ramm, Implicit Function Theorem via the DSM, Nonlinear Analysis: Theory, Methods and Appl., 72, N3-4, (2010), A G Ramm, Implicit Function Theorem via the DSM, Nonlinear Analysis: Theory, Methods and Appl., 72, N3-4, (21), 1916-1921. 1 Implicit Function Theorem via the DSM A G Ramm Department of Mathematics Kansas

More information

Convergence Rates in Regularization for Nonlinear Ill-Posed Equations Involving m-accretive Mappings in Banach Spaces

Convergence Rates in Regularization for Nonlinear Ill-Posed Equations Involving m-accretive Mappings in Banach Spaces Applied Mathematical Sciences, Vol. 6, 212, no. 63, 319-3117 Convergence Rates in Regularization for Nonlinear Ill-Posed Equations Involving m-accretive Mappings in Banach Spaces Nguyen Buong Vietnamese

More information

Relationships between upper exhausters and the basic subdifferential in variational analysis

Relationships between upper exhausters and the basic subdifferential in variational analysis J. Math. Anal. Appl. 334 (2007) 261 272 www.elsevier.com/locate/jmaa Relationships between upper exhausters and the basic subdifferential in variational analysis Vera Roshchina City University of Hong

More information

The Chi-squared Distribution of the Regularized Least Squares Functional for Regularization Parameter Estimation

The Chi-squared Distribution of the Regularized Least Squares Functional for Regularization Parameter Estimation The Chi-squared Distribution of the Regularized Least Squares Functional for Regularization Parameter Estimation Rosemary Renaut Collaborators: Jodi Mead and Iveta Hnetynkova DEPARTMENT OF MATHEMATICS

More information

On the Midpoint Method for Solving Generalized Equations

On the Midpoint Method for Solving Generalized Equations Punjab University Journal of Mathematics (ISSN 1016-56) Vol. 40 (008) pp. 63-70 On the Midpoint Method for Solving Generalized Equations Ioannis K. Argyros Cameron University Department of Mathematics

More information

This article was originally published in a journal published by Elsevier, and the attached copy is provided by Elsevier for the author s benefit and for the benefit of the author s institution, for non-commercial

More information

ITERATIVE METHODS BASED ON KRYLOV SUBSPACES

ITERATIVE METHODS BASED ON KRYLOV SUBSPACES ITERATIVE METHODS BASED ON KRYLOV SUBSPACES LONG CHEN We shall present iterative methods for solving linear algebraic equation Au = b based on Krylov subspaces We derive conjugate gradient (CG) method

More information

Tuning of Fuzzy Systems as an Ill-Posed Problem

Tuning of Fuzzy Systems as an Ill-Posed Problem Tuning of Fuzzy Systems as an Ill-Posed Problem Martin Burger 1, Josef Haslinger 2, and Ulrich Bodenhofer 2 1 SFB F 13 Numerical and Symbolic Scientific Computing and Industrial Mathematics Institute,

More information

Due Giorni di Algebra Lineare Numerica (2GALN) Febbraio 2016, Como. Iterative regularization in variable exponent Lebesgue spaces

Due Giorni di Algebra Lineare Numerica (2GALN) Febbraio 2016, Como. Iterative regularization in variable exponent Lebesgue spaces Due Giorni di Algebra Lineare Numerica (2GALN) 16 17 Febbraio 2016, Como Iterative regularization in variable exponent Lebesgue spaces Claudio Estatico 1 Joint work with: Brigida Bonino 1, Fabio Di Benedetto

More information

A range condition for polyconvex variational regularization

A range condition for polyconvex variational regularization www.oeaw.ac.at A range condition for polyconvex variational regularization C. Kirisits, O. Scherzer RICAM-Report 2018-04 www.ricam.oeaw.ac.at A range condition for polyconvex variational regularization

More information

Some Inequalities for Commutators of Bounded Linear Operators in Hilbert Spaces

Some Inequalities for Commutators of Bounded Linear Operators in Hilbert Spaces Some Inequalities for Commutators of Bounded Linear Operators in Hilbert Spaces S.S. Dragomir Abstract. Some new inequalities for commutators that complement and in some instances improve recent results

More information

Iterative Solution of a Matrix Riccati Equation Arising in Stochastic Control

Iterative Solution of a Matrix Riccati Equation Arising in Stochastic Control Iterative Solution of a Matrix Riccati Equation Arising in Stochastic Control Chun-Hua Guo Dedicated to Peter Lancaster on the occasion of his 70th birthday We consider iterative methods for finding the

More information

1.4 The Jacobian of a map

1.4 The Jacobian of a map 1.4 The Jacobian of a map Derivative of a differentiable map Let F : M n N m be a differentiable map between two C 1 manifolds. Given a point p M we define the derivative of F at p by df p df (p) : T p

More information

Numerical Modeling of Methane Hydrate Evolution

Numerical Modeling of Methane Hydrate Evolution Numerical Modeling of Methane Hydrate Evolution Nathan L. Gibson Joint work with F. P. Medina, M. Peszynska, R. E. Showalter Department of Mathematics SIAM Annual Meeting 2013 Friday, July 12 This work

More information

ON THE REGULARIZING PROPERTIES OF THE GMRES METHOD

ON THE REGULARIZING PROPERTIES OF THE GMRES METHOD ON THE REGULARIZING PROPERTIES OF THE GMRES METHOD D. CALVETTI, B. LEWIS, AND L. REICHEL Abstract. The GMRES method is a popular iterative method for the solution of large linear systems of equations with

More information

Statistically-Based Regularization Parameter Estimation for Large Scale Problems

Statistically-Based Regularization Parameter Estimation for Large Scale Problems Statistically-Based Regularization Parameter Estimation for Large Scale Problems Rosemary Renaut Joint work with Jodi Mead and Iveta Hnetynkova March 1, 2010 National Science Foundation: Division of Computational

More information

Comparison of A-Posteriori Parameter Choice Rules for Linear Discrete Ill-Posed Problems

Comparison of A-Posteriori Parameter Choice Rules for Linear Discrete Ill-Posed Problems Comparison of A-Posteriori Parameter Choice Rules for Linear Discrete Ill-Posed Problems Alessandro Buccini a, Yonggi Park a, Lothar Reichel a a Department of Mathematical Sciences, Kent State University,

More information

Part 3: Trust-region methods for unconstrained optimization. Nick Gould (RAL)

Part 3: Trust-region methods for unconstrained optimization. Nick Gould (RAL) Part 3: Trust-region methods for unconstrained optimization Nick Gould (RAL) minimize x IR n f(x) MSc course on nonlinear optimization UNCONSTRAINED MINIMIZATION minimize x IR n f(x) where the objective

More information

On the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method

On the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method Optimization Methods and Software Vol. 00, No. 00, Month 200x, 1 11 On the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method ROMAN A. POLYAK Department of SEOR and Mathematical

More information

A PROJECTED HESSIAN GAUSS-NEWTON ALGORITHM FOR SOLVING SYSTEMS OF NONLINEAR EQUATIONS AND INEQUALITIES

A PROJECTED HESSIAN GAUSS-NEWTON ALGORITHM FOR SOLVING SYSTEMS OF NONLINEAR EQUATIONS AND INEQUALITIES IJMMS 25:6 2001) 397 409 PII. S0161171201002290 http://ijmms.hindawi.com Hindawi Publishing Corp. A PROJECTED HESSIAN GAUSS-NEWTON ALGORITHM FOR SOLVING SYSTEMS OF NONLINEAR EQUATIONS AND INEQUALITIES

More information

arxiv: v1 [math.na] 16 Jan 2018

arxiv: v1 [math.na] 16 Jan 2018 A FAST SUBSPACE OPTIMIZATION METHOD FOR NONLINEAR INVERSE PROBLEMS IN BANACH SPACES WITH AN APPLICATION IN PARAMETER IDENTIFICATION ANNE WALD arxiv:1801.05221v1 [math.na] 16 Jan 2018 Abstract. We introduce

More information

A LOWER BOUND ON BLOWUP RATES FOR THE 3D INCOMPRESSIBLE EULER EQUATION AND A SINGLE EXPONENTIAL BEALE-KATO-MAJDA ESTIMATE. 1.

A LOWER BOUND ON BLOWUP RATES FOR THE 3D INCOMPRESSIBLE EULER EQUATION AND A SINGLE EXPONENTIAL BEALE-KATO-MAJDA ESTIMATE. 1. A LOWER BOUND ON BLOWUP RATES FOR THE 3D INCOMPRESSIBLE EULER EQUATION AND A SINGLE EXPONENTIAL BEALE-KATO-MAJDA ESTIMATE THOMAS CHEN AND NATAŠA PAVLOVIĆ Abstract. We prove a Beale-Kato-Majda criterion

More information