Iterative regularization of nonlinear ill-posed problems in Banach space


Barbara Kaltenbacher, University of Klagenfurt
joint work with Bernd Hofmann (Technical University of Chemnitz), Frank Schöpfer and Thomas Schuster (University of Oldenburg)
25th IFIP TC 7 Conference, Berlin, September 2011

Parameter identification in PDEs (I)

e.g., electrical impedance tomography (EIT): $\nabla\cdot(\sigma\nabla\varphi) = 0$ in $\Omega$. Identify the conductivity $\sigma$ from measurements of the Dirichlet-to-Neumann map $\Lambda_\sigma$, i.e., all possible pairs $(\varphi, \sigma\,\partial_n\varphi)$ on $\partial\Omega$.

e.g., magnetic resonance electrical impedance tomography (MREIT): $\nabla\times(\mu^{-1}\nabla\times E) + \sigma\,\partial_t E + \varepsilon\,\partial_t^2 E = J$ in $\Omega$. Identify $\sigma$ from measurements of the deposited energy $\sigma|E|^2$ in $\Omega$.

Parameter identification in PDEs (II)

e.g., a-example (transmissivity in groundwater modelling): $\nabla\cdot(a\nabla u) = 0$ in $\Omega$. Identify $a$ from measurements of $u$ in $\Omega$.

e.g., c-example (potential in the stationary Schrödinger equation): $-\Delta u + cu = 0$ in $\Omega$. Identify $c$ from measurements of $u$ in $\Omega$.

Phase retrieval

$|\mathcal{F}f| = r$: reconstruct the real-valued function $f:\mathbb{R}\to\mathbb{R}$ from measurements of the intensity $r:\mathbb{R}\to\mathbb{R}_+$ of its Fourier transform.

Forward operator F

EIT: $F:\sigma\mapsto\Lambda_\sigma$, where $\Lambda_\sigma:\varphi\mapsto\sigma\,\partial_n\varphi$ and $\nabla\cdot(\sigma\nabla\varphi) = 0$ in $\Omega$.
MREIT: $F:\sigma\mapsto\sigma|E|^2$, where $\nabla\times(\mu^{-1}\nabla\times E) + \sigma\,\partial_t E + \varepsilon\,\partial_t^2 E = J$ in $\Omega$ + boundary conditions.
a-example: $F: a\mapsto u$, where $\nabla\cdot(a\nabla u) = 0$ in $\Omega$ + boundary conditions.
c-example: $F: c\mapsto u$, where $-\Delta u + cu = 0$ in $\Omega$ + boundary conditions.
phase retrieval: $F: f\mapsto|\mathcal{F}f|$.

The forward operator $F$ is nonlinear, and in the last example also nonsmooth.

Nonlinear ill-posed problems

Nonlinear operator equation $F(x) = y$, where $F: D(F)\subseteq X\to Y$ is a nonlinear operator that is not continuously invertible, $X, Y$ are Banach spaces, and $y^\delta\approx y$ is the noisy data with noise level $\|y^\delta - y\|\le\delta$.

⇒ regularization necessary

Motivation for working in Banach space

$X = L^P$ with $P\approx 1$: sparse solutions (see MS08 "Optimization in Banach Spaces with Sparsity Constraints");

$X = L^P$ with large $P$: ellipticity and boundedness in the context of parameter identification in PDEs (e.g., $\nabla\cdot(a\nabla u) = 0$); avoids the artificial increase of ill-posedness that would result from the Hilbert space choice $X = H^{d/2+\epsilon}$;

$Y = L^R$ with $R = \infty$: realistic measurement noise model; avoids the artificial increase of ill-posedness that would result from the Hilbert space choice $Y = L^2$.

c-example in Banach spaces

$-\Delta u + cu = 0$ in $\Omega$. Identify $c$ from measurements of $u$ in $\Omega\subseteq\mathbb{R}^d$.

(theoretical) reconstruction formula: $c = \dfrac{\Delta u}{u}$

abstract stability result: assume $u\ge\underline u > 0$, use $Y = L^\infty$:
$$\|c_1 - c_2\|_{L^P}\ \le\ \frac{1}{\underline u}\,\|u(c_1) - u(c_2)\|_{W^{2,P}} + \frac{\|u(c_2)\|_{W^{2,P}}}{\underline u^2}\,\|u(c_1) - u(c_2)\|_{L^\infty},$$
where $W^{2,d/2}(\Omega)$ is the limiting case for the embedding into $L^\infty(\Omega)$, in the sense that $W^{2,d/2}(\Omega)\not\hookrightarrow L^\infty(\Omega)$ but $W^{2,d/2+\epsilon}(\Omega)\hookrightarrow L^\infty(\Omega)$ for any $\epsilon > 0$;

⇒ choose $P = d/2$, $X = L^{d/2}(\Omega)$.
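The estimate follows from the reconstruction formula by a standard splitting; a sketch of the step (assuming $u(c_1), u(c_2)\ge\underline u$ and enough regularity):
$$c_1 - c_2 = \frac{\Delta u(c_1)}{u(c_1)} - \frac{\Delta u(c_2)}{u(c_2)} = \frac{\Delta\big(u(c_1) - u(c_2)\big)}{u(c_1)} + \Delta u(c_2)\,\frac{u(c_2) - u(c_1)}{u(c_1)\,u(c_2)};$$
taking $L^P$ norms and bounding $1/u(c_1)\le 1/\underline u$, $\|\Delta u(c_2)\|_{L^P}\le\|u(c_2)\|_{W^{2,P}}$, and $|u(c_2) - u(c_1)|\le\|u(c_1) - u(c_2)\|_{L^\infty}$ yields the displayed stability estimate.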

Phase retrieval in Banach spaces

$|\mathcal{F}f| = r$: reconstruct the real-valued function $f:\mathbb{R}\to\mathbb{R}$ from measurements of the intensity $r:\mathbb{R}\to\mathbb{R}_+$ of its Fourier transform.

natural preimage and image spaces (Hausdorff-Young theorem): $X = L^P_{\mathbb{R}}(\mathbb{R})$, $Y = L^{P/(P-1)}_{\mathbb{R}}(\mathbb{R})$ with $P\in[1,2]$;

sparse signal, $L^\infty$ measurement noise ⇒ $X = L^1$, $Y = L^\infty$.
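As a concrete toy illustration (not part of the slides), a discrete version of the phase-retrieval forward map, sending a real signal to the modulus of its FFT; the grid size and the signal are arbitrary choices:

```python
import numpy as np

def forward_phase_retrieval(f):
    """Discrete phase-retrieval forward operator F: f -> |Ff| (modulus of the FFT)."""
    return np.abs(np.fft.fft(f))

# usage: a sparse real signal and its phase-less intensity data
n = 256
f = np.zeros(n)
f[[10, 50, 51]] = [1.0, -0.5, 2.0]   # few nonzero entries: sparse signal
r = forward_phase_retrieval(f)       # the data; all phase information is lost
```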

Regularization in Banach space

case $Y = X$: iterative and variational regularization methods for linear and nonlinear problems [Plato 92, 94, 95; Bakushinskii & Kokurin 04];

Tikhonov regularization for linear and nonlinear problems: $S(F(x), y^\delta) + \alpha R(x) = \min!$ [Burger & Osher 04; Resmerita & Scherzer 06; Hofmann & BK & Pöschl & Scherzer 07; Grasmair & Haltmeier & Scherzer 08, 10; ...]: needs a global minimizer, but the Tikhonov functional is in general nonsmooth and nonconvex if $F$ is nonlinear;

⇒ motivates iterative regularization for nonlinear problems [Schöpfer & Louis & Schuster 06; Hein & Kazimierski 10; BK & Schöpfer & Schuster 09; BK & Hofmann 10; ...]

Outline

short review of iterative regularization for nonlinear problems in Hilbert space
some Banach space tools
Landweber for nonlinear problems in Banach space
Newton for nonlinear problems in Banach space
numerical tests

Iterative regularization for nonlinear problems in Hilbert space

gradient method for $\min_x \|F(x) - y^\delta\|^2$ ⇒ Landweber iteration
$$x_{k+1}^\delta = x_k^\delta - \mu_k\,F'(x_k^\delta)^*\big(F(x_k^\delta) - y^\delta\big)$$
[Hanke & Neubauer & Scherzer 96]

Newton's method for $F(x) = y^\delta$ plus regularization ⇒ Levenberg-Marquardt method
$$x_{k+1}^\delta = x_k^\delta - \underbrace{(F'(x_k^\delta)^* F'(x_k^\delta) + \alpha_k I)^{-1} F'(x_k^\delta)^*}_{\approx\,F'(x_k^\delta)^{-1}}\big(F(x_k^\delta) - y^\delta\big)$$
[Hanke 97, 10]; Newton-CG: [Hanke 97]; inexact Newton: [Rieder 01]

iteratively regularized Gauss-Newton method (IRGN)
$$x_{k+1}^\delta = x_k^\delta - (F'(x_k^\delta)^* F'(x_k^\delta) + \alpha_k I)^{-1}\big(F'(x_k^\delta)^*\big(F(x_k^\delta) - y^\delta\big) + \alpha_k(x_k^\delta - x_0)\big)$$
[Bakushinskii 92; BK & Neubauer & Scherzer 96, 08; BK 97; Hohage 97]
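In a finite-dimensional discretization each of these Hilbert-space steps is a single linear solve; a minimal sketch (names hypothetical; J stands for the Jacobian matrix $F'(x_k^\delta)$):

```python
import numpy as np

def irgn_step_hilbert(x, x0, J, residual, alpha):
    """One IRGN step in discretized Hilbert space:
    x+ = x - (J^T J + alpha I)^{-1} (J^T residual + alpha (x - x0)),
    with residual = F(x) - y_delta."""
    rhs = J.T @ residual + alpha * (x - x0)
    return x - np.linalg.solve(J.T @ J + alpha * np.eye(x.size), rhs)
```

Dropping the alpha * (x - x0) term in rhs turns the same solve into the Levenberg-Marquardt step.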

Some Banach space tools (I)

Smoothness: $X$ smooth ⇔ the norm is Gâteaux differentiable on $X\setminus\{0\}$; $X$ uniformly smooth ⇔ the norm is Fréchet differentiable on the unit sphere.

Convexity: $X$ strictly convex ⇔ the boundary of the unit ball contains no line segment; $X$ uniformly convex ⇔ the modulus of convexity satisfies $\delta_X(\epsilon) > 0$ for all $\epsilon\in(0,2]$, where
$$\delta_X(\epsilon) = \inf\big\{1 - \tfrac12\|x + y\| : \|x\| = \|y\| = 1,\ \|x - y\|\ge\epsilon\big\}.$$

$L^P(\Omega)$, $P\in(1,\infty)$, is uniformly convex (Hanner's inequality) and uniformly smooth.

Some Banach space tools (II)

Dual space: $X^* = L(X,\mathbb{R})$, the bounded linear functionals on $X$, with duality pairing $\langle x^*, x\rangle = x^*(x)$ for $x\in X$.

$X$ uniformly smooth ⇔ $X^*$ uniformly convex; for $X$ reflexive: $X^*$ smooth ⇔ $X$ strictly convex.

Some Banach space tools (III)

Duality mapping: $J_p: X\to 2^{X^*}$,
$$J_p(x) = \{x^*\in X^* : \langle x^*, x\rangle = \|x^*\|\,\|x\| = \|x\|^p\};$$
$J_p$ is set-valued; $j_p$ denotes a single-valued selection of $J_p$; $J_p = \partial\big(\tfrac1p\|\cdot\|^p\big)$ (Asplund);
$X$ smooth ⇔ $J_p$ single-valued; $X$ reflexive, smooth, strictly convex ⇒ $J_p^{-1} = J^*_{p^*}$, the duality mapping of $X^*$ with $\tfrac1p + \tfrac1{p^*} = 1$.

$L^P(\Omega)$, $P\in(1,\infty)$: $J_p(x) = \|x\|_{L^P}^{\,p-P}\,|x|^{P-1}\,\operatorname{sign}(x)$; $J_p$ is possibly nonlinear and nonsmooth.
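On a grid (approximating $L^P$ by $\ell^P$; a simplifying assumption, quadrature weights omitted), the duality mapping is a componentwise formula; a minimal sketch:

```python
import numpy as np

def duality_map(x, p, P):
    """p-duality mapping J_p on discretized L^P:
    J_p(x) = ||x||_P^(p-P) * |x|^(P-1) * sign(x)  (componentwise)."""
    norm = np.linalg.norm(x, ord=P)
    if norm == 0.0:
        return np.zeros_like(x)        # J_p(0) = 0
    return norm**(p - P) * np.abs(x)**(P - 1) * np.sign(x)
```

For p = P = 2 this reduces to the identity, recovering the Hilbert space setting.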

Some Banach space tools (IV)

Bregman distance:
$$D_p(x,\tilde x) = \tfrac1p\|x\|^p - \tfrac1p\|\tilde x\|^p - \inf\{\langle\xi, x - \tilde x\rangle : \xi\in J_p(\tilde x)\},\qquad x,\tilde x\in X;$$
smooth $X$: $D_p(x,\tilde x) = \tfrac{p-1}{p}\big(\|\tilde x\|^p - \|x\|^p\big) + \langle J_p(x) - J_p(\tilde x),\, x\rangle$;
smooth and uniformly convex $X$: convergence in $D_p$ ⇔ convergence in norm;
Hilbert space case: $D_2(x,\tilde x) = \tfrac12\|x - \tilde x\|^2$.
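Continuing the discrete sketch, the Bregman distance for smooth $X$ (single-valued $J_p$), reusing the duality_map helper from the previous snippet:

```python
def bregman_distance(x, x_tilde, p, P):
    """D_p(x, x_tilde) = 1/p ||x||^p - 1/p ||x_tilde||^p - <J_p(x_tilde), x - x_tilde>
    on discretized L^P (smooth case: the inf is attained at J_p(x_tilde))."""
    nx = np.linalg.norm(x, ord=P)
    nt = np.linalg.norm(x_tilde, ord=P)
    return nx**p / p - nt**p / p - duality_map(x_tilde, p, P) @ (x - x_tilde)
```

For p = P = 2 this returns $\tfrac12\|x - \tilde x\|_2^2$, as on the slide.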

Assumptions on preimage and image space X, Y

$X$ smooth and uniformly convex ⇒ $X$ reflexive (Milman-Pettis) and strictly convex; $J_p$ single-valued, norm-to-weak continuous, and bijective.
$Y$: arbitrary Banach space.

Further assumptions for convergence proofs

closeness to a solution $x^\dagger$: $\|x_0 - x^\dagger\|$ sufficiently small;
$F'$ Lipschitz continuous and $x^\dagger$ sufficiently smooth, or tangential cone condition: for all $x\in D(F)$ there exists $F'(x)\in L(X,Y)$ ($F'(x)$ not necessarily a Fréchet derivative) such that
$$\|F(x) - F(\tilde x) - F'(x)(x - \tilde x)\|\ \le\ c_{tc}\,\|F(x) - F(\tilde x)\|,\qquad x,\tilde x\in\mathcal{B};$$
for Landweber: $F$, $F'$ continuous and the interior of $D(F)$ nonempty;
for IRGN: $F$ (weakly) sequentially closed.

Stopping rule

discrepancy principle: $k_*(\delta) = \min\{k\in\mathbb{N} : \|F(x_k^\delta) - y^\delta\|\le C_{dp}\,\delta\}$, with $C_{dp} > 1$ and noise level $\|y^\delta - y\|\le\delta$.

Trade-off between stability and approximation: stop as early as possible (stability), such that the residual is below the (amplified) noise level (approximation).
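In code the discrepancy principle is a one-line test inside the iteration loop; a sketch (names hypothetical):

```python
def discrepancy_reached(residual_norm, delta, C_dp=1.5):
    """Stop at the first k with ||F(x_k^delta) - y_delta|| <= C_dp * delta (C_dp > 1)."""
    return residual_norm <= C_dp * delta
```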

Landweber for inverse problems in Banach space

$$J_p(x_{k+1}^\delta) = J_p(x_k^\delta) - \mu_k\,F'(x_k^\delta)^*\,j_r\big(F(x_k^\delta) - y^\delta\big),\qquad x_{k+1}^\delta = J^*_{p^*}\big(J_p(x_{k+1}^\delta)\big),$$
with $p^* = \tfrac{p}{p-1}$ and $p, r\in(1,\infty)$.

for comparison, Landweber in Hilbert space: $x_{k+1}^\delta = x_k^\delta - \mu_k\,F'(x_k^\delta)^*\big(F(x_k^\delta) - y^\delta\big)$.
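A minimal discrete sketch of the full loop under the simplifying assumptions above (duality_map from the earlier snippet; F and Fprime_adj are hypothetical callables, Fprime_adj(x, w) applying $F'(x)^*$ to w; the constant stepsize mu stands in for the "appropriately chosen" $\mu_k$):

```python
import numpy as np

def landweber_banach(F, Fprime_adj, x0, y_delta, delta,
                     p=2.0, P=1.1, r=2.0, R=2.0,
                     mu=1e-3, C_dp=1.5, max_iter=10000):
    """Banach-space Landweber:
      J_p(x_{k+1}) = J_p(x_k) - mu * F'(x_k)^* j_r(F(x_k) - y_delta),
      x_{k+1} = J_p^{-1}(...) via the dual duality mapping with p* = p/(p-1),
    stopped by the discrepancy principle."""
    p_star, P_star = p / (p - 1.0), P / (P - 1.0)
    x = x0.copy()
    for k in range(max_iter):
        residual = F(x) - y_delta
        if np.linalg.norm(residual, ord=R) <= C_dp * delta:   # discrepancy principle
            break
        dual_step = duality_map(x, p, P) - mu * Fprime_adj(x, duality_map(residual, r, R))
        x = duality_map(dual_step, p_star, P_star)            # back to the primal space
    return x
```

For p = P = r = R = 2 this reduces to the Hilbert space Landweber iteration.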

Convergence results for Landweber (I)

Theorem (monotonicity of the error). Let $\mu_k$ be appropriately chosen (sufficiently small), $c_{tc}$ sufficiently small, $C_{dp}$ sufficiently large. Then for all $k\le k_*(\delta) - 1$, $x_{k+1}^\delta\in D(F)$ and
$$D_p(x^\dagger, x_{k+1}^\delta) - D_p(x^\dagger, x_k^\delta)\ \le\ -C\,\frac{\|F(x_k^\delta) - y^\delta\|^p}{\|F'(x_k^\delta)\|^p}\ <\ 0.$$

Idea of proof

Recall: $D_p(x,\tilde x) = \tfrac{p-1}{p}\big(\|\tilde x\|^p - \|x\|^p\big) + \langle J_p(x) - J_p(\tilde x),\, x\rangle$.

$$D_p(x^\dagger, x_{k+1}^\delta) - D_p(x^\dagger, x_k^\delta) = \tfrac{p-1}{p}\big(\|x_{k+1}^\delta\|^p - \|x_k^\delta\|^p\big) - \big\langle\underbrace{J_p(x_{k+1}^\delta) - J_p(x_k^\delta)}_{=\,-\mu_k F'(x_k^\delta)^* j_r(F(x_k^\delta) - y^\delta)},\ x^\dagger\big\rangle$$
$$= D_p(x_k^\delta, x_{k+1}^\delta) - \mu_k\big\langle j_r(F(x_k^\delta) - y^\delta),\ \underbrace{F'(x_k^\delta)(x_k^\delta - x^\dagger)}_{\approx\, F(x_k^\delta) - y^\delta}\big\rangle$$
$$\le\ \underbrace{D_p(x_k^\delta, x_{k+1}^\delta)}_{=\,O(\mu_k^{1+\epsilon})} - \mu_k\big(1 - c(c_{tc}, C_{dp})\big)\,\|F(x_k^\delta) - y^\delta\|^r\ \le\ 0$$
by the choice of $\mu_k$.

Convergence results for Landweber (II)

Theorem (convergence with exact data). $\delta = 0$, $\mu_k$ appropriately chosen (sufficiently small). Then $x_k\to x^\dagger$, a solution of $F(x) = y$, as $k\to\infty$.

Theorem (stability for $\delta > 0$ and convergence as $\delta\to 0$). $\mu_k$ appropriately chosen, $c_{tc}$ sufficiently small, $C_{dp}$ sufficiently large, $Y$ uniformly smooth. Then for all $k\le k_*(\delta)$, $x_k^\delta$ depends continuously on $y^\delta$, and $x_{k_*(\delta)}^\delta\to x^\dagger$, a solution of $F(x) = y$, as $\delta\to 0$.

[Schöpfer & Louis & Schuster 06] linear case; [BK & Schöpfer & Schuster 09] nonlinear case.

Remark

Convergence rates can be shown for the iteratively regularized Landweber iteration
$$J_p(x_{k+1}^\delta - x_0) = (1 - \alpha_k)\,J_p(x_k^\delta - x_0) - \mu_k\,F'(x_k^\delta)^*\,j_r\big(F(x_k^\delta) - y^\delta\big),\qquad x_{k+1}^\delta = x_0 + J^*_{p^*}\big(J_p(x_{k+1}^\delta - x_0)\big),$$
with $p^* = \tfrac{p}{p-1}$, $p, r\in(1,\infty)$, and $x_0$ the initial guess.

IRGN for inverse problems in Banach space

$$x_{k+1}^\delta\in\operatorname*{argmin}_{x\in D(F)}\ \|F'(x_k^\delta)(x - x_k^\delta) + F(x_k^\delta) - y^\delta\|^r + \alpha_k\,\|x - x_0\|^p,$$
$p, r\in(1,\infty)$, $x_0$ the initial guess;

convex minimization problem ⇒ efficient solution, see e.g. [Bonesky, Kazimierski, Maass, Schöpfer, Schuster 07].

for comparison, IRGN in Hilbert space: $x_{k+1}^\delta = x_k^\delta - (F'(x_k^\delta)^* F'(x_k^\delta) + \alpha_k I)^{-1}\big(F'(x_k^\delta)^*(F(x_k^\delta) - y^\delta) + \alpha_k(x_k^\delta - x_0)\big)$.
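Each IRGN step is a smooth convex minimization problem (for $p, r\in(1,\infty)$), so a generic first-order solver can stand in for the dedicated methods cited above; a sketch (A is the Jacobian matrix $F'(x_k^\delta)$, b the residual $F(x_k^\delta) - y^\delta$; duality_map as before, and the gradient uses $\nabla\tfrac1r\|z\|_R^r = J_r(z)$):

```python
import numpy as np
from scipy.optimize import minimize

def irgn_step_banach(A, b, xk, x0, alpha, p=1.1, P=1.1, r=2.0, R=2.0):
    """Minimize ||A (x - xk) + b||_R^r + alpha ||x - x0||_P^p (one Banach IRGN step)."""
    def fun(x):
        z = A @ (x - xk) + b
        return np.linalg.norm(z, ord=R)**r + alpha * np.linalg.norm(x - x0, ord=P)**p
    def grad(x):
        z = A @ (x - xk) + b
        return r * (A.T @ duality_map(z, r, R)) + alpha * p * duality_map(x - x0, p, P)
    return minimize(fun, xk, jac=grad, method='L-BFGS-B').x
```

Note that for P close to 1 the gradient of the penalty is not Lipschitz near $x = x_0$, so a dedicated solver as in the reference above is preferable in practice.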

Choice of α_k

discrepancy-type principle:
$$\underline\theta\,\|F(x_k^\delta) - y^\delta\|\ \le\ \|F'(x_k^\delta)(x_{k+1}^\delta(\alpha) - x_k^\delta) + F(x_k^\delta) - y^\delta\|\ \le\ \bar\theta\,\|F(x_k^\delta) - y^\delta\|,\qquad 0 < \underline\theta < \bar\theta < 1.$$

Trade-off between stability and approximation: choose $\alpha_k$ as large as possible (stability), such that the predicted residual is smaller than the old one (approximation).

see also: inexact Newton (for inverse problems: [Hanke 97; Rieder 99, 01]).
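The interval condition can be realized by a simple geometric search over $\alpha$ (the predicted residual increases with $\alpha$); a sketch reusing the hypothetical irgn_step_banach from above:

```python
import numpy as np

def choose_alpha(A, b, xk, x0, theta_lo=0.6, theta_hi=0.8,
                 alpha=1.0, factor=2.0, max_tries=50):
    """Find alpha_k with theta_lo <= ||A (x(alpha) - xk) + b|| / ||b|| <= theta_hi."""
    res0 = np.linalg.norm(b)
    for _ in range(max_tries):
        x_new = irgn_step_banach(A, b, xk, x0, alpha)
        ratio = np.linalg.norm(A @ (x_new - xk) + b) / res0
        if ratio > theta_hi:        # predicted residual too large: decrease alpha
            alpha /= factor
        elif ratio < theta_lo:      # predicted residual too small: increase alpha
            alpha *= factor
        else:
            break
    return alpha, x_new
```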

Convergence of the IRGN

Theorem [BK & Schöpfer & Schuster 09]. Let $C_{dp}$, $\underline\theta$, $\bar\theta$ be sufficiently large and $c_{tc}$ sufficiently small. Additionally, assume that either (a) $F'(x): X\to Y$ is weakly closed for all $x\in D(F)$ and $Y$ is reflexive, or (b) $D(F)$ is weakly closed. Then for all $k\le k_*(\delta) - 1$ the iterates
$$x_{k+1}^\delta\in\operatorname*{argmin}_{x\in D(F)}\ \|F'(x_k^\delta)(x - x_k^\delta) + F(x_k^\delta) - y^\delta\|^r + \alpha_k\,\|x - x_0\|^p,$$
with $\alpha_k$ such that $\|F'(x_k^\delta)(x_{k+1}^\delta - x_k^\delta) + F(x_k^\delta) - y^\delta\|\le\bar\theta\,\|F(x_k^\delta) - y^\delta\|$,
are well-defined, and $x_{k_*(\delta)}^\delta\to x^\dagger$, a solution of $F(x) = y$, as $\delta\to 0$ if $x^\dagger$ is unique (and along subsequences otherwise).

Idea of proof

By minimality of $x_{k+1}^\delta$:
$$\|F'(x_k^\delta)(x_{k+1}^\delta - x_k^\delta) + F(x_k^\delta) - y^\delta\|^r + \alpha_k\,\|x_{k+1}^\delta - x_0\|^p\ \le\ \underbrace{\|F'(x_k^\delta)(x^\dagger - x_k^\delta) + F(x_k^\delta) - y^\delta\|^r}_{\le\, c(c_{tc},\, C_{dp})\,\|F(x_k^\delta) - y^\delta\|^r\ \text{(tangential cone, }\delta\le C_{dp}^{-1}\|F(x_k^\delta) - y^\delta\|)} + \alpha_k\,\|x^\dagger - x_0\|^p.$$

Choice of $\alpha_k$ ⇒ $\|F'(x_k^\delta)(x_{k+1}^\delta - x_k^\delta) + F(x_k^\delta) - y^\delta\|\ \ge\ \underline\theta\,\|F(x_k^\delta) - y^\delta\|$

⇒ $\alpha_k\big(\|x_{k+1}^\delta - x_0\|^p - \|x^\dagger - x_0\|^p\big)\ \le\ \big(c(c_{tc}, C_{dp}) - \underline\theta^{\,r}\big)\,\|F(x_k^\delta) - y^\delta\|^r.$

Choice of $\underline\theta$: $\underline\theta^{\,r} > c(c_{tc}, C_{dp})$ ⇒ $\|x_{k+1}^\delta - x_0\|^p\le\|x^\dagger - x_0\|^p$.

Convergence rates for the IRGN

Theorem [BK & Hofmann 10]. Let the assumptions of the previous theorem be satisfied. Under the source type condition $J_p(x^\dagger - x_0)\in\mathcal{R}(F'(x^\dagger)^*)$, i.e.,
$$\exists\,\hat\xi\in J_p(x^\dagger - x_0),\ v\in Y^*:\quad \hat\xi = F'(x^\dagger)^*\, v,$$
we obtain the optimal convergence rate $D_p^{x_0}(x_{k_*}^\delta, x^\dagger) = O(\delta)$, where $D_p^{x_0}(x,\tilde x) = D_p(x - x_0,\, \tilde x - x_0)$.

Hilbert space case: $\|x_{k_*}^\delta - x^\dagger\| = O(\sqrt\delta)$.

Idea of proof

Recall: $D_p(x,\tilde x) = \tfrac1p\|x\|^p - \tfrac1p\|\tilde x\|^p - \inf\{\langle\xi, x - \tilde x\rangle : \xi\in J_p(\tilde x)\}$.

We have, from the previous proof, $\|x_k^\delta - x_0\|^p\le\|x^\dagger - x_0\|^p$ for $k\le k_*$.

$$D_p(x_k^\delta - x_0,\, x^\dagger - x_0)\ \le\ \underbrace{\tfrac1p\|x_k^\delta - x_0\|^p - \tfrac1p\|x^\dagger - x_0\|^p}_{\le\, 0} - \big\langle\underbrace{\hat\xi}_{=\,F'(x^\dagger)^* v},\ x_k^\delta - x^\dagger\big\rangle$$
$$\le\ -\big\langle v,\ F'(x^\dagger)(x_k^\delta - x^\dagger)\big\rangle\ \le\ \|v\|\,\underbrace{\|F'(x^\dagger)(x_k^\delta - x^\dagger)\|}_{\lesssim\,(1 + c_{tc})\,\|F(x_k^\delta) - y^\delta\|}\ \lesssim\ \|v\|\,(1 + c_{tc})\,\|F(x_k^\delta) - y^\delta\|.$$

Hence, for $k = k_*$: $D_p(x_{k_*}^\delta - x_0,\, x^\dagger - x_0)\ \le\ \|v\|\,(1 + c_{tc})\,C_{dp}\,\delta.$

Remarks

The rates result can be extended to $D_p^{x_0}(x_{k_*}, x^\dagger) = O(\delta^\kappa)$ with $\kappa\in(0,1)$, or $D_p^{x_0}(x_{k_*}, x^\dagger) = O(|\log\delta|^{-\kappa})$ with $\kappa > 0$, under approximate source conditions.

Rates can alternatively be shown with an a priori choice of $\alpha_k$ and $k_*$ instead of the discrepancy principle; this needs a priori information on the smoothness of $x^\dagger$, though.

Numerical tests

c-example: $-\Delta u + cu = 0$ in $\Omega$. Identify $c$ from measurements of $u$ in $\Omega = (0,1)^2\subseteq\mathbb{R}^2$.

smooth $c$: $c(x,y) = 10\,\exp\Big(-\dfrac{(x - 0.3)^2 + (y - 0.3)^2}{0.04}\Big)$;
sparse $c$: $c(x,y) = 40\,\chi_{[0.19,\,0.24]^2}(x,y)$.
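A direct transcription of the two test potentials on a uniform grid (the resolution is an arbitrary choice):

```python
import numpy as np

n = 128
x, y = np.meshgrid(np.linspace(0.0, 1.0, n), np.linspace(0.0, 1.0, n))

# smooth potential: Gaussian bump centered at (0.3, 0.3)
c_smooth = 10.0 * np.exp(-((x - 0.3)**2 + (y - 0.3)**2) / 0.04)

# sparse potential: 40 times the characteristic function of [0.19, 0.24]^2
c_sparse = 40.0 * ((x >= 0.19) & (x <= 0.24) & (y >= 0.19) & (y <= 0.24))
```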

Test examples

[Figures: the smooth and the sparse test potential $c$.]

Smooth test example, 1% $L^\infty$-noise

[Figures: exact data $u$; noisy data $u^\delta$ with $\|u - u^\delta\|_{L^\infty} = 1\%$.]
[Figures: exact potential $c$; reconstruction $c_k^\delta$ with $X = L^2$, $Y = L^2$, $p = 2$, $r = 2$.]
[Figures: exact potential $c$; reconstruction $c_k^\delta$ with $X = L^2$, $Y = L^{22}$, $p = 2$, $r = 2$.]

Smooth test example, 10% $L^\infty$-noise

[Figures: exact data $u$; noisy data $u^\delta$ with $\|u - u^\delta\|_{L^\infty} = 10\%$.]
[Figures: exact potential $c$; reconstruction $c_k^\delta$ with $X = L^2$, $Y = L^2$, $p = 2$, $r = 2$.]
[Figures: exact potential $c$; reconstruction $c_k^\delta$ with $X = L^2$, $Y = L^{22}$, $p = 2$, $r = 2$.]
[Figures: exact potential $c$; reconstruction $c_k^\delta$ with $X = L^{1.1}$, $Y = L^{22}$, $p = 1.1$, $r = 2$.]

Sparse test example, 3% $L^\infty$-noise

[Figures: exact potential $c$; exact data $u$; noisy data $u^\delta$. Computations by Frank Schöpfer.]
[Figures: exact potential $c$; reconstructions $c_k^\delta$ with $X = L^2$, $Y = L^{11}$, $p = 2$, $r = 2$, and with $X = L^{1.1}$, $Y = L^{11}$, $p = 1.1$, $r = 2$.]

Sparse test example, 1% $L^\infty$-noise

[Figures: exact potential $c$; reconstructions $c_k^\delta$ with $X = L^2$, $Y = L^{11}$, $p = 2$, $r = 2$, and with $X = L^{1.1}$, $Y = L^{11}$, $p = 1.1$, $r = 2$.]

Summary and Outlook

motivation for solving inverse problems in Banach spaces: more natural norms, possible reduction of ill-posedness, sparsity;
the use of Banach spaces instead of Hilbert spaces may add nonlinearity and nonsmoothness, but keeps convexity;
gradient (Landweber) and Gauss-Newton methods for nonlinear inverse problems: formulation and convergence analysis in Banach space;
outlook: replace the Tikhonov problem in the Newton step by an inner iteration...

76 Thank you for your attention!
