Adaptive and multilevel methods for parameter identification in partial differential equations


1 Adaptive and multilevel methods for parameter identification in partial differential equations. Barbara Kaltenbacher, University of Stuttgart. Joint work with Hend Ben Ameur (Université de Tunis), Anke Griesbaum (University of Heidelberg), and Boris Vexler (Technical University of Munich). AIP09, Vienna, July 2009

2 Overview
- motivation: parameter identification in PDEs
- ideas on adaptivity for inverse problems
- 1st approach: refinement/coarsening based on predicted misfit reduction
- 2nd approach: goal oriented error estimators
- outlook

7 Motivation: Parameter Identification in PDEs
- instability: sufficiently high precision needed (amplification of numerical errors)
- computational effort: large scale problem; each regularized inversion involves several PDE solves; repeated solution of the regularized problem to determine the regularization parameter
- example $-\Delta u = q$: refine the grids for $u$ and $q$ at jumps or large gradients, or at locations with large error contribution
- location of large gradients / large errors a priori unknown $\Rightarrow$ general strategy for mesh generation needed, possibly separately for $q$ and $u$ (example $-\nabla\cdot(q(u)\nabla u) = f$)
- instability $\Rightarrow$ regularization necessary!

11 Regularization of Inverse Problems
- unstable operator equation: $F(q) = g$ with $F: q \mapsto u$ or $C(u)$
- the solution $q = F^{-1}(g)$ does not depend continuously on $g$, i.e., $\exists (g_n)$, $g_n \to g$, with $q_n := F^{-1}(g_n) \not\to F^{-1}(g)$
- only noisy data $g^\delta \approx g$ available (measurements): $\|g^\delta - g\| \le \delta$
- making $\|F(q) - g^\delta\|$ small $\not\Rightarrow$ good result for $q$!

12 An Example (identification of a source term in a 1-d differential equation)
[figures: exact and noisy data $g$, $g^\delta$ ($\delta = 1\%$); exact solution $q$ vs. $\tilde q$ with $\|F(\tilde q) - g^\delta\| = 10^{-14}$, and vs. $\tilde q$ with $\|F(\tilde q) - g^\delta\| = 2\delta$]

14 Regularization of Inverse Problems
- unstable operator equation: $F(q) = g$ with $F: q \mapsto u$ or $C(u)$
- the solution $q = F^{-1}(g)$ does not depend continuously on $g$, i.e., $\exists (g_n)$, $g_n \to g$, with $q_n := F^{-1}(g_n) \not\to F^{-1}(g)$
- only noisy data $g^\delta \approx g$ available (measurements): $\|g^\delta - g\| \le \delta$
- making $\|F(q) - g^\delta\|$ small $\not\Rightarrow$ good result for $q$!
- regularization means approaching the solution along a stable path: given $(g_n)$, $g_n \to g$, construct $q_n := R_{\alpha_n}(g_n)$ such that $q_n = R_{\alpha_n}(g_n) \to F^{-1}(g)$
- regularization method: a family $(R_\alpha)_{\alpha>0}$ with a parameter choice $\alpha = \alpha(g^\delta, \delta)$ such that worst case convergence as $\delta \to 0$ holds: $\sup_{\|g^\delta - g\| \le \delta} \|R_{\alpha(g^\delta,\delta)}(g^\delta) - F^{-1}(g)\| \to 0$ as $\delta \to 0$
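
The effect described on the slides above can be reproduced in a few lines. The sketch below is purely illustrative (not from the talk): $F$ is taken as a discrete 1-d integration operator, so naive inversion amounts to numerical differentiation and amplifies the noise, while Tikhonov regularization $R_\alpha(g^\delta) = \arg\min_q \|F(q)-g^\delta\|^2 + \alpha\|q\|^2$ stays stable; all variable names and the choice of $\alpha$ are assumptions.

```python
import numpy as np

# Illustrative setup: F is integration, so inversion is numerical
# differentiation (ill-posed in the continuum limit).
n = 200
h = 1.0 / n
t = (np.arange(1, n + 1) - 0.5) * h
A = h * np.tril(np.ones((n, n)))          # (Aq)_i ~ integral of q up to t_i

q_exact = np.sin(2 * np.pi * t)           # "true" parameter
g = A @ q_exact
rng = np.random.default_rng(0)
delta = 1e-2 * np.linalg.norm(g)
noise = rng.standard_normal(n)
g_delta = g + delta * noise / np.linalg.norm(noise)   # ||g_delta - g|| = delta

# naive inversion: tiny residual, useless reconstruction
q_naive = np.linalg.solve(A, g_delta)

# Tikhonov: q_alpha = argmin ||A q - g_delta||^2 + alpha ||q||^2
alpha = 1e-4                               # hand-picked for this sketch
q_tik = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ g_delta)

err_naive = np.linalg.norm(q_naive - q_exact)
err_tik = np.linalg.norm(q_tik - q_exact)
assert err_tik < err_naive                 # regularization pays off
```

The naive solution matches the data almost exactly and is still far from $q$, which is exactly the point of the 1-d example on slide 12.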

16 Motivation: Parameter Identification in PDEs
- instability: sufficiently high precision needed (amplification of numerical errors)
- computational effort: large scale problem; each regularized inversion involves several PDE solves; repeated solution of the regularized problem to determine the regularization parameter
- example $-\Delta u = q$: refine the grids for $u$ and $q$ at jumps or large gradients, or at locations with large error contribution
- location of large gradients / large errors a priori unknown $\Rightarrow$ general strategy for mesh generation needed, possibly separately for $q$ and $u$ (example $-\nabla\cdot(q(u)\nabla u) = f$)
- computational effort $\Rightarrow$ efficient numerical strategies necessary!

17 Efficient Methods for PDEs
[figure: grid hierarchy, levels $l = 1, \ldots, 4$]
- multilevel iteration: start with a coarse discretization, refine successively
- adaptive discretization: coarse discretization where possible, fine grid only where necessary

18 Efficient Methods for PDEs
combined multilevel adaptive strategy (courtesy of [R. Becker & M. Braack & B. Vexler, Appl. Num. Math., 2005]):
- start on a coarse grid
- successive adaptive refinement

19 Some Ideas on Adaptivity for Inverse Problems
- Haber & Heldmann & Ascher 07: Tikhonov with BV type regularization: refine for $u$ to compute the residual term sufficiently precisely; refine for $q$ to compute the regularization term sufficiently precisely
- Neubauer 03, 06, 07: moving mesh regularization, adaptive grid regularization: Tikhonov with BV type regularization: refine where $q$ has jumps or large gradients
- Borcea & Druskin 02: optimal finite difference grids (a priori refinement): refine close to the measurements
- Chavent & Bissell 98, Ben Ameur & Chavent & Jaffré 02, BK & Ben Ameur 02: refinement and coarsening indicators
- Becker & Vexler 04, Griesbaum & BK & Vexler 07, Bangerth 08, BK & Vexler 09: goal oriented error estimators ...

21 1st approach: refinement/coarsening based on predicted misfit reduction

22 Identification of a Distributed Parameter: Groundwater Modelling
$s\,\frac{\partial u}{\partial t} - \nabla\cdot(q\,\nabla u) = f$ in $\Omega \subset \mathbb{R}^2$, with initial and boundary conditions.
$u$ ... hydraulic potential (ground water level), $s(x,y)$ ... storage coefficient, $q(x,y)$ ... hydraulic transmissivity, $f(x,y,t)$ ... source term; space and time discretization (time step $\Delta t$, mesh size $h$).

23 Parameter Identification
$s\,\frac{\partial u}{\partial t} - \nabla\cdot(q\,\nabla u) = f$ in $\Omega$
Reconstruction of the transmissivity $q$ (piecewise constant) from measurements of $u$: find the zonation and the values of $q$ such that $J(q) := \|u(q) - u^{\mathrm{obs}}\|^2 = \min!$
[Ben Ameur & Chavent & Jaffré 02], [Chavent & Bissell 98], [BK & Ben Ameur 02]

24 Refinement Indicators
- 1 zone: $q^* := \arg\min J(q)$ also solves $\min J\big((q_1, q_2)\big)$ s.t. $d^T(q_1, q_2)^T = q_1 - q_2 = B := 0$
- 2 zones: $(q_1, q_2)_B := \arg\min J\big((q_1, q_2)\big)$ s.t. $d^T(q_1, q_2)^T = q_1 - q_2 = B$
- first order expansion: $J\big((q_1, q_2)_B\big) \approx J(q^*) + \lambda_0 B$ with $\lambda_B := \frac{d}{dB} J\big((q_1, q_2)_B\big)$
- $\lambda_0$ large $\Rightarrow$ large possible reduction of the data misfit $J$ for a suitable jump $B$
- $\lambda_0 = (1/d^T d)\, d^T \nabla J(q^*)$ (negligible computational effort)

25 Compute all refinement indicators for zonations generated systematically by families of vertical, horizontal, checkerboard, and oblique cuts. Mark those cuts that yield the largest refinement indicators $\lambda_0$.
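
The cut scan can be sketched as follows; this is a toy illustration, not the talk's implementation. It assumes the misfit gradient $\nabla J$ with respect to cell-wise parameter values is already available, scans only vertical cuts, and all helper names are hypothetical.

```python
import numpy as np

# Refinement indicator lambda_0 = (d^T grad_J) / (d^T d) for a candidate cut,
# i.e. the first-order predicted misfit change per unit jump B across the cut.
def refinement_indicator(d, grad_J):
    return float(d @ grad_J) / float(d @ d)

def vertical_cut(nx, ny, j):
    """d_i = +1 left of column j, -1 right of it (cells numbered row-wise)."""
    d = np.where(np.arange(nx) < j, 1.0, -1.0)
    return np.tile(d, ny)

# toy gradient on a 6 x 6 cell grid: strong left/right disagreement
nx = ny = 6
grad_J = np.tile(np.where(np.arange(nx) < 3, 2.0, -2.0), ny)

indicators = {j: refinement_indicator(vertical_cut(nx, ny, j), grad_J)
              for j in range(1, nx)}
best = max(indicators, key=lambda j: abs(indicators[j]))
assert best == 3   # the cut through the middle column is flagged
```

A full scan would add horizontal, checkerboard, and oblique families of cut vectors $d$ in the same way; each indicator costs only one inner product.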

26 Coarsening Indicators
zones $Z_{11}$, $Z_{12}$, $Z_{13}$, $Z_2$:
- $(\bar q_1, \bar q_2) :=$ solution of $\min J\big((q_1, q_2)\big)$ also solves $\min J\big((q_{11}, q_{12}, q_{13}, q_2)\big)$ s.t. $q_{1i} - q_2 = B_i$, with $B_i := B := \bar q_1 - \bar q_2$
- $J\big((q^B_{11}, q^B_{12}, q^B_{13}, q^B_2)\big)\big|_{B_i = 0,\ B_j = B,\ j \ne i}$, the optimum if $Z_{1i}$ is aggregated with $Z_2$, satisfies $\approx J\big((\bar q_1, \bar q_2)\big) - \lambda_{B_i} B$, where $\lambda_{B_i} B$ is the coarsening indicator

27 Multilevel Refinement and Coarsening Algorithm [H. Ben Ameur, G. Chavent, J. Jaffré, 2002]
Minimize $J$ on the starting zonation.
Do until all refinement indicators = 0:
- Refinement: compute the refinement indicators $\lambda$; choose the cuts with the largest $\lambda$
- Coarsening: if the chosen cuts yield several sub-zones, evaluate the coarsening indicators and aggregate zones where possible
- Minimize $J$ for each of the retained zonations and keep those with the largest reduction in $J$

28 Numerical Tests
$6 \times 6$ mesh, 20 per cent noise: [figures: exact transmissivity; starting value; third iterate; fourth iterate]
mesh, 10 per cent noise: [figures: exact transmissivity; seventh iterate]

29 Abstract Setting for Refinement and Coarsening
- nonlinear ill-posed operator equation $F(q) = g$, $F: X \to Y$, $X$, $Y$ Hilbert spaces
- exact solution $q^*$ exists: $F(q^*) = g$
- noisy data $g^\delta$ with $\|g - g^\delta\| \le \delta$
- misfit minimization: $\min_{q \in X} \|F(q) - g^\delta\|^2$
- discretization: $X_N = \operatorname{span}\{\phi_1, \ldots, \phi_N\}$ s.t. $\overline{\bigcup_{N \in \mathbb{N}} X_N} = X$

33 Abstract Setting for Refinement and Coarsening
- discretization: $X_N = \operatorname{span}\{\phi_1, \ldots, \phi_N\}$ s.t. $\overline{\bigcup_{N \in \mathbb{N}} X_N} = X$
- misfit minimization: $\min_{q \in X_N} \|F(q) - g^\delta\|^2 = \min_{a \in \mathbb{R}^N} \mathcal{J}(a)$ with $\mathcal{J}(a) := \|F(\sum_{i=1}^N a_i \phi_i) - g^\delta\|^2$
- consider misfit minimization on some index set $I \subseteq \{1, 2, \ldots, N\}$: $\min_{a \in \mathbb{R}^{|I|}} \|F(\sum_{i \in I} a_i \phi_i) - g^\delta\|^2$ $(P_I)$, with solution $a^I$, $q^I$ and $a_i := 0$ for $i \notin I$ $\Rightarrow$ sparsity
- Find an index set $I$ and coefficients $a^I$ such that $\|F(\sum_{i \in I} a^I_i \phi_i) - g^\delta\|^2 = \min_{a \in \mathbb{R}^{|I|}} \|F(\sum_{i \in I} a_i \phi_i) - g^\delta\|^2 = \min_{q \in X_N} \|F(q) - g^\delta\|^2$

41 Refinement Indicators
- current index set $I_k$ with computed solution $a^{I_k}$ of $(P_{I_k})$
- for some index $i^* \notin I_k$ consider the constrained minimization problem $\min_{a \in \mathbb{R}^{|I_k|+1}} \mathcal{J}(a)$ s.t. $a_{i^*} = \beta$ $(P^\beta_{I_k, i^*})$, where $\mathcal{J}(a) := \|F(\sum_{i \in I_k \cup \{i^*\}} a_i \phi_i) - g^\delta\|^2$, with solution $a_\beta$ and $a_i := 0$ for $i \notin I_k \cup \{i^*\}$; note that $a_{\beta=0} = a^{I_k}$ solves $(P_{I_k})$
- Lagrange function: $L(a, \lambda) = \mathcal{J}(a) + \lambda(\beta - a_{i^*})$
- necessary optimality conditions: $0 = \frac{\partial L}{\partial a_{i^*}}(a_\beta, \lambda_\beta) = \frac{\partial \mathcal{J}}{\partial a_{i^*}}(a_\beta) - \lambda_\beta$ $(*)$
- Lagrange multipliers = sensitivities: $\frac{d}{d\beta}\mathcal{J}(a_\beta) = \frac{d}{d\beta}L(a_\beta, \lambda_\beta) = \lambda_\beta$
- Taylor expansion: $\mathcal{J}(a_\beta) \approx \mathcal{J}(a_0) + \frac{d}{d\beta}\mathcal{J}(a_0)\,\beta = \mathcal{J}(a^{I_k}) + \lambda_{\beta=0}\,\beta$
- $r_{i^*} := \lambda_{\beta=0} \stackrel{(*)}{=} \frac{\partial \mathcal{J}}{\partial a_{i^*}}(a^{I_k})$ ... refinement indicator

42 Coarsening Indicators
- current index set $\tilde I_k$ with computed solution $a^{\tilde I_k}$ of $(P_{\tilde I_k})$
- for some index $l^* \in \tilde I_k$ consider the constrained minimization problem $\min_{a \in \mathbb{R}^{|\tilde I_k|}} \mathcal{J}(a)$ s.t. $a_{l^*} = \gamma$ $(\tilde P^\gamma_{\tilde I_k, l^*})$, where $\mathcal{J}(a) := \|F(\sum_{i \in \tilde I_k} a_i \phi_i) - g^\delta\|^2$, with solution $a_\gamma$; note that $a_{\gamma^*} = a^{\tilde I_k}$ with $\gamma^* := a^{\tilde I_k}_{l^*}$ solves $(P_{\tilde I_k})$
- Lagrange function: $L(a, \mu) = \mathcal{J}(a) + \mu(\gamma - a_{l^*})$
- necessary optimality conditions: $0 = \frac{\partial L}{\partial a_{l^*}}(a_\gamma, \mu_\gamma) = \frac{\partial \mathcal{J}}{\partial a_{l^*}}(a_\gamma) - \mu_\gamma$ $(*)$
- Lagrange multipliers = sensitivities: $\frac{d}{d\gamma}\mathcal{J}(a_\gamma) = \frac{d}{d\gamma}L(a_\gamma, \mu_\gamma) = \mu_\gamma$
- Taylor expansion: $\mathcal{J}(a_{\gamma=0}) \approx \mathcal{J}(a_{\gamma^*}) - \frac{d}{d\gamma}\mathcal{J}(a_{\gamma^*})\,\gamma^* = \mathcal{J}(a^{\tilde I_k}) - \mu_{\gamma^*}\gamma^*$
- $c_{l^*} := \mu_{\gamma^*}\gamma^* \stackrel{(*)}{=} \frac{\partial \mathcal{J}}{\partial a_{l^*}}(a^{\tilde I_k})\,\gamma^*$ ... coarsening indicator
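
The key identity behind both indicators, namely that the Lagrange multiplier equals the partial derivative $\partial\mathcal{J}/\partial a_{i^*}$ at the current solution and that the resulting first order Taylor prediction is accurate, can be checked numerically. The sketch below uses a toy linear model $\mathcal{J}(a) = \|Ma - g\|^2$ (not the talk's PDE setting; all names are illustrative).

```python
import numpy as np

# Toy check: current index set I = {0}; refinement indicator for candidate
# i* = 1 is the gradient component of J at the current minimizer.
rng = np.random.default_rng(1)
M = rng.standard_normal((20, 3))
g = rng.standard_normal(20)

a = np.zeros(3)                       # minimize over a_0 only (a_1 = a_2 = 0)
a[0] = np.linalg.lstsq(M[:, :1], g, rcond=None)[0][0]

def J(a):
    r = M @ a - g
    return float(r @ r)

grad = 2 * M.T @ (M @ a - g)
r_star = grad[1]                      # refinement indicator for i* = 1

# constrained solve: minimize over a_0 with a_1 = beta held fixed
beta = 1e-3
a_beta = np.zeros(3)
a_beta[1] = beta
a_beta[0] = np.linalg.lstsq(M[:, :1], g - beta * M[:, 1], rcond=None)[0][0]

predicted = J(a) + r_star * beta      # first-order Taylor prediction
assert abs(J(a_beta) - predicted) < 1e-4   # agreement up to O(beta^2)
```

For small $\beta$ the predicted and the actually reoptimized misfit agree to second order, which is exactly what makes the indicators cheap: no extra optimization is needed to rank candidate indices.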

43 Multilevel Refinement and Coarsening Algorithm
$k = 0$: minimize $\mathcal{J}$ on the starting index set $I_0$ $\Rightarrow$ minimal value $\mathcal{J}_0$.
Do until all refinement indicators = 0:
- Refinement: compute the refinement indicators $r_i$, $i \notin I_k$; choose the index sets $I_k \cup \{i^*\}$ with the largest $r_{i^*}$; minimize $\mathcal{J}$ on each of these index sets and keep $\tilde I := I_k \cup \{i^*\}$ with the largest reduction in $\mathcal{J}$ $\Rightarrow$ $\tilde{\mathcal{J}}$
- Coarsening (only if $\tilde{\mathcal{J}} < \mathcal{J}_k$): evaluate the coarsening indicators $c_l$; choose the index sets $\tilde I_k \setminus \{l^*\}$ with the largest $c_{l^*}$; minimize $\mathcal{J}$ on each of these index sets and keep $\hat I := \tilde I_k \setminus \{l^*\}$ with the largest reduction in $\mathcal{J}$ $\Rightarrow$ $\hat{\mathcal{J}}$
- If $\hat{\mathcal{J}} \le \tilde{\mathcal{J}} + \rho(\mathcal{J}_k - \tilde{\mathcal{J}})$ (coarsening does not deteriorate the optimal value too much), set $I_{k+1} := \hat I$, $\mathcal{J}_{k+1} := \hat{\mathcal{J}}$ (refinement and coarsening); else set $I_{k+1} := \tilde I$, $\mathcal{J}_{k+1} := \tilde{\mathcal{J}}$ (refinement only)
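
A stripped-down sketch of the refinement part of this loop for a linear $F$ (coarsening and the multiple-candidate bookkeeping are omitted, data are noise-free, and all names are illustrative): repeatedly add the index with the largest refinement indicator $r_i = \partial\mathcal{J}/\partial a_i$ and reminimize on the enlarged index set.

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((40, 10))      # F(q) = M a in the coefficient basis
a_true = np.zeros(10)
a_true[[2, 7]] = [1.0, -2.0]           # sparse "exact" parameter
g = M @ a_true

I, a = [], np.zeros(10)
for _ in range(10):
    grad = 2 * M.T @ (M @ a - g)       # refinement indicators r_i
    grad[I] = 0.0                      # already active indices are skipped
    i_star = int(np.argmax(np.abs(grad)))
    if abs(grad[i_star]) < 1e-10:      # all indicators vanish: stop
        break
    I.append(i_star)
    a = np.zeros(10)                   # reminimize the misfit on I
    a[I] = np.linalg.lstsq(M[:, I], g, rcond=None)[0]

assert {2, 7} <= set(I)                # the true support is activated
assert np.linalg.norm(M @ a - g) < 1e-8
```

With vanishing indicators the gradient of the misfit is zero on the whole candidate space, which is the stopping criterion used in the convergence proof on the next slide.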

44 Convergence Proof
For fixed $N < \infty$, the algorithm stops after finitely many steps $k = K$; $q^K := \sum_{i \in I_K} a^K_i \phi_i$.
- $a^K$ solves $(P_{I_K})$ $\Rightarrow$ $0 = \nabla\mathcal{J}(a^K)$ $\Rightarrow$ $0 = \langle F(q^K) - g^\delta, F'(q^K)\phi_i \rangle$, $i \in I_K$
- refinement indicators vanish $\Rightarrow$ $0 = r_i = \langle F(q^K) - g^\delta, F'(q^K)\phi_i \rangle$, $i \notin I_K$
- $\Rightarrow$ $\operatorname{Proj}_{X_N} F'(q^K)^* (F(q^K) - g^\delta) = 0$
- Stability and convergence follow from (existing) results on regularization by discretization.

45 Convergence Results
- stability for fixed discretization $N < \infty$ (regularization by discretization)
- convergence with exact data as $N \to \infty$
- noisy data with noise level $\delta \ge \|g^\delta - F(q^*)\|$: a priori or a posteriori choice $N(\delta)$ $\Rightarrow$ convergence as $\delta \to 0$

46 Numerical Tests
example 1 (linear system): $Aq = g$ with $(Aq)_i = \int_0^{x_i} q(t)\,dt$ $\leftrightarrow$ numerical differentiation
Identify the vector $(q_i)_{i=1,\ldots,n}$ from the right-hand side $g$.

47-51 Example 1: Linear System
[figures for iterations $k = 0, 1, 2, 3$: exact coefficients $a$ and iterates $a_k$; index sets $I^+$ and $I_k$; refinement indicators $r_k$]
70 instead of 8000 flops!

52 Numerical Tests
example 2 (coefficients in a system of ODEs): $u' = Qu$, $u(0) = u_0$
Identify the matrix $Q = (q_{ij})_{i,j=1,\ldots,n}$ from measurements of $u(t_l)$, $l = 1, \ldots, T$.
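
For orientation, this identification problem can be set up as a least squares fit; the sketch below is not the talk's indicator-based algorithm, just a minimal illustration assuming densely sampled, noise-free measurements: simulate $u' = Qu$, then regress finite-difference derivatives on the states.

```python
import numpy as np

Q_true = np.array([[0.0, 1.0],
                   [-1.0, -0.1]])       # damped oscillator, illustrative
dt, T = 1e-3, 5000
u = np.zeros((T + 1, 2))
u[0] = [1.0, 0.0]
for l in range(T):                      # explicit Euler simulation
    u[l + 1] = u[l] + dt * (Q_true @ u[l])

du = (u[1:] - u[:-1]) / dt              # finite-difference derivatives
# solve du_l ~ Q u_l for Q in the least squares sense: U Q^T = dU
Q_est = np.linalg.lstsq(u[:-1], du, rcond=None)[0].T
assert np.max(np.abs(Q_est - Q_true)) < 1e-2
```

With noisy or sparsely sampled data this naive regression degrades quickly, which is where the sparsity-promoting refinement of the entries $q_{ij}$ shown on the following slides comes in.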

53-64 Example 2: Coefficients in System of ODEs
[figures for iterations $k = 0, \ldots, 10$: exact coefficients $a$ and iterates $a_k$; index sets $I^+$ and $I_k$; refinement indicators $r_k$ with $r_{\max} = 5.0,\ 2.5,\ 1.2,\ 1.5,\ 1.0,\ 0.6,\ 0.23,\ 0.21,\ 0.2,\ 0.22,\ 3\cdot 10^{-14}$]
... instead of ... flops!

65 Remarks for 1st Approach
- more systematic coarsening based on problem-specific properties (related dofs due to local closeness in the groundwater example)
- Lagrange multipliers = gradient components (but we do not carry out gradient steps!): possible improvement by taking Hessian information into account (Newton type)
- Greedy type approach (Burger & Hofinger 04, Denis & Lorenz & Trede 09)
- relation to active set strategies / semismooth Newton (Hintermüller & Ito & Kunisch 03)

66 2nd approach: goal oriented error estimators

69 Tikhonov Regularization and the Discrepancy Principle
- parameter identification as a nonlinear operator equation $F(q) = g$; $g^\delta \approx g$ ... given data, noise level $\delta \ge \|g^\delta - g\|$
- $F$ ... forward operator: $F(q) = (C \circ S)(q) = C(u)$ where $u = S(q)$ solves $A(q,u)(v) = (f,v)$ $\forall v \in V$ ... PDE in weak form
- Hilbert spaces $Q$, $V$, $G$: $q \in Q \stackrel{S}{\mapsto} u \in V \stackrel{C}{\mapsto} g \in G$
- Tikhonov regularization: minimize $j_\alpha(q) = \|F(q) - g^\delta\|^2 + \alpha\|q\|^2$ over $q \in Q$; equivalently: minimize $J_\alpha(q,u) = \|C(u) - g^\delta\|^2 + \alpha\|q\|^2$ over $q \in Q$, $u \in V$, under the constraint $A(q,u)(v) = (f,v)$ $\forall v \in V$

72 Tikhonov Regularization and the Discrepancy Principle
- parameter identification as a nonlinear operator equation $F(q) = g$; $g^\delta \approx g$ ... given data, noise level $\delta \ge \|g^\delta - g\|$
- $F$ ... forward operator: $F(q) = (C \circ S)(q) = C(u)$ where $u = S(q)$ solves $A(q,u)(v) = (f,v)$ $\forall v \in V$ ... PDE in weak form
- Tikhonov regularization: minimize $j_\alpha(q) = \|F(q) - g^\delta\|^2 + \alpha\|q\|^2$ over $q \in Q$
- choice of $\alpha$: discrepancy principle (fixed constant $\tau \ge 1$): $\|F(q^\delta_\alpha) - g^\delta\| = \tau\delta$
- convergence analysis: [Engl & Hanke & Neubauer 1996] and the references therein
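
To make the discrepancy principle concrete: for a linear forward operator the residual $\|F(q^\delta_\alpha) - g^\delta\|$ is monotone in $\alpha$, so the 1-d equation can be solved, e.g., by bisection. The sketch below reuses the illustrative integration operator (all parameter values are assumptions, and the talk instead uses a Newton iteration in $1/\alpha$, discussed on slides 83-84).

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
h = 1.0 / n
A = h * np.tril(np.ones((n, n)))        # illustrative linear forward operator
q_true = np.sin(2 * np.pi * (np.arange(1, n + 1) - 0.5) * h)
g = A @ q_true
delta = 1e-2 * np.linalg.norm(g)
e = rng.standard_normal(n)
g_d = g + delta * e / np.linalg.norm(e)

def residual(alpha):
    q = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ g_d)
    return np.linalg.norm(A @ q - g_d)

tau = 1.5
lo, hi = 1e-14, 1e2                      # residual(alpha) increases with alpha
for _ in range(200):                     # bisection on log(alpha)
    mid = np.sqrt(lo * hi)
    if residual(mid) < tau * delta:
        lo = mid
    else:
        hi = mid
alpha = np.sqrt(lo * hi)
assert abs(residual(alpha) - tau * delta) < 0.05 * tau * delta
```

Each trial value of $\alpha$ requires a full regularized solve; this is exactly the "repeated solution of the regularized problem" cost from the motivation slides that the goal oriented machinery below is designed to reduce.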

73 Goal Oriented Error Estimators in PDE Constrained Optimization (I)
[Becker & Kapp & Rannacher 00], [Becker & Rannacher 01], [Becker & Vexler 04, 05]
Minimize $J(q,u)$ over $q \in Q$, $u \in V$ under the constraint $A(q,u)(v) = f(v)$ $\forall v \in V$.
Lagrange functional: $L(q,u,z) = J(q,u) + f(z) - A(q,u)(z)$.
First order optimality conditions: $L'(q,u,z)[(p,v,y)] = 0$ $\forall (p,v,y) \in Q \times V \times V$ (1)
Discretization $Q_h \subseteq Q$, $V_h \subseteq V$ $\Rightarrow$ discretized version of (1).
Estimate the error due to discretization in some quantity of interest $I$: $I(q,u) - I(q_h,u_h) \approx \eta$

74 Goal Oriented Error Estimators (II)
Auxiliary functional: $M(q,u,z,p,v,y) = I(q,u) + L'(q,u,z)[(p,v,y)]$, $(q,u,z,p,v,y) \in (Q \times V \times V)^2$.
Consider the additional equations: $M'(x_h)(dx_h) = 0$ $\forall dx_h \in X_h = (Q_h \times V_h \times V_h)^2$.
Proposition (Becker & Vexler, J. Comp. Phys., 2005): $I(q,u) - I(q_h,u_h) = \tfrac12 M'(x_h)(x - \tilde x_h) + O(\|x - x_h\|^3)$ $\forall \tilde x_h \in X_h$; the first term defines the error estimator $\eta$.
$\eta$ is a sum of local contributions due to either $q$, $u$, $z$, $p$, $v$, or $y$: $\eta = \sum_{i=1}^{N_q} \eta^q_i + \sum_{i=1}^{N_u} \eta^u_i + \sum_{i=1}^{N_z} \eta^z_i + \sum_{i=1}^{N_p} \eta^p_i + \sum_{i=1}^{N_v} \eta^v_i + \sum_{i=1}^{N_y} \eta^y_i$
$\Rightarrow$ local refinement separately for $q \in Q_h$, $u \in V_h$, $z \in V_h$, ...

75 Choice of the Quantity of Interest?
- aim: recover the infinite dimensional convergence results for Tikhonov + discrepancy principle in the adaptively discretized setting
- challenge: carrying over the infinite dimensional results is ...
- ... straightforward if we can guarantee smallness of the operator norm $\|F_h - F\|$ $\Rightarrow$ huge number of quantities of interest!
- ... not too hard if we can guarantee smallness of $\|F_h(q^*) - F(q^*)\|$ $\Rightarrow$ large number of quantities of interest!
- ... but we only want to guarantee precision of one or two quantities of interest

76 Convergence Analysis: Choice of the Quantity of Interest
Proposition [Griesbaum & BK & Vexler 07], [BK & Vexler 09]: $\exists\,\alpha^* = \alpha^*(\delta, g^\delta)$ and $Q_h \times V_h \times V_h$ such that for $I(q,u) := \|C(u) - g^\delta\|^2_G = \|F(q) - g^\delta\|^2_G$:
$\underline\tau^2\delta^2 \le I(q^\delta_{h,\alpha^*}, u^\delta_{h,\alpha^*}) \le \bar\tau^2\delta^2$
(i) If additionally $|I(q^\delta_{h,\alpha^*}, u^\delta_{h,\alpha^*}) - I(q^\delta_{\alpha^*}, u^\delta_{\alpha^*})| \le c\,I(q^\delta_{h,\alpha^*}, u^\delta_{h,\alpha^*})$ for some sufficiently small constant $c > 0$, then $q^\delta_{\alpha^*} \to q^*$ as $\delta \to 0$. Optimal rates under source conditions (logarithmic/Hölder).

78 Convergence Analysis: Choice of the Quantity of Interest
Proposition [Griesbaum & BK & Vexler 07], [BK & Vexler 09]: $\exists\,\alpha^* = \alpha^*(\delta, g^\delta)$ and $Q_h \times V_h \times V_h$ such that for $I(q,u) := \|C(u) - g^\delta\|^2_G = \|F(q) - g^\delta\|^2_G$:
$\underline\tau^2\delta^2 \le I(q^\delta_{h,\alpha^*}, u^\delta_{h,\alpha^*}) \le \bar\tau^2\delta^2$
(ii) If additionally for $I_2(q,u) := J_\alpha(q,u)$: $|I_2(q^\delta_{h,\alpha^*}, u^\delta_{h,\alpha^*}) - I_2(q^\delta_{\alpha^*}, u^\delta_{\alpha^*})| \le \sigma\delta^2$ for some constant $\sigma > 0$ with $\underline\tau^2 > 1 + \sigma$, then $q^\delta_{h,\alpha^*} \to q^*$ as $\delta \to 0$.
see also [Neubauer & Scherzer 1990]
- $J$ as quantity of interest: [Becker & Kapp & Rannacher 00], [Becker & Rannacher 01], [Bangerth 08]
- $L$ as quantity of interest: [Beilina & Klibanov 09]

82 Idea of Proof
- the error bound $|J_\alpha(q^\delta_{h,\alpha}, u^\delta_{h,\alpha}) - J_\alpha(q^\delta_\alpha, u^\delta_\alpha)| \le \sigma\delta^2$ and the optimality of $(q^\delta_\alpha, u^\delta_\alpha)$ imply $J_\alpha(q^\delta_{h,\alpha}, u^\delta_{h,\alpha}) \le J_\alpha(q^\delta_\alpha, u^\delta_\alpha) + \sigma\delta^2 \le J_\alpha(q^*, u^*) + \sigma\delta^2$
- on the other hand, by the discrepancy principle $\underline\tau^2\delta^2 \le \|F(q^\delta_{h,\alpha}) - g^\delta\|^2 \le \bar\tau^2\delta^2$ and the definition of the cost functional $J_\alpha(q,u) = \|F(q) - g^\delta\|^2 + \alpha\|q\|^2$: $J_\alpha(q^\delta_{h,\alpha}, u^\delta_{h,\alpha}) \ge \underline\tau^2\delta^2 + \alpha\|q^\delta_{h,\alpha}\|^2$ and $J_\alpha(q^*, u^*) \le \delta^2 + \alpha\|q^*\|^2$
- combining these estimates and the choice $\underline\tau^2 > 1 + \sigma$ we get $\|q^\delta_{h,\alpha}\|^2 \le \|q^*\|^2 + \frac{1}{\alpha}(1 + \sigma - \underline\tau^2)\delta^2 \le \|q^*\|^2$
- the rest of the proof is standard
- (also works for stationary points $q^\delta_{h,\alpha}$ instead of global minimizers)

83 Efficient Computation of α: Choice of the Quantity of Interest
Choice of $\alpha$: discrepancy principle (fixed constant $\tau \ge 1$): $\|F(q^\delta_\alpha) - g^\delta\| = \tau\delta$
$\Rightarrow$ a 1-d nonlinear equation; solve by Newton's method

84 Efficient Computation of α: Choice of the Quantity of Interest
Proposition [Griesbaum & BK & Vexler 07], [BK & Vexler 09]: define $i(\frac1\alpha) := I(q,u) := \|C(u) - g^\delta\|^2_G = \|F(q) - g^\delta\|^2_G$, $I_2(q,u) := i'(\frac1\alpha)$, and let $\beta^*$ be the solution to $i(\beta^*) = \tau^2\delta^2$ (discrepancy principle). Consider the inexact Newton iteration
$\beta_{k+1} = \beta_k - \frac{i^k_h - \tau^2\delta^2}{(i')^k_h}$ for $k \ge k^*$ with $k^* = \min\{k \in \mathbb{N} : i^k_h - \tau^2\delta^2 \le 0\}$,
with $i^k_h$, $(i')^k_h$ satisfying $|i(\beta_k) - i^k_h| \le \varepsilon^k$, $|i'(\beta_k) - (i')^k_h| \le \tilde\varepsilon^k$, and $\varepsilon^k$, $\tilde\varepsilon^k$ sufficiently small (see above). Then $\beta_k$ satisfies an asymptotically optimal quadratic convergence estimate and $(\tau^2 - \tilde\tau^2)\delta^2 \le i(\beta_k) \le (\tau^2 + \tilde\tau^2)\delta^2$.
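
A sketch of such a Newton iteration in $\beta = 1/\alpha$ for a linear forward operator, with $i$ and $i'$ evaluated exactly instead of through the error-controlled estimates $i^k_h$, $(i')^k_h$ of the proposition (the whole setup is illustrative; the derivative comes from implicit differentiation of the Tikhonov normal equations).

```python
import numpy as np

rng = np.random.default_rng(4)
n = 80
h = 1.0 / n
A = h * np.tril(np.ones((n, n)))          # illustrative linear forward operator
q_true = np.sin(2 * np.pi * (np.arange(1, n + 1) - 0.5) * h)
g = A @ q_true
delta = 1e-2 * np.linalg.norm(g)
e = rng.standard_normal(n)
g_d = g + delta * e / np.linalg.norm(e)
tau = 1.5

def i_and_di(beta):
    """i(beta) = ||A q_{1/beta} - g_d||^2 and its derivative w.r.t. beta."""
    alpha = 1.0 / beta
    H = A.T @ A + alpha * np.eye(n)
    q = np.linalg.solve(H, A.T @ g_d)
    r = A @ q - g_d
    dq_dalpha = -np.linalg.solve(H, q)    # implicit differentiation of H q = A^T g_d
    di_dalpha = 2 * r @ (A @ dq_dalpha)
    return float(r @ r), float(di_dalpha * (-alpha ** 2))  # chain rule, alpha = 1/beta

beta = 1.0                                # start in the overregularized regime
for _ in range(50):
    val, dval = i_and_di(beta)
    if abs(val - (tau * delta) ** 2) < 1e-10:
        break
    beta -= (val - (tau * delta) ** 2) / dval

assert abs(i_and_di(beta)[0] - (tau * delta) ** 2) < 1e-8
```

Working in $\beta = 1/\alpha$ is what makes plain Newton safe here: in this variable the residual functional is monotonically decreasing and convex, so the iterates approach the discrepancy root monotonically from the overregularized side.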

85 Remarks
- computation of the error estimators for $i(\beta)$: just one more SQP type step
- evaluation of $i'(\beta)$: can be extracted from quantities computed for the error estimators for $i(\beta)$
- error estimators for $i'(\beta)$: stationary point of another auxiliary functional

86 Numerical Tests
$-\Delta u + q = 0$ on $\Omega = (0,1)^2$ + homogeneous Dirichlet BC.
Identify $q$ from distributed measurements of $u$ at points in $\Omega$.
(a), (b): $q^*(x,y) = \frac{1}{2\pi\sigma^2}\exp\big(-\frac{(x - \frac{5}{11})^2 + (y - \frac{5}{11})^2}{2\sigma^2}\big)$, with $\sigma = 0.05$ in (a) and $\sigma = 0.01$ in (b)
(c): $q^*(x,y)$ piecewise defined, with a jump across the line $x = \frac12$
Computations with Gascoigne and RoDoBo.

87 Numerical Tests
[figures: exact parameter $q^*$ for (a), (b), (c); exact state $u$ for (a), (b), (c)]

88 Numerical Results (I)
Computations with 1% random noise. CPU times:
          (a)     (b)     (c)
adaptive  ...s    ...s    7.976s
global    ...s    ...s    ...s
[figures: adaptive grids]

89 Numerical Results (II)
[figures: exact parameter $q^*$ for (a), (b), (c); computed parameter $q$ for (a), (b), (c)]

90 Numerical Results (III)
Convergence as $\delta \to 0$ for examples (a) and (b):
[tables for (a) and (b): columns $\delta$, $\|q^\delta_\alpha - q\|$, $1/\alpha$; rows for noise levels from 8% downwards]

93 Regularization by Projection and the Discrepancy Principle
- $P_l(F(q) - g^\delta) = 0$, $q - q_0 \in F'(q)^* G_l$; $P_l = \operatorname{Proj}_{G_l}$, $\dim(G_l) < \infty$, $G_1 \subseteq G_2 \subseteq G_3 \subseteq \ldots \subseteq G$
- $F$ ... forward operator: $F(q) = (C \circ S)(q) = C(u)$ where $u = S(q)$ solves $A(q,u)(v) = (f,v)$ $\forall v \in V$ ... PDE in weak form
- optimality system: $\Phi(q,u,z,\lambda)(dq,du,dz,d\lambda) = \langle C(u) - g^\delta, d\lambda\rangle_G + f(dz) - A(q,u)(dz) + \langle q - q_0, dq\rangle_Q - (A'_q(q,u)dq)(z) + (A'_u(q,u)du)(z) + \langle\lambda, C'(u)du\rangle_G = 0$ $\forall (dq,du,dz,d\lambda) \in Q \times V \times V \times G_l$
- choice of $l$: discrepancy principle (fixed constant $\tau \ge 1$): $l^* = \min\{l \in \mathbb{N} : \|F(q^\delta_l) - g^\delta\| \le \tau\delta\}$

94 Multilevel Iteration
[figure: grid hierarchy, levels $l = 1, \ldots, 4$]
$P_l(F(q) - g^\delta) = 0$
$l^* = \min\{l \in \mathbb{N} : \|F(q^\delta_l) - g^\delta\| \le \tau\delta\}$ ... stopping rule [BK 00, 07]

95 Convergence Analysis
Proposition [BK & Vexler 09]: $\exists\,l^* = l^*(\delta, g^\delta)$ and $Q_h \times V_h \times V_h$ such that for $I(q,u) := \|C(u) - g^\delta\|^2_G = \|F(q) - g^\delta\|^2_G$:
$\underline\tau^2\delta^2 \le I(q^\delta_{h,l^*}, u^\delta_{h,l^*}) \le \bar\tau^2\delta^2$
(i) If additionally $|I(q^\delta_{h,l^*}, u^\delta_{h,l^*}) - I(q^\delta_{l^*}, u^\delta_{l^*})| \le c\,I(q^\delta_{h,l^*}, u^\delta_{h,l^*})$ for some sufficiently small constant $c > 0$, then $q^\delta_{l^*} \to q^*$ as $\delta \to 0$. Optimal rates under source conditions (logarithmic/Hölder).

96 Summary and Outlook for 2nd Approach
- Goal-oriented error estimators allow one to control the error in some quantity of interest.
- A sufficiently small error in the residual norm i(1/α) and its derivative i'(1/α) yields fast convergence of Newton's method for choosing α (discrepancy principle).
- Coarse grids at the beginning of Newton's method save computational effort.
- A sufficiently small error in the residual norm and in the Tikhonov functional preserves convergence of Tikhonov regularization.
- Other regularization methods: regularization by discretization, i.e., the multilevel method [BK & Vexler 09].
- Other regularization parameter choice strategies: e.g., the balancing principle.
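The Newton-based choice of α in the second bullet can be sketched for a linear toy problem, where the residual i(β) = ‖F(q_{1/β}) − g^δ‖² with β = 1/α and its derivative i'(β) are available exactly through the SVD (in the talk they are only approximated, with goal-oriented error control). The operator, noise model, τ, and starting value β₀ below are illustrative assumptions.

```python
import numpy as np

def newton_alpha(A, g_delta, delta, tau=1.5, beta0=1.0, tol=1e-12, maxit=100):
    """Choose the Tikhonov parameter alpha from the discrepancy principle
    i(1/alpha) = tau^2 delta^2 by Newton's method in beta = 1/alpha.
    For a linear forward matrix A with SVD A = U diag(s) V^T and b = U^T g_delta,
    i(beta) = sum_j b_j^2 / (1 + beta s_j^2)^2 + ||g_delta||^2 - ||b||^2,
    so i and i' are exact and cheap to evaluate."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    b = U.T @ g_delta
    perp = max(np.linalg.norm(g_delta) ** 2 - np.linalg.norm(b) ** 2, 0.0)
    target = (tau * delta) ** 2
    beta = beta0  # beta0 should satisfy i(beta0) > target (left of the root)
    for _ in range(maxit):
        w = 1.0 + beta * s ** 2
        i_val = np.sum(b ** 2 / w ** 2) + perp           # i(beta)
        i_der = -2.0 * np.sum(s ** 2 * b ** 2 / w ** 3)  # i'(beta) < 0
        beta_new = beta - (i_val - target) / i_der       # Newton step
        if abs(beta_new - beta) <= tol * max(1.0, abs(beta)):
            beta = beta_new
            break
        beta = beta_new
    alpha = 1.0 / beta
    q_alpha = Vt.T @ (s * b / (s ** 2 + alpha))          # Tikhonov solution
    return alpha, q_alpha

# illustrative toy problem: discrete integration operator, smooth parameter
n = 64
A = np.tril(np.ones((n, n))) / n
x = np.linspace(0.0, 1.0, n)
q_true = np.sin(np.pi * x)
rng = np.random.default_rng(0)
noise = rng.standard_normal(n)
delta = 1e-3
g_delta = A @ q_true + delta * noise / np.linalg.norm(noise)

alpha, q_alpha = newton_alpha(A, g_delta, delta, tau=1.5)
```

Since i is convex and decreasing in β, Newton started left of the root converges monotonically; this is the reason for working in the variable 1/α rather than α.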

97 Thank you for your attention!


More information

Least Squares Approximation

Least Squares Approximation Chapter 6 Least Squares Approximation As we saw in Chapter 5 we can interpret radial basis function interpolation as a constrained optimization problem. We now take this point of view again, but start

More information

Numerical optimization

Numerical optimization Numerical optimization Lecture 4 Alexander & Michael Bronstein tosca.cs.technion.ac.il/book Numerical geometry of non-rigid shapes Stanford University, Winter 2009 2 Longest Slowest Shortest Minimal Maximal

More information

Optimization. Yuh-Jye Lee. March 21, Data Science and Machine Intelligence Lab National Chiao Tung University 1 / 29

Optimization. Yuh-Jye Lee. March 21, Data Science and Machine Intelligence Lab National Chiao Tung University 1 / 29 Optimization Yuh-Jye Lee Data Science and Machine Intelligence Lab National Chiao Tung University March 21, 2017 1 / 29 You Have Learned (Unconstrained) Optimization in Your High School Let f (x) = ax

More information

Algebraic Multigrid as Solvers and as Preconditioner

Algebraic Multigrid as Solvers and as Preconditioner Ò Algebraic Multigrid as Solvers and as Preconditioner Domenico Lahaye domenico.lahaye@cs.kuleuven.ac.be http://www.cs.kuleuven.ac.be/ domenico/ Department of Computer Science Katholieke Universiteit Leuven

More information

A Posteriori Estimates for Cost Functionals of Optimal Control Problems

A Posteriori Estimates for Cost Functionals of Optimal Control Problems A Posteriori Estimates for Cost Functionals of Optimal Control Problems Alexandra Gaevskaya, Ronald H.W. Hoppe,2 and Sergey Repin 3 Institute of Mathematics, Universität Augsburg, D-8659 Augsburg, Germany

More information

Accelerated Landweber iteration in Banach spaces. T. Hein, K.S. Kazimierski. Preprint Fakultät für Mathematik

Accelerated Landweber iteration in Banach spaces. T. Hein, K.S. Kazimierski. Preprint Fakultät für Mathematik Accelerated Landweber iteration in Banach spaces T. Hein, K.S. Kazimierski Preprint 2009-17 Fakultät für Mathematik Impressum: Herausgeber: Der Dekan der Fakultät für Mathematik an der Technischen Universität

More information

Acceleration of a Domain Decomposition Method for Advection-Diffusion Problems

Acceleration of a Domain Decomposition Method for Advection-Diffusion Problems Acceleration of a Domain Decomposition Method for Advection-Diffusion Problems Gert Lube 1, Tobias Knopp 2, and Gerd Rapin 2 1 University of Göttingen, Institute of Numerical and Applied Mathematics (http://www.num.math.uni-goettingen.de/lube/)

More information

Due Giorni di Algebra Lineare Numerica (2GALN) Febbraio 2016, Como. Iterative regularization in variable exponent Lebesgue spaces

Due Giorni di Algebra Lineare Numerica (2GALN) Febbraio 2016, Como. Iterative regularization in variable exponent Lebesgue spaces Due Giorni di Algebra Lineare Numerica (2GALN) 16 17 Febbraio 2016, Como Iterative regularization in variable exponent Lebesgue spaces Claudio Estatico 1 Joint work with: Brigida Bonino 1, Fabio Di Benedetto

More information

Regularization for a Common Solution of a System of Ill-Posed Equations Involving Linear Bounded Mappings 1

Regularization for a Common Solution of a System of Ill-Posed Equations Involving Linear Bounded Mappings 1 Applied Mathematical Sciences, Vol. 5, 2011, no. 76, 3781-3788 Regularization for a Common Solution of a System of Ill-Posed Equations Involving Linear Bounded Mappings 1 Nguyen Buong and Nguyen Dinh Dung

More information

A spacetime DPG method for acoustic waves

A spacetime DPG method for acoustic waves A spacetime DPG method for acoustic waves Rainer Schneckenleitner JKU Linz January 30, 2018 ( JKU Linz ) Spacetime DPG method January 30, 2018 1 / 41 Overview 1 The transient wave problem 2 The broken

More information