A Double Regularization Approach for Inverse Problems with Noisy Data and Inexact Operator


A Double Regularization Approach for Inverse Problems with Noisy Data and Inexact Operator
Ismael Rodrigo Bleyer, Prof. Dr. Ronny Ramlau
Johannes Kepler Universität Linz
Florianópolis, September
Doctoral Program "Computational Mathematics: Numerical Analysis and Symbolic Computation", supported by
Bleyer, Ramlau (JKU Linz)

Overview
- Introduction
- Proposed method: DBL-RTLS
- Computational aspects
- Numerical illustration
- Outline and future work


Introduction: inverse problems
Inverse problems are concerned with determining causes for a desired or an observed effect [Engl, Hanke, and Neubauer, 2000].
Consider a linear operator equation Ax = y.
Inverse problems most often do not fulfill Hadamard's postulate [1902] of well-posedness (existence, uniqueness and stability).
Computational issue: the observed effect carries measurement errors or perturbations caused by noise.

1st case: noisy data
Solve Ax = y₀ from the measurement y_δ with ‖y₀ − y_δ‖ ≤ δ. We need to apply a regularization technique:
    min_x ‖Ax − y_δ‖² + α‖Lx‖²   (Tikhonov regularization)
- fidelity term (based on least squares);
- regularization parameter α;
- quadratic stabilization term.
[Tikhonov, 1963; Phillips, 1962]
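For a discretized problem the Tikhonov minimizer can be computed from the regularized normal equations; a minimal numpy sketch (my own illustration of the formula above, not code from the talk):

```python
import numpy as np

def tikhonov(A, y_delta, L, alpha):
    """Minimize ||A x - y_delta||^2 + alpha * ||L x||^2.

    The minimizer solves the regularized normal equations
        (A^T A + alpha * L^T L) x = A^T y_delta.
    """
    return np.linalg.solve(A.T @ A + alpha * (L.T @ L), A.T @ y_delta)
```

As α → 0 the solution approaches the least-squares solution of Ax = y_δ; a larger α trades data fit for stability.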

1st case: noisy data
Solve Ax = y₀ from the measurement y_δ with ‖y₀ − y_δ‖ ≤ δ. We need to apply a regularization technique:
    min_x ‖Ax − y_δ‖² + α R(x)   (Tikhonov-type regularization)
- fidelity term (based on least squares);
- regularization parameter α;
- R a proper, convex and weakly lower semicontinuous functional.
[Burger and Osher, 2004; Resmerita, 2005]

Subgradient
The Fenchel subdifferential of a functional R : U → [0, +∞] at ū ∈ U is the set
    ∂_F R(ū) = {ξ ∈ U* : R(v) − R(ū) ≥ ⟨ξ, v − ū⟩ for all v ∈ U}.
Introduced around 1960 by Moreau and Rockafellar, and extended by Clarke.
Optimality condition: if ū minimizes R, then 0 ∈ ∂_F R(ū).

Example
Consider the function R(u) = |u|.
Figure: the function (left) and its subdifferential (right).
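For this example the subdifferential can be written down explicitly; a small sketch of my own (returning each subdifferential as an interval [lo, hi]):

```python
def subdifferential_abs(u):
    """Subdifferential of R(u) = |u|, returned as an interval (lo, hi).

    It is the singleton {sign(u)} for u != 0, and the whole interval
    [-1, 1] at u = 0, where |u| is not differentiable.
    """
    if u > 0:
        return (1.0, 1.0)
    if u < 0:
        return (-1.0, -1.0)
    return (-1.0, 1.0)
```

The optimality condition is visible here: 0 lies in [-1, 1] = ∂R(0), so u = 0 minimizes |u|.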

2nd case: inexact operator and noisy data
Solve A₀x = y₀ under the assumptions
(i) noisy data: ‖y₀ − y_δ‖ ≤ δ;
(ii) inexact operator: ‖A₀ − A_ε‖ ≤ ε.
What has been done so far?
Linear case, based on TLS [Golub and Van Loan, 1980]:
- R-TLS: regularized TLS [Golub et al., 1999];
- D-RTLS: dual regularized TLS [Lu et al., 2007].
Nonlinear case: no publication (?)
LS (given y_δ and A₀): min_y ‖y − y_δ‖² subject to y ∈ R(A₀)
TLS (given y_δ and A_ε): min_{A,y} ‖[A, y] − [A_ε, y_δ]‖_F subject to y ∈ R(A)


Illustration
Solve the 1D problem am = b: find the slope m.
Given noisy data b_δ, a_ε (red), compare the LS solution (blue) with the TLS solution (green).
Figure: TLS vs. LS fit of the noisy data against the true data; true slope arctan(1) = 45°. [Van Huffel and Vandewalle, 1991]
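The two fits in this 1D example can be computed in a few lines; a sketch of my own (LS perturbs only b, TLS perturbs both a and b and is obtained from the smallest right singular vector of [a, b]):

```python
import numpy as np

def ls_slope(a, b):
    """Least-squares slope: minimizes ||a*m - b|| over m."""
    return (a @ b) / (a @ a)

def tls_slope(a, b):
    """Total-least-squares slope: minimizes ||[a_hat, b_hat] - [a, b]||_F
    subject to b_hat = a_hat * m, i.e. both columns may be perturbed."""
    M = np.column_stack([a, b])
    _, _, Vt = np.linalg.svd(M)
    v = Vt[-1]              # right singular vector of the smallest singular value
    return -v[0] / v[1]
```

On exact data both estimators recover the true slope; they differ once a carries noise, which is the point of the figure.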

R-TLS
The R-TLS method [Golub, Hansen, and O'Leary, 1999]:
    min ‖A − A_ε‖² + ‖y − y_δ‖²  subject to  Ax = y, ‖Lx‖² ≤ M.
If the inequality constraint is active, then
    (A_εᵀ A_ε + α Lᵀ L + β I) x̂ = A_εᵀ y_δ  and  ‖L x̂‖² = M,
with α = μ(1 + ‖x̂‖²) and β = −‖A_ε x̂ − y_δ‖² / (1 + ‖x̂‖²), where μ > 0 is the Lagrange multiplier.
Difficulty: requires a reliable bound M for the norm ‖Lx‖².
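For fixed parameters α and β the R-TLS condition above is just a shifted normal equation; a hedged numpy sketch (my own helper, not the authors' code; in practice α and β depend on x̂ and are updated iteratively):

```python
import numpy as np

def rtls_step(A, y, L, alpha, beta):
    """Solve the R-TLS first-order condition for fixed alpha, beta:
        (A^T A + alpha * L^T L + beta * I) x = A^T y.
    """
    n = A.shape[1]
    lhs = A.T @ A + alpha * (L.T @ L) + beta * np.eye(n)
    return np.linalg.solve(lhs, A.T @ y)
```

A simple fixed-point scheme would alternate this solve with the update β = −‖A x̂ − y‖²/(1 + ‖x̂‖²) until the bound ‖L x̂‖² = M is matched.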


Consider the operator equation B(k, f) = g₀, where B is a bilinear (hence nonlinear) operator
    B : U × V → H,  (k, f) ↦ B(k, f),
and B is characterized by a function k₀.
- K = B(k̄, ·) is a compact linear operator for a fixed k̄ ∈ U;
- F = B(·, f̄) is a linear operator for a fixed f̄ ∈ V;
- ‖B(k₀, ·)‖_{V→H} ≤ C‖k₀‖_U and ‖B(k, f)‖_H ≤ C‖k‖_U ‖f‖_V.
Example: B(k, f)(s) := ∫_Ω k(s, t) f(t) dt.


We want to solve B(k₀, f) = g₀ from the measurements k_ε and g_δ with
(i) noisy data: ‖g₀ − g_δ‖_H ≤ δ;
(ii) inexact operator: ‖k₀ − k_ε‖_U ≤ ε.
We introduce the DBL-RTLS method:
    min_{k,f} J(k, f) := T(k, f; k_ε, g_δ) + R(k, f),
where T measures accuracy (closeness/discrepancy) and R promotes stability.

DBL-RTLS
    min_{k,f} J(k, f) := T(k, f; k_ε, g_δ) + R(k, f)   (1)
where
    T(k, f; k_ε, g_δ) = ½‖B(k, f) − g_δ‖²_H + (γ/2)‖k − k_ε‖²_U,
    R(k, f) = (α/2)‖Lf‖²_V + β R(k).
- T is based on the TLS method and measures the discrepancy in both data and operator;
- L : V → V is a linear bounded operator;
- α, β are the regularization parameters and γ is a scaling parameter (double regularization [You and Kaveh, 1996]);
- R : U → [0, +∞] is a proper, convex and weakly lower semicontinuous functional.
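For a discretized problem the functional J can be evaluated directly; a hedged sketch (the map B, the choice R(k) = ‖k‖₁ and L = identity are illustrative assumptions, not the authors' exact setup):

```python
import numpy as np

def dbl_rtls_objective(k, f, k_eps, g_delta, B, alpha, beta, gamma):
    """Evaluate J(k, f) = T(k, f; k_eps, g_delta) + R(k, f) for a
    discretized bilinear map B(k, f) = B(k) @ f.

    Illustrative choices: L = identity, R(k) = ||k||_1.
    """
    fidelity = 0.5 * np.linalg.norm(B(k) @ f - g_delta) ** 2
    operator = 0.5 * gamma * np.linalg.norm(k - k_eps) ** 2
    regular = 0.5 * alpha * np.linalg.norm(f) ** 2 + beta * np.sum(np.abs(k))
    return fidelity + operator + regular
```

The three terms mirror the definition above: data discrepancy, operator discrepancy, and the double regularizer in f and k.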

Main theoretical results
Assumption:
(A1) B is strongly continuous, i.e., if (kₙ, fₙ) ⇀ (k̄, f̄) then B(kₙ, fₙ) → B(k̄, f̄).
Proposition. Let J be the functional defined in (1) and L be a bounded and positive operator. Then J is a positive, weakly lower semicontinuous and coercive functional.
Theorem (existence). Let the assumptions of the Proposition hold. Then J has a global minimizer.

Theorem (stability). Assume
- δ_j → δ and ε_j → ε,
- g_{δ_j} → g_δ and k_{ε_j} → k_ε,
- α, β > 0 fixed,
- (k_j, f_j) is a minimizer of J with data g_{δ_j} and k_{ε_j}.
Then (k_j, f_j) has a convergent subsequence (k_{j_m}, f_{j_m}) → (k̄, f̄), where (k̄, f̄) is a minimizer of J with g_δ, k_ε, α and β.


Consider the convex functional
    Φ(k, f) := ½‖Lf‖² + η R(k),
where the parameter η accounts for the different scaling of f and k. For the convergence results we need:
Definition. We call (k†, f†) a Φ-minimizing solution if
    (k†, f†) = argmin_{(k,f)} {Φ(k, f) : B(k, f) = g₀}.

Theorem (convergence). Assume
- δ_j → 0 and ε_j → 0,
- ‖g_{δ_j} − g₀‖ ≤ δ_j and ‖k_{ε_j} − k₀‖ ≤ ε_j,
- α_j = α(ε_j, δ_j) and β_j = β(ε_j, δ_j) are chosen such that α_j → 0, β_j → 0,
    lim_j (δ_j² + γ ε_j²)/α_j = 0  and  lim_j β_j/α_j = η,
- (k_j, f_j) is a minimizer of J with g_{δ_j}, k_{ε_j}, α_j and β_j.
Then (k_j, f_j) has a convergent subsequence (k_{j_m}, f_{j_m}) → (k†, f†), where (k†, f†) is a Φ-minimizing solution.



Optimality condition
If the pair (k̄, f̄) is a minimizer of J(k, f), then (0, 0) ∈ ∂J(k̄, f̄).
Theorem. Let J : U × V → R be a nonconvex functional, J(u, v) = φ(u) + Q(u, v) + ψ(v), where Q is a nonlinear differentiable term and φ, ψ are lower semicontinuous convex functions. Then
    ∂J(u, v) = {∂φ(u) + D_u Q(u, v)} × {∂ψ(v) + D_v Q(u, v)} = {∂_u J(u, v)} × {∂_v J(u, v)}.

Remark: J is difficult to minimize with respect to (k, f) jointly. Since J is bilinear and biconvex (linear and convex in each variable separately), we apply an alternating minimization (AM) method.
Alternating minimization algorithm
Require: g_δ, k_ε, L, γ, α, β
1: n = 0
2: repeat
3:   f_{n+1} ∈ argmin_f J(k_n, f)
4:   k_{n+1} ∈ argmin_k J(k, f_{n+1})
5: until convergence
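The AM loop above can be sketched for a discrete convolution model B(k, f) = k*f; a hedged illustration of my own (I replace the authors' sparsity term by a quadratic penalty ½β‖k‖² so that each half-step is a plain linear solve):

```python
import numpy as np

def conv_mat(v):
    """Circulant matrix so that conv_mat(k) @ f is the circular convolution k*f."""
    n = len(v)
    return np.column_stack([np.roll(v, i) for i in range(n)])

def am_dbl_rtls(k_eps, g_delta, alpha, beta, gamma, iters=50):
    """Alternate between the two quadratic subproblems: since B(k, f) = k*f is
    linear in each variable, each half-step reduces to a regularized least
    squares solve."""
    k = k_eps.copy()
    f = np.zeros_like(g_delta)
    n = len(g_delta)
    for _ in range(iters):
        K = conv_mat(k)   # f-step: min_f 0.5||K f - g||^2 + 0.5*alpha*||f||^2
        f = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ g_delta)
        F = conv_mat(f)   # k-step: min_k 0.5||F k - g||^2 + 0.5*gamma*||k - k_eps||^2 + 0.5*beta*||k||^2
        k = np.linalg.solve(F.T @ F + (gamma + beta) * np.eye(n),
                            F.T @ g_delta + gamma * k_eps)
    return k, f
```

Each half-step can only decrease J, which mirrors the monotonicity proposition on the next slide.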


Proposition. The sequence of values J(kₙ, fₙ) generated by the AM algorithm is non-increasing:
    J(k_{n+1}, f_{n+1}) ≤ J(kₙ, f_{n+1}) ≤ J(kₙ, fₙ).
Assumptions:
(A1) B is strongly continuous, i.e., if (kₙ, fₙ) ⇀ (k̄, f̄) then B(kₙ, fₙ) → B(k̄, f̄);
(A2) B is weakly sequentially closed, i.e., if (kₙ, fₙ) ⇀ (k̄, f̄) and B(kₙ, fₙ) ⇀ g, then B(k̄, f̄) = g;
(A3) the adjoint of B is strongly continuous, i.e., if (kₙ, fₙ) ⇀ (k̄, f̄) then B*(kₙ, fₙ) z → B*(k̄, f̄) z for all z ∈ D(B*).


Theorem. Given regularization parameters α ≥ ᾱ > 0 and β, run the AM algorithm. The sequence {(k_{n+1}, f_{n+1})} has a weakly convergent subsequence (k_{n_j+1}, f_{n_j+1}) ⇀ (k̄, f̄), and the limit satisfies
    J(k̄, f̄) ≤ J(k̄, f)  and  J(k̄, f̄) ≤ J(k, f̄)  for all f ∈ V and all k ∈ U.
Proposition. Let {(kₙ, fₙ)} be a weakly convergent sequence generated by the AM algorithm, with kₙ ⇀ k̄ and fₙ ⇀ f̄. Then there exist a subsequence {k_{n_j}} with k_{n_j} → k̄ and elements ξ_k^{n_j} ∈ ∂_k J(k_{n_j}, f_{n_j}) such that ξ_k^{n_j} → 0.


Proposition. Let {n} be a subsequence of N such that the sequence {(kₙ, fₙ)} generated by the AM algorithm satisfies kₙ ⇀ k̄ and fₙ ⇀ f̄. Then f_{n_j} → f̄ and there exist ξ_f^{n_j} ∈ ∂_f J(k_{n_j}, f_{n_j}) such that ξ_f^{n_j} → 0.
Remark: the graph of the subdifferential mapping is strong-weak closed, i.e., if vₙ → v̄ and ξₙ ⇀ ξ̄ with ξₙ ∈ ∂φ(vₙ), then ξ̄ ∈ ∂φ(v̄).
Theorem. Let {(kₙ, fₙ)} be the sequence generated by the AM algorithm. Then there exists a subsequence converging to a critical point of J, i.e., (0, 0) ∈ ∂J(k̄, f̄).



First numerical result
Convolution in 1D: ∫_Ω k(s − t) f(t) dt = g(s)
- characteristic-function kernel and hat function;
- domain Ω = [0, 1], discretization with N = 2048 points;
- R(k) = ‖k‖_{w,p} with p = 1, using a Haar wavelet basis {φ_λ} with J = 10 levels;
- initial guess k₀ = k_ε, τ = 1.0;
- 1st experiment: 10% relative error in kernel and data;
- 2nd experiment: 0.1% relative error in kernel and data.
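The test problem for the first experiment can be set up in a few lines; an illustrative reconstruction (the exact kernel support, hat width and noise model used in the talk are assumptions of mine):

```python
import numpy as np

N = 2048
t = np.linspace(0.0, 1.0, N, endpoint=False)

# Characteristic-function kernel and hat function on [0, 1] (assumed shapes).
kernel = ((t >= 0.4) & (t <= 0.6)).astype(float)
hat = np.maximum(0.0, 1.0 - np.abs(4.0 * (t - 0.5)))

# Periodic convolution on [0, 1] via FFT, with grid spacing 1/N.
g = np.real(np.fft.ifft(np.fft.fft(kernel) * np.fft.fft(hat))) / N

# First experiment: 10% relative error on both the data and the kernel.
rng = np.random.default_rng(0)
e1 = rng.standard_normal(N)
e2 = rng.standard_normal(N)
g_delta = g + 0.10 * np.linalg.norm(g) * e1 / np.linalg.norm(e1)
k_eps = kernel + 0.10 * np.linalg.norm(kernel) * e2 / np.linalg.norm(e2)
```

Normalizing the noise vectors makes the relative errors ‖g_δ − g‖/‖g‖ and ‖k_ε − k‖/‖k‖ exactly 10%, matching the stated noise level.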

Figure: first experiment (10% noise) — kernel (noisy vs. reconstructed), function (true vs. reconstructed), and data (noisy vs. computed).

Figure: second experiment (0.1% noise) — kernel (noisy vs. reconstructed), function (true vs. reconstructed), and data (noisy vs. computed).


Outline and future work
So far:
- introduced a method for a nonlinear equation (bilinear operator) with noisy data and inexact operator;
- proved existence, stability and convergence;
- studied source conditions and convergence rates (for k and f);
- suggested an iterative implementation;
- proved convergence of the AM algorithm to a critical point.
Further work:
- study variational inequalities;
- how to choose the best regularization parameters? a priori and a posteriori choices;
- implementations and numerical experiments (2D).


References
M. Burger and S. Osher. Convergence rates of convex variational regularization. Inverse Problems, 20(5):1411-1421, 2004.
H. W. Engl, M. Hanke, and A. Neubauer. Regularization of Inverse Problems. Kluwer Academic Publishers, Dordrecht, 2000.
G. H. Golub and C. F. Van Loan. An analysis of the total least squares problem. SIAM J. Numer. Anal., 17(6), 1980.
G. H. Golub, P. C. Hansen, and D. P. O'Leary. Tikhonov regularization and total least squares. SIAM J. Matrix Anal. Appl., 21(1):185-194, 1999.
S. Lu, S. V. Pereverzev, and U. Tautenhahn. Regularized total least squares: computational aspects and error bounds. Technical Report 30, RICAM, Linz, Austria, 2007.
D. L. Phillips. A technique for the numerical solution of certain integral equations of the first kind. J. Assoc. Comput. Mach., 9:84-97, 1962.
E. Resmerita. Regularization of ill-posed problems in Banach spaces: convergence rates. Inverse Problems, 21(4), 2005.
A. N. Tikhonov. On the solution of incorrectly put problems and the regularisation method. In Outlines Joint Sympos. Partial Differential Equations (Novosibirsk, 1963). Acad. Sci. USSR Siberian Branch, Moscow, 1963.
S. Van Huffel and J. Vandewalle. The Total Least Squares Problem, volume 9 of Frontiers in Applied Mathematics. SIAM, Philadelphia, PA, 1991.
Y.-L. You and M. Kaveh. A regularization approach to joint blur identification and image restoration. IEEE Transactions on Image Processing, 5(3), 1996.

Thank you for your kind attention!


More information

A NOTE ON THE NONLINEAR LANDWEBER ITERATION. Dedicated to Heinz W. Engl on the occasion of his 60th birthday

A NOTE ON THE NONLINEAR LANDWEBER ITERATION. Dedicated to Heinz W. Engl on the occasion of his 60th birthday A NOTE ON THE NONLINEAR LANDWEBER ITERATION Martin Hanke Dedicated to Heinz W. Engl on the occasion of his 60th birthday Abstract. We reconsider the Landweber iteration for nonlinear ill-posed problems.

More information

Discrete ill posed problems

Discrete ill posed problems Discrete ill posed problems Gérard MEURANT October, 2008 1 Introduction to ill posed problems 2 Tikhonov regularization 3 The L curve criterion 4 Generalized cross validation 5 Comparisons of methods Introduction

More information

An Iteratively Regularized Projection Method with Quadratic Convergence for Nonlinear Ill-posed Problems

An Iteratively Regularized Projection Method with Quadratic Convergence for Nonlinear Ill-posed Problems Int. Journal of Math. Analysis, Vol. 4, 1, no. 45, 11-8 An Iteratively Regularized Projection Method with Quadratic Convergence for Nonlinear Ill-posed Problems Santhosh George Department of Mathematical

More information

Iterative Regularization Methods for Inverse Problems: Lecture 3

Iterative Regularization Methods for Inverse Problems: Lecture 3 Iterative Regularization Methods for Inverse Problems: Lecture 3 Thorsten Hohage Institut für Numerische und Angewandte Mathematik Georg-August Universität Göttingen Madrid, April 11, 2011 Outline 1 regularization

More information

WEAK CONVERGENCE OF RESOLVENTS OF MAXIMAL MONOTONE OPERATORS AND MOSCO CONVERGENCE

WEAK CONVERGENCE OF RESOLVENTS OF MAXIMAL MONOTONE OPERATORS AND MOSCO CONVERGENCE Fixed Point Theory, Volume 6, No. 1, 2005, 59-69 http://www.math.ubbcluj.ro/ nodeacj/sfptcj.htm WEAK CONVERGENCE OF RESOLVENTS OF MAXIMAL MONOTONE OPERATORS AND MOSCO CONVERGENCE YASUNORI KIMURA Department

More information

Optimization with nonnegativity constraints

Optimization with nonnegativity constraints Optimization with nonnegativity constraints Arie Verhoeven averhoev@win.tue.nl CASA Seminar, May 30, 2007 Seminar: Inverse problems 1 Introduction Yves van Gennip February 21 2 Regularization strategies

More information

On nonstationary preconditioned iterative regularization methods for image deblurring

On nonstationary preconditioned iterative regularization methods for image deblurring On nonstationary preconditioned iterative regularization methods for image deblurring Alessandro Buccini joint work with Prof. Marco Donatelli University of Insubria Department of Science and High Technology

More information

A NEW L-CURVE FOR ILL-POSED PROBLEMS. Dedicated to Claude Brezinski.

A NEW L-CURVE FOR ILL-POSED PROBLEMS. Dedicated to Claude Brezinski. A NEW L-CURVE FOR ILL-POSED PROBLEMS LOTHAR REICHEL AND HASSANE SADOK Dedicated to Claude Brezinski. Abstract. The truncated singular value decomposition is a popular method for the solution of linear

More information

Generalized Tikhonov regularization

Generalized Tikhonov regularization Fakultät für Mathematik Professur Analysis Inverse Probleme Dissertation zur Erlangung des akademischen Grades Doctor rerum naturalium Dr. rer. nat.) Generalized Tikhonov regularization Basic theory and

More information

Modifying A Linear Support Vector Machine for Microarray Data Classification

Modifying A Linear Support Vector Machine for Microarray Data Classification Modifying A Linear Support Vector Machine for Microarray Data Classification Prof. Rosemary A. Renaut Dr. Hongbin Guo & Wang Juh Chen Department of Mathematics and Statistics, Arizona State University

More information

Greedy Tikhonov regularization for large linear ill-posed problems

Greedy Tikhonov regularization for large linear ill-posed problems International Journal of Computer Mathematics Vol. 00, No. 00, Month 200x, 1 20 Greedy Tikhonov regularization for large linear ill-posed problems L. Reichel, H. Sadok, and A. Shyshkov (Received 00 Month

More information

Robust error estimates for regularization and discretization of bang-bang control problems

Robust error estimates for regularization and discretization of bang-bang control problems Robust error estimates for regularization and discretization of bang-bang control problems Daniel Wachsmuth September 2, 205 Abstract We investigate the simultaneous regularization and discretization of

More information

Nesterov s Accelerated Gradient Method for Nonlinear Ill-Posed Problems with a Locally Convex Residual Functional

Nesterov s Accelerated Gradient Method for Nonlinear Ill-Posed Problems with a Locally Convex Residual Functional arxiv:183.1757v1 [math.na] 5 Mar 18 Nesterov s Accelerated Gradient Method for Nonlinear Ill-Posed Problems with a Locally Convex Residual Functional Simon Hubmer, Ronny Ramlau March 6, 18 Abstract In

More information

Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping.

Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping. Minimization Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping. 1 Minimization A Topological Result. Let S be a topological

More information

444/,/,/,A.G.Ramm, On a new notion of regularizer, J.Phys A, 36, (2003),

444/,/,/,A.G.Ramm, On a new notion of regularizer, J.Phys A, 36, (2003), 444/,/,/,A.G.Ramm, On a new notion of regularizer, J.Phys A, 36, (2003), 2191-2195 1 On a new notion of regularizer A.G. Ramm LMA/CNRS, 31 Chemin Joseph Aiguier, Marseille 13402, France and Mathematics

More information

Nonlinear Functional Analysis and its Applications

Nonlinear Functional Analysis and its Applications Eberhard Zeidler Nonlinear Functional Analysis and its Applications III: Variational Methods and Optimization Translated by Leo F. Boron With 111 Illustrations Ш Springer-Verlag New York Berlin Heidelberg

More information

Parameter Identification in Partial Differential Equations

Parameter Identification in Partial Differential Equations Parameter Identification in Partial Differential Equations Differentiation of data Not strictly a parameter identification problem, but good motivation. Appears often as a subproblem. Given noisy observation

More information

WEAK LOWER SEMI-CONTINUITY OF THE OPTIMAL VALUE FUNCTION AND APPLICATIONS TO WORST-CASE ROBUST OPTIMAL CONTROL PROBLEMS

WEAK LOWER SEMI-CONTINUITY OF THE OPTIMAL VALUE FUNCTION AND APPLICATIONS TO WORST-CASE ROBUST OPTIMAL CONTROL PROBLEMS WEAK LOWER SEMI-CONTINUITY OF THE OPTIMAL VALUE FUNCTION AND APPLICATIONS TO WORST-CASE ROBUST OPTIMAL CONTROL PROBLEMS ROLAND HERZOG AND FRANK SCHMIDT Abstract. Sufficient conditions ensuring weak lower

More information

Parameter Identification in Systems Biology: Solving Ill-posed Inverse Problems using Regularization

Parameter Identification in Systems Biology: Solving Ill-posed Inverse Problems using Regularization www.oeaw.ac.at Parameter Identification in Systems Biology: Solving Ill-posed Inverse Problems using Regularization S. Müller, J. Lu, P. Kuegler, H.W. Engl RICAM-Report 28-25 www.ricam.oeaw.ac.at Parameter

More information

Variable Metric Forward-Backward Algorithm

Variable Metric Forward-Backward Algorithm Variable Metric Forward-Backward Algorithm 1/37 Variable Metric Forward-Backward Algorithm for minimizing the sum of a differentiable function and a convex function E. Chouzenoux in collaboration with

More information

Comparison of A-Posteriori Parameter Choice Rules for Linear Discrete Ill-Posed Problems

Comparison of A-Posteriori Parameter Choice Rules for Linear Discrete Ill-Posed Problems Comparison of A-Posteriori Parameter Choice Rules for Linear Discrete Ill-Posed Problems Alessandro Buccini a, Yonggi Park a, Lothar Reichel a a Department of Mathematical Sciences, Kent State University,

More information

ON A HYBRID PROXIMAL POINT ALGORITHM IN BANACH SPACES

ON A HYBRID PROXIMAL POINT ALGORITHM IN BANACH SPACES U.P.B. Sci. Bull., Series A, Vol. 80, Iss. 3, 2018 ISSN 1223-7027 ON A HYBRID PROXIMAL POINT ALGORITHM IN BANACH SPACES Vahid Dadashi 1 In this paper, we introduce a hybrid projection algorithm for a countable

More information

Sparse Recovery in Inverse Problems

Sparse Recovery in Inverse Problems Radon Series Comp. Appl. Math XX, 1 63 de Gruyter 20YY Sparse Recovery in Inverse Problems Ronny Ramlau and Gerd Teschke Abstract. Within this chapter we present recent results on sparse recovery algorithms

More information

Regularization Methods to Solve Various Inverse Problems

Regularization Methods to Solve Various Inverse Problems Regularization Methods to Solve Various Inverse Problems Amita Garg 1, Komal Goyal 2 1 Research Scholar, Mathematics Department, Sharda University, Uttar Pradesh, India 2 Assistant Professor, Mathematics

More information

Parameter Identification

Parameter Identification Lecture Notes Parameter Identification Winter School Inverse Problems 25 Martin Burger 1 Contents 1 Introduction 3 2 Examples of Parameter Identification Problems 5 2.1 Differentiation of Data...............................

More information

A generalized forward-backward method for solving split equality quasi inclusion problems in Banach spaces

A generalized forward-backward method for solving split equality quasi inclusion problems in Banach spaces Available online at www.isr-publications.com/jnsa J. Nonlinear Sci. Appl., 10 (2017), 4890 4900 Research Article Journal Homepage: www.tjnsa.com - www.isr-publications.com/jnsa A generalized forward-backward

More information

Accelerated Landweber iteration in Banach spaces. T. Hein, K.S. Kazimierski. Preprint Fakultät für Mathematik

Accelerated Landweber iteration in Banach spaces. T. Hein, K.S. Kazimierski. Preprint Fakultät für Mathematik Accelerated Landweber iteration in Banach spaces T. Hein, K.S. Kazimierski Preprint 2009-17 Fakultät für Mathematik Impressum: Herausgeber: Der Dekan der Fakultät für Mathematik an der Technischen Universität

More information

Proximal methods. S. Villa. October 7, 2014

Proximal methods. S. Villa. October 7, 2014 Proximal methods S. Villa October 7, 2014 1 Review of the basics Often machine learning problems require the solution of minimization problems. For instance, the ERM algorithm requires to solve a problem

More information

Dual Decomposition.

Dual Decomposition. 1/34 Dual Decomposition http://bicmr.pku.edu.cn/~wenzw/opt-2017-fall.html Acknowledgement: this slides is based on Prof. Lieven Vandenberghes lecture notes Outline 2/34 1 Conjugate function 2 introduction:

More information

Strong Convergence Theorem by a Hybrid Extragradient-like Approximation Method for Variational Inequalities and Fixed Point Problems

Strong Convergence Theorem by a Hybrid Extragradient-like Approximation Method for Variational Inequalities and Fixed Point Problems Strong Convergence Theorem by a Hybrid Extragradient-like Approximation Method for Variational Inequalities and Fixed Point Problems Lu-Chuan Ceng 1, Nicolas Hadjisavvas 2 and Ngai-Ching Wong 3 Abstract.

More information

On Total Convexity, Bregman Projections and Stability in Banach Spaces

On Total Convexity, Bregman Projections and Stability in Banach Spaces Journal of Convex Analysis Volume 11 (2004), No. 1, 1 16 On Total Convexity, Bregman Projections and Stability in Banach Spaces Elena Resmerita Department of Mathematics, University of Haifa, 31905 Haifa,

More information

Inverse problem and optimization

Inverse problem and optimization Inverse problem and optimization Laurent Condat, Nelly Pustelnik CNRS, Gipsa-lab CNRS, Laboratoire de Physique de l ENS de Lyon Decembre, 15th 2016 Inverse problem and optimization 2/36 Plan 1. Examples

More information

REGULARIZATION PARAMETER SELECTION IN DISCRETE ILL POSED PROBLEMS THE USE OF THE U CURVE

REGULARIZATION PARAMETER SELECTION IN DISCRETE ILL POSED PROBLEMS THE USE OF THE U CURVE Int. J. Appl. Math. Comput. Sci., 007, Vol. 17, No., 157 164 DOI: 10.478/v10006-007-0014-3 REGULARIZATION PARAMETER SELECTION IN DISCRETE ILL POSED PROBLEMS THE USE OF THE U CURVE DOROTA KRAWCZYK-STAŃDO,

More information

Statistical Inverse Problems and Instrumental Variables

Statistical Inverse Problems and Instrumental Variables Statistical Inverse Problems and Instrumental Variables Thorsten Hohage Institut für Numerische und Angewandte Mathematik University of Göttingen Workshop on Inverse and Partial Information Problems: Methodology

More information

A semismooth Newton method for L 1 data fitting with automatic choice of regularization parameters and noise calibration

A semismooth Newton method for L 1 data fitting with automatic choice of regularization parameters and noise calibration A semismooth Newton method for L data fitting with automatic choice of regularization parameters and noise calibration Christian Clason Bangti Jin Karl Kunisch April 26, 200 This paper considers the numerical

More information

Regularization Parameter Estimation for Least Squares: A Newton method using the χ 2 -distribution

Regularization Parameter Estimation for Least Squares: A Newton method using the χ 2 -distribution Regularization Parameter Estimation for Least Squares: A Newton method using the χ 2 -distribution Rosemary Renaut, Jodi Mead Arizona State and Boise State September 2007 Renaut and Mead (ASU/Boise) Scalar

More information

LAVRENTIEV-TYPE REGULARIZATION METHODS FOR HERMITIAN PROBLEMS

LAVRENTIEV-TYPE REGULARIZATION METHODS FOR HERMITIAN PROBLEMS LAVRENTIEV-TYPE REGULARIZATION METHODS FOR HERMITIAN PROBLEMS SILVIA NOSCHESE AND LOTHAR REICHEL Abstract. Lavrentiev regularization is a popular approach to the solution of linear discrete illposed problems

More information

arxiv: v1 [math.na] 30 Jan 2018

arxiv: v1 [math.na] 30 Jan 2018 Modern Regularization Methods for Inverse Problems Martin Benning and Martin Burger December 18, 2017 arxiv:1801.09922v1 [math.na] 30 Jan 2018 Abstract Regularization methods are a key tool in the solution

More information

SOLVING A MINIMIZATION PROBLEM FOR A CLASS OF CONSTRAINED MAXIMUM EIGENVALUE FUNCTION

SOLVING A MINIMIZATION PROBLEM FOR A CLASS OF CONSTRAINED MAXIMUM EIGENVALUE FUNCTION International Journal of Pure and Applied Mathematics Volume 91 No. 3 2014, 291-303 ISSN: 1311-8080 (printed version); ISSN: 1314-3395 (on-line version) url: http://www.ijpam.eu doi: http://dx.doi.org/10.12732/ijpam.v91i3.2

More information

Accelerated Newton-Landweber Iterations for Regularizing Nonlinear Inverse Problems

Accelerated Newton-Landweber Iterations for Regularizing Nonlinear Inverse Problems www.oeaw.ac.at Accelerated Newton-Landweber Iterations for Regularizing Nonlinear Inverse Problems H. Egger RICAM-Report 2005-01 www.ricam.oeaw.ac.at Accelerated Newton-Landweber Iterations for Regularizing

More information

Optimal control as a regularization method for. an ill-posed problem.

Optimal control as a regularization method for. an ill-posed problem. Optimal control as a regularization method for ill-posed problems Stefan Kindermann, Carmeliza Navasca Department of Mathematics University of California Los Angeles, CA 995-1555 USA {kinder,navasca}@math.ucla.edu

More information

ON ILL-POSEDNESS OF NONPARAMETRIC INSTRUMENTAL VARIABLE REGRESSION WITH CONVEXITY CONSTRAINTS

ON ILL-POSEDNESS OF NONPARAMETRIC INSTRUMENTAL VARIABLE REGRESSION WITH CONVEXITY CONSTRAINTS ON ILL-POSEDNESS OF NONPARAMETRIC INSTRUMENTAL VARIABLE REGRESSION WITH CONVEXITY CONSTRAINTS Olivier Scaillet a * This draft: July 2016. Abstract This note shows that adding monotonicity or convexity

More information

Joint additive Kullback Leibler residual minimization and regularization for linear inverse problems

Joint additive Kullback Leibler residual minimization and regularization for linear inverse problems MATHEMATICAL METHODS IN THE APPLIED SCIENCES Math. Meth. Appl. Sci. 2007; 30:1527 1544 Published online 21 March 2007 in Wiley InterScience (www.interscience.wiley.com).855 MOS subject classification:

More information

On duality theory of conic linear problems

On duality theory of conic linear problems On duality theory of conic linear problems Alexander Shapiro School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia 3332-25, USA e-mail: ashapiro@isye.gatech.edu

More information

Total least squares. Gérard MEURANT. October, 2008

Total least squares. Gérard MEURANT. October, 2008 Total least squares Gérard MEURANT October, 2008 1 Introduction to total least squares 2 Approximation of the TLS secular equation 3 Numerical experiments Introduction to total least squares In least squares

More information

Gauge optimization and duality

Gauge optimization and duality 1 / 54 Gauge optimization and duality Junfeng Yang Department of Mathematics Nanjing University Joint with Shiqian Ma, CUHK September, 2015 2 / 54 Outline Introduction Duality Lagrange duality Fenchel

More information

Tikhonov Regularization of Large Symmetric Problems

Tikhonov Regularization of Large Symmetric Problems NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS Numer. Linear Algebra Appl. 2000; 00:1 11 [Version: 2000/03/22 v1.0] Tihonov Regularization of Large Symmetric Problems D. Calvetti 1, L. Reichel 2 and A. Shuibi

More information

A Convergence Rates Result for Tikhonov Regularization in Banach Spaces with Non-Smooth Operators

A Convergence Rates Result for Tikhonov Regularization in Banach Spaces with Non-Smooth Operators A Convergence Rates Result for Tikhonov Regularization in Banach Spaces with Non-Smooth Operators B. Hofmann, B. Kaltenbacher, C. Pöschl and O. Scherzer May 28, 2008 Abstract There exists a vast literature

More information

On some nonlinear parabolic equation involving variable exponents

On some nonlinear parabolic equation involving variable exponents On some nonlinear parabolic equation involving variable exponents Goro Akagi (Kobe University, Japan) Based on a joint work with Giulio Schimperna (Pavia Univ., Italy) Workshop DIMO-2013 Diffuse Interface

More information

ETNA Kent State University

ETNA Kent State University Electronic Transactions on Numerical Analysis. Volume 39, pp. 476-57,. Copyright,. ISSN 68-963. ETNA ON THE MINIMIZATION OF A TIKHONOV FUNCTIONAL WITH A NON-CONVEX SPARSITY CONSTRAINT RONNY RAMLAU AND

More information

Thai Journal of Mathematics Volume 14 (2016) Number 1 : ISSN

Thai Journal of Mathematics Volume 14 (2016) Number 1 : ISSN Thai Journal of Mathematics Volume 14 (2016) Number 1 : 53 67 http://thaijmath.in.cmu.ac.th ISSN 1686-0209 A New General Iterative Methods for Solving the Equilibrium Problems, Variational Inequality Problems

More information

Regularization methods for large-scale, ill-posed, linear, discrete, inverse problems

Regularization methods for large-scale, ill-posed, linear, discrete, inverse problems Regularization methods for large-scale, ill-posed, linear, discrete, inverse problems Silvia Gazzola Dipartimento di Matematica - Università di Padova January 10, 2012 Seminario ex-studenti 2 Silvia Gazzola

More information