Functional-analytic tools and nonlinear equations


1 Functional-analytic tools and nonlinear equations Johann Baumeister Goethe University, Frankfurt, Germany Rio de Janeiro / October 2017

2 Outline Fréchet differentiability of the (PtS) mapping. Nonlinear equations HUM for nonlinear problems Range invariance condition Iterative methods for solving nonlinear ill-posed problems Tangential cone condition October 13, 2017

3 Nonlinear problem Nonlinear equation F(x) = y. Solvability: the theory is more specific than in the linear case. Identification: in general, parameter-to-solution mappings are nonlinear. Identification: the equation is ill-posed due to the lack of stability. Linearization for ill-posed problems: works, but is much more delicate. Computational schemes: difficult for noisy data. Equation: F(x) = y. Exact data: F(x*) = y. Noisy data: given y^ε with ‖y^ε − y‖_Y ≤ ε, find x^ε with F(x^ε) ≈ y^ε.

4 Fréchet-derivative Definition Let X, Y be Banach spaces and let F: dom(F) ⊂ X → Y be a mapping with domain of definition dom(F). Let x_0 ∈ X be an interior point of dom(F). F is called Fréchet-differentiable in x_0 iff there exists a continuous linear operator T: X → Y such that ‖F(x) − F(x_0) − T(x − x_0)‖_Y = o(‖x − x_0‖_X) as x converges to x_0. T (which is uniquely determined) is called the Fréchet-derivative in x_0 and we write F′(x_0) for T.
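The defining remainder estimate can be illustrated numerically. A minimal sketch with a toy map of my own choosing (F(x) = (x₁², x₁x₂) on ℝ², so the Jacobian plays the role of T):

```python
import numpy as np

# Toy finite-dimensional illustration (my own example, not from the lecture):
# F(x) = (x1^2, x1*x2) on R^2; the Jacobian plays the role of T = F'(x0).
def F(x):
    return np.array([x[0]**2, x[0]*x[1]])

def F_prime(x):                      # Jacobian of F
    return np.array([[2*x[0], 0.0], [x[1], x[0]]])

x0 = np.array([1.0, 2.0])
h = np.array([0.3, -0.2])
ratios = []
for t in [1.0, 0.1, 0.01, 0.001]:
    dh = t * h
    remainder = np.linalg.norm(F(x0 + dh) - F(x0) - F_prime(x0) @ dh)
    ratios.append(remainder / np.linalg.norm(dh))
# the quotient ||F(x) - F(x0) - T(x - x0)|| / ||x - x0|| tends to 0
```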

5 Fréchet derivative-1 How to compute the Fréchet-derivative of a parameter-to-solution mapping? Parameter-to-solution mapping (PtS): F: Q_ad ∋ q ↦ u ∈ V, where u := F(q) solves the variational equation a_0(u, v) + a_1(q; u, v) = ⟨f, v⟩ for all v ∈ V. Later on, we are interested in the Fréchet-derivative of F: Q_ad ∋ q ↦ ι F(q) ∈ H, where ι is the embedding of V into H. But this is then obvious.

6 Fréchet-derivative-2 Assumptions (last lecture): (1) Given a Gelfand triple V ⊂ H ⊂ V* of Hilbert spaces which is used to describe the state. (2) Given a Hilbert space P which is used to describe the parameters. (3) Q_ad is a subset of P which describes the admissible parameters. (4) Given for each q ∈ Q_ad a bilinear form a_0(·,·) + a_1(q; ·,·): V × V → ℝ. (5) For each q ∈ Q_ad there exist constants γ_0 ≥ 0, γ_1 ≥ 0, γ(q) > 0 such that |a_0(u, v)| ≤ γ_0 ‖u‖_V ‖v‖_V and |a_1(q; u, v)| ≤ γ_1 ‖u‖_V ‖v‖_V for all u, v ∈ V, and a_0(u, u) + a_1(q; u, u) ≥ γ(q) ‖u‖²_V for all u ∈ V. (6) Given f ∈ V*. By the Lax-Milgram Lemma, (PtS) is well defined!

7 Fréchet-derivative of a (PtS) mapping Let p ∈ Q_ad be an interior point of Q_ad, i.e. B_ρ(p) ⊂ Q_ad for some ρ > 0, and let h ∈ P be a (small) increment. We consider the variational equation for the parameters p, p + h with associated solutions u_p := F(p) and u_{p+h} := F(p + h), respectively. Hence: a_0(u_p, v) + a_1(p; u_p, v) = ⟨f, v⟩ for all v ∈ V, and a_0(u_{p+h}, v) + a_1(p + h; u_{p+h}, v) = ⟨f, v⟩ for all v ∈ V. Subtracting these equations we obtain a_0(u_{p+h} − u_p, v) + a_1(p; u_{p+h} − u_p, v) = −a_1(h; u_{p+h}, v) for all v ∈ V. (1) This is just an informal calculation, since a_1 is not defined in parameters h which do not necessarily belong to Q_ad. (2) Accepting (1), we can guess what the Fréchet-derivative of F in p should be: z := F′(p)(h) should solve the variational problem a_0(z, v) + a_1(p; z, v) = −a_1(h; u_p, v) for all v ∈ V.
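The guessed derivative can be sanity-checked in a finite-dimensional analogue. The sketch below uses my own toy discretization (a scalar parameter q and matrices A0, A1 standing in for a_0 and a_1, so that F(q) = (A0 + q·A1)^{-1} f); z computed from the guessed variational problem agrees with a finite difference:

```python
import numpy as np

# Minimal finite-dimensional sketch (my own toy discretization, not the
# lecture's setting): the variational problem becomes (A0 + q*A1) u = f with
# a scalar parameter q, so F(q) = (A0 + q*A1)^{-1} f.
rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
A0 = M @ M.T + n * np.eye(n)      # symmetric positive definite part, plays a_0
A1 = np.eye(n)                    # parameter-dependent part, plays a_1(q; .,.)
f = rng.standard_normal(n)

def F(q):
    return np.linalg.solve(A0 + q * A1, f)

p, h = 1.0, 1e-6
u_p = F(p)
# guessed derivative: z solves (A0 + p*A1) z = -h * A1 u_p
z = np.linalg.solve(A0 + p * A1, -h * A1 @ u_p)
fd = F(p + h) - F(p)              # finite-difference comparison
# z agrees with the finite difference up to O(h^2)
```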

8 Fréchet-derivative of a (PtS) mapping-1 The assumptions which are sufficient for the continuation of the proof of Fréchet-differentiability are the following ones: Assumption: a_1 is defined on P × V × V and the following estimate holds with a constant c_1 ≥ 0: |a_1(r; u, v)| ≤ c_1 ‖r‖_P ‖u‖_V ‖v‖_V for all r ∈ P, u, v ∈ V. There exists r ∈ (0, ρ) such that γ(q) ≥ γ > 0 for all q ∈ B_r(p). The assumption above concerning the estimate of the trilinear form holds in the context of the a/b/c-problems, depending on the dimension of the domain where the differential operator is formulated. Now we continue the proof with h ∈ B_r(θ) in three steps.

9 Fréchet-derivative of a (PtS) mapping-2 Step 1: Boundedness of F in B_r(p). Let q ∈ B_r(p) and let u_q := F(q). Then we conclude from the variational equation for u_q: γ(q) ‖u_q‖²_V ≤ a_0(u_q, u_q) + a_1(q; u_q, u_q) = ⟨f, u_q⟩ ≤ ‖f‖_{V*} ‖u_q‖_V ≤ (γ(q)/2) ‖u_q‖²_V + (1/(2γ(q))) ‖f‖²_{V*}. With the assumption above we conclude ‖u_q‖²_V ≤ (1/γ(q)²) ‖f‖²_{V*} ≤ (1/γ²) ‖f‖²_{V*}.

10 Fréchet-derivative of a (PtS) mapping-3 Step 2: Lipschitz continuity of F locally in p, i.e. in B_r(p). Let h ∈ B_r(θ), let u_p := F(p), u_{p+h} := F(p + h) and w := u_{p+h} − u_p. Then we have γ(p) ‖w‖²_V ≤ a_0(w, w) + a_1(p; w, w) = −a_1(h; u_{p+h}, w) ≤ c_1 ‖h‖_P ‖u_{p+h}‖_V ‖w‖_V ≤ c_1′ ‖h‖_P ‖w‖_V with a constant c_1′ ≥ 0 (using Step 1). Now we can continue similarly to Step 1 to obtain ‖u_{p+h} − u_p‖_V ≤ c_2 ‖h‖_P with a constant c_2 ≥ 0.

11 Fréchet-derivative of a (PtS) mapping-4 Step 3: Fréchet-differentiability of F in p. Let w := u_{p+h} − u_p − z (see above). Then we have, using the estimates in Step 1 and Step 2, γ(p) ‖w‖²_V ≤ a_0(w, w) + a_1(p; w, w) = −a_1(h; u_{p+h} − u_p, w) ≤ c_3 ‖h‖²_P ‖w‖_V with a constant c_3 ≥ 0. Now we can continue similarly to Steps 1, 2 to obtain ‖u_{p+h} − u_p − z‖_V ≤ c_3 ‖h‖²_P with a constant c_3 ≥ 0. Now the mapping F: Q_ad → H is Fréchet differentiable in p. Later on we need the adjoint F′(p)*. We omit the computation of this adjoint here. It can be obtained by computing a variational solution too.

12 General nonlinearity/assumptions F(x*) = y. Standard assumptions: F0) F: dom(F) ⊂ X → Y, where X, Y are Banach spaces. Here we assume that X, Y are Hilbert spaces, F is weakly sequentially continuous, dom(F) is weakly sequentially closed, and there exist x_0 ∈ X, ρ > 0, with B_ρ(x_0) ⊂ dom(F). F1) F is Fréchet-differentiable in B_ρ(x_0) with derivative F′, and there exists m ≥ 0 with ‖F′(x)‖_{X→Y} ≤ m for x ∈ B_ρ(x_0). F2) x* ∈ B_{ρ/2}(x_0). F3) F′(x*) is injective, but its range is not closed. Consequence of F3): to solve the linearized equation F′(x*)h = z is an ill-posed problem.

13 Assumptions Linearization Quantitative reformulation of F1): F1) F is Fréchet differentiable in the ball B_ρ(x_0) ⊂ dom(F). U ⊂ X is a Banach space with ‖u‖_X ≤ ‖u‖_U for all u ∈ U. There exist r ∈ (0, ρ/2) and E > 0 such that x* ∈ B_r(x_0) ∩ B^U_E(θ). There exists ν ∈ (1, 2] such that ‖F(x) − F(x*) − F′(x*)(x − x*)‖_Y ≤ c_1 ‖x − x*‖^ν_X for x ∈ B^X_r(x*) ∩ B^U_E(θ), with some constant c_1 := c_1(x*, r, E). The assumption x* ∈ B_{ρ/2}(x_0) ∩ B^U_E(θ) may be considered as a source-type assumption.

14 Linearization Theorem Suppose that the assumptions F0), F1), F2), F3) hold. Moreover assume that ‖x − x*‖_X ≤ c_2 ‖F′(x*)(x − x*)‖^µ_Y for x ∈ B^X_r(x*) ∩ B^U_E(θ), for some constant c_2 := c_2(x*, r, E), where µ > 0 with µν > 1. Then there exist r′ ∈ (0, r) and a constant c := c(x*, r, E) such that the following estimate holds: ‖x − x*‖_X ≤ c ‖F(x) − F(x*)‖^µ_Y for x ∈ B^X_{r′}(x*) ∩ B^U_E(θ). Proof: See C.D. Pagani, Questions of stability for inverse problems, Rend. Sem. Mat. Fis. Milano 52, 1982. Notice: µ ∈ (0, 1) due to the assumption F3).

15 Stability estimate Theorem Under the assumptions of the theorem above we have sup{ ‖x − x*‖_X : x ∈ B^X_r(x*) ∩ B^U_E(θ), ‖F(x) − F(x*)‖_Y ≤ ε } ≤ c(x*, r, E) ε^µ. How to find a space U such that the assumptions above hold? HUM for nonlinear problems: spaces V_x := ran(F′(x)*), x ∈ B^X_ρ(x_0); Gelfand triple V_x ⊂ X ⊂ V_x*, x ∈ B^X_ρ(x_0). Problem: V_x = V_{x_0} for all x ∈ B^X_ρ(x_0)? Answer: range invariance condition.

16 Range Invariance Range Invariance Condition There exist linear bounded operators R(x̃, x) satisfying F′(x̃) = R(x̃, x) F′(x) and ‖R(x̃, x) − id‖ ≤ c_R for all x̃, x ∈ B^X_ρ(x_0), for some 0 < c_R < 1. Consequence for the HUM-approach: V_x = V_{x_0} for all x ∈ B^X_ρ(x_0). Sufficient conditions for the range invariance condition?
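For a toy map with square, invertible Jacobians (my own example, not from the lecture), R(x̃, x) := F′(x̃) F′(x)^{-1} realizes the factorization, and ‖R − id‖ is small when x̃ is close to x:

```python
import numpy as np

# Toy check (my own example, square invertible Jacobians) of the range
# invariance condition: R(xt, x) := F'(xt) F'(x)^{-1} satisfies
# F'(xt) = R(xt, x) F'(x), and ||R - id|| is small for xt close to x.
def F_prime(x):
    return np.array([[2*x[0], 1.0], [0.0, 3*x[1]**2]])

x  = np.array([1.0, 1.0])
xt = np.array([1.02, 0.99])
R = F_prime(xt) @ np.linalg.inv(F_prime(x))
c_R = np.linalg.norm(R - np.eye(2), 2)   # spectral norm of R - id
# c_R is small (and in particular < 1), as the condition requires locally
```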

17 Range Invariance Theorem (Douglas et al.) Suppose that the assumptions F0), F1), F2) are satisfied. Then for x ∈ B^X_ρ(x_0) the following conditions are equivalent: a) ran(F′(x)*) ⊂ V_{x*}. b) F′(x*) majorizes F′(x), i.e. there exists a constant M > 0 such that ‖F′(x)h‖_Y ≤ M ‖F′(x*)h‖_Y for all h ∈ X. c) There exists a continuous linear operator R: Y → Y with F′(x) = R ∘ F′(x*). B.A. Barnes, Majorization, range inclusion, and factorization for bounded linear operators, PAMS 133, 2004. R. Douglas, On majorization, factorization, and range inclusion of operators on Hilbert space, PAMS 17, 1966. I. Serban and F. Turcu, Compact perturbations and factorizations of closed range operators, Preprint, 2008.

18 Range Invariance-1 Theorem Suppose that F0), F1), F2) are satisfied. In addition we assume: F is twice Fréchet-differentiable; F′(x*) majorizes F′(x) and F′(x) majorizes F′(x*) for all x ∈ B^X_ρ(x_0). Then there exist r ∈ (0, ρ) and a family (R_x)_{x ∈ B^X_r(x_0)} of continuous linear operators with a) ran(F′(x)*) = V_{x_0} for all x ∈ B^X_r(x_0); b) F′(x) = R_x ∘ F′(x_0) for all x ∈ B^X_r(x_0); c) ‖R_x − id‖ ≤ c ‖x − x_0‖_X for all x ∈ B^X_r(x_0), with a constant c.

19 Range Invariance-2 Proof: Theorem of implicit functions, applied to G: B^X_ρ(x_0) × B(Y, Y) ∋ (x, Q) ↦ F′(x)* − F′(x*)* ∘ Q ∈ B(Y, V). This mapping G is well defined due to the theorem of Douglas. Obviously, G is continuously differentiable. Moreover, G(x*, id) = Θ, G_x(x*, id)(h) = (F″(x*)(h, ·))* for h ∈ X, and G_Q(x*, id)(H) = −F′(x*)* ∘ H for H ∈ B(Y, Y). Since F′(x*)* is an isomorphism from Y onto V (HUM-approach), G_Q(x*, id) is an isomorphism from B(Y, Y) onto B(Y, V). Now, by an application of the implicit function theorem, we obtain r > 0 and a family (Q_x)_{x ∈ B_r(x*)} of continuous linear operators with F′(x)* = F′(x*)* ∘ Q_x and ‖Q_x − id‖ ≤ c ‖x − x*‖_X for all x ∈ B_r(x*). To get the desired result, we set R_x := Q_x*, x ∈ B_r(x*).

20 Range Invariance-3 More on these subjects can be found in J. Baumeister Linear ill-posed problems in Banach spaces: Banach uniqueness method and stability. Working paper, 2009 J. Baumeister Nonlinear ill-posed problems in Banach spaces: stability and regularization. Working paper, 2010

21 Iterative methods We consider again an equation F(x) = y, where only noisy data y^ε for y are available. Here is a list of iterative methods for nonlinear ill-posed problems which are well analyzed and efficiently implemented: Landweber method, Iterated Tikhonov method, Levenberg-Marquardt method, Gauss-Newton method, Newton-type methods. All these methods may benefit from a Kaczmarz-type implementation.

22 Kaczmarz-type implementation Suppose we have a mapping f: dom(f) → R and an iterative method x_{k+1} := I(x_k, f), k ∈ ℕ_0. We assume that we can decompose the mapping f as follows: f = (f_0, ..., f_{N−1}): dom(f_0) ∩ ... ∩ dom(f_{N−1}) → R := R_0 × ... × R_{N−1}. Then we may reformulate the iteration above in a cyclic way: x_{k+1} = I(x_k, f_{[k]}), k ∈ ℕ_0, where [k] := k mod N. A cycle is a sequence x_m, ..., x_{m+N} of iterates where m is a multiple of N. This idea goes back to a method of Kaczmarz which has strong applications in computed tomography. S. Kaczmarz, Angenäherte Auflösung von Systemen linearer Gleichungen, Bull. Acad. Polon. A35, 1937.
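The cyclic scheme can be sketched in the classical linear setting (my own toy system: each f_i is one row ⟨a_i, x⟩ = b_i, and I(·, f_i) is the orthogonal projection onto the corresponding hyperplane):

```python
import numpy as np

# Sketch of a Kaczmarz-type (cyclic) implementation on a toy linear system
# (my own example): sub-problem f_i is the single equation a_i . x = b_i,
# and the step I projects orthogonally onto the hyperplane {x : a_i . x = b_i}.
A = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
b = np.array([1.0, 3.0, 2.0])       # consistent system, solution (1, 2)
N = len(b)

def I(x, i):
    a = A[i]
    return x + (b[i] - a @ x) / (a @ a) * a   # projection step for f_i

x = np.zeros(2)
for k in range(300):                # 100 full cycles
    x = I(x, k % N)                 # cyclic choice [k] = k mod N
# x converges to the solution (1, 2)
```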

23 Landweber method The Landweber method is an iterative method to solve x* ∈ argmin ‖F(x) − y^ε‖²_Y. The necessary condition F′(x*)*(F(x*) − y^ε) = θ suggests the fixed point iteration x_{k+1} := x_k − ω_k F′(x_k)*(F(x_k) − y^ε), k ∈ ℕ_0, where (ω_k)_{k ∈ ℕ_0} is a sequence of stepsize parameters which should guarantee convergence and efficiency. Clearly, an analysis should show that the iteration operator x ↦ x − ω F′(x)*(F(x) − y^ε) has the property of a contraction, at least in the case of exact data.
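A minimal sketch of the iteration for a toy nonlinear map of my own choosing, with exact data and a constant stepsize:

```python
import numpy as np

# Sketch of the Landweber iteration x_{k+1} = x_k - w F'(x_k)^T (F(x_k) - y)
# for a toy map (my own example), exact data, constant stepsize.
def F(x):
    return np.array([x[0]**2 + x[1], x[1]**3])

def F_prime(x):                     # Jacobian of F
    return np.array([[2*x[0], 1.0], [0.0, 3*x[1]**2]])

y = F(np.array([1.0, 1.0]))         # exact data for the solution (1, 1)
x = np.array([1.2, 0.8])            # starting guess near the solution
omega = 0.1                         # constant stepsize
for k in range(2000):
    x = x - omega * F_prime(x).T @ (F(x) - y)
# the residual ||F(x) - y|| is driven (nearly) to zero
```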

24 Landweber method/references There is a huge number of papers concerning this subject. L. Landweber, An iteration formula for Fredholm integral equations of the first kind, Amer. J. Math. 73, 1951. M. Hanke, A. Neubauer and O. Scherzer, A convergence analysis of the Landweber iteration for nonlinear ill-posed problems, Numerische Mathematik 72, 1995.

25 An inverse problem and the Landweber iteration Model: −∇·(q ∇w) = 1 in Ω ⊂ ℝ², w = 0 on ∂Ω. The boundary of Ω is assumed to be smooth. The corresponding parameter-to-solution mapping is defined as F: dom(F) := H²₊(Ω) ∋ q ↦ w ∈ L²(Ω), where F(q) is the solution of the boundary value problem above. Here H²₊(Ω) := {q ∈ H²(Ω) : ess inf q > 0}; H²₊(Ω) is an open subset of H²(Ω). The parameter-to-solution map is well defined due to the Lax-Milgram lemma.
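A 1D analogue of this model problem can be discretized in a few lines; the sketch below (my own finite-difference discretization, not the lecture's) realizes the parameter-to-solution map q ↦ w:

```python
import numpy as np

# A 1D analogue (my own sketch) of the model problem: -(q u')' = 1 on (0,1),
# u(0) = u(1) = 0, discretized by finite differences on n cells.
def solve_forward(q_half, h):
    # q_half[i] approximates q at the cell interface x_{i+1/2}
    n = len(q_half)                     # unknowns: u_1, ..., u_{n-1}
    A = np.zeros((n - 1, n - 1))
    for i in range(n - 1):
        A[i, i] = (q_half[i] + q_half[i + 1]) / h**2
        if i > 0:
            A[i, i - 1] = -q_half[i] / h**2
        if i < n - 2:
            A[i, i + 1] = -q_half[i + 1] / h**2
    return np.linalg.solve(A, np.ones(n - 1))

n = 100
u = solve_forward(np.ones(n), 1.0 / n)
# for q = 1 this is -u'' = 1 with exact solution u(x) = x(1-x)/2, max 1/8
```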

26 Inverse problem and the Landweber iteration-1 The associated inverse problem is: for w ∈ L²(Ω) find q ∈ H²₊(Ω) such that F(q) ≈ w. Clearly, for the solution of this problem we should consider the case of noisy data. In the paper D. Garmatter, B. Haasdonk and B. Harrach, A reduced basis Landweber method for nonlinear inverse problems, Inverse Problems 32.3, 2016, the Landweber method is used to solve the inverse problem. The focus is on the efficiency of the implementation. Each step of the Landweber method requires the solution of two forward problems. The numerical solution is organized in a way which uses the idea of a reduced basis. This idea is related to the idea of model reduction sketched in the first lecture.

27 Iterated Tikhonov method The so-called iterated Tikhonov method is an iterative method to solve x* ∈ argmin ( ‖F(x) − y^ε‖²_Y + α ‖x − x⁺‖²_X ), where x⁺ is an approximation for the solution x* looked for. The necessary condition F′(x*)*(F(x*) − y^ε) + α(x* − x⁺) = θ suggests the fixed point iteration x_{k+1} := x_k − α^{−1} F′(x_k)*(F(x_{k+1}) − y^ε), k ∈ ℕ_0. The iteration above is an implicit method. Why implement such an expensive method? But notice that the Tikhonov functional has to be minimized several times anyway, since the best regularization parameter α is not known; this corresponds to the number of iterations here.
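For a linear toy operator A (my own example) the implicit step can be written out explicitly, since x_{k+1} then minimizes ‖Ax − y‖² + α‖x − x_k‖²:

```python
import numpy as np

# Linear toy example (my own): the implicit iterated Tikhonov step becomes
# x_{k+1} = (A^T A + alpha I)^{-1} (A^T y + alpha x_k), the minimizer of
# ||A x - y||^2 + alpha ||x - x_k||^2.
A = np.array([[1.0, 0.0], [1.0, 0.5]])
x_true = np.array([1.0, 2.0])
y = A @ x_true                      # exact data
alpha = 1.0
x = np.zeros(2)
for k in range(200):
    x = np.linalg.solve(A.T @ A + alpha * np.eye(2), A.T @ y + alpha * x)
# the iteration index k takes over the role of the regularization parameter
```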

28 Iterated Tikhonov method/references J. Baumeister, A. De Cezaro and A. Leitão, Modified iterated Tikhonov methods for solving systems of nonlinear equations, Inverse Problems and Imaging 5, 2011. C.W. Groetsch and O. Scherzer, Nonstationary iterated Tikhonov-Morozov method and third order differential equations for the evaluation of unbounded operators, Math. Meth. Appl. Sci. 23, 2000. O. Scherzer, Convergence rate for iterated Tikhonov regularized solutions of nonlinear ill-posed problems, Numerische Mathematik 66, 1993.

29 Gauss-Newton method The necessary condition F′(x*)*(F(x*) − y^ε) = θ for nonlinear least squares is the starting point for the Landweber method. To solve this equation one may formulate a Newton method. Suppose that an approximation x_k is known. Then we compute a correction term s_k as follows: (F′(x_k)*F′(x_k) + S)(s_k) = −F′(x_k)*(F(x_k) − y^ε), where S is a term of second order which we neglect. Then we obtain the following iteration: x_{k+1} := x_k + ω_k s_k, where s_k solves F′(x_k)*F′(x_k)(s_k) = −F′(x_k)*(F(x_k) − y^ε), k ∈ ℕ_0, and (ω_k)_{k∈ℕ_0} is a sequence of stepsize parameters.
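A sketch of the resulting iteration for a small toy problem of my own choosing (square Jacobian, so the normal equations are well-posed and no inner regularization is needed):

```python
import numpy as np

# Sketch of the Gauss-Newton step for a toy problem (my own example):
# s_k solves F'(x_k)^T F'(x_k) s = -F'(x_k)^T (F(x_k) - y).
def F(x):
    return np.array([x[0]**2 + x[1], x[1]**3])

def F_prime(x):
    return np.array([[2*x[0], 1.0], [0.0, 3*x[1]**2]])

y = F(np.array([1.0, 1.0]))         # exact data for the solution (1, 1)
x = np.array([1.5, 0.5])
for k in range(20):
    J = F_prime(x)
    s = np.linalg.solve(J.T @ J, -J.T @ (F(x) - y))
    x = x + s                       # full step, omega_k = 1
# converges in far fewer iterations than Landweber on this toy problem
```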

30 Gauss-Newton method/remark and references Since we neglect the term of second order, we cannot expect convergence of second order, even in the case of exact data in a well-posed problem. In each step of the iteration above we have to compute the correction term s_k. Since this is, in general, an ill-posed problem, one has to apply a stable method to get s_k (truncation of singular values, regularized least squares, ...). Therefore the Gauss-Newton method may be considered as the combination of an outer iteration (x_k ↦ x_{k+1} = x_k + ω_k s_k) and an inner iteration for the computation of s_k. A variant of the Gauss-Newton method is the iteratively regularized Gauss-Newton method. It results in the iteration x_{k+1} := x_k − s_k, k ∈ ℕ_0, where (α_k id + F′(x_k)*F′(x_k))(s_k) = F′(x_k)*(F(x_k) − y^ε) + α_k(x_k − x⁺). The regularizing sequence (α_k)_{k∈ℕ_0} has to be chosen in an appropriate way in order to obtain convergence results.

31 Gauss-Newton method/references B. Kaltenbacher, A. Neubauer and O. Scherzer, On convergence rates for the iteratively regularized Gauss-Newton method, IMA Journal of Numerical Analysis 17.3, 1997. B. Kaltenbacher, A. Neubauer and A.G. Ramm, Convergence rates of the continuous regularized Gauss-Newton method, J. Inv. Ill-posed Problems 10, 2002. Q. Jin and M. Zhong, On the iteratively regularized Gauss-Newton method in Banach spaces with applications to parameter identification problems, 2013.

32 Levenberg-Marquardt method The Gauss-Newton method requires a solution s_k of F′(x_k)*F′(x_k)(s_k) = −F′(x_k)*(F(x_k) − y^ε). The idea of the Levenberg-Marquardt method is to stabilize this step. One uses a parameter α > 0 and solves instead (F′(x_k)*F′(x_k) + α id)(s_k) = −F′(x_k)*(F(x_k) − y^ε). Then we obtain the following iteration: x_{k+1} := x_k − ω_k (F′(x_k)*F′(x_k) + α id)^{−1} F′(x_k)*(F(x_k) − y^ε), k ∈ ℕ_0, where (ω_k)_{k∈ℕ_0} is a sequence of stepsize parameters.
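A sketch of the Levenberg-Marquardt iteration with a fixed α for a toy problem of my own choosing, with slightly noisy data:

```python
import numpy as np

# Sketch of the Levenberg-Marquardt step for a toy problem (my own example):
# (F'(x_k)^T F'(x_k) + alpha I) s = -F'(x_k)^T (F(x_k) - y_eps).
def F(x):
    return np.array([x[0]**2 + x[1], x[1]**3])

def F_prime(x):
    return np.array([[2*x[0], 1.0], [0.0, 3*x[1]**2]])

y_exact = F(np.array([1.0, 1.0]))
rng = np.random.default_rng(1)
y_eps = y_exact + 1e-3 * rng.standard_normal(2)   # noisy data

x = np.array([1.5, 0.5])
alpha = 1e-2                        # fixed stabilization parameter
for k in range(50):
    J = F_prime(x)
    s = np.linalg.solve(J.T @ J + alpha * np.eye(2), -J.T @ (F(x) - y_eps))
    x = x + s
# alpha stabilizes the (possibly ill-conditioned) normal equations
```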

33 Levenberg-Marquardt method/references M. Hanke, A regularizing Levenberg-Marquardt scheme, with applications to inverse groundwater filtration, Inverse Problems 13, 1997. D.W. Marquardt, An algorithm for least-squares estimation of nonlinear parameters, J. Soc. Indust. Appl. Math. 11, 1963. J. Baumeister, B. Kaltenbacher and A. Leitão, On Levenberg-Marquardt-Kaczmarz iterative methods for solving systems of nonlinear equations, Inverse Problems and Imaging 4, 2010.

34 Newton-type methods A Newton method applied to the equation F(x) − y^ε = θ (actually F(x) − y^ε ≈ θ) uses the linearized equation. Let x_k be an actual approximation of its solution. Then the next approximation x_{k+1} is computed by solving F′(x_k)(x_{k+1} − x_k) = −(F(x_k) − y^ε). This leads to the following iteration: x_{k+1} := x_k + ω_k s_k, where s_k solves F′(x_k)(s_k) = −(F(x_k) − y^ε), k ∈ ℕ_0. Here (ω_k)_{k∈ℕ_0} is a sequence of stepsize parameters. The iteration method above may be considered as a combination of an outer iteration (x_k ↦ x_{k+1} := x_k + ω_k s_k) and an inner iteration for the computation of s_k. Newton-type methods differ in the way the inner iteration is implemented.

35 Newton-type methods/references A. Rieder On convergence rates of inexact Newton regularization. Numerische Mathematik 88, 2001 A. Lechleiter Towards a general convergence theory for inexact Newton regularization. Numerische Mathematik 114, 2010 F. Margotti On Inexact Newton Methods for Inverse Problems in Banach Spaces. Thesis Universität Karlsruhe, 2015 A. Rieder and F. Margotti An inexact Newton regularization in Banach spaces based on the nonstationary iterated Tikhonov method. Journal of Inverse and Ill-posed Problems 23, 2013 B. Kaltenbacher Some Newton-type methods for the regularization of nonlinear ill-posed problems. Inverse Problems 13, 1997

36 Remarks concerning convergence proofs for iterative methods For all the methods above, the tangential cone condition is essential. It helps to control the quantities ‖x_k − x*‖_X. In general, a main step in the convergence proof in the noisy case is the proof of monotonicity of a part of the sequence (‖x_k − x*‖_X)_{k∈ℕ}. A stopping rule is not very important in the case of exact data. For noisy data it is essential, since one cannot have convergence of the iterates, in general; the stopping index plays the role of a regularization parameter. There are various stopping rules on the market: discrepancy methods, the lopping rule, .... In the case of a system of linear equations the Kaczmarz idea is used in a symmetric form too: a symmetric cycle is a cycle forward and a cycle backward. This has nice consequences for the convergence rate. For each Kaczmarz-type implementation of the methods above a randomized version can be formulated by taking f_l from the decomposition f = (f_0, ..., f_{N−1}) at random in each iteration step.

37 Tangential cone condition-a remark Tangential cone condition (TCC): ‖F(x̃) − F(x) − F′(x)(x̃ − x)‖_Y ≤ η ‖F(x̃) − F(x)‖_Y for all x̃, x ∈ B^X_ρ(x_0), with η ∈ (0, 1/2). Important for the convergence analysis! See for instance: B. Kaltenbacher, A. Neubauer and O. Scherzer, Iterative Regularization Methods for Nonlinear Ill-Posed Problems, De Gruyter Verlag, 2007. J. Baumeister, B. Kaltenbacher and A. Leitão, On Levenberg-Marquardt Kaczmarz methods for regularizing systems of nonlinear ill-posed equations, 2009, submitted. J. Baumeister, A. De Cezaro and A. Leitão, Modified iterated Tikhonov methods for solving systems of nonlinear ill-posed equations, Inverse Problems and Imaging 5, 2011.
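The condition can be probed numerically on a small ball. A sketch with a toy map of my own choosing, where the quotient η stays well below 1/2 for small ρ:

```python
import numpy as np

# Numerical check of the (TCC) on a small ball, for a toy map (my own example):
# ||F(xt) - F(x) - F'(x)(xt - x)|| <= eta ||F(xt) - F(x)||.
def F(x):
    return np.array([x[0]**2 + x[1], x[1]**3])

def F_prime(x):
    return np.array([[2*x[0], 1.0], [0.0, 3*x[1]**2]])

rng = np.random.default_rng(2)
x0, rho = np.array([1.0, 1.0]), 0.05
etas = []
for _ in range(200):
    x  = x0 + rho * rng.uniform(-1, 1, 2)
    xt = x0 + rho * rng.uniform(-1, 1, 2)
    lhs = np.linalg.norm(F(xt) - F(x) - F_prime(x) @ (xt - x))
    rhs = np.linalg.norm(F(xt) - F(x))
    if rhs > 1e-12:
        etas.append(lhs / rhs)
eta = max(etas)
# eta stays well below 1/2 on this small ball
```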

38 Tangential cone condition-a remark-1 Theorem Let the assumptions F0), F1) and F2) with ν = 2 hold. Let x, x* ∈ V_{x*} with ‖x‖_{V_{x*}}, ‖x*‖_{V_{x*}} ≤ E. Then we have ‖F(x) − F(x*) − F′(x*)(x − x*)‖_Y ≤ c(E) ‖F(x) − F(x*)‖_Y for some constant c(E) if E is small enough. This brings us nearer to the (TCC). The problem lies in the assumption x, x* ∈ V_{x*} with ‖x‖_{V_{x*}}, ‖x*‖_{V_{x*}} ≤ E, since V_x and V_{x*} are not related by a range condition. The smallness of E is related to the nonlinearity of F in the neighborhood of x*.

39 Tangential cone condition again Tangential cone condition (TCC): ‖F(x̃) − F(x) − F′(x)(x̃ − x)‖_Y ≤ η ‖F(x̃) − F(x)‖_Y for all x̃, x ∈ B^X_ρ(x_0), with η ∈ (0, 1). In A. Leitão and B.F. Svaiter, On a family of gradient type projection methods for nonlinear ill-posed problems, Working paper, 2016, and A. Leitão and B.F. Svaiter, On projective Landweber-Kaczmarz methods for solving systems of nonlinear ill-posed equations, Inverse Problems 32, 2016, interesting sets based on the tangential cone condition are used to accelerate computational schemes for nonlinear ill-posed equations. These sets have a nice geometric interpretation.


More information

INVERSE FUNCTION THEOREM and SURFACES IN R n

INVERSE FUNCTION THEOREM and SURFACES IN R n INVERSE FUNCTION THEOREM and SURFACES IN R n Let f C k (U; R n ), with U R n open. Assume df(a) GL(R n ), where a U. The Inverse Function Theorem says there is an open neighborhood V U of a in R n so that

More information

INEXACT NEWTON REGULARIZATION USING CONJUGATE GRADIENTS AS INNER ITERATION

INEXACT NEWTON REGULARIZATION USING CONJUGATE GRADIENTS AS INNER ITERATION INEXACT NEWTON REGULARIZATION USING CONJUGATE GRADIENTS AS INNER ITERATION ANDREAS RIEDER August 2004 Abstract. In our papers [Inverse Problems, 15, 309-327,1999] and [Numer. Math., 88, 347-365, 2001]

More information

arxiv: v1 [math.na] 16 Jan 2018

arxiv: v1 [math.na] 16 Jan 2018 A FAST SUBSPACE OPTIMIZATION METHOD FOR NONLINEAR INVERSE PROBLEMS IN BANACH SPACES WITH AN APPLICATION IN PARAMETER IDENTIFICATION ANNE WALD arxiv:1801.05221v1 [math.na] 16 Jan 2018 Abstract. We introduce

More information

On Friedrichs inequality, Helmholtz decomposition, vector potentials, and the div-curl lemma. Ben Schweizer 1

On Friedrichs inequality, Helmholtz decomposition, vector potentials, and the div-curl lemma. Ben Schweizer 1 On Friedrichs inequality, Helmholtz decomposition, vector potentials, and the div-curl lemma Ben Schweizer 1 January 16, 2017 Abstract: We study connections between four different types of results that

More information

Dynamical systems method (DSM) for selfadjoint operators

Dynamical systems method (DSM) for selfadjoint operators Dynamical systems method (DSM) for selfadjoint operators A.G. Ramm Mathematics Department, Kansas State University, Manhattan, KS 6656-262, USA ramm@math.ksu.edu http://www.math.ksu.edu/ ramm Abstract

More information

Adaptive discretization and first-order methods for nonsmooth inverse problems for PDEs

Adaptive discretization and first-order methods for nonsmooth inverse problems for PDEs Adaptive discretization and first-order methods for nonsmooth inverse problems for PDEs Christian Clason Faculty of Mathematics, Universität Duisburg-Essen joint work with Barbara Kaltenbacher, Tuomo Valkonen,

More information

Convergence Rates in Regularization for Nonlinear Ill-Posed Equations Involving m-accretive Mappings in Banach Spaces

Convergence Rates in Regularization for Nonlinear Ill-Posed Equations Involving m-accretive Mappings in Banach Spaces Applied Mathematical Sciences, Vol. 6, 212, no. 63, 319-3117 Convergence Rates in Regularization for Nonlinear Ill-Posed Equations Involving m-accretive Mappings in Banach Spaces Nguyen Buong Vietnamese

More information

Robust error estimates for regularization and discretization of bang-bang control problems

Robust error estimates for regularization and discretization of bang-bang control problems Robust error estimates for regularization and discretization of bang-bang control problems Daniel Wachsmuth September 2, 205 Abstract We investigate the simultaneous regularization and discretization of

More information

Lamé Parameter Estimation from Static Displacement Field Measurements in the Framework of Nonlinear Inverse Problems

Lamé Parameter Estimation from Static Displacement Field Measurements in the Framework of Nonlinear Inverse Problems arxiv:1710.10446v2 [math.na] 19 Jan 2018 Lamé Parameter Estimation from Static Displacement Field Measurements in the Framework of Nonlinear Inverse Problems Simon Hubmer, Ekaterina Sherina, Andreas Neubauer,

More information

GENERAL NONCONVEX SPLIT VARIATIONAL INEQUALITY PROBLEMS. Jong Kyu Kim, Salahuddin, and Won Hee Lim

GENERAL NONCONVEX SPLIT VARIATIONAL INEQUALITY PROBLEMS. Jong Kyu Kim, Salahuddin, and Won Hee Lim Korean J. Math. 25 (2017), No. 4, pp. 469 481 https://doi.org/10.11568/kjm.2017.25.4.469 GENERAL NONCONVEX SPLIT VARIATIONAL INEQUALITY PROBLEMS Jong Kyu Kim, Salahuddin, and Won Hee Lim Abstract. In this

More information

SOME PROPERTIES ON THE CLOSED SUBSETS IN BANACH SPACES

SOME PROPERTIES ON THE CLOSED SUBSETS IN BANACH SPACES ARCHIVUM MATHEMATICUM (BRNO) Tomus 42 (2006), 167 174 SOME PROPERTIES ON THE CLOSED SUBSETS IN BANACH SPACES ABDELHAKIM MAADEN AND ABDELKADER STOUTI Abstract. It is shown that under natural assumptions,

More information

Comm. Nonlin. Sci. and Numer. Simul., 12, (2007),

Comm. Nonlin. Sci. and Numer. Simul., 12, (2007), Comm. Nonlin. Sci. and Numer. Simul., 12, (2007), 1390-1394. 1 A Schrödinger singular perturbation problem A.G. Ramm Mathematics Department, Kansas State University, Manhattan, KS 66506-2602, USA ramm@math.ksu.edu

More information

Inverse problems and medical imaging

Inverse problems and medical imaging Inverse problems and medical imaging Bastian von Harrach harrach@math.uni-frankfurt.de Institute of Mathematics, Goethe University Frankfurt, Germany Seminario di Calcolo delle Variazioni ed Equazioni

More information

Levenberg-Marquardt method in Banach spaces with general convex regularization terms

Levenberg-Marquardt method in Banach spaces with general convex regularization terms Levenberg-Marquardt method in Banach spaces with general convex regularization terms Qinian Jin Hongqi Yang Abstract We propose a Levenberg-Marquardt method with general uniformly convex regularization

More information

How large is the class of operator equations solvable by a DSM Newton-type method?

How large is the class of operator equations solvable by a DSM Newton-type method? This is the author s final, peer-reviewed manuscript as accepted for publication. The publisher-formatted version may be available through the publisher s web site or your institution s library. How large

More information

1. Nonlinear Equations. This lecture note excerpted parts from Michael Heath and Max Gunzburger. f(x) = 0

1. Nonlinear Equations. This lecture note excerpted parts from Michael Heath and Max Gunzburger. f(x) = 0 Numerical Analysis 1 1. Nonlinear Equations This lecture note excerpted parts from Michael Heath and Max Gunzburger. Given function f, we seek value x for which where f : D R n R n is nonlinear. f(x) =

More information

An Iteratively Regularized Projection Method for Nonlinear Ill-posed Problems

An Iteratively Regularized Projection Method for Nonlinear Ill-posed Problems Int. J. Contemp. Math. Sciences, Vol. 5, 2010, no. 52, 2547-2565 An Iteratively Regularized Projection Method for Nonlinear Ill-posed Problems Santhosh George Department of Mathematical and Computational

More information

Optimization and Optimal Control in Banach Spaces

Optimization and Optimal Control in Banach Spaces Optimization and Optimal Control in Banach Spaces Bernhard Schmitzer October 19, 2017 1 Convex non-smooth optimization with proximal operators Remark 1.1 (Motivation). Convex optimization: easier to solve,

More information

Monotonicity-based inverse scattering

Monotonicity-based inverse scattering Monotonicity-based inverse scattering Bastian von Harrach http://numerical.solutions Institute of Mathematics, Goethe University Frankfurt, Germany (joint work with M. Salo and V. Pohjola, University of

More information

Accelerated Landweber iteration in Banach spaces. T. Hein, K.S. Kazimierski. Preprint Fakultät für Mathematik

Accelerated Landweber iteration in Banach spaces. T. Hein, K.S. Kazimierski. Preprint Fakultät für Mathematik Accelerated Landweber iteration in Banach spaces T. Hein, K.S. Kazimierski Preprint 2009-17 Fakultät für Mathematik Impressum: Herausgeber: Der Dekan der Fakultät für Mathematik an der Technischen Universität

More information

Inverse problems and medical imaging

Inverse problems and medical imaging Inverse problems and medical imaging Bastian von Harrach harrach@math.uni-stuttgart.de Chair of Optimization and Inverse Problems, University of Stuttgart, Germany Rhein-Main Arbeitskreis Mathematics of

More information

On the simplest expression of the perturbed Moore Penrose metric generalized inverse

On the simplest expression of the perturbed Moore Penrose metric generalized inverse Annals of the University of Bucharest (mathematical series) 4 (LXII) (2013), 433 446 On the simplest expression of the perturbed Moore Penrose metric generalized inverse Jianbing Cao and Yifeng Xue Communicated

More information

Inverse problems and medical imaging

Inverse problems and medical imaging Inverse problems and medical imaging Bastian von Harrach harrach@math.uni-frankfurt.de Institute of Mathematics, Goethe University Frankfurt, Germany Colloquium of the Department of Mathematics Saarland

More information

Novel tomography techniques and parameter identification problems

Novel tomography techniques and parameter identification problems Novel tomography techniques and parameter identification problems Bastian von Harrach harrach@ma.tum.de Department of Mathematics - M1, Technische Universität München Colloquium of the Institute of Biomathematics

More information

Morozov s discrepancy principle for Tikhonov-type functionals with non-linear operators

Morozov s discrepancy principle for Tikhonov-type functionals with non-linear operators Morozov s discrepancy principle for Tikhonov-type functionals with non-linear operators Stephan W Anzengruber 1 and Ronny Ramlau 1,2 1 Johann Radon Institute for Computational and Applied Mathematics,

More information

Regularization Inertial Proximal Point Algorithm for Convex Feasibility Problems in Banach Spaces

Regularization Inertial Proximal Point Algorithm for Convex Feasibility Problems in Banach Spaces Int. Journal of Math. Analysis, Vol. 3, 2009, no. 12, 549-561 Regularization Inertial Proximal Point Algorithm for Convex Feasibility Problems in Banach Spaces Nguyen Buong Vietnamse Academy of Science

More information

This article was originally published in a journal published by Elsevier, and the attached copy is provided by Elsevier for the author s benefit and for the benefit of the author s institution, for non-commercial

More information

Inverse Gravimetry Problem

Inverse Gravimetry Problem Inverse Gravimetry Problem Victor Isakov September 21, 2010 Department of M athematics and Statistics W ichita State U niversity W ichita, KS 67260 0033, U.S.A. e mail : victor.isakov@wichita.edu 1 Formulation.

More information

INF-SUP CONDITION FOR OPERATOR EQUATIONS

INF-SUP CONDITION FOR OPERATOR EQUATIONS INF-SUP CONDITION FOR OPERATOR EQUATIONS LONG CHEN We study the well-posedness of the operator equation (1) T u = f. where T is a linear and bounded operator between two linear vector spaces. We give equivalent

More information

BIHARMONIC WAVE MAPS INTO SPHERES

BIHARMONIC WAVE MAPS INTO SPHERES BIHARMONIC WAVE MAPS INTO SPHERES SEBASTIAN HERR, TOBIAS LAMM, AND ROLAND SCHNAUBELT Abstract. A global weak solution of the biharmonic wave map equation in the energy space for spherical targets is constructed.

More information

STOP, a i+ 1 is the desired root. )f(a i) > 0. Else If f(a i+ 1. Set a i+1 = a i+ 1 and b i+1 = b Else Set a i+1 = a i and b i+1 = a i+ 1

STOP, a i+ 1 is the desired root. )f(a i) > 0. Else If f(a i+ 1. Set a i+1 = a i+ 1 and b i+1 = b Else Set a i+1 = a i and b i+1 = a i+ 1 53 17. Lecture 17 Nonlinear Equations Essentially, the only way that one can solve nonlinear equations is by iteration. The quadratic formula enables one to compute the roots of p(x) = 0 when p P. Formulas

More information

Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping.

Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping. Minimization Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping. 1 Minimization A Topological Result. Let S be a topological

More information

On the Local Convergence of Regula-falsi-type Method for Generalized Equations

On the Local Convergence of Regula-falsi-type Method for Generalized Equations Journal of Advances in Applied Mathematics, Vol., No. 3, July 017 https://dx.doi.org/10.606/jaam.017.300 115 On the Local Convergence of Regula-falsi-type Method for Generalized Equations Farhana Alam

More information

Normed & Inner Product Vector Spaces

Normed & Inner Product Vector Spaces Normed & Inner Product Vector Spaces ECE 174 Introduction to Linear & Nonlinear Optimization Ken Kreutz-Delgado ECE Department, UC San Diego Ken Kreutz-Delgado (UC San Diego) ECE 174 Fall 2016 1 / 27 Normed

More information

Strong Convergence Theorem by a Hybrid Extragradient-like Approximation Method for Variational Inequalities and Fixed Point Problems

Strong Convergence Theorem by a Hybrid Extragradient-like Approximation Method for Variational Inequalities and Fixed Point Problems Strong Convergence Theorem by a Hybrid Extragradient-like Approximation Method for Variational Inequalities and Fixed Point Problems Lu-Chuan Ceng 1, Nicolas Hadjisavvas 2 and Ngai-Ching Wong 3 Abstract.

More information

University of Houston, Department of Mathematics Numerical Analysis, Fall 2005

University of Houston, Department of Mathematics Numerical Analysis, Fall 2005 3 Numerical Solution of Nonlinear Equations and Systems 3.1 Fixed point iteration Reamrk 3.1 Problem Given a function F : lr n lr n, compute x lr n such that ( ) F(x ) = 0. In this chapter, we consider

More information

Generalized Local Regularization for Ill-Posed Problems

Generalized Local Regularization for Ill-Posed Problems Generalized Local Regularization for Ill-Posed Problems Patricia K. Lamm Department of Mathematics Michigan State University AIP29 July 22, 29 Based on joint work with Cara Brooks, Zhewei Dai, and Xiaoyue

More information

Inverse scattering problem from an impedance obstacle

Inverse scattering problem from an impedance obstacle Inverse Inverse scattering problem from an impedance obstacle Department of Mathematics, NCKU 5 th Workshop on Boundary Element Methods, Integral Equations and Related Topics in Taiwan NSYSU, October 4,

More information

LINEAR CHAOS? Nathan S. Feldman

LINEAR CHAOS? Nathan S. Feldman LINEAR CHAOS? Nathan S. Feldman In this article we hope to convience the reader that the dynamics of linear operators can be fantastically complex and that linear dynamics exhibits the same beauty and

More information

Nonlinear error dynamics for cycled data assimilation methods

Nonlinear error dynamics for cycled data assimilation methods Nonlinear error dynamics for cycled data assimilation methods A J F Moodey 1, A S Lawless 1,2, P J van Leeuwen 2, R W E Potthast 1,3 1 Department of Mathematics and Statistics, University of Reading, UK.

More information

A convergence result for an Outer Approximation Scheme

A convergence result for an Outer Approximation Scheme A convergence result for an Outer Approximation Scheme R. S. Burachik Engenharia de Sistemas e Computação, COPPE-UFRJ, CP 68511, Rio de Janeiro, RJ, CEP 21941-972, Brazil regi@cos.ufrj.br J. O. Lopes Departamento

More information

A fast nonstationary preconditioning strategy for ill-posed problems, with application to image deblurring

A fast nonstationary preconditioning strategy for ill-posed problems, with application to image deblurring A fast nonstationary preconditioning strategy for ill-posed problems, with application to image deblurring Marco Donatelli Dept. of Science and High Tecnology U. Insubria (Italy) Joint work with M. Hanke

More information

Regularization methods for large-scale, ill-posed, linear, discrete, inverse problems

Regularization methods for large-scale, ill-posed, linear, discrete, inverse problems Regularization methods for large-scale, ill-posed, linear, discrete, inverse problems Silvia Gazzola Dipartimento di Matematica - Università di Padova January 10, 2012 Seminario ex-studenti 2 Silvia Gazzola

More information

I teach myself... Hilbert spaces

I teach myself... Hilbert spaces I teach myself... Hilbert spaces by F.J.Sayas, for MATH 806 November 4, 2015 This document will be growing with the semester. Every in red is for you to justify. Even if we start with the basic definition

More information

ON ILL-POSEDNESS OF NONPARAMETRIC INSTRUMENTAL VARIABLE REGRESSION WITH CONVEXITY CONSTRAINTS

ON ILL-POSEDNESS OF NONPARAMETRIC INSTRUMENTAL VARIABLE REGRESSION WITH CONVEXITY CONSTRAINTS ON ILL-POSEDNESS OF NONPARAMETRIC INSTRUMENTAL VARIABLE REGRESSION WITH CONVEXITY CONSTRAINTS Olivier Scaillet a * This draft: July 2016. Abstract This note shows that adding monotonicity or convexity

More information

Tuning of Fuzzy Systems as an Ill-Posed Problem

Tuning of Fuzzy Systems as an Ill-Posed Problem Tuning of Fuzzy Systems as an Ill-Posed Problem Martin Burger 1, Josef Haslinger 2, and Ulrich Bodenhofer 2 1 SFB F 13 Numerical and Symbolic Scientific Computing and Industrial Mathematics Institute,

More information

Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization

Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization Roger Behling a, Clovis Gonzaga b and Gabriel Haeser c March 21, 2013 a Department

More information

Institut für Numerische und Angewandte Mathematik

Institut für Numerische und Angewandte Mathematik Institut für Numerische und Angewandte Mathematik Iteratively regularized Newton-type methods with general data mist functionals and applications to Poisson data T. Hohage, F. Werner Nr. 20- Preprint-Serie

More information

Analysis in weighted spaces : preliminary version

Analysis in weighted spaces : preliminary version Analysis in weighted spaces : preliminary version Frank Pacard To cite this version: Frank Pacard. Analysis in weighted spaces : preliminary version. 3rd cycle. Téhéran (Iran, 2006, pp.75.

More information

arxiv: v1 [math.fa] 16 Jun 2011

arxiv: v1 [math.fa] 16 Jun 2011 arxiv:1106.3342v1 [math.fa] 16 Jun 2011 Gauge functions for convex cones B. F. Svaiter August 20, 2018 Abstract We analyze a class of sublinear functionals which characterize the interior and the exterior

More information

M. Marques Alves Marina Geremia. November 30, 2017

M. Marques Alves Marina Geremia. November 30, 2017 Iteration complexity of an inexact Douglas-Rachford method and of a Douglas-Rachford-Tseng s F-B four-operator splitting method for solving monotone inclusions M. Marques Alves Marina Geremia November

More information

Riesz bases of Floquet modes in semi-infinite periodic waveguides and implications

Riesz bases of Floquet modes in semi-infinite periodic waveguides and implications Riesz bases of Floquet modes in semi-infinite periodic waveguides and implications Thorsten Hohage joint work with Sofiane Soussi Institut für Numerische und Angewandte Mathematik Georg-August-Universität

More information

2 Nonlinear least squares algorithms

2 Nonlinear least squares algorithms 1 Introduction Notes for 2017-05-01 We briefly discussed nonlinear least squares problems in a previous lecture, when we described the historical path leading to trust region methods starting from the

More information

Sparse Recovery in Inverse Problems

Sparse Recovery in Inverse Problems Radon Series Comp. Appl. Math XX, 1 63 de Gruyter 20YY Sparse Recovery in Inverse Problems Ronny Ramlau and Gerd Teschke Abstract. Within this chapter we present recent results on sparse recovery algorithms

More information

Universität des Saarlandes. Fachrichtung 6.1 Mathematik

Universität des Saarlandes. Fachrichtung 6.1 Mathematik Universität des Saarlandes Fachrichtung 6.1 Mathematik Preprint Nr. 362 An iterative method for EIT involving only solutions of Poisson equations. I: Mesh-free forward solver Thorsten Hohage and Sergej

More information

Universität des Saarlandes. Fachrichtung 6.1 Mathematik

Universität des Saarlandes. Fachrichtung 6.1 Mathematik Universität des Saarlandes U N I V E R S I T A S S A R A V I E N I S S Fachrichtung 6.1 Mathematik Preprint Nr. 151 Unitary extensions of Hilbert A(D)-modules split Michael Didas and Jörg Eschmeier Saarbrücken

More information

Rolle s Theorem for Polynomials of Degree Four in a Hilbert Space 1

Rolle s Theorem for Polynomials of Degree Four in a Hilbert Space 1 Journal of Mathematical Analysis and Applications 265, 322 33 (2002) doi:0.006/jmaa.200.7708, available online at http://www.idealibrary.com on Rolle s Theorem for Polynomials of Degree Four in a Hilbert

More information

Factorization method in inverse

Factorization method in inverse Title: Name: Affil./Addr.: Factorization method in inverse scattering Armin Lechleiter University of Bremen Zentrum für Technomathematik Bibliothekstr. 1 28359 Bremen Germany Phone: +49 (421) 218-63891

More information

1.4 The Jacobian of a map

1.4 The Jacobian of a map 1.4 The Jacobian of a map Derivative of a differentiable map Let F : M n N m be a differentiable map between two C 1 manifolds. Given a point p M we define the derivative of F at p by df p df (p) : T p

More information

Spectral gradient projection method for solving nonlinear monotone equations

Spectral gradient projection method for solving nonlinear monotone equations Journal of Computational and Applied Mathematics 196 (2006) 478 484 www.elsevier.com/locate/cam Spectral gradient projection method for solving nonlinear monotone equations Li Zhang, Weijun Zhou Department

More information

Stability constants for kernel-based interpolation processes

Stability constants for kernel-based interpolation processes Dipartimento di Informatica Università degli Studi di Verona Rapporto di ricerca Research report 59 Stability constants for kernel-based interpolation processes Stefano De Marchi Robert Schaback Dipartimento

More information

Numerical Methods for Large-Scale Nonlinear Systems

Numerical Methods for Large-Scale Nonlinear Systems Numerical Methods for Large-Scale Nonlinear Systems Handouts by Ronald H.W. Hoppe following the monograph P. Deuflhard Newton Methods for Nonlinear Problems Springer, Berlin-Heidelberg-New York, 2004 Num.

More information

Bounded Error Parameter Estimation for Models Described by Ordinary and Delay Differential Equations

Bounded Error Parameter Estimation for Models Described by Ordinary and Delay Differential Equations Bounded Error Parameter Estimation for Models Described by Ordinary and Delay Differential Equations John A. Burns and Adam F. Childers Interdisciplinary Center for Applied Mathematics Virginia Tech Blacksburg,

More information

LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE

LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE CONVEX ANALYSIS AND DUALITY Basic concepts of convex analysis Basic concepts of convex optimization Geometric duality framework - MC/MC Constrained optimization

More information