Int. Journal of Math. Analysis, Vol. 4, 2010, no. 45, 2211-2228

An Iteratively Regularized Projection Method with Quadratic Convergence for Nonlinear Ill-posed Problems

Santhosh George
Department of Mathematical and Computational Sciences
National Institute of Technology Karnataka, Surathkal, India-575025
E-mail: sgeorge@nitk.ac.in

Atef Ibrahim Elmahdy
Department of Mathematical and Computational Sciences
National Institute of Technology Karnataka, Surathkal, India-575025
E-mail: atefmahdy5@yahoo.com

Abstract

An iteratively regularized projection method, which converges quadratically, is considered for obtaining a stable approximate solution to the nonlinear ill-posed operator equation $F(x) = y$, where $F : D(F) \subseteq X \to X$ is a nonlinear monotone operator defined on a real Hilbert space $X$. We assume that only noisy data $y^\delta$ with $\|y - y^\delta\| \le \delta$ are available. Under the assumption that the Fréchet derivative $F'$ of $F$ is Lipschitz continuous, a choice of the regularization parameter $\alpha$ using an adaptive selection of the parameter and a stopping rule for the iteration index using a majorizing sequence are presented. We prove that, under a general source condition on $x_0 - \hat{x}$, the error $\|x^{h,\delta}_{n,\alpha} - \hat{x}\|$ between the regularized approximation $x^{h,\delta}_{n,\alpha}$ (with $x^{h,\delta}_{0,\alpha} := P_h x_0$, where $P_h$ is the orthogonal projection onto a finite dimensional subspace $X_h$ of $X$) and the solution $\hat{x}$ is of optimal order.

Mathematics Subject Classification: 65J20, 65J15, 47J06

Keywords: Nonlinear ill-posed operator, Monotone operator, Majorizing sequence, Regularized projection method, Quadratic convergence

1 Introduction

Let $F : D(F) \subseteq X \to X$ be a nonlinear monotone operator (see [13]) defined on a real Hilbert space $X$. We consider the problem of approximately solving the nonlinear ill-posed operator equation
$$F(x) = y. \qquad (1)$$
Throughout this paper we denote the inner product and the corresponding norm on $X$ by $\langle \cdot, \cdot \rangle$ and $\|\cdot\|$ respectively. We assume that (1) has a solution $\hat{x}$ and that $y^\delta \in X$ is the available noisy data with
$$\|y - y^\delta\| \le \delta. \qquad (2)$$
The problem of recovering $\hat{x}$ from the noisy equation $F(x) = y^\delta$ is then ill-posed, in the sense that the Fréchet derivative $F'(\cdot)$ is not boundedly invertible (see [1], page 6).

A well known method for regularizing (1) when $F$ is monotone is the method of Lavrentiev regularization (see [13]). In this method the approximation $x^\delta_\alpha$ is obtained by solving the singularly perturbed operator equation
$$F(x) + \alpha(x - x_0) = y^\delta. \qquad (3)$$
In practice, one has to deal with a sequence $(x^\delta_{n,\alpha})_{n=1}^{\infty}$ converging to the solution $x^\delta_\alpha$ of (3). Many authors have considered such sequences (see [2, 3, 6, 7, 9, 8]).

In [11], George and Elmahdy considered the iterative regularization method
$$x^\delta_{n+1,\alpha} = x^\delta_{n,\alpha} - (F'(x^\delta_{n,\alpha}) + \alpha I)^{-1}\big(F(x^\delta_{n,\alpha}) - y^\delta + \alpha(x^\delta_{n,\alpha} - x_0)\big), \qquad (4)$$
where $x^\delta_{0,\alpha} = x_0$, and proved that (4) converges quadratically to the unique solution $x^\delta_\alpha$ of (3). Recall that a sequence $(x_n)$ is said to converge quadratically to $x^*$ if there exists a positive number $M$, not necessarily less than 1, such that $\|x_{n+1} - x^*\| \le M\|x_n - x^*\|^2$ for all sufficiently large $n$; the convergence of $(x_n)$ to $x^*$ is said to be linear if there exists $M \in (0, 1)$ such that $\|x_{n+1} - x^*\| \le M\|x_n - x^*\|$. Note that, regardless of the value of $M$, a quadratically convergent sequence always eventually converges faster than a linearly convergent one.
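For orientation, iteration (4) can be sketched in a few lines for a finite-dimensional surrogate problem. In the sketch below the toy operator F, its derivative dF, the data y_delta and the choices of alpha and x0 are illustrative placeholders only, not taken from this paper.

```python
# Minimal sketch of iteration (4) for a discretized monotone operator.
import numpy as np

def lavrentiev_newton(F, dF, y_delta, x0, alpha, n_iter):
    """x_{n+1} = x_n - (F'(x_n) + alpha*I)^{-1} (F(x_n) - y_delta + alpha*(x_n - x0))."""
    x = x0.copy()
    I = np.eye(x0.size)
    for _ in range(n_iter):
        rhs = F(x) - y_delta + alpha * (x - x0)
        x = x - np.linalg.solve(dF(x) + alpha * I, rhs)
    return x

# Toy monotone operator F(x) = x + x^3 (componentwise), perturbed data.
F = lambda x: x + x**3
dF = lambda x: np.diag(1.0 + 3.0 * x**2)
x_hat = np.array([0.5, -0.3])
y_delta = F(x_hat) + 1e-4 * np.array([1.0, -1.0])   # noise of size about 1.4e-4
print(lavrentiev_newton(F, dF, y_delta, np.zeros(2), alpha=1e-2, n_iter=5))
```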

The following assumptions are used for proving the results in [11] as well as the results in this paper.

Assumption 1.1 There exists $r > 0$ such that $B_r(\hat{x}) \subseteq D(F)$ and $F$ is Fréchet differentiable at all $x \in B_r(\hat{x})$.

Assumption 1.2 There exists a constant $k_0 > 0$ such that for every $x, u \in B_r(\hat{x})$ and $v \in X$ there exists an element $\Phi(x, u, v) \in X$ satisfying
$$[F'(x) - F'(u)]v = F'(u)\Phi(x, u, v), \qquad \|\Phi(x, u, v)\| \le k_0\|v\|\,\|x - u\|$$
for all $x, u \in B_r(\hat{x})$ and $v \in X$.

Assumption 1.3 There exists a continuous, strictly monotonically increasing function $\varphi : (0, a] \to (0, \infty)$ with $a \ge \|F'(\hat{x})\|$ satisfying $\lim_{\lambda \to 0}\varphi(\lambda) = 0$, and a $v \in X$ with $\|v\| \le 1$, such that
$$x_0 - \hat{x} = \varphi(F'(\hat{x}))v$$
and
$$\sup_{\lambda \in (0, a]} \frac{\alpha\,\varphi(\lambda)}{\lambda + \alpha} \le c_\varphi\,\varphi(\alpha), \qquad \alpha \in (0, a].$$

The analysis in [11], as well as in this paper, is based on majorizing sequences. Recall (see [5], Definition 1.3.11) that a nonnegative sequence $(t_n)$ is said to be a majorizing sequence of a sequence $(x_n)$ in $X$ if
$$\|x_{n+1} - x_n\| \le t_{n+1} - t_n, \qquad n \ge 0.$$
The majorizing sequence gives an a priori error estimate which can be used to determine the number of iterations needed to achieve a prescribed accuracy before the actual computation takes place.

The plan of this paper is as follows. In Section 2 we collect some results from [11] needed for the error estimates in this paper. In Section 3 we consider an iteratively regularized projection method for obtaining a sequence $(x^{h,\delta}_{n,\alpha})$ in a finite dimensional subspace $X_h$ of $X$, prove that $(x^{h,\delta}_{n,\alpha})$ converges to $x^\delta_\alpha$, and obtain an estimate for $\|x^{h,\delta}_{n,\alpha} - x^\delta_\alpha\|$. Using an error estimate for $\|x^\delta_\alpha - \hat{x}\|$ (see [11, 13]), we obtain an estimate for $\|x^{h,\delta}_{n,\alpha} - \hat{x}\|$ in Section 4, where the error analysis for the order optimal result, using an adaptive selection of the parameter and a stopping rule based on a majorizing sequence, is also given. Implementation of the adaptive choice of the parameter and of the stopping rule is discussed in Section 5. The paper ends with some concluding remarks in Section 6.

2 Preliminaries

In [11] the majorizing sequence $(t_n)$, defined iteratively by
$$t_0 = 0, \quad t_1 = \eta, \quad t_{n+1} = t_n + \frac{3k_0}{2}(t_n - t_{n-1})^2, \qquad (5)$$
where $k_0$, $\eta$ and $q \in [0, 1)$ are nonnegative numbers such that
$$\frac{3k_0}{2}\,\eta \le q, \qquad (6)$$
was used for proving the quadratic convergence of the sequence $(x^\delta_{n,\alpha})$ to the unique solution $x^\delta_\alpha$ of equation (3). For proving the results in [11] as well as the results in this paper we use the following lemma on majorization, which is a reformulation of Lemma 1.3.1 in [5].

Lemma 2.1 Let $(t_n)$ be a majorizing sequence for $(x_n)$. If $x^* = \lim_{n \to \infty} x_n$ exists and $\lim_{n \to \infty} t_n = t^*$, then
$$\|x^* - x_n\| \le t^* - t_n, \qquad n \ge 0. \qquad (7)$$
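As a quick numerical illustration (with arbitrary values of k0 and eta satisfying (6), not taken from the paper), the recursion (5) and the resulting a priori bounds of (7) can be tabulated as follows.

```python
# Tabulate the majorizing sequence (5) and the a priori bounds t* - t_n of (7).
def majorizing_sequence(k0, eta, N):
    t = [0.0, eta]
    for _ in range(N - 1):
        t.append(t[-1] + 1.5 * k0 * (t[-1] - t[-2]) ** 2)   # recursion (5)
    return t

k0, eta = 0.4, 0.5
q = 1.5 * k0 * eta                 # condition (6): here q = 0.3 < 1
t = majorizing_sequence(k0, eta, 8)
t_star = t[-1]                     # numerical stand-in for t* = lim t_n
print([t_star - tn for tn in t])   # a priori error bounds via (7)
```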

The following lemma, based on Assumption 1.2, will be used in due course.

Lemma 2.2 For $u, v \in B_r(x_0)$,
$$F(u) - F(v) - F'(u)(u - v) = F'(u)\int_0^1 \Phi\big(v + t(u - v),\, u,\, u - v\big)\,dt.$$

Proof. Using the Fundamental Theorem of Integration, for $u, v \in B_r(x_0)$ we have
$$F(u) - F(v) = \int_0^1 F'(v + t(u - v))(u - v)\,dt, \qquad F'(u)(u - v) = \int_0^1 F'(u)(u - v)\,dt,$$
and so
$$F(u) - F(v) - F'(u)(u - v) = \int_0^1 [F'(v + t(u - v)) - F'(u)](u - v)\,dt.$$
Hence, by Assumption 1.2,
$$F(u) - F(v) - F'(u)(u - v) = F'(u)\int_0^1 \Phi\big(v + t(u - v),\, u,\, u - v\big)\,dt.$$
This completes the proof of the lemma.

Hereafter we assume that $\|x_0 - \hat{x}\| \le \rho$ and
$$k_0\rho^2 + \rho + \frac{\delta}{\alpha} \le \eta \le \min\left\{\frac{2q}{3k_0},\; r(1 - q)\right\}. \qquad (8)$$

Theorem 2.3 ([11], Theorem 2.1) Suppose Assumption 1.2 holds. Let $0 < t^* \le t^{**} := \frac{\eta}{1 - q}$ with $\eta$ as in (8), and let (5) and (6) be satisfied. Then the sequence $(x^\delta_{n,\alpha})$ defined in (4) is well defined and $x^\delta_{n,\alpha} \in B_{t^*}(x_0)$ for all $n \ge 0$. Further, $(x^\delta_{n,\alpha})$ is a Cauchy sequence in $B_{t^*}(x_0)$ and hence converges to $x^\delta_\alpha \in \overline{B_{t^*}(x_0)} \subseteq B_{t^{**}}(x_0)$, and $F(x^\delta_\alpha) = y^\delta + \alpha(x_0 - x^\delta_\alpha)$. Moreover, the following estimates hold for all $n \ge 0$:
$$\|x^\delta_{n+1,\alpha} - x^\delta_{n,\alpha}\| \le t_{n+1} - t_n, \qquad (9)$$
$$\|x^\delta_{n,\alpha} - x^\delta_\alpha\| \le t^* - t_n \le \frac{q^{2^n - 1}\,\eta}{1 - q}, \qquad (10)$$
$$\|x^\delta_{n+1,\alpha} - x^\delta_\alpha\| \le \frac{3k_0}{2}\|x^\delta_{n,\alpha} - x^\delta_\alpha\|^2. \qquad (11)$$

Remark 2.4 Note that (11) implies that $(x^\delta_{n,\alpha})$ converges quadratically to $x^\delta_\alpha$.

3 Iteratively Regularized Projection Method

Let $H$ be a bounded subset of positive reals such that zero is a limit point of $H$, and let $\{P_h\}_{h \in H}$ be a family of orthogonal projections from $X$ into itself. We assume that
$$b_h := \|(I - P_h)x_0\| \to 0 \qquad (12)$$
as $h \to 0$; this assumption is satisfied if $P_h \to I$ pointwise. Let
$$x^{h,\delta}_{n+1,\alpha} = x^{h,\delta}_{n,\alpha} - \big(P_hF'(x^{h,\delta}_{n,\alpha})P_h + \alpha P_h\big)^{-1}P_h\big(F(x^{h,\delta}_{n,\alpha}) - y^\delta + \alpha(x^{h,\delta}_{n,\alpha} - x_0)\big), \qquad (13)$$
where $x^{h,\delta}_{0,\alpha} := P_hx_0$.
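A minimal sketch of the projected iteration (13), in the same illustrative setting as the earlier sketch (P below is a stand-in for the matrix of $P_h$, here a coordinate projector; F, dF, y_delta and alpha remain placeholders):

```python
# Sketch of the projected iteration (13).
import numpy as np

def projected_iteration(F, dF, P, y_delta, x0, alpha, n_iter):
    """x_{n+1} = x_n - (P F'(x_n) P + alpha*P)^{-1} P (F(x_n) - y_delta + alpha*(x_n - x0))."""
    x = P @ x0                                     # x_0 := P_h x_0
    for _ in range(n_iter):
        A = P @ dF(x) @ P + alpha * P              # P_h F'(x_n) P_h + alpha P_h
        b = P @ (F(x) - y_delta + alpha * (x - x0))
        # A is singular off range(P); the minimum-norm least-squares solution
        # lies in range(P), which realizes the inverse on X_h.
        s, *_ = np.linalg.lstsq(A, b, rcond=None)
        x = x - s
    return x
```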

Let
$$\Gamma_{n,h} := \|(I - P_h)F'(x^\delta_{n,\alpha})\|, \qquad (14)$$
$$\varrho := \|F'(P_hx_0)\|, \qquad \gamma_{n,h} := \|F'(x^{h,\delta}_{n,\alpha})(I - P_h)\|, \qquad (15)$$
and let $(\tilde{t}_{n,h})$, $n \ge 0$, be defined iteratively by $\tilde{t}_{0,h} = 0$, $\tilde{t}_{1,h} = \eta_h > 0$ and
$$\tilde{t}_{n+1,h} = \tilde{t}_{n,h} + \left(1 + \frac{k_0\varrho\,\tilde{t}_{n,h} + \gamma_{0,h}}{\alpha}\right)\frac{3k_0}{2}(\tilde{t}_{n,h} - \tilde{t}_{n-1,h})^2, \qquad (16)$$
where $k_0$, $\eta_h$ and $r_h \in [0, 1)$ are nonnegative numbers.

Lemma 3.1 Assume there exist nonnegative numbers $k_0$, $\eta_h$ and $r_h \in [0, 1)$ such that
$$\left(1 + \frac{k_0\varrho\,\eta_h + \gamma_{0,h}}{\alpha}\right)\frac{3k_0}{2}\,\eta_h \le r_h. \qquad (17)$$
Then the sequence $(\tilde{t}_{n,h})$ defined in (16) is increasing, bounded above by $\tilde{t}^{**}_h := \frac{\eta_h}{1 - r_h}$, and converges to some $\tilde{t}^*_h$ with $0 < \tilde{t}^*_h \le \frac{\eta_h}{1 - r_h}$. Moreover, for $n \ge 0$,
$$\tilde{t}_{n+1,h} - \tilde{t}_{n,h} \le r_h(\tilde{t}_{n,h} - \tilde{t}_{n-1,h}) \le r_h^n\,\eta_h \qquad (18)$$
and
$$\tilde{t}^*_h - \tilde{t}_{n,h} \le \frac{r_h^n\,\eta_h}{1 - r_h}. \qquad (19)$$

Proof. Since the result is trivially true if $\eta_h = 0$, $k_0 = 0$ or $r_h = 0$, we assume that $\eta_h \ne 0$, $k_0 \ne 0$ and $r_h \ne 0$. Observe that $\tilde{t}_{i,h} \ge \tilde{t}_{i-1,h}$ for all $i \ge 1$, and that if
$$\left(1 + \frac{k_0\varrho\,\tilde{t}_{i,h} + \gamma_{0,h}}{\alpha}\right)\frac{3k_0}{2}(\tilde{t}_{i,h} - \tilde{t}_{i-1,h}) \le r_h \qquad (20)$$
holds for all $i$, then the estimate (18) follows from (16). We prove (20) by induction on $i \ge 1$. For $i = 1$, (20) holds by (17). Suppose (20) holds for all $i \le k$ for some $k$. Then, by (16) and the induction hypothesis,

writing $A_i := \left(1 + \frac{k_0\varrho\,\tilde{t}_{i,h} + \gamma_{0,h}}{\alpha}\right)\frac{3k_0}{2}(\tilde{t}_{i,h} - \tilde{t}_{i-1,h})$, the recursion (16) gives $\tilde{t}_{k+1,h} - \tilde{t}_{k,h} = A_k(\tilde{t}_{k,h} - \tilde{t}_{k-1,h})$, and hence
$$A_{k+1} = A_k^2 + \frac{3k_0^2\varrho}{2\alpha}(\tilde{t}_{k+1,h} - \tilde{t}_{k,h})^2 \le r_h^2 + \frac{3k_0^2\varrho}{2\alpha}\,r_h\,(r_h^{k-1}\eta_h)^2 \le r_h, \qquad (21)$$
where the last inequality follows from (17) and (24) below. Thus by induction (20) holds for all $i \ge 1$, and for $k \ge 0$,
$$\tilde{t}_{k+1,h} \le \tilde{t}_{k,h} + r_h(\tilde{t}_{k,h} - \tilde{t}_{k-1,h}) \le \cdots \le \eta_h(1 + r_h + r_h^2 + \cdots + r_h^k) = \frac{1 - r_h^{k+1}}{1 - r_h}\,\eta_h < \frac{\eta_h}{1 - r_h}. \qquad (22)$$
Hence the sequence $(\tilde{t}_{n,h})$, $n \ge 0$, is nondecreasing and bounded above by $\frac{\eta_h}{1 - r_h}$, so it converges to some $\tilde{t}^*_h \le \frac{\eta_h}{1 - r_h}$. Further,
$$\tilde{t}^*_h - \tilde{t}_{n,h} = \lim_{i \to \infty}\sum_{j=0}^{i-1}(\tilde{t}_{n+1+j,h} - \tilde{t}_{n+j,h}) \le \frac{r_h^n\,\eta_h}{1 - r_h}. \qquad (23)$$
This completes the proof of the lemma.

Hereafter we assume that
$$\left(1 + \frac{\gamma_{0,h}}{\alpha}\right)\big(k_0(b_h + \rho)^2 + b_h + \rho\big) + \frac{\delta}{\alpha} \le \eta_h \le \min\{C,\; r(1 - r_h)\}, \qquad (24)$$
where
$$C := \frac{1}{2k_0\varrho}\left[-(\alpha + \gamma_{0,h}) + \sqrt{(\alpha + \gamma_{0,h})^2 + \frac{8\alpha\varrho\,r_h}{3}}\right].$$

Remark 3.2 We need the above assumption for two reasons. First, (24) implies $\tilde{t}^{**}_h = \frac{\eta_h}{1 - r_h} \le r$, and hence we can apply Assumption 1.1. Second, $\eta_h \le C$ is exactly condition (17), since (17) holds precisely when
$$\eta_h \le \frac{1}{2k_0\varrho}\left[-(\alpha + \gamma_{0,h}) + \sqrt{(\alpha + \gamma_{0,h})^2 + \frac{8\alpha\varrho\,r_h}{3}}\right] = C. \qquad (25)$$
These conditions are straightforward to verify numerically, as sketched below.
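The following sketch (with arbitrary illustrative numbers, and using the reconstruction of (16), (17) and (25) stated above) evaluates the left-hand side of (17), the constant C of (25), and tabulates the sequence (16).

```python
# Check (17)/(25) and tabulate the sequence (16); k0, rho_op (= ||F'(P_h x0)||),
# gamma0 (= gamma_{0,h}), alpha, eta_h and r_h are illustrative numbers.
import math

def projected_majorizing_sequence(k0, rho_op, gamma0, alpha, eta_h, N):
    t = [0.0, eta_h]
    for _ in range(N - 1):
        factor = 1.0 + (k0 * rho_op * t[-1] + gamma0) / alpha
        t.append(t[-1] + factor * 1.5 * k0 * (t[-1] - t[-2]) ** 2)   # (16)
    return t

k0, rho_op, gamma0, alpha, eta_h, r_h = 0.4, 1.0, 0.05, 0.5, 0.3, 0.5
lhs17 = (1.0 + (k0 * rho_op * eta_h + gamma0) / alpha) * 1.5 * k0 * eta_h
assert lhs17 <= r_h < 1.0          # condition (17)
C = (-(alpha + gamma0) + math.sqrt((alpha + gamma0) ** 2
     + 8 * alpha * rho_op * r_h / 3)) / (2 * k0 * rho_op)
assert eta_h <= C                  # consistent with (25)
print(lhs17, C, projected_majorizing_sequence(k0, rho_op, gamma0, alpha, eta_h, 6))
```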

Theorem 3.3 Let the assumptions in Lemma 3.1, with $\eta_h$ as in (24), and Assumption 1.2 be satisfied. Then the sequence $(\tilde{t}_{n,h})$ defined in (16) is a majorizing sequence of the sequence $(x^{h,\delta}_{n,\alpha})$ defined in (13), and $x^{h,\delta}_{n,\alpha} \in B_{\tilde{t}^*_h}(P_hx_0)$ for all $n \ge 0$.

Proof. Let
$$G(x) = x - R_\alpha(x)^{-1}\big[F(x) - y^\delta + \alpha(x - x_0)\big], \qquad R_\alpha(x)^{-1} := \big(P_hF'(x)P_h + \alpha P_h\big)^{-1}.$$
Then for $u, v \in B_{\tilde{t}^*_h}(P_hx_0)$ we have
$$G(u) - G(v) = u - v - R_\alpha(u)^{-1}\big[F(u) - y^\delta + \alpha(u - x_0)\big] + R_\alpha(v)^{-1}\big[F(v) - y^\delta + \alpha(v - x_0)\big]$$
$$= R_\alpha(u)^{-1}\big[P_hF'(u)P_h(u - v) - (F(u) - F(v))\big] + \big(R_\alpha(v)^{-1} - R_\alpha(u)^{-1}\big)\big(F(v) - y^\delta + \alpha(v - x_0)\big)$$
$$= R_\alpha(u)^{-1}\big[P_hF'(u)P_h(u - v) - (F(u) - F(v))\big] - R_\alpha(u)^{-1}\big(R_\alpha(u) - R_\alpha(v)\big)(G(v) - v),$$
using $R_\alpha(v)^{-1} - R_\alpha(u)^{-1} = R_\alpha(u)^{-1}(R_\alpha(u) - R_\alpha(v))R_\alpha(v)^{-1}$.

Now since $G(x^{h,\delta}_{n,\alpha}) = x^{h,\delta}_{n+1,\alpha}$, $R_\alpha(x)^{-1} = R_\alpha(x)^{-1}P_h = P_hR_\alpha(x)^{-1}$ and $P_h(x^{h,\delta}_{n,\alpha} - x^{h,\delta}_{n-1,\alpha}) = x^{h,\delta}_{n,\alpha} - x^{h,\delta}_{n-1,\alpha}$, we obtain
$$x^{h,\delta}_{n+1,\alpha} - x^{h,\delta}_{n,\alpha} = G(x^{h,\delta}_{n,\alpha}) - G(x^{h,\delta}_{n-1,\alpha})$$
$$= R_\alpha(x^{h,\delta}_{n,\alpha})^{-1}\big[F'(x^{h,\delta}_{n,\alpha})(x^{h,\delta}_{n,\alpha} - x^{h,\delta}_{n-1,\alpha}) - (F(x^{h,\delta}_{n,\alpha}) - F(x^{h,\delta}_{n-1,\alpha}))\big] - R_\alpha(x^{h,\delta}_{n,\alpha})^{-1}\big(F'(x^{h,\delta}_{n,\alpha}) - F'(x^{h,\delta}_{n-1,\alpha})\big)(x^{h,\delta}_{n,\alpha} - x^{h,\delta}_{n-1,\alpha})$$
$$= R_\alpha(x^{h,\delta}_{n,\alpha})^{-1}\big[F'(x^{h,\delta}_{n,\alpha})P_h + F'(x^{h,\delta}_{n,\alpha})(I - P_h)\big]\left[\int_0^1\Phi\big(x^{h,\delta}_{n,\alpha} + t(x^{h,\delta}_{n-1,\alpha} - x^{h,\delta}_{n,\alpha}),\, x^{h,\delta}_{n,\alpha},\, x^{h,\delta}_{n-1,\alpha} - x^{h,\delta}_{n,\alpha}\big)\,dt + \Phi\big(x^{h,\delta}_{n-1,\alpha},\, x^{h,\delta}_{n,\alpha},\, x^{h,\delta}_{n,\alpha} - x^{h,\delta}_{n-1,\alpha}\big)\right],$$
where the integral term follows from Lemma 2.2 and the last term from Assumption 1.2.

So by Assumption 1.2 and the estimate
$$\big\|R_\alpha(x^{h,\delta}_{n,\alpha})^{-1}\big[F'(x^{h,\delta}_{n,\alpha})P_h + F'(x^{h,\delta}_{n,\alpha})(I - P_h)\big]\big\| \le 1 + \frac{\gamma_{n,h}}{\alpha}, \qquad (26)$$
we have
$$\|x^{h,\delta}_{n+1,\alpha} - x^{h,\delta}_{n,\alpha}\| \le \left(1 + \frac{\gamma_{n,h}}{\alpha}\right)\frac{3k_0}{2}\|x^{h,\delta}_{n,\alpha} - x^{h,\delta}_{n-1,\alpha}\|^2. \qquad (27)$$
Note that
$$\gamma_{n,h} = \big\|\big[F'(x^{h,\delta}_{n,\alpha}) - F'(P_hx_0) + F'(P_hx_0)\big](I - P_h)\big\| \le \gamma_{0,h} + k_0\,\varrho\,\|x^{h,\delta}_{n,\alpha} - P_hx_0\|, \qquad (28)$$
by Assumption 1.2 and the definition of $\varrho$. So by (27) we have
$$\|x^{h,\delta}_{n+1,\alpha} - x^{h,\delta}_{n,\alpha}\| \le \left(1 + \frac{\gamma_{0,h} + k_0\varrho\,\|x^{h,\delta}_{n,\alpha} - P_hx_0\|}{\alpha}\right)\frac{3k_0}{2}\|x^{h,\delta}_{n,\alpha} - x^{h,\delta}_{n-1,\alpha}\|^2. \qquad (29)$$

Now we shall prove that the sequence $(\tilde{t}_{n,h})$ defined in (16) is a majorizing sequence of $(x^{h,\delta}_{n,\alpha})$ and that $x^{h,\delta}_{n,\alpha} \in B_{\tilde{t}^*_h}(P_hx_0)$ for all $n \ge 0$. Since $F(\hat{x}) = y$,
$$\|x^{h,\delta}_{1,\alpha} - P_hx_0\| = \big\|\big(P_hF'(P_hx_0)P_h + \alpha P_h\big)^{-1}P_h\big(F(P_hx_0) - y^\delta\big)\big\|$$
$$= \big\|\big(P_hF'(P_hx_0)P_h + \alpha P_h\big)^{-1}P_h\big(F(P_hx_0) - F(\hat{x}) - F'(P_hx_0)(P_hx_0 - \hat{x}) + F'(P_hx_0)(P_hx_0 - \hat{x}) + y - y^\delta\big)\big\|$$
$$\le \Big\|\big(P_hF'(P_hx_0)P_h + \alpha P_h\big)^{-1}P_h\big[F'(P_hx_0)P_h + F'(P_hx_0)(I - P_h)\big]\int_0^1\Phi\big(\hat{x} + t(P_hx_0 - \hat{x}),\, P_hx_0,\, P_hx_0 - \hat{x}\big)\,dt\Big\|$$
$$\quad + \big\|\big(P_hF'(P_hx_0)P_h + \alpha P_h\big)^{-1}P_h\big[F'(P_hx_0)P_h + F'(P_hx_0)(I - P_h)\big](P_hx_0 - \hat{x})\big\| + \frac{\delta}{\alpha}$$

$$\le \left(1 + \frac{\gamma_{0,h}}{\alpha}\right)\big(k_0\|P_hx_0 - \hat{x}\|^2 + \|P_hx_0 - \hat{x}\|\big) + \frac{\delta}{\alpha} \le \left(1 + \frac{\gamma_{0,h}}{\alpha}\right)\big(k_0(b_h + \rho)^2 + b_h + \rho\big) + \frac{\delta}{\alpha} \le \eta_h,$$
where the first step follows from Assumption 1.2, (26) and the inequality $\|P_hx_0 - \hat{x}\| \le b_h + \rho$, and the last step follows from (24). So $\|x^{h,\delta}_{1,\alpha} - P_hx_0\| \le \eta_h = \tilde{t}_{1,h} - \tilde{t}_{0,h}$. Assume that, for some $k$,
$$\|x^{h,\delta}_{i+1,\alpha} - x^{h,\delta}_{i,\alpha}\| \le \tilde{t}_{i+1,h} - \tilde{t}_{i,h}, \qquad i \le k. \qquad (30)$$
Then
$$\|x^{h,\delta}_{k+1,\alpha} - P_hx_0\| \le \|x^{h,\delta}_{k+1,\alpha} - x^{h,\delta}_{k,\alpha}\| + \cdots + \|x^{h,\delta}_{1,\alpha} - P_hx_0\| \le (\tilde{t}_{k+1,h} - \tilde{t}_{k,h}) + \cdots + (\tilde{t}_{1,h} - \tilde{t}_{0,h}) = \tilde{t}_{k+1,h} \le \tilde{t}^*_h,$$
so $x^{h,\delta}_{i+1,\alpha} \in B_{\tilde{t}^*_h}(P_hx_0)$ for all $i \le k$. Therefore by (29) and (30) we have
$$\|x^{h,\delta}_{k+2,\alpha} - x^{h,\delta}_{k+1,\alpha}\| \le \left(1 + \frac{\gamma_{0,h} + k_0\varrho\,\tilde{t}_{k+1,h}}{\alpha}\right)\frac{3k_0}{2}(\tilde{t}_{k+1,h} - \tilde{t}_{k,h})^2 = \tilde{t}_{k+2,h} - \tilde{t}_{k+1,h}.$$
Thus by induction $\|x^{h,\delta}_{n+1,\alpha} - x^{h,\delta}_{n,\alpha}\| \le \tilde{t}_{n+1,h} - \tilde{t}_{n,h}$ for all $n \ge 0$, and hence $(\tilde{t}_{n,h})$, $n \ge 0$, is a majorizing sequence of the sequence $(x^{h,\delta}_{n,\alpha})$. In particular $\|x^{h,\delta}_{n,\alpha} - P_hx_0\| \le \tilde{t}_{n,h} \le \tilde{t}^*_h$, i.e., $x^{h,\delta}_{n,\alpha} \in B_{\tilde{t}^*_h}(P_hx_0)$ for all $n \ge 0$. Hence
$$\|x^{h,\delta}_{n,\alpha} - P_hx_0\| \le \tilde{t}^*_h \le \frac{\eta_h}{1 - r_h}. \qquad (31)$$

Lemma 3.4 Let $x^{h,\delta}_{n,\alpha}$ be as in (13) and $x^\delta_{n,\alpha}$ be as in (4), and let the assumptions in Theorem 2.3 and Theorem 3.3 hold. Then
$$\|x^{h,\delta}_{n-1,\alpha} - x^\delta_{n-1,\alpha}\| \le \frac{\eta_h}{1 - r_h} + b_h + \frac{\eta}{1 - q}. \qquad (32)$$

Proof. Note that
$$\|x^{h,\delta}_{n-1,\alpha} - x^\delta_{n-1,\alpha}\| \le \|x^{h,\delta}_{n-1,\alpha} - P_hx_0\| + \|(P_h - I)x_0\| + \|x_0 - x^\delta_{n-1,\alpha}\| \le \frac{\eta_h}{1 - r_h} + b_h + \frac{\eta}{1 - q}.$$
This completes the proof.

Let
$$Q := k_0\left(\frac{\eta_h}{1 - r_h} + b_h + \frac{\eta}{1 - q}\right) \qquad (33)$$
and
$$Q_{n,h} := \Gamma_{n,h} + Q\,\|F'(x^{h,\delta}_{n,\alpha})\|. \qquad (34)$$

Lemma 3.5 Let $Q_{n,h}$ be as in (34) and let the assumptions in Theorem 2.3 be satisfied. Then for all $n \ge 0$, $Q_{n,h} \le C_h$, where
$$C_h := \left(k_0\frac{\eta}{1 - q} + 1\right)\|F'(x_0)\| + Q\left(k_0\frac{\eta_h}{1 - r_h} + 1\right)\varrho.$$

Proof. Note that
$$\Gamma_{n,h} \le \|F'(x^\delta_{n,\alpha})\| = \sup_{\|v\| \le 1}\|F'(x^\delta_{n,\alpha})v\| \le \sup_{\|v\| \le 1}\|[F'(x^\delta_{n,\alpha}) - F'(x_0)]v\| + \sup_{\|v\| \le 1}\|F'(x_0)v\|$$
$$\le \sup_{\|v\| \le 1}\|F'(x_0)\Phi(x^\delta_{n,\alpha}, x_0, v)\| + \|F'(x_0)\| \le k_0\|F'(x_0)\|\,\|x^\delta_{n,\alpha} - x_0\| + \|F'(x_0)\| \le \left(k_0\frac{\eta}{1 - q} + 1\right)\|F'(x_0)\|. \qquad (35)$$
Similarly one can prove that
$$\|F'(x^{h,\delta}_{n,\alpha})\| \le \left(k_0\frac{\eta_h}{1 - r_h} + 1\right)\varrho. \qquad (36)$$
Now the proof follows from (35), (36) and the relation $Q_{n,h} \le \Gamma_{n,h} + Q\|F'(x^{h,\delta}_{n,\alpha})\|$.

Hereafter we assume that $2r_h < Q < 2$, so that $r_h < \frac{Q}{2} < 1$.

Theorem 3.6 Let $x^{h,\delta}_{n,\alpha}$ be as in (13) and $x^\delta_{n,\alpha}$ be as in (4), and let the assumptions in Theorem 3.3, Theorem 2.3 and Lemma 3.5 hold. Then we have the following estimate:
$$\|x^{h,\delta}_{n,\alpha} - x^\delta_{n,\alpha}\| \le \left(\frac{Q}{2}\right)^n b_h + \frac{2C_h\,\eta_h}{\alpha(Q - 2r_h)}\left(\frac{Q}{2}\right)^n.$$

Proof. Using the definitions (4) and (13) of the two sequences we have
$$x^{h,\delta}_{n,\alpha} - x^\delta_{n,\alpha} = x^{h,\delta}_{n-1,\alpha} - x^\delta_{n-1,\alpha} - \big(P_hF'(x^{h,\delta}_{n-1,\alpha})P_h + \alpha P_h\big)^{-1}P_h\big[F(x^{h,\delta}_{n-1,\alpha}) - y^\delta + \alpha(x^{h,\delta}_{n-1,\alpha} - x_0)\big]$$
$$\quad + \big(F'(x^\delta_{n-1,\alpha}) + \alpha I\big)^{-1}\big[F(x^\delta_{n-1,\alpha}) - y^\delta + \alpha(x^\delta_{n-1,\alpha} - x_0)\big]$$
$$= \Gamma_1 - \Gamma_2, \qquad (37)$$
where
$$\Gamma_1 = \big(F'(x^\delta_{n-1,\alpha}) + \alpha I\big)^{-1}\big[F'(x^\delta_{n-1,\alpha})(x^{h,\delta}_{n-1,\alpha} - x^\delta_{n-1,\alpha}) - \big(F(x^{h,\delta}_{n-1,\alpha}) - F(x^\delta_{n-1,\alpha})\big)\big]$$
and
$$\Gamma_2 = \big(F'(x^\delta_{n-1,\alpha}) + \alpha I\big)^{-1}\big[F'(x^\delta_{n-1,\alpha}) - P_hF'(x^{h,\delta}_{n-1,\alpha})\big](x^{h,\delta}_{n,\alpha} - x^{h,\delta}_{n-1,\alpha}).$$

Thus by Lemma 2.2, Lemma 3.4 and Assumption 1.2 we have
$$\|\Gamma_1\| = \Big\|\big(F'(x^\delta_{n-1,\alpha}) + \alpha I\big)^{-1}F'(x^\delta_{n-1,\alpha})\int_0^1\Phi\big(x^{h,\delta}_{n-1,\alpha} + t(x^\delta_{n-1,\alpha} - x^{h,\delta}_{n-1,\alpha}),\, x^\delta_{n-1,\alpha},\, x^\delta_{n-1,\alpha} - x^{h,\delta}_{n-1,\alpha}\big)\,dt\Big\|$$
$$\le k_0\|x^\delta_{n-1,\alpha} - x^{h,\delta}_{n-1,\alpha}\|\int_0^1\big\|(1 - t)(x^\delta_{n-1,\alpha} - x^{h,\delta}_{n-1,\alpha})\big\|\,dt \le \frac{k_0}{2}\|x^{h,\delta}_{n-1,\alpha} - x^\delta_{n-1,\alpha}\|^2 \le \frac{Q}{2}\|x^{h,\delta}_{n-1,\alpha} - x^\delta_{n-1,\alpha}\|, \qquad (38)$$
using $\|(F'(x^\delta_{n-1,\alpha}) + \alpha I)^{-1}F'(x^\delta_{n-1,\alpha})\| \le 1$ and, by (32) and (33), $k_0\|x^{h,\delta}_{n-1,\alpha} - x^\delta_{n-1,\alpha}\| \le Q$. Similarly,
$$\|\Gamma_2\| = \big\|\big(F'(x^\delta_{n-1,\alpha}) + \alpha I\big)^{-1}\big[(I - P_h)F'(x^\delta_{n-1,\alpha}) + P_h\big(F'(x^\delta_{n-1,\alpha}) - F'(x^{h,\delta}_{n-1,\alpha})\big)\big](x^{h,\delta}_{n,\alpha} - x^{h,\delta}_{n-1,\alpha})\big\|$$
$$\le \frac{\Gamma_{n-1,h}}{\alpha}\|x^{h,\delta}_{n,\alpha} - x^{h,\delta}_{n-1,\alpha}\| + \frac{k_0\|F'(x^{h,\delta}_{n-1,\alpha})\|\,\|x^{h,\delta}_{n-1,\alpha} - x^\delta_{n-1,\alpha}\|}{\alpha}\|x^{h,\delta}_{n,\alpha} - x^{h,\delta}_{n-1,\alpha}\| \le \frac{Q_{n-1,h}}{\alpha}\|x^{h,\delta}_{n,\alpha} - x^{h,\delta}_{n-1,\alpha}\|. \qquad (39)$$
Therefore by (37), (38), (39) and Lemma 3.5 we have
$$\|x^{h,\delta}_{n,\alpha} - x^\delta_{n,\alpha}\| \le \frac{Q}{2}\|x^{h,\delta}_{n-1,\alpha} - x^\delta_{n-1,\alpha}\| + \frac{C_h}{\alpha}\|x^{h,\delta}_{n,\alpha} - x^{h,\delta}_{n-1,\alpha}\|$$
$$\le \left(\frac{Q}{2}\right)^n\|x^{h,\delta}_{0,\alpha} - x^\delta_{0,\alpha}\| + \frac{C_h}{\alpha}\sum_{j=0}^{n-1}\left(\frac{Q}{2}\right)^j\|x^{h,\delta}_{n-j,\alpha} - x^{h,\delta}_{n-j-1,\alpha}\| \le \left(\frac{Q}{2}\right)^n b_h + \frac{C_h\,\eta_h}{\alpha}\sum_{j=0}^{n-1}\left(\frac{Q}{2}\right)^j r_h^{\,n-1-j}$$

$$\le \left(\frac{Q}{2}\right)^n b_h + \frac{C_h\,\eta_h}{\alpha}\left(\frac{Q}{2}\right)^{n-1}\sum_{j=0}^{n-1}\left(\frac{2r_h}{Q}\right)^j \le \left(\frac{Q}{2}\right)^n b_h + \frac{2C_h\,\eta_h}{\alpha(Q - 2r_h)}\left(\frac{Q}{2}\right)^n.$$
This completes the proof.

4 Error Bounds Under Source Conditions

In view of Theorem 2.3 and Theorem 3.6, to obtain an error estimate for $\|x^{h,\delta}_{n,\alpha} - \hat{x}\|$ it is enough to obtain an error estimate for $\|x^\delta_\alpha - \hat{x}\|$. It is known (cf. [13], Proposition 3.1) that
$$\|x^\delta_\alpha - x_\alpha\| \le \frac{\delta}{\alpha} \qquad (40)$$
and (cf. [11], Theorem 3.1) that
$$\|x_\alpha - \hat{x}\| \le (k_0r + 1)\,c_\varphi\,\varphi(\alpha), \qquad (41)$$
where $x_\alpha$ denotes the solution of (3) with exact data $y$ in place of $y^\delta$. Combining the estimates in Theorem 2.3 and Theorem 3.6 with (40) and (41), we obtain the following.

Theorem 4.1 Let $x^{h,\delta}_{n,\alpha}$ be as in (13) and let the assumptions in Theorem 2.3, Theorem 3.6 and Lemma 3.5 be satisfied. Then
$$\|x^{h,\delta}_{n,\alpha} - \hat{x}\| \le \|x^{h,\delta}_{n,\alpha} - x^\delta_{n,\alpha}\| + \|x^\delta_{n,\alpha} - x^\delta_\alpha\| + \|x^\delta_\alpha - x_\alpha\| + \|x_\alpha - \hat{x}\|$$
$$\le \left(\frac{Q}{2}\right)^n b_h + \frac{2C_h\,\eta_h}{\alpha(Q - 2r_h)}\left(\frac{Q}{2}\right)^n + \frac{q^{2^n - 1}\,\eta}{1 - q} + \frac{\delta}{\alpha} + (k_0r + 1)c_\varphi\,\varphi(\alpha).$$

Let
$$C := \max\left\{1 + b_h + \frac{2C_h\,\eta_h}{Q - 2r_h} + \frac{\eta}{1 - q},\;\; (k_0r + 1)c_\varphi\right\}$$
and let
$$n_\delta := \min\left\{n : \left(\frac{Q}{2}\right)^n \le \delta\right\}. \qquad (42)$$
Note that for $0 < \alpha < 1$ we have $\delta \le \frac{\delta}{\alpha}$. Thus by Theorem 4.1 we have the following theorem.

Theorem 4.2 Let $x^{h,\delta}_{n,\alpha}$ be as in (13) and let the assumptions in Theorem 2.3, Theorem 3.6 and Lemma 3.5 be satisfied. Let $n_\delta$ be as in (42). Then for $0 < \alpha < 1$ we have
$$\|x^{h,\delta}_{n_\delta,\alpha} - \hat{x}\| \le C\left(\varphi(\alpha) + \frac{\delta}{\alpha}\right). \qquad (43)$$
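Under the reconstruction of (42) used here, the stopping index is directly computable; in the sketch below, Q and delta are illustrative numbers only.

```python
# Stopping index (42): the first n with (Q/2)**n <= delta; the loop
# terminates because Q < 2 makes Q/2 < 1.
def stopping_index(Q, delta):
    n = 0
    while (Q / 2.0) ** n > delta:
        n += 1
    return n

print(stopping_index(Q=1.2, delta=1e-4))   # first n with 0.6**n <= 1e-4, i.e. n = 19
```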

4.1 A Priori Choice of the Parameter

Note that the error estimate $\varphi(\alpha) + \frac{\delta}{\alpha}$ in (43) is of optimal order if $\alpha := \alpha_\delta$ satisfies $\alpha_\delta\,\varphi(\alpha_\delta) = \delta$. Using the function $\psi(\lambda) := \lambda\,\varphi^{-1}(\lambda)$, $0 < \lambda \le a$, we have $\delta = \alpha_\delta\varphi(\alpha_\delta) = \psi(\varphi(\alpha_\delta))$, so that $\alpha_\delta = \varphi^{-1}(\psi^{-1}(\delta))$. Hence by (43) we have the following theorem.

Theorem 4.3 Let $\psi(\lambda) := \lambda\,\varphi^{-1}(\lambda)$ for $0 < \lambda \le a$, and let the assumptions of Theorem 4.2 hold. For $\delta > 0$, let $\alpha_\delta = \varphi^{-1}(\psi^{-1}(\delta))$ and let $n_\delta$ be as in (42). Then
$$\|x^{h,\delta}_{n_\delta,\alpha_\delta} - \hat{x}\| = O(\psi^{-1}(\delta)).$$

4.2 An Adaptive Choice of the Parameter

In this subsection we present a parameter choice rule based on the adaptive method studied in [10, 12]. In practice, the regularization parameter $\alpha$ is selected from some finite set
$$D_M(\alpha) := \{\alpha_i = \mu^i\alpha_0 : i = 0, 1, \ldots, M\}, \qquad (44)$$
where $\mu > 1$ and $M$ is such that $\alpha_M < 1 \le \alpha_{M+1}$. We choose $\alpha_0 := \delta^2$, because we expect to have an accuracy of order $O(\sqrt{\delta})$, and from Theorem 4.3 it follows that such an accuracy cannot be guaranteed for $\alpha < \delta^2$. Let
$$n_M := \min\left\{n : \left(\frac{Q}{2}\right)^n \le \delta\right\} \qquad (45)$$
and let $x_i := x^{h,\delta}_{n_M,\alpha_i}$. The parameter choice strategy that we consider in this paper selects $\alpha = \alpha_i$ from $D_M(\alpha)$ and operates only with the corresponding $x_i$, $i = 0, 1, \ldots, M$.

Theorem 4.4 Assume that there exists $i \in \{0, 1, \ldots, M\}$ such that $\varphi(\alpha_i) \le \frac{\delta}{\alpha_i}$. Let the assumptions of Theorem 4.2 and Theorem 4.3 hold, and let
$$l := \max\left\{i : \varphi(\alpha_i) \le \frac{\delta}{\alpha_i}\right\} < M$$
and
$$k := \max\left\{i : \|x_i - x_j\| \le 4C\frac{\delta}{\alpha_j},\; j = 0, 1, \ldots, i\right\}. \qquad (46)$$
Then $l \le k$ and
$$\|\hat{x} - x_k\| \le c\,\psi^{-1}(\delta), \qquad \text{where } c = 6C\mu.$$

Proof. To see that $l \le k$, it is enough to show that, for each $i \in \{1, 2, \ldots, M\}$,
$$\varphi(\alpha_i) \le \frac{\delta}{\alpha_i} \implies \|x_i - x_j\| \le 4C\frac{\delta}{\alpha_j}, \quad j = 0, 1, \ldots, i.$$

For $j \le i$, by (43) we have
$$\|x_i - x_j\| \le \|x_i - \hat{x}\| + \|\hat{x} - x_j\| \le C\varphi(\alpha_i) + C\frac{\delta}{\alpha_i} + C\varphi(\alpha_j) + C\frac{\delta}{\alpha_j} \le 2C\frac{\delta}{\alpha_i} + 2C\frac{\delta}{\alpha_j} \le 4C\frac{\delta}{\alpha_j}.$$
Thus the relation $l \le k$ is proved. Next we observe that
$$\|\hat{x} - x_k\| \le \|\hat{x} - x_l\| + \|x_l - x_k\| \le C\varphi(\alpha_l) + C\frac{\delta}{\alpha_l} + 4C\frac{\delta}{\alpha_l} \le 6C\frac{\delta}{\alpha_l}.$$
Now since $\alpha_l \le \alpha_\delta \le \alpha_{l+1} = \mu\alpha_l$, it follows that
$$\frac{\delta}{\alpha_l} \le \mu\frac{\delta}{\alpha_\delta} = \mu\,\varphi(\alpha_\delta) = \mu\,\psi^{-1}(\delta).$$
Thus $\|\hat{x} - x_k\| \le 6C\mu\,\psi^{-1}(\delta) = c\,\psi^{-1}(\delta)$ where $c = 6C\mu$. This completes the proof of the theorem.

5 Implementation of the Adaptive Choice Rule

In this section we provide an algorithm for the determination of a parameter fulfilling the balancing principle (46), together with a starting point for the iteration (13) approximating the unique solution $x^\delta_\alpha$ of (3). The choice of the starting point involves the following steps:

1. Choose $\alpha_0 = \delta^2$, $\mu > 1$ and $r_h < 1$.
2. Choose $x_0 \in D(F)$ such that $\|x_0 - \hat{x}\| \le \rho$ and $\left(1 + \frac{\gamma_{0,h}}{\alpha}\right)(k_0(b_h + \rho)^2 + b_h + \rho) + \frac{\delta}{\alpha} \le \eta_h \le \min\{C, r(1 - r_h)\}$.
3. Ensure that $2r_h < Q < 2$, where $Q$ is as in (33).
4. Choose $n_M = \min\{n : (\frac{Q}{2})^n \le \delta\}$.

The adaptive algorithm associated with the choice of the parameter specified in Theorem 4.4 then involves the following steps:

5.1 Algorithm

1. Set $i = 0$.
2. Solve for $x_i := x^{h,\delta}_{n_M,\alpha_i}$ using the iteration (13).
3. If $\|x_i - x_j\| > 4C\frac{\delta}{\alpha_j}$ for some $j < i$, then take $k = i - 1$ and stop.
4. Set $i = i + 1$ and return to step 2.

A code sketch of this loop is given below.
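In the following sketch (illustrative, not from the paper), `solve(alpha)` is a placeholder callable that runs $n_M$ steps of iteration (13) for the given $\alpha$ and returns $x^{h,\delta}_{n_M,\alpha}$; C, mu, alpha0, M and delta are assumed given as above.

```python
# Balancing-principle loop of Algorithm 5.1: sweep the grid alpha_i = mu**i * alpha0
# and stop at the first i violating the 4*C*delta/alpha_j closeness test of (46).
import numpy as np

def adaptive_choice(solve, alpha0, mu, M, delta, C):
    xs = []
    for i in range(M + 1):
        xs.append(solve(alpha0 * mu ** i))       # x_i := x^{h,delta}_{n_M, alpha_i}
        for j in range(i):
            if np.linalg.norm(xs[i] - xs[j]) > 4 * C * delta / (alpha0 * mu ** j):
                return i - 1, xs[i - 1]          # k = i - 1
    return M, xs[M]
```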

6 Concluding Remarks

In this paper we considered a new iterative method in a finite dimensional setting for approximately solving the nonlinear ill-posed operator equation $F(x) = y$ when only noisy data $y^\delta$ are available in place of the exact data $y$. It is assumed that $F$ is Fréchet differentiable in a neighborhood of some initial guess $x_0$ of the actual solution $\hat{x}$. The procedure involves finding the fixed point of the function
$$G_h(x) = x - \big(P_hF'(x)P_h + \alpha P_h\big)^{-1}P_h\big(F(x) - y^\delta + \alpha(x - x_0)\big)$$
in an iterative manner in a finite dimensional subspace $X_h$ of the Hilbert space $X$. Here $x_0$ is an initial guess and $P_h$ is the orthogonal projection onto $X_h$. For choosing the regularization parameter $\alpha$ we made use of the adaptive method suggested by Pereverzev and Schock in [12], and the stopping rule is based on a majorizing sequence.

Acknowledgements. The first author thanks the National Institute of Technology Karnataka, India, for financial support under seed money grant No. RGO/O.am/seed GRANT/16/9. The work of Atef I. Elmahdy is supported by the Indo-Egypt Cultural Exchange Programme 2007-2008, under the research fellowship of ICCR, India; BNG/171/7-8.

References

[1] A. G. Ramm, Inverse Problems: Mathematical and Analytical Techniques with Applications to Engineering, Springer, 2005.

[2] A. Bakushinsky and A. Smirnova, On application of generalized discrepancy principle to iterative methods for nonlinear ill-posed problems, Numerical Functional Analysis and Optimization, 26(1), (2005), 35-48.

[3] B. Blaschke, A. Neubauer and O. Scherzer, On convergence rates for the iteratively regularized Gauss-Newton method, IMA Journal of Numerical Analysis, 17(3), (1997), 421-436.

[4] H. W. Engl, M. Hanke and A. Neubauer, Regularization of Inverse Problems, Kluwer, Dordrecht, 1996.

[5] I. K. Argyros, Convergence and Applications of Newton-type Iterations, Springer, 2008.

[6] Qi-Nian Jin, Error estimates of some Newton-type methods for solving nonlinear inverse problems in Hilbert scales, Inverse Problems, 16(1), (2000), 187-197.

[7] Qi-Nian Jin, On the iteratively regularized Gauss-Newton method for solving nonlinear ill-posed problems, Mathematics of Computation, 69(232), (2000), 1603-1623.

[8] M. Hanke, A. Neubauer and O. Scherzer, A convergence analysis of the Landweber iteration for nonlinear ill-posed problems, Numerische Mathematik, 72(1), (1995), 21-37.

[9] P. Deuflhard, H. W. Engl and O. Scherzer, A convergence analysis of iterative methods for the solution of nonlinear ill-posed problems under affinely invariant conditions, Inverse Problems, 14(5), (1998), 1081-1106.

[10] P. Mathé and S. V. Pereverzev, Geometry of linear ill-posed problems in variable Hilbert scales, Inverse Problems, 19(3), (2003), 789-803.

[11] S. George and A. I. Elmahdy, A quadratic convergence yielding iterative method for nonlinear ill-posed operator equations, (2010) (communicated).

[12] S. V. Pereverzev and E. Schock, On the adaptive selection of the parameter in regularization of ill-posed problems, SIAM Journal on Numerical Analysis, 43(5), (2005), 2060-2076.

[13] U. Tautenhahn, On the method of Lavrentiev regularization for nonlinear ill-posed problems, Inverse Problems, 18(1), (2002), 191-207.

Received: June, 2010