An Iteratively Regularized Projection Method with Quadratic Convergence for Nonlinear Ill-posed Problems


Int. Journal of Math. Analysis, Vol. 4, 2010, no. 45, 2211-2228

An Iteratively Regularized Projection Method with Quadratic Convergence for Nonlinear Ill-posed Problems

Santhosh George
Department of Mathematical and Computational Sciences
National Institute of Technology Karnataka, Surathkal, India

Atef Ibrahim Elmahdy
Department of Mathematical and Computational Sciences
National Institute of Technology Karnataka, Surathkal, India

Abstract

An iteratively regularized projection method, which converges quadratically, is considered for obtaining a stable approximate solution to nonlinear ill-posed operator equations F(x) = y, where F : D(F) ⊆ X → X is a nonlinear monotone operator defined on a real Hilbert space X. We assume that only noisy data y^δ with ‖y − y^δ‖ ≤ δ are available. Under the assumption that the Fréchet derivative F′ of F is Lipschitz continuous, a choice of the regularization parameter α using an adaptive selection of the parameter and a stopping rule for the iteration index using a majorizing sequence are presented. We prove that, under a general source condition on x₀ − x̂, the error ‖x^{h,δ}_{n,α} − x̂‖ between the regularized approximation x^{h,δ}_{n,α} (x^{h,δ}_{0,α} := P_h x₀, where P_h is an orthogonal projection onto a finite dimensional subspace X_h of X) and the solution x̂ is of optimal order.

Mathematics Subject Classification: 65J20, 65J15, 47J06

Keywords: Nonlinear ill-posed operator, Monotone operator, Majorizing sequence, Regularized projection method, Quadratic convergence

1 Introduction

Let F : D(F) ⊆ X → X be a nonlinear monotone operator (see [13]) defined on a real Hilbert space X. We consider the problem of approximately solving the nonlinear ill-posed operator equation

F(x) = y. (1)

Throughout this paper we denote the inner product and the corresponding norm on X by ⟨·,·⟩ and ‖·‖ respectively. We assume that (1) has a solution, namely x̂, and that y^δ ∈ X are the available noisy data with

‖y − y^δ‖ ≤ δ. (2)

Then the problem of recovering x̂ from the noisy equation F(x) = y^δ is ill-posed, in the sense that the Fréchet derivative F′(·) is not boundedly invertible (see [1]). A well known method for regularizing (1) when F is monotone is the method of Lavrentiev regularization (see [13]). In this method the approximation x^δ_α is obtained by solving the singularly perturbed operator equation

F(x) + α(x − x₀) = y^δ. (3)

In practice, one has to deal with some sequence (x^δ_{n,α})_{n=1}^∞ converging to the solution x^δ_α of (3). Many authors have considered such sequences (see [2, 3, 6, 7, 9, 8]). In [11], George and Elmahdy considered an iterative regularization method,

x^δ_{n+1,α} = x^δ_{n,α} − (F′(x^δ_{n,α}) + αI)^{−1}(F(x^δ_{n,α}) − y^δ + α(x^δ_{n,α} − x₀)), (4)

where x^δ_{0,α} = x₀, and proved that (4) converges quadratically to the unique solution x^δ_α of (3).

Recall that a sequence (x_n) is said to converge quadratically to x* if there exists a positive number M, not necessarily less than 1, such that ‖x_{n+1} − x*‖ ≤ M‖x_n − x*‖² for all sufficiently large n, and the convergence of (x_n) to x* is said to be linear if there exists a positive number M ∈ (0, 1) such that ‖x_{n+1} − x*‖ ≤ M‖x_n − x*‖. Note that, regardless of the value of M, a quadratically convergent sequence will always eventually converge faster than a linearly convergent one.

The following assumptions are used for proving the results in [11] as well as the results in this paper.
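The iteration (4) can be sketched numerically. The following is a minimal illustration, not taken from the paper: it applies the regularized Newton step to a toy monotone operator F(x) = x + x³ on R², a finite-dimensional stand-in for the Hilbert space setting; the function names and parameter values are assumptions made solely for this example.

```python
import numpy as np

# Toy monotone operator on R^2, a finite-dimensional surrogate
# for the Hilbert-space setting: F(x) = x + x^3 componentwise.
def F(x):
    return x + x**3

def F_prime(x):
    # Frechet derivative of F: the diagonal Jacobian 1 + 3 x^2.
    return np.diag(1.0 + 3.0 * x**2)

def lavrentiev_newton(y_delta, x0, alpha, n_iter=10):
    """Iteration (4): x_{n+1} = x_n - (F'(x_n) + alpha I)^{-1}
    (F(x_n) - y_delta + alpha (x_n - x0))."""
    x = x0.copy()
    I = np.eye(len(x0))
    for _ in range(n_iter):
        residual = F(x) - y_delta + alpha * (x - x0)
        x = x - np.linalg.solve(F_prime(x) + alpha * I, residual)
    return x
```

With exact data and a small α the iterates settle on the preimage of y; the error roughly squares at each step, which is the quadratic convergence described above.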

Assumption 1.1. There exists r > 0 such that B_r(x̂) ⊆ D(F) and F is Fréchet differentiable at all x ∈ B_r(x̂).

Assumption 1.2. There exists a constant k₀ > 0 such that for every x, u ∈ B_r(x̂) and v ∈ X, there exists an element Φ(x, u, v) ∈ X satisfying

[F′(x) − F′(u)]v = F′(u)Φ(x, u, v), ‖Φ(x, u, v)‖ ≤ k₀‖v‖‖x − u‖

for all x, u ∈ B_r(x̂) and v ∈ X.

Assumption 1.3. There exists a continuous, strictly monotonically increasing function ϕ : (0, a] → (0, ∞) with a ≥ ‖F′(x̂)‖ satisfying lim_{λ→0} ϕ(λ) = 0, and there exists v ∈ X with ‖v‖ ≤ 1 such that x₀ − x̂ = ϕ(F′(x̂))v and

sup_{λ∈(0,a]} αϕ(λ)/(λ + α) ≤ c_ϕ ϕ(α), α ∈ (0, a].

The analysis in [11] as well as in this paper is based on majorizing sequences. Recall (see [5]) that a nonnegative sequence (t_n) is said to be a majorizing sequence of a sequence (x_n) in X if

‖x_{n+1} − x_n‖ ≤ t_{n+1} − t_n, n ≥ 0.

The majorizing sequence gives an a priori error estimate which can be used to determine the number of iterations needed to achieve a prescribed solution accuracy before the actual computation takes place.

The plan of this paper is as follows. In Section 2 we collect some results from [11] needed for the error estimates in this paper. In Section 3 we consider an iteratively regularized projection method for obtaining a sequence (x^{h,δ}_{n,α}) in a finite dimensional subspace X_h of X and prove that x^{h,δ}_{n,α} converges to x^δ_α; there we also obtain an estimate for ‖x^{h,δ}_{n,α} − x^δ_{n,α}‖. Using an error estimate for ‖x^δ_α − x̂‖ (see [11, 13]), we obtain an estimate for ‖x^{h,δ}_{n,α} − x̂‖ in Section 4. The error analysis for the order optimal result, using an adaptive selection of the parameter and a stopping rule based on a majorizing sequence, is also given in Section 4. Implementation of the adaptive choice of the parameter and the choice of the stopping rule are given in Section 5. Finally the paper ends with some concluding remarks in Section 6.

2 Preliminaries

In [11] the majorizing sequence (t_n), defined iteratively by

t₀ = 0, t₁ = η, t_{n+1} = t_n + (3k₀/2)(t_n − t_{n−1})², (5)

where k₀, η and q ∈ [0, 1) are nonnegative numbers such that

(3k₀/2)η ≤ q, (6)

was used for proving the quadratic convergence of the sequence (x^δ_{n,α}) to the unique solution x^δ_α of equation (3). For proving the results in [11] as well as the results in this paper we use the following lemma on majorization, which is a reformulation of a lemma in [5].

Lemma 2.1. Let (t_n) be a majorizing sequence for (x_n). If x* = lim_{n→∞} x_n exists and lim_{n→∞} t_n = t*, then

‖x* − x_n‖ ≤ t* − t_n, n ≥ 0. (7)

The following lemma, based on Assumption 1.2, will be used in due course.

Lemma 2.2. For u, v ∈ B_r(x₀),

F(u) − F(v) − F′(u)(u − v) = F′(u) ∫₀¹ Φ(v + t(u − v), u, u − v) dt.

Proof. Using the Fundamental Theorem of Integration, for u, v ∈ B_r(x₀) we have

F(u) − F(v) = ∫₀¹ F′(v + t(u − v))(u − v) dt and F′(u)(u − v) = ∫₀¹ F′(u)(u − v) dt,

and so

F(u) − F(v) − F′(u)(u − v) = ∫₀¹ [F′(v + t(u − v)) − F′(u)](u − v) dt,

so by Assumption 1.2 we have

F(u) − F(v) − F′(u)(u − v) = F′(u) ∫₀¹ Φ(v + t(u − v), u, u − v) dt.

This completes the proof of the Lemma.

Hereafter we assume that ‖x₀ − x̂‖ ≤ ρ and

k₀ρ²/2 + ρ + δ/α ≤ η ≤ min{2q/(3k₀), (r/2)(1 − q)}. (8)

Theorem 2.3 ([11], Theorem 2.1). Suppose Assumption 1.2 holds. Let 0 < t* := η/(1 − q) with η as in (8), and let (5) and (6) be satisfied. Then the sequence (x^δ_{n,α}) defined in (4) is well defined and x^δ_{n,α} ∈ B_{t*}(x₀) for all n ≥ 0. Further, (x^δ_{n,α}) is a Cauchy sequence in B_{t*}(x₀) and hence converges to x^δ_α ∈ B̄_{t*}(x₀), and F(x^δ_α) = y^δ + α(x₀ − x^δ_α). Moreover, the following estimates hold for all n ≥ 0:

‖x^δ_{n+1,α} − x^δ_{n,α}‖ ≤ t_{n+1} − t_n, (9)

‖x^δ_{n,α} − x^δ_α‖ ≤ t* − t_n ≤ q^{2^n−1} η/(1 − q), (10)

‖x^δ_{n+1,α} − x^δ_α‖ ≤ (3k₀/2)‖x^δ_{n,α} − x^δ_α‖². (11)

Remark 2.4. Note that (11) implies that (x^δ_{n,α}) converges quadratically to x^δ_α.

3 Iteratively Regularized Projection Method

Let H be a bounded subset of positive reals such that zero is a limit point of H, and let {P_h}_{h∈H} be a family of orthogonal projections from X into itself. We assume that

b_h := ‖(I − P_h)x₀‖ → 0 (12)

as h → 0. The above assumption is satisfied if P_h → I pointwise. Let

x^{h,δ}_{n+1,α} = x^{h,δ}_{n,α} − (P_h F′(x^{h,δ}_{n,α})P_h + αP_h)^{−1} P_h (F(x^{h,δ}_{n,α}) − y^δ + α(x^{h,δ}_{n,α} − x₀)), (13)

where x^{h,δ}_{0,α} := P_h x₀. Let

Γ_{n,h} := ‖(I − P_h)F′(x^δ_{n,α})‖, (14)

ϱ := ‖F′(P_h x₀)‖,

γ_{n,h} := ‖F′(x^{h,δ}_{n,α})(I − P_h)‖ (15)

and let (t̃_{n,h}), n ≥ 0, be defined iteratively by t̃_{0,h} = 0, t̃_{1,h} = η_h > 0, and

t̃_{n+1,h} = t̃_{n,h} + (1 + (k₀ϱ t̃_{n,h} + γ_{0,h})/α)(3k₀/2)(t̃_{n,h} − t̃_{n−1,h})², (16)

where k₀, η_h and r_h ∈ [0, 1) are nonnegative numbers.

Lemma 3.1. Assume there exist nonnegative numbers k₀, η_h and r_h ∈ [0, 1) such that

(1 + (k₀ϱη_h + γ_{0,h})/α)(3k₀/2)η_h ≤ r_h. (17)

Then the sequence (t̃_{n,h}) defined in (16) is increasing, bounded above by t̃*_h := η_h/(1 − r_h), and converges to some t̃_h with 0 < t̃_h ≤ η_h/(1 − r_h). Moreover, for n ≥ 0,

t̃_{n+1,h} − t̃_{n,h} ≤ r_h(t̃_{n,h} − t̃_{n−1,h}) ≤ r_h^n η_h (18)

and

t̃_h − t̃_{n,h} ≤ r_h^n η_h/(1 − r_h). (19)

Proof. Since the result is trivially true if η_h = 0, k₀ = 0 or r_h = 0, we assume that η_h ≠ 0, k₀ ≠ 0 and r_h ≠ 0. Observe that t̃_{i,h} ≥ t̃_{i−1,h} for all i ≥ 1. If

(1 + (k₀ϱ t̃_{i,h} + γ_{0,h})/α)(3k₀/2)(t̃_{i,h} − t̃_{i−1,h}) ≤ r_h, (20)

then the estimate (18) follows from (16). Thus we shall prove (20) by induction on i ≥ 1. For i = 1, (20) holds by (17). Suppose (20) holds for all i ≤ k for some k. Then by (16) we have

(1 + (k₀ϱ t̃_{k+1,h} + γ_{0,h})/α)(3k₀/2)(t̃_{k+1,h} − t̃_{k,h})
= (1 + (k₀ϱ t̃_{k+1,h} + γ_{0,h})/α)(3k₀/2)(1 + (k₀ϱ t̃_{k,h} + γ_{0,h})/α)(3k₀/2)(t̃_{k,h} − t̃_{k−1,h})²
= (1 + (k₀ϱ(t̃_{k+1,h} − t̃_{k,h}) + k₀ϱ t̃_{k,h} + γ_{0,h})/α)(3k₀/2)(1 + (k₀ϱ t̃_{k,h} + γ_{0,h})/α)(3k₀/2)(t̃_{k,h} − t̃_{k−1,h})²
≤ [(1 + (k₀ϱ t̃_{k,h} + γ_{0,h})/α)(3k₀/2)(t̃_{k,h} − t̃_{k−1,h})]²

 + (3k₀ϱ/(2α))(t̃_{k+1,h} − t̃_{k,h})(1 + (k₀ϱ t̃_{k,h} + γ_{0,h})/α)(3k₀/2)(t̃_{k,h} − t̃_{k−1,h})
≤ r_h² + (3k₀ϱ/(2α)) r_h (t̃_{k,h} − t̃_{k−1,h})² ≤ r_h² + (3k₀ϱ/(2α)) r_h (r_h^{k−1}η_h)² ≤ r_h, (21)

the last inequality following from the choice of η_h. Thus by induction (20) holds for all i ≥ 1, and for k ≥ 0

t̃_{k+1,h} ≤ t̃_{k,h} + r_h(t̃_{k,h} − t̃_{k−1,h}) ≤ ⋯ ≤ η_h(1 + r_h + r_h² + ⋯ + r_h^k) (22)
= η_h(1 − r_h^{k+1})/(1 − r_h) < η_h/(1 − r_h). (23)

Hence the sequence (t̃_{n,h}), n ≥ 0, is bounded above by η_h/(1 − r_h) and nondecreasing, so it converges to some t̃_h ≤ η_h/(1 − r_h). Further,

t̃_h − t̃_{n,h} = lim_{i→∞}(t̃_{n+i,h} − t̃_{n,h}) = lim_{i→∞} Σ_{j=0}^{i−1}(t̃_{n+1+j,h} − t̃_{n+j,h}) ≤ r_h^n η_h/(1 − r_h).

This completes the proof of the Lemma.

Hereafter we assume

η_h := (1 + γ_{0,h}/α)(k₀(b_h + ρ)²/2 + b_h + ρ) + δ/α ≤ min{C, (r/2)(1 − r_h)} (24)

where C := (1/(2k₀ϱ))[−(α + γ_{0,h}) + √((α + γ_{0,h})² + (8/3)αϱ r_h)].

Remark 3.2. We need the above assumption because, by the quadratic formula, (17) is equivalent to

η_h ≤ (1/(2k₀ϱ))[−(α + γ_{0,h}) + √((α + γ_{0,h})² + (8/3)αϱ r_h)] = C, (25)

so (24) guarantees that Lemma 3.1 applies; moreover (24) implies t̃_h ≤ η_h/(1 − r_h) ≤ r/2, so that the iterates remain in B_r(x̂).
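The projected iteration (13) can be sketched in a small numerical example. Below, P_h is taken as the orthogonal projection onto the first m coordinates of R^n (a stand-in for the subspace X_h), and F(x) = x + x³ is a toy monotone operator; the operator, the use of a pseudoinverse, and all parameter values are assumptions of this illustration, not part of the paper.

```python
import numpy as np

# Toy monotone operator on R^n: F(x) = x + x^3 componentwise.
def F(x):
    return x + x**3

def F_prime(x):
    return np.diag(1.0 + 3.0 * x**2)

def projected_iteration(y_delta, x0, alpha, m, n_iter=15):
    """Iteration (13) with P_h = orthogonal projection onto the
    first m coordinates of R^n."""
    n = len(x0)
    P = np.zeros((n, n))
    P[:m, :m] = np.eye(m)
    x = P @ x0                              # x^{h,delta}_0 = P_h x0
    for _ in range(n_iter):
        A = P @ F_prime(x) @ P + alpha * P  # P_h F'(x) P_h + alpha P_h
        rhs = P @ (F(x) - y_delta + alpha * (x - x0))
        # A vanishes on the orthogonal complement of X_h, so invert it
        # only on the range of P_h via the pseudoinverse.
        x = x - np.linalg.pinv(A) @ rhs
    return x
```

Iterates stay in the subspace by construction; if the sought solution happens to lie in X_h, the projected iteration recovers it just as the full iteration would.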

Theorem 3.3. Let the assumptions in Lemma 3.1, with η_h as in (24), and Assumption 1.2 be satisfied. Then the sequence (t̃_{n,h}) defined in (16) is a majorizing sequence of the sequence (x^{h,δ}_{n,α}) defined in (13), and x^{h,δ}_{n,α} ∈ B_{t̃_h}(P_h x₀) for all n ≥ 0.

Proof. Let

G(x) = x − R_α(x)^{−1} P_h[F(x) − y^δ + α(x − x₀)], where R_α(x)^{−1} := (P_h F′(x)P_h + αP_h)^{−1}.

Then for u, v ∈ B_{t̃_h}(P_h x₀) we have

G(u) − G(v) = u − v − R_α(u)^{−1}P_h[F(u) − y^δ + α(u − x₀)] + R_α(v)^{−1}P_h[F(v) − y^δ + α(v − x₀)]
= R_α(u)^{−1}[R_α(u)(u − v) − P_h(F(u) − F(v)) − αP_h(u − v)] + (R_α(v)^{−1} − R_α(u)^{−1})P_h(F(v) − y^δ + α(v − x₀))
= R_α(u)^{−1}P_h[F′(u)P_h(u − v) − (F(u) − F(v))] + R_α(u)^{−1}(R_α(u) − R_α(v))R_α(v)^{−1}P_h(F(v) − y^δ + α(v − x₀))
= R_α(u)^{−1}P_h[F′(u)P_h(u − v) − (F(u) − F(v))] − R_α(u)^{−1}(R_α(u) − R_α(v))(G(v) − v).

Now since G(x^{h,δ}_{n,α}) = x^{h,δ}_{n+1,α}, R_α(x)^{−1} = R_α(x)^{−1}P_h = P_h R_α(x)^{−1} and P_h(x^{h,δ}_{n,α} − x^{h,δ}_{n−1,α}) = x^{h,δ}_{n,α} − x^{h,δ}_{n−1,α}, we have

x^{h,δ}_{n+1,α} − x^{h,δ}_{n,α} = G(x^{h,δ}_{n,α}) − G(x^{h,δ}_{n−1,α})
= R_α(x^{h,δ}_{n,α})^{−1}P_h[F′(x^{h,δ}_{n,α})(x^{h,δ}_{n,α} − x^{h,δ}_{n−1,α}) − (F(x^{h,δ}_{n,α}) − F(x^{h,δ}_{n−1,α}))]
 − R_α(x^{h,δ}_{n,α})^{−1}(R_α(x^{h,δ}_{n,α}) − R_α(x^{h,δ}_{n−1,α}))(x^{h,δ}_{n,α} − x^{h,δ}_{n−1,α})
= R_α(x^{h,δ}_{n,α})^{−1}[F′(x^{h,δ}_{n,α})P_h + F′(x^{h,δ}_{n,α})(I − P_h)] ∫₀¹ Φ(x^{h,δ}_{n,α} + t(x^{h,δ}_{n−1,α} − x^{h,δ}_{n,α}), x^{h,δ}_{n,α}, x^{h,δ}_{n−1,α} − x^{h,δ}_{n,α}) dt
 + R_α(x^{h,δ}_{n,α})^{−1}[F′(x^{h,δ}_{n,α})P_h + F′(x^{h,δ}_{n,α})(I − P_h)] Φ(x^{h,δ}_{n−1,α}, x^{h,δ}_{n,α}, x^{h,δ}_{n,α} − x^{h,δ}_{n−1,α}),

the last-but-one step following from Lemma 2.2. So by Assumption 1.2 and the relation

‖R_α(x^{h,δ}_{n,α})^{−1}[F′(x^{h,δ}_{n,α})P_h + F′(x^{h,δ}_{n,α})(I − P_h)]‖ ≤ 1 + γ_{n,h}/α, (26)

we have

‖x^{h,δ}_{n+1,α} − x^{h,δ}_{n,α}‖ ≤ (1 + γ_{n,h}/α)(3k₀/2)‖x^{h,δ}_{n,α} − x^{h,δ}_{n−1,α}‖². (27)

Note that

γ_{n,h} = ‖F′(x^{h,δ}_{n,α})(I − P_h)‖ = ‖[F′(x^{h,δ}_{n,α}) − F′(P_h x₀) + F′(P_h x₀)](I − P_h)‖
≤ ‖F′(P_h x₀)(I − P_h)‖ + ‖[F′(x^{h,δ}_{n,α}) − F′(P_h x₀)](I − P_h)‖
≤ γ_{0,h} + k₀‖F′(P_h x₀)‖‖x^{h,δ}_{n,α} − P_h x₀‖
= γ_{0,h} + k₀ϱ‖x^{h,δ}_{n,α} − P_h x₀‖. (28)

So by (27) we have

‖x^{h,δ}_{n+1,α} − x^{h,δ}_{n,α}‖ ≤ (1 + (γ_{0,h} + k₀ϱ‖x^{h,δ}_{n,α} − P_h x₀‖)/α)(3k₀/2)‖x^{h,δ}_{n,α} − x^{h,δ}_{n−1,α}‖². (29)

Now we shall prove that the sequence (t̃_{n,h}) defined in (16) is a majorizing sequence of the sequence (x^{h,δ}_{n,α}) and that x^{h,δ}_{n,α} ∈ B_{t̃_h}(P_h x₀) for all n ≥ 0. Note that F(x̂) = y, so

‖x^{h,δ}_{1,α} − P_h x₀‖ = ‖(P_h F′(P_h x₀)P_h + αP_h)^{−1} P_h (F(P_h x₀) − y^δ)‖
= ‖(P_h F′(P_h x₀)P_h + αP_h)^{−1} P_h (F(P_h x₀) − F(x̂) + y − y^δ)‖
≤ ‖(P_h F′(P_h x₀)P_h + αP_h)^{−1} P_h (F(P_h x₀) − F(x̂) − F′(P_h x₀)(P_h x₀ − x̂))‖
 + ‖(P_h F′(P_h x₀)P_h + αP_h)^{−1} P_h F′(P_h x₀)(P_h x₀ − x̂)‖ + ‖(P_h F′(P_h x₀)P_h + αP_h)^{−1} P_h (y − y^δ)‖

≤ (1 + γ_{0,h}/α)(k₀‖P_h x₀ − x̂‖²/2 + ‖P_h x₀ − x̂‖) + δ/α
≤ (1 + γ_{0,h}/α)(k₀(b_h + ρ)²/2 + b_h + ρ) + δ/α = η_h,

the last-but-one step following from Assumption 1.2, (26) and the inequality ‖P_h x₀ − x̂‖ ≤ b_h + ρ. So ‖x^{h,δ}_{1,α} − P_h x₀‖ ≤ t̃_{1,h} − t̃_{0,h}. Assume that

‖x^{h,δ}_{i+1,α} − x^{h,δ}_{i,α}‖ ≤ t̃_{i+1,h} − t̃_{i,h}, i ≤ k, (30)

for some k. Then

‖x^{h,δ}_{k+1,α} − P_h x₀‖ ≤ ‖x^{h,δ}_{k+1,α} − x^{h,δ}_{k,α}‖ + ‖x^{h,δ}_{k,α} − x^{h,δ}_{k−1,α}‖ + ⋯ + ‖x^{h,δ}_{1,α} − P_h x₀‖
≤ (t̃_{k+1,h} − t̃_{k,h}) + (t̃_{k,h} − t̃_{k−1,h}) + ⋯ + (t̃_{1,h} − t̃_{0,h}) = t̃_{k+1,h} ≤ t̃_h.

So x^{h,δ}_{i+1,α} ∈ B_{t̃_h}(P_h x₀) for all i ≤ k. Therefore by (29) and (30) we have

‖x^{h,δ}_{k+2,α} − x^{h,δ}_{k+1,α}‖ ≤ (3k₀/2)(1 + (γ_{0,h} + k₀ϱ‖x^{h,δ}_{k+1,α} − P_h x₀‖)/α)‖x^{h,δ}_{k+1,α} − x^{h,δ}_{k,α}‖²
≤ (3k₀/2)(1 + (γ_{0,h} + k₀ϱ t̃_{k+1,h})/α)(t̃_{k+1,h} − t̃_{k,h})² = t̃_{k+2,h} − t̃_{k+1,h}.

Thus by induction ‖x^{h,δ}_{n+1,α} − x^{h,δ}_{n,α}‖ ≤ t̃_{n+1,h} − t̃_{n,h} for all n ≥ 0, and hence (t̃_{n,h}), n ≥ 0, is a majorizing sequence of the sequence (x^{h,δ}_{n,α}). In particular ‖x^{h,δ}_{n,α} − P_h x₀‖ ≤ t̃_{n,h} ≤ t̃_h, i.e., x^{h,δ}_{n,α} ∈ B_{t̃_h}(P_h x₀) for all n ≥ 0. Hence

‖x^{h,δ}_{n,α} − P_h x₀‖ ≤ t̃_h ≤ η_h/(1 − r_h). (31)

Lemma 3.4. Let x^{h,δ}_{n,α} be as in (13) and x^δ_{n,α} be as in (4). Let the assumptions in Theorem 2.3 and Theorem 3.3 hold. Then

‖x^{h,δ}_{n−1,α} − x^δ_{n−1,α}‖ ≤ η_h/(1 − r_h) + b_h + η/(1 − q). (32)

Proof. Note that

‖x^{h,δ}_{n−1,α} − x^δ_{n−1,α}‖ = ‖x^{h,δ}_{n−1,α} − P_h x₀ + (P_h − I)x₀ + x₀ − x^δ_{n−1,α}‖
≤ ‖x^{h,δ}_{n−1,α} − P_h x₀‖ + ‖(P_h − I)x₀‖ + ‖x₀ − x^δ_{n−1,α}‖ ≤ η_h/(1 − r_h) + b_h + η/(1 − q).

This completes the proof.

Let

Q := k₀(η_h/(1 − r_h) + b_h + η/(1 − q)) (33)

and

Q_{n,h} := Γ_{n,h} + Q‖F′(x^{h,δ}_{n,α})‖. (34)

Lemma 3.5. Let Q_{n,h} be as in (34) and let the assumptions in Theorem 2.3 be satisfied. Then for all n ≥ 0, Q_{n,h} ≤ C_h, where C_h := (k₀η/(1 − q) + 1)‖F′(x₀)‖ + Q(k₀η_h/(1 − r_h) + 1)ϱ.

Proof. Note that

Γ_{n,h} ≤ ‖F′(x^δ_{n,α})‖ = sup_{‖v‖≤1} ‖F′(x^δ_{n,α})v‖ = sup_{‖v‖≤1} ‖[F′(x^δ_{n,α}) − F′(x₀) + F′(x₀)]v‖
≤ sup_{‖v‖≤1} ‖[F′(x^δ_{n,α}) − F′(x₀)]v‖ + sup_{‖v‖≤1} ‖F′(x₀)v‖
= sup_{‖v‖≤1} ‖F′(x₀)Φ(x^δ_{n,α}, x₀, v)‖ + ‖F′(x₀)‖
≤ k₀‖F′(x₀)‖‖x^δ_{n,α} − x₀‖ + ‖F′(x₀)‖
≤ k₀‖F′(x₀)‖η/(1 − q) + ‖F′(x₀)‖ = (k₀η/(1 − q) + 1)‖F′(x₀)‖. (35)

Similarly one can prove that

‖F′(x^{h,δ}_{n,α})‖ ≤ (k₀η_h/(1 − r_h) + 1)ϱ. (36)

Now the proof follows from (35), (36) and the relation Q_{n,h} ≤ ‖F′(x^δ_{n,α})‖ + Q‖F′(x^{h,δ}_{n,α})‖.

Hereafter we assume that 1 < Q < 2 and 2r_h < 1, so that r_h < 1/2 < Q/2 < 1.
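The recursion (16) is easy to tabulate, which gives a concrete feel for Lemma 3.1. In the sketch below, rho_ stands for ϱ = ‖F′(P_h x₀)‖ and gamma0 for γ_{0,h}; the numerical values are invented solely so that condition (17) holds with r_h = 0.3.

```python
def discretized_majorizing_sequence(k0, alpha, rho_, gamma0, eta_h, n_terms=10):
    """Sequence (16): t~_0 = 0, t~_1 = eta_h, and
    t~_{n+1} = t~_n + (1 + (k0*rho_*t~_n + gamma0)/alpha)
               * (3*k0/2) * (t~_n - t~_{n-1})**2."""
    t = [0.0, eta_h]
    for _ in range(n_terms - 1):
        step = (1.0 + (k0 * rho_ * t[-1] + gamma0) / alpha) \
               * 1.5 * k0 * (t[-1] - t[-2]) ** 2
        t.append(t[-1] + step)
    return t
```

For k₀ = 0.5, α = 1, ϱ = 1, γ_{0,h} = 0.1 and η_h = 0.3, the left side of (17) is (1 + 0.25) · 0.75 · 0.3 ≈ 0.281 ≤ r_h = 0.3, and the computed sequence is increasing with shrinking increments, staying below the bound η_h/(1 − r_h) ≈ 0.429 from Lemma 3.1.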

Theorem 3.6. Let x^{h,δ}_{n,α} be as in (13) and x^δ_{n,α} be as in (4). Let the assumptions in Theorem 3.3, Theorem 2.3 and Lemma 3.5 hold. Then we have the following estimate:

‖x^{h,δ}_{n,α} − x^δ_{n,α}‖ ≤ (Q/2)^n b_h + (C_h η_h/α)(Q/2)^n/((Q/2) − r_h).

Proof. Note that

x^{h,δ}_{n,α} − x^δ_{n,α} = x^{h,δ}_{n−1,α} − x^δ_{n−1,α} − (P_h F′(x^{h,δ}_{n−1,α})P_h + αP_h)^{−1}P_h[F(x^{h,δ}_{n−1,α}) − y^δ + α(x^{h,δ}_{n−1,α} − x₀)]
 + (F′(x^δ_{n−1,α}) + αI)^{−1}[F(x^δ_{n−1,α}) − y^δ + α(x^δ_{n−1,α} − x₀)]
= x^{h,δ}_{n−1,α} − x^δ_{n−1,α} − (F′(x^δ_{n−1,α}) + αI)^{−1}[F(x^{h,δ}_{n−1,α}) − F(x^δ_{n−1,α}) + α(x^{h,δ}_{n−1,α} − x^δ_{n−1,α})]
 − [(P_h F′(x^{h,δ}_{n−1,α})P_h + αP_h)^{−1}P_h − (F′(x^δ_{n−1,α}) + αI)^{−1}][F(x^{h,δ}_{n−1,α}) − y^δ + α(x^{h,δ}_{n−1,α} − x₀)]
= (F′(x^δ_{n−1,α}) + αI)^{−1}[F′(x^δ_{n−1,α})(x^{h,δ}_{n−1,α} − x^δ_{n−1,α}) − (F(x^{h,δ}_{n−1,α}) − F(x^δ_{n−1,α}))]
 − (F′(x^δ_{n−1,α}) + αI)^{−1}[F′(x^δ_{n−1,α}) − P_h F′(x^{h,δ}_{n−1,α})](x^{h,δ}_{n,α} − x^{h,δ}_{n−1,α})
= Γ₁ − Γ₂, (37)

where

Γ₁ = (F′(x^δ_{n−1,α}) + αI)^{−1}[F′(x^δ_{n−1,α})(x^{h,δ}_{n−1,α} − x^δ_{n−1,α}) − (F(x^{h,δ}_{n−1,α}) − F(x^δ_{n−1,α}))]

and

Γ₂ = (F′(x^δ_{n−1,α}) + αI)^{−1}[F′(x^δ_{n−1,α}) − P_h F′(x^{h,δ}_{n−1,α})](x^{h,δ}_{n,α} − x^{h,δ}_{n−1,α}).

Thus by Lemma 2.2, Lemma 3.4 and Assumption 1.2 we have

Γ₁ = (F′(x^δ_{n−1,α}) + αI)^{−1}F′(x^δ_{n−1,α}) ∫₀¹ Φ(x^{h,δ}_{n−1,α} + t(x^δ_{n−1,α} − x^{h,δ}_{n−1,α}), x^δ_{n−1,α}, x^δ_{n−1,α} − x^{h,δ}_{n−1,α}) dt,

so that

‖Γ₁‖ ≤ k₀ ∫₀¹ ‖x^δ_{n−1,α} − x^{h,δ}_{n−1,α}‖ ‖(1 − t)(x^δ_{n−1,α} − x^{h,δ}_{n−1,α})‖ dt
= (k₀/2)‖x^{h,δ}_{n−1,α} − x^δ_{n−1,α}‖² ≤ (Q/2)‖x^{h,δ}_{n−1,α} − x^δ_{n−1,α}‖, (38)

using ‖(F′(x^δ_{n−1,α}) + αI)^{−1}F′(x^δ_{n−1,α})‖ ≤ 1, Lemma 3.4 and (33). Further,

Γ₂ = (F′(x^δ_{n−1,α}) + αI)^{−1}[(I − P_h)F′(x^δ_{n−1,α}) + P_h(F′(x^δ_{n−1,α}) − F′(x^{h,δ}_{n−1,α}))](x^{h,δ}_{n,α} − x^{h,δ}_{n−1,α}),

so that

‖Γ₂‖ ≤ ‖(F′(x^δ_{n−1,α}) + αI)^{−1}(I − P_h)F′(x^δ_{n−1,α})(x^{h,δ}_{n,α} − x^{h,δ}_{n−1,α})‖
 + ‖(F′(x^δ_{n−1,α}) + αI)^{−1}P_h F′(x^{h,δ}_{n−1,α})Φ(x^δ_{n−1,α}, x^{h,δ}_{n−1,α}, x^{h,δ}_{n,α} − x^{h,δ}_{n−1,α})‖
≤ (Γ_{n−1,h}/α)‖x^{h,δ}_{n,α} − x^{h,δ}_{n−1,α}‖ + (k₀/α)‖F′(x^{h,δ}_{n−1,α})‖‖x^{h,δ}_{n−1,α} − x^δ_{n−1,α}‖‖x^{h,δ}_{n,α} − x^{h,δ}_{n−1,α}‖
≤ (1/α)[Γ_{n−1,h} + Q‖F′(x^{h,δ}_{n−1,α})‖]‖x^{h,δ}_{n,α} − x^{h,δ}_{n−1,α}‖ = (Q_{n−1,h}/α)‖x^{h,δ}_{n,α} − x^{h,δ}_{n−1,α}‖. (39)

Therefore by (37), (38), (39) and Lemma 3.5 we have

‖x^{h,δ}_{n,α} − x^δ_{n,α}‖ ≤ (Q/2)‖x^{h,δ}_{n−1,α} − x^δ_{n−1,α}‖ + (C_h/α)‖x^{h,δ}_{n,α} − x^{h,δ}_{n−1,α}‖
≤ (Q/2)^n ‖x^{h,δ}_{0,α} − x^δ_{0,α}‖ + (C_h η_h/α)[r_h^{n−1} + (Q/2)r_h^{n−2} + ⋯ + (Q/2)^{n−1}]

≤ (Q/2)^n b_h + (C_h η_h/α)((Q/2)^n − r_h^n)/((Q/2) − r_h)
≤ (Q/2)^n b_h + (C_h η_h/α)(Q/2)^n/((Q/2) − r_h),

since x^{h,δ}_{0,α} − x^δ_{0,α} = P_h x₀ − x₀ and r_h < Q/2. This completes the proof.

4 Error Bounds Under Source Conditions

In view of Theorem 2.3 and Theorem 3.6, to obtain an error estimate for ‖x^{h,δ}_{n,α} − x̂‖ it is enough to obtain an error estimate for ‖x^δ_α − x̂‖. It is known (cf. [13], Proposition 3.1) that

‖x^δ_α − x_α‖ ≤ δ/α (40)

and (cf. [11], Theorem 3.1) that

‖x_α − x̂‖ ≤ (k₀r + 1)c_ϕ ϕ(α). (41)

Combining the estimates in Theorem 2.3 and Theorem 3.6 with (40) and (41), we obtain the following.

Theorem 4.1. Let x^{h,δ}_{n,α} be as in (13) and let the assumptions in Theorem 2.3, Theorem 3.6 and Lemma 3.5 be satisfied. Then

‖x^{h,δ}_{n,α} − x̂‖ ≤ ‖x^{h,δ}_{n,α} − x^δ_{n,α}‖ + ‖x^δ_{n,α} − x^δ_α‖ + ‖x^δ_α − x_α‖ + ‖x_α − x̂‖
≤ (Q/2)^n b_h + (C_h η_h/α)(Q/2)^n/((Q/2) − r_h) + q^{2^n−1} η/(1 − q) + δ/α + (k₀r + 1)c_ϕ ϕ(α).

Let

C := max{1 + b_h + C_h η_h/((Q/2) − r_h) + η/(1 − q), (k₀r + 1)c_ϕ}

and let

n_δ := min{n : (Q/2)^n ≤ δ}. (42)

Note that for 0 < α < 1, δ ≤ δ/α. Thus by Theorem 4.1 we have the following theorem.

Theorem 4.2. Let x^{h,δ}_{n,α} be as in (13) and let the assumptions in Theorem 2.3, Theorem 3.6 and Lemma 3.5 be satisfied. Let n_δ be as in (42). Then for 0 < α < 1 we have

‖x^{h,δ}_{n_δ,α} − x̂‖ ≤ C(ϕ(α) + δ/α). (43)
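The stopping index (42) is cheap to compute, since (Q/2)^n decays geometrically. A minimal sketch, assuming 1 < Q < 2 as above; the function name and the sample values in the usage note are illustrative:

```python
def stopping_index(Q, delta):
    """Stopping rule (42): the smallest n with (Q/2)**n <= delta,
    assuming 1 < Q < 2 so that 0 < Q/2 < 1."""
    n = 0
    while (Q / 2.0) ** n > delta:
        n += 1
    return n
```

For example, with Q = 1 the rule needs n_δ with 0.5^n ≤ δ, so n_δ grows only logarithmically in 1/δ.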

4.1 A Priori Choice of the Parameter

Note that the error estimate ϕ(α) + δ/α in (43) is of optimal order if α := α_δ satisfies α_δ ϕ(α_δ) = δ. Now, using the function ψ(λ) := λϕ^{−1}(λ), 0 < λ ≤ a, we have δ = α_δ ϕ(α_δ) = ψ(ϕ(α_δ)), so that α_δ = ϕ^{−1}(ψ^{−1}(δ)). Hence by (43) we have the following.

Theorem 4.3. Let ψ(λ) := λϕ^{−1}(λ) for 0 < λ ≤ a, and let the assumptions in Theorem 4.2 hold. For δ > 0, let α_δ = ϕ^{−1}(ψ^{−1}(δ)) and let n_δ be as in (42). Then

‖x^{h,δ}_{n_δ,α_δ} − x̂‖ = O(ψ^{−1}(δ)).

4.2 An Adaptive Choice of the Parameter

In this subsection we present a parameter choice rule based on the adaptive method studied in [10, 12]. In practice, the regularization parameter α is often selected from some finite set

D_M(α₀) := {α_i = µ^i α₀ : i = 0, 1, …, M} (44)

where µ > 1 and M is such that α_M < 1 ≤ α_{M+1}. We choose α₀ := δ², because we expect to have an accuracy of order O(√δ), and from Theorem 4.3 it follows that such an accuracy cannot be guaranteed for α < δ². Let

n_M = min{n : (Q/2)^n ≤ δ} (45)

and let x_i := x^{h,δ}_{n_M,α_i}. The parameter choice strategy that we consider in this paper selects α = α_i from D_M(α₀) and operates only with the corresponding x_i, i = 0, 1, …, M.

Theorem 4.4. Assume that there exists i ∈ {0, 1, …, M} such that ϕ(α_i) ≤ δ/α_i. Let the assumptions of Theorem 4.2 and Theorem 4.3 hold, and let

l := max{i : ϕ(α_i) ≤ δ/α_i} < M,
k := max{i : ‖x_i − x_j‖ ≤ 4C δ/α_j, j = 0, 1, …, i}. (46)

Then l ≤ k and

‖x̂ − x_k‖ ≤ cψ^{−1}(δ), where c = 6Cµ.

Proof. To see that l ≤ k, it is enough to show that, for each i ∈ {1, …, M},

ϕ(α_i) ≤ δ/α_i ⟹ ‖x_i − x_j‖ ≤ 4C δ/α_j, j = 0, 1, …, i.

For j ≤ i, by (43) we have

‖x_i − x_j‖ ≤ ‖x_i − x̂‖ + ‖x̂ − x_j‖ ≤ Cϕ(α_i) + Cδ/α_i + Cϕ(α_j) + Cδ/α_j ≤ 2Cδ/α_i + 2Cδ/α_j ≤ 4Cδ/α_j.

Thus the relation l ≤ k is proved. Next we observe that

‖x̂ − x_k‖ ≤ ‖x̂ − x_l‖ + ‖x_l − x_k‖ ≤ Cϕ(α_l) + Cδ/α_l + 4Cδ/α_l ≤ 6Cδ/α_l.

Now, since α_{l+1} = µα_l and α_l ≤ α_δ ≤ α_{l+1}, it follows that

δ/α_l = µδ/α_{l+1} ≤ µδ/α_δ = µϕ(α_δ) = µψ^{−1}(δ).

Thus

‖x̂ − x_k‖ ≤ 6Cµψ^{−1}(δ) ≤ cψ^{−1}(δ), where c = 6Cµ.

This completes the proof of the theorem.

5 Implementation of the Adaptive Choice Rule

In this section we provide an algorithm for the determination of a parameter fulfilling the balancing principle (46), and also provide a starting point for the iteration (13) approximating the unique solution x^δ_α of (3). The choice of the starting point involves the following steps:

Choose α₀ = δ², µ > 1 and r_h < 1.

Choose x₀ ∈ D(F) such that ‖x₀ − x̂‖ ≤ ρ and (1 + γ_{0,h}/α)(k₀(b_h + ρ)²/2 + b_h + ρ) + δ/α ≤ min{C, (r/2)(1 − r_h)}.

Choose Q with 1 < Q < 2, where Q is as in (33).

Choose n_M such that n_M = min{n : (Q/2)^n ≤ δ}.

Finally, the adaptive algorithm associated with the choice of the parameter specified in Theorem 4.4 involves the following steps:

Algorithm.
1. Set i = 0.
2. Solve for x_i := x^{h,δ}_{n_M,α_i} by using the iteration (13).
3. If ‖x_i − x_j‖ > 4Cδ/α_j for some j ≤ i, then take k = i − 1 and stop.
4. Set i = i + 1 and return to step 2.

6 Concluding Remarks

In this paper we considered a new iterative method in the finite dimensional setting for approximately solving the nonlinear ill-posed operator equation F(x) = y, when only noisy data y^δ are available in place of the exact data y. It is assumed that F is Fréchet differentiable in a neighborhood of some initial guess x₀ of the actual solution x̂. The procedure involves finding the fixed point of the function

G_h(x) = x − (P_h F′(x)P_h + αP_h)^{−1} P_h (F(x) − y^δ + α(x − x₀))

in an iterative manner in a finite dimensional subspace X_h of the Hilbert space X. Here x₀ is an initial guess and P_h is the orthogonal projection onto X_h. For choosing the regularization parameter α we made use of the adaptive method suggested by Pereverzev and Schock in [12], and the stopping rule is based on a majorizing sequence.

Acknowledgements

The first author thanks the National Institute of Technology Karnataka, India, for financial support under seed money grant No. RGO/O.am/Seed GRANT/16/9. The work of Atef I. Elmahdy is supported by the Indo-Egypt Cultural Exchange Programme 2007-2008, under the research fellowship of ICCR, India; BNG/171/2007-08.

References

[1] A. G. Ramm, Inverse Problems: Mathematical and Analytical Techniques with Applications to Engineering, Springer, 2005.

[2] A. Bakushinsky and A. Smirnova, On application of generalized discrepancy principle to iterative methods for nonlinear ill-posed problems, Numerical Func. Anal. and Optimization, 26(1) (2005).

[3] B. Blaschke, A. Neubauer and O. Scherzer, On convergence rates for the iteratively regularized Gauss-Newton method, IMA Journal of Numerical Analysis, 17(3) (1997).

[4] H. W. Engl, M. Hanke and A. Neubauer, Regularization of Inverse Problems, Kluwer, Dordrecht, 1996.

[5] I. K. Argyros, Convergence and Applications of Newton-type Iterations, Springer, 2008.

[6] Jin Qi-nian, Error estimates of some Newton-type methods for solving nonlinear inverse problems in Hilbert scales, Inverse Problems, 16(1) (2000).

[7] Jin Qi-nian, On the iteratively regularized Gauss-Newton method for solving nonlinear ill-posed problems, Mathematics of Computation, 69 (2000).

[8] M. Hanke, A. Neubauer and O. Scherzer, A convergence analysis of the Landweber iteration for nonlinear ill-posed problems, Numer. Math., 72(1) (1995), 21-37.

[9] P. Deuflhard, H. W. Engl and O. Scherzer, A convergence analysis of iterative methods for the solution of nonlinear ill-posed problems under affinely invariant conditions, Inverse Problems, 14(5) (1998).

[10] P. Mathé and S. V. Pereverzev, Geometry of linear ill-posed problems in variable Hilbert scales, Inverse Problems, 19(3) (2003), 789-803.

[11] S. George and A. I. Elmahdy, A quadratic convergence yielding iterative method for nonlinear ill-posed operator equations, (2010) (Communicated).

[12] S. V. Pereverzev and E. Schock, On the adaptive selection of the parameter in regularization of ill-posed problems, SIAM J. Numer. Anal., 43(5) (2005), 2060-2076.

[13] U. Tautenhahn, On the method of Lavrentiev regularization for nonlinear ill-posed problems, Inverse Problems, 18(1) (2002), 191-207.

Received: June, 2010


More information

Iterative Solution of a Matrix Riccati Equation Arising in Stochastic Control

Iterative Solution of a Matrix Riccati Equation Arising in Stochastic Control Iterative Solution of a Matrix Riccati Equation Arising in Stochastic Control Chun-Hua Guo Dedicated to Peter Lancaster on the occasion of his 70th birthday We consider iterative methods for finding the

More information

Viscosity Iterative Approximating the Common Fixed Points of Non-expansive Semigroups in Banach Spaces

Viscosity Iterative Approximating the Common Fixed Points of Non-expansive Semigroups in Banach Spaces Viscosity Iterative Approximating the Common Fixed Points of Non-expansive Semigroups in Banach Spaces YUAN-HENG WANG Zhejiang Normal University Department of Mathematics Yingbing Road 688, 321004 Jinhua

More information

University of Houston, Department of Mathematics Numerical Analysis, Fall 2005

University of Houston, Department of Mathematics Numerical Analysis, Fall 2005 3 Numerical Solution of Nonlinear Equations and Systems 3.1 Fixed point iteration Reamrk 3.1 Problem Given a function F : lr n lr n, compute x lr n such that ( ) F(x ) = 0. In this chapter, we consider

More information

Weak and strong convergence theorems of modified SP-iterations for generalized asymptotically quasi-nonexpansive mappings

Weak and strong convergence theorems of modified SP-iterations for generalized asymptotically quasi-nonexpansive mappings Mathematica Moravica Vol. 20:1 (2016), 125 144 Weak and strong convergence theorems of modified SP-iterations for generalized asymptotically quasi-nonexpansive mappings G.S. Saluja Abstract. The aim of

More information

On a family of gradient type projection methods for nonlinear ill-posed problems

On a family of gradient type projection methods for nonlinear ill-posed problems On a family of gradient type projection methods for nonlinear ill-posed problems A. Leitão B. F. Svaiter September 28, 2016 Abstract We propose and analyze a family of successive projection methods whose

More information

Author(s) Huang, Feimin; Matsumura, Akitaka; Citation Osaka Journal of Mathematics. 41(1)

Author(s) Huang, Feimin; Matsumura, Akitaka; Citation Osaka Journal of Mathematics. 41(1) Title On the stability of contact Navier-Stokes equations with discont free b Authors Huang, Feimin; Matsumura, Akitaka; Citation Osaka Journal of Mathematics. 4 Issue 4-3 Date Text Version publisher URL

More information

MATH MEASURE THEORY AND FOURIER ANALYSIS. Contents

MATH MEASURE THEORY AND FOURIER ANALYSIS. Contents MATH 3969 - MEASURE THEORY AND FOURIER ANALYSIS ANDREW TULLOCH Contents 1. Measure Theory 2 1.1. Properties of Measures 3 1.2. Constructing σ-algebras and measures 3 1.3. Properties of the Lebesgue measure

More information

3 Compact Operators, Generalized Inverse, Best- Approximate Solution

3 Compact Operators, Generalized Inverse, Best- Approximate Solution 3 Compact Operators, Generalized Inverse, Best- Approximate Solution As we have already heard in the lecture a mathematical problem is well - posed in the sense of Hadamard if the following properties

More information

Robust error estimates for regularization and discretization of bang-bang control problems

Robust error estimates for regularization and discretization of bang-bang control problems Robust error estimates for regularization and discretization of bang-bang control problems Daniel Wachsmuth September 2, 205 Abstract We investigate the simultaneous regularization and discretization of

More information

Morozov s discrepancy principle for Tikhonov-type functionals with non-linear operators

Morozov s discrepancy principle for Tikhonov-type functionals with non-linear operators Morozov s discrepancy principle for Tikhonov-type functionals with non-linear operators Stephan W Anzengruber 1 and Ronny Ramlau 1,2 1 Johann Radon Institute for Computational and Applied Mathematics,

More information

Levenberg-Marquardt method in Banach spaces with general convex regularization terms

Levenberg-Marquardt method in Banach spaces with general convex regularization terms Levenberg-Marquardt method in Banach spaces with general convex regularization terms Qinian Jin Hongqi Yang Abstract We propose a Levenberg-Marquardt method with general uniformly convex regularization

More information

Convergence Theorems of Approximate Proximal Point Algorithm for Zeroes of Maximal Monotone Operators in Hilbert Spaces 1

Convergence Theorems of Approximate Proximal Point Algorithm for Zeroes of Maximal Monotone Operators in Hilbert Spaces 1 Int. Journal of Math. Analysis, Vol. 1, 2007, no. 4, 175-186 Convergence Theorems of Approximate Proximal Point Algorithm for Zeroes of Maximal Monotone Operators in Hilbert Spaces 1 Haiyun Zhou Institute

More information

Affine covariant Semi-smooth Newton in function space

Affine covariant Semi-smooth Newton in function space Affine covariant Semi-smooth Newton in function space Anton Schiela March 14, 2018 These are lecture notes of my talks given for the Winter School Modern Methods in Nonsmooth Optimization that was held

More information

Numerical Methods for Differential Equations Mathematical and Computational Tools

Numerical Methods for Differential Equations Mathematical and Computational Tools Numerical Methods for Differential Equations Mathematical and Computational Tools Gustaf Söderlind Numerical Analysis, Lund University Contents V4.16 Part 1. Vector norms, matrix norms and logarithmic

More information

APPROXIMATING SOLUTIONS FOR THE SYSTEM OF REFLEXIVE BANACH SPACE

APPROXIMATING SOLUTIONS FOR THE SYSTEM OF REFLEXIVE BANACH SPACE Bulletin of Mathematical Analysis and Applications ISSN: 1821-1291, URL: http://www.bmathaa.org Volume 2 Issue 3(2010), Pages 32-39. APPROXIMATING SOLUTIONS FOR THE SYSTEM OF φ-strongly ACCRETIVE OPERATOR

More information

Ann. Polon. Math., 95, N1,(2009),

Ann. Polon. Math., 95, N1,(2009), Ann. Polon. Math., 95, N1,(29), 77-93. Email: nguyenhs@math.ksu.edu Corresponding author. Email: ramm@math.ksu.edu 1 Dynamical systems method for solving linear finite-rank operator equations N. S. Hoang

More information

THROUGHOUT this paper, we let C be a nonempty

THROUGHOUT this paper, we let C be a nonempty Strong Convergence Theorems of Multivalued Nonexpansive Mappings and Maximal Monotone Operators in Banach Spaces Kriengsak Wattanawitoon, Uamporn Witthayarat and Poom Kumam Abstract In this paper, we prove

More information

Local strong convexity and local Lipschitz continuity of the gradient of convex functions

Local strong convexity and local Lipschitz continuity of the gradient of convex functions Local strong convexity and local Lipschitz continuity of the gradient of convex functions R. Goebel and R.T. Rockafellar May 23, 2007 Abstract. Given a pair of convex conjugate functions f and f, we investigate

More information

Fixed point theorems for Ćirić type generalized contractions defined on cyclic representations

Fixed point theorems for Ćirić type generalized contractions defined on cyclic representations Available online at www.tjnsa.com J. Nonlinear Sci. Appl. 8 (2015), 1257 1264 Research Article Fixed point theorems for Ćirić type generalized contractions defined on cyclic representations Adrian Magdaş

More information

A Regularized Directional Derivative-Based Newton Method for Inverse Singular Value Problems

A Regularized Directional Derivative-Based Newton Method for Inverse Singular Value Problems A Regularized Directional Derivative-Based Newton Method for Inverse Singular Value Problems Wei Ma Zheng-Jian Bai September 18, 2012 Abstract In this paper, we give a regularized directional derivative-based

More information

Dynamical systems method (DSM) for selfadjoint operators

Dynamical systems method (DSM) for selfadjoint operators Dynamical systems method (DSM) for selfadjoint operators A.G. Ramm Mathematics Department, Kansas State University, Manhattan, KS 6656-262, USA ramm@math.ksu.edu http://www.math.ksu.edu/ ramm Abstract

More information

Nonlinear equations. Norms for R n. Convergence orders for iterative methods

Nonlinear equations. Norms for R n. Convergence orders for iterative methods Nonlinear equations Norms for R n Assume that X is a vector space. A norm is a mapping X R with x such that for all x, y X, α R x = = x = αx = α x x + y x + y We define the following norms on the vector

More information

On the Local Convergence of Regula-falsi-type Method for Generalized Equations

On the Local Convergence of Regula-falsi-type Method for Generalized Equations Journal of Advances in Applied Mathematics, Vol., No. 3, July 017 https://dx.doi.org/10.606/jaam.017.300 115 On the Local Convergence of Regula-falsi-type Method for Generalized Equations Farhana Alam

More information

Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping.

Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping. Minimization Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping. 1 Minimization A Topological Result. Let S be a topological

More information

A derivative-free nonmonotone line search and its application to the spectral residual method

A derivative-free nonmonotone line search and its application to the spectral residual method IMA Journal of Numerical Analysis (2009) 29, 814 825 doi:10.1093/imanum/drn019 Advance Access publication on November 14, 2008 A derivative-free nonmonotone line search and its application to the spectral

More information

Iterative common solutions of fixed point and variational inequality problems

Iterative common solutions of fixed point and variational inequality problems Available online at www.tjnsa.com J. Nonlinear Sci. Appl. 9 (2016), 1882 1890 Research Article Iterative common solutions of fixed point and variational inequality problems Yunpeng Zhang a, Qing Yuan b,

More information

Kantorovich s Majorants Principle for Newton s Method

Kantorovich s Majorants Principle for Newton s Method Kantorovich s Majorants Principle for Newton s Method O. P. Ferreira B. F. Svaiter January 17, 2006 Abstract We prove Kantorovich s theorem on Newton s method using a convergence analysis which makes clear,

More information

Regularization for a Common Solution of a System of Ill-Posed Equations Involving Linear Bounded Mappings 1

Regularization for a Common Solution of a System of Ill-Posed Equations Involving Linear Bounded Mappings 1 Applied Mathematical Sciences, Vol. 5, 2011, no. 76, 3781-3788 Regularization for a Common Solution of a System of Ill-Posed Equations Involving Linear Bounded Mappings 1 Nguyen Buong and Nguyen Dinh Dung

More information

A NEW ITERATIVE METHOD FOR THE SPLIT COMMON FIXED POINT PROBLEM IN HILBERT SPACES. Fenghui Wang

A NEW ITERATIVE METHOD FOR THE SPLIT COMMON FIXED POINT PROBLEM IN HILBERT SPACES. Fenghui Wang A NEW ITERATIVE METHOD FOR THE SPLIT COMMON FIXED POINT PROBLEM IN HILBERT SPACES Fenghui Wang Department of Mathematics, Luoyang Normal University, Luoyang 470, P.R. China E-mail: wfenghui@63.com ABSTRACT.

More information

Functional Analysis Exercise Class

Functional Analysis Exercise Class Functional Analysis Exercise Class Week: January 18 Deadline to hand in the homework: your exercise class on week January 5 9. Exercises with solutions (1) a) Show that for every unitary operators U, V,

More information

Nonlinear Analysis 71 (2009) Contents lists available at ScienceDirect. Nonlinear Analysis. journal homepage:

Nonlinear Analysis 71 (2009) Contents lists available at ScienceDirect. Nonlinear Analysis. journal homepage: Nonlinear Analysis 71 2009 2744 2752 Contents lists available at ScienceDirect Nonlinear Analysis journal homepage: www.elsevier.com/locate/na A nonlinear inequality and applications N.S. Hoang A.G. Ramm

More information

you expect to encounter difficulties when trying to solve A x = b? 4. A composite quadrature rule has error associated with it in the following form

you expect to encounter difficulties when trying to solve A x = b? 4. A composite quadrature rule has error associated with it in the following form Qualifying exam for numerical analysis (Spring 2017) Show your work for full credit. If you are unable to solve some part, attempt the subsequent parts. 1. Consider the following finite difference: f (0)

More information

THE INVERSE FUNCTION THEOREM

THE INVERSE FUNCTION THEOREM THE INVERSE FUNCTION THEOREM W. PATRICK HOOPER The implicit function theorem is the following result: Theorem 1. Let f be a C 1 function from a neighborhood of a point a R n into R n. Suppose A = Df(a)

More information

APPLICATIONS OF DIFFERENTIABILITY IN R n.

APPLICATIONS OF DIFFERENTIABILITY IN R n. APPLICATIONS OF DIFFERENTIABILITY IN R n. MATANIA BEN-ARTZI April 2015 Functions here are defined on a subset T R n and take values in R m, where m can be smaller, equal or greater than n. The (open) ball

More information

arxiv: v1 [math.na] 28 Jan 2009

arxiv: v1 [math.na] 28 Jan 2009 The Dynamical Systems Method for solving nonlinear equations with monotone operators arxiv:0901.4377v1 [math.na] 28 Jan 2009 N. S. Hoang and A. G. Ramm Mathematics Department, Kansas State University,

More information

Some Properties of the Augmented Lagrangian in Cone Constrained Optimization

Some Properties of the Augmented Lagrangian in Cone Constrained Optimization MATHEMATICS OF OPERATIONS RESEARCH Vol. 29, No. 3, August 2004, pp. 479 491 issn 0364-765X eissn 1526-5471 04 2903 0479 informs doi 10.1287/moor.1040.0103 2004 INFORMS Some Properties of the Augmented

More information

arxiv: v1 [math.na] 16 Jan 2018

arxiv: v1 [math.na] 16 Jan 2018 A FAST SUBSPACE OPTIMIZATION METHOD FOR NONLINEAR INVERSE PROBLEMS IN BANACH SPACES WITH AN APPLICATION IN PARAMETER IDENTIFICATION ANNE WALD arxiv:1801.05221v1 [math.na] 16 Jan 2018 Abstract. We introduce

More information

An Iterative Procedure for Solving the Riccati Equation A 2 R RA 1 = A 3 + RA 4 R. M.THAMBAN NAIR (I.I.T. Madras)

An Iterative Procedure for Solving the Riccati Equation A 2 R RA 1 = A 3 + RA 4 R. M.THAMBAN NAIR (I.I.T. Madras) An Iterative Procedure for Solving the Riccati Equation A 2 R RA 1 = A 3 + RA 4 R M.THAMBAN NAIR (I.I.T. Madras) Abstract Let X 1 and X 2 be complex Banach spaces, and let A 1 BL(X 1 ), A 2 BL(X 2 ), A

More information

Lecture 11 Hyperbolicity.

Lecture 11 Hyperbolicity. Lecture 11 Hyperbolicity. 1 C 0 linearization near a hyperbolic point 2 invariant manifolds Hyperbolic linear maps. Let E be a Banach space. A linear map A : E E is called hyperbolic if we can find closed

More information

B. Appendix B. Topological vector spaces

B. Appendix B. Topological vector spaces B.1 B. Appendix B. Topological vector spaces B.1. Fréchet spaces. In this appendix we go through the definition of Fréchet spaces and their inductive limits, such as they are used for definitions of function

More information

Iteration-complexity of first-order penalty methods for convex programming

Iteration-complexity of first-order penalty methods for convex programming Iteration-complexity of first-order penalty methods for convex programming Guanghui Lan Renato D.C. Monteiro July 24, 2008 Abstract This paper considers a special but broad class of convex programing CP)

More information

THE CYCLIC DOUGLAS RACHFORD METHOD FOR INCONSISTENT FEASIBILITY PROBLEMS

THE CYCLIC DOUGLAS RACHFORD METHOD FOR INCONSISTENT FEASIBILITY PROBLEMS THE CYCLIC DOUGLAS RACHFORD METHOD FOR INCONSISTENT FEASIBILITY PROBLEMS JONATHAN M. BORWEIN AND MATTHEW K. TAM Abstract. We analyse the behaviour of the newly introduced cyclic Douglas Rachford algorithm

More information

WEAK CONVERGENCE THEOREMS FOR EQUILIBRIUM PROBLEMS WITH NONLINEAR OPERATORS IN HILBERT SPACES

WEAK CONVERGENCE THEOREMS FOR EQUILIBRIUM PROBLEMS WITH NONLINEAR OPERATORS IN HILBERT SPACES Fixed Point Theory, 12(2011), No. 2, 309-320 http://www.math.ubbcluj.ro/ nodeacj/sfptcj.html WEAK CONVERGENCE THEOREMS FOR EQUILIBRIUM PROBLEMS WITH NONLINEAR OPERATORS IN HILBERT SPACES S. DHOMPONGSA,

More information

Unconstrained optimization

Unconstrained optimization Chapter 4 Unconstrained optimization An unconstrained optimization problem takes the form min x Rnf(x) (4.1) for a target functional (also called objective function) f : R n R. In this chapter and throughout

More information

Statistical Inverse Problems and Instrumental Variables

Statistical Inverse Problems and Instrumental Variables Statistical Inverse Problems and Instrumental Variables Thorsten Hohage Institut für Numerische und Angewandte Mathematik University of Göttingen Workshop on Inverse and Partial Information Problems: Methodology

More information

************************************* Applied Analysis I - (Advanced PDE I) (Math 940, Fall 2014) Baisheng Yan

************************************* Applied Analysis I - (Advanced PDE I) (Math 940, Fall 2014) Baisheng Yan ************************************* Applied Analysis I - (Advanced PDE I) (Math 94, Fall 214) by Baisheng Yan Department of Mathematics Michigan State University yan@math.msu.edu Contents Chapter 1.

More information

A range condition for polyconvex variational regularization

A range condition for polyconvex variational regularization www.oeaw.ac.at A range condition for polyconvex variational regularization C. Kirisits, O. Scherzer RICAM-Report 2018-04 www.ricam.oeaw.ac.at A range condition for polyconvex variational regularization

More information

Analysis Finite and Infinite Sets The Real Numbers The Cantor Set

Analysis Finite and Infinite Sets The Real Numbers The Cantor Set Analysis Finite and Infinite Sets Definition. An initial segment is {n N n n 0 }. Definition. A finite set can be put into one-to-one correspondence with an initial segment. The empty set is also considered

More information

Improved Complexity of a Homotopy Method for Locating an Approximate Zero

Improved Complexity of a Homotopy Method for Locating an Approximate Zero Punjab University Journal of Mathematics (ISSN 116-2526) Vol. 5(2)(218) pp. 1-1 Improved Complexity of a Homotopy Method for Locating an Approximate Zero Ioannis K. Argyros Department of Mathematical Sciences,

More information

On the simplest expression of the perturbed Moore Penrose metric generalized inverse

On the simplest expression of the perturbed Moore Penrose metric generalized inverse Annals of the University of Bucharest (mathematical series) 4 (LXII) (2013), 433 446 On the simplest expression of the perturbed Moore Penrose metric generalized inverse Jianbing Cao and Yifeng Xue Communicated

More information

3 (Due ). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure?

3 (Due ). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure? MA 645-4A (Real Analysis), Dr. Chernov Homework assignment 1 (Due ). Show that the open disk x 2 + y 2 < 1 is a countable union of planar elementary sets. Show that the closed disk x 2 + y 2 1 is a countable

More information

SHRINKING PROJECTION METHOD FOR A SEQUENCE OF RELATIVELY QUASI-NONEXPANSIVE MULTIVALUED MAPPINGS AND EQUILIBRIUM PROBLEM IN BANACH SPACES

SHRINKING PROJECTION METHOD FOR A SEQUENCE OF RELATIVELY QUASI-NONEXPANSIVE MULTIVALUED MAPPINGS AND EQUILIBRIUM PROBLEM IN BANACH SPACES U.P.B. Sci. Bull., Series A, Vol. 76, Iss. 2, 2014 ISSN 1223-7027 SHRINKING PROJECTION METHOD FOR A SEQUENCE OF RELATIVELY QUASI-NONEXPANSIVE MULTIVALUED MAPPINGS AND EQUILIBRIUM PROBLEM IN BANACH SPACES

More information

ITERATIVE METHODS FOR SOLVING A NONLINEAR BOUNDARY INVERSE PROBLEM IN GLACIOLOGY S. AVDONIN, V. KOZLOV, D. MAXWELL, AND M. TRUFFER

ITERATIVE METHODS FOR SOLVING A NONLINEAR BOUNDARY INVERSE PROBLEM IN GLACIOLOGY S. AVDONIN, V. KOZLOV, D. MAXWELL, AND M. TRUFFER ITERATIVE METHODS FOR SOLVING A NONLINEAR BOUNDARY INVERSE PROBLEM IN GLACIOLOGY S. AVDONIN, V. KOZLOV, D. MAXWELL, AND M. TRUFFER Abstract. We address a Cauchy problem for a nonlinear elliptic PDE arising

More information

A CONVERGENCE ANALYSIS OF THE NEWTON-TYPE REGULARIZATION CG-REGINN WITH APPLICATION TO IMPEDANCE TOMOGRAPHY

A CONVERGENCE ANALYSIS OF THE NEWTON-TYPE REGULARIZATION CG-REGINN WITH APPLICATION TO IMPEDANCE TOMOGRAPHY A CONVERGENCE ANALYSIS OF THE NEWTON-TYPE REGULARIZATION CG-REGINN WITH APPLICATION TO IMPEDANCE TOMOGRAPHY ARMIN LECHLEITER AND ANDREAS RIEDER January 8, 2007 Abstract. The Newton-type regularization

More information

Fixed point iteration Numerical Analysis Math 465/565

Fixed point iteration Numerical Analysis Math 465/565 Fixed point iteration Numerical Analysis Math 465/565 1 Fixed Point Iteration Suppose we wanted to solve : f(x) = cos(x) x =0 or cos(x) =x We might consider a iteration of this type : x k+1 = cos(x k )

More information

Journal of Complexity. New general convergence theory for iterative processes and its applications to Newton Kantorovich type theorems

Journal of Complexity. New general convergence theory for iterative processes and its applications to Newton Kantorovich type theorems Journal of Complexity 26 (2010) 3 42 Contents lists available at ScienceDirect Journal of Complexity journal homepage: www.elsevier.com/locate/jco New general convergence theory for iterative processes

More information

THEOREMS, ETC., FOR MATH 515

THEOREMS, ETC., FOR MATH 515 THEOREMS, ETC., FOR MATH 515 Proposition 1 (=comment on page 17). If A is an algebra, then any finite union or finite intersection of sets in A is also in A. Proposition 2 (=Proposition 1.1). For every

More information

A double projection method for solving variational inequalities without monotonicity

A double projection method for solving variational inequalities without monotonicity A double projection method for solving variational inequalities without monotonicity Minglu Ye Yiran He Accepted by Computational Optimization and Applications, DOI: 10.1007/s10589-014-9659-7,Apr 05, 2014

More information

On Generalized Set-Valued Variational Inclusions

On Generalized Set-Valued Variational Inclusions Journal of Mathematical Analysis and Applications 26, 23 240 (200) doi:0.006/jmaa.200.7493, available online at http://www.idealibrary.com on On Generalized Set-Valued Variational Inclusions Li-Wei Liu

More information

16 1 Basic Facts from Functional Analysis and Banach Lattices

16 1 Basic Facts from Functional Analysis and Banach Lattices 16 1 Basic Facts from Functional Analysis and Banach Lattices 1.2.3 Banach Steinhaus Theorem Another fundamental theorem of functional analysis is the Banach Steinhaus theorem, or the Uniform Boundedness

More information

THE DYNAMICAL SYSTEMS METHOD FOR SOLVING NONLINEAR EQUATIONS WITH MONOTONE OPERATORS

THE DYNAMICAL SYSTEMS METHOD FOR SOLVING NONLINEAR EQUATIONS WITH MONOTONE OPERATORS Asian-European Journal of Mathematics Vol. 3, No. 1 (2010) 57 105 c World Scientific Publishing Company THE DYNAMICAL SYSTEMS METHOD FOR SOLVING NONLINEAR EQUATIONS WITH MONOTONE OPERATORS N. S. Hoang

More information

for all u C, where F : X X, X is a Banach space with its dual X and C X

for all u C, where F : X X, X is a Banach space with its dual X and C X ROMAI J., 6, 1(2010), 41 45 PROXIMAL POINT METHODS FOR VARIATIONAL INEQUALITIES INVOLVING REGULAR MAPPINGS Corina L. Chiriac Department of Mathematics, Bioterra University, Bucharest, Romania corinalchiriac@yahoo.com

More information

Conditional stability versus ill-posedness for operator equations with monotone operators in Hilbert space

Conditional stability versus ill-posedness for operator equations with monotone operators in Hilbert space Conditional stability versus ill-posedness for operator equations with monotone operators in Hilbert space Radu Ioan Boț and Bernd Hofmann September 16, 2016 Abstract In the literature on singular perturbation

More information

arxiv: v1 [math.ca] 5 Mar 2015

arxiv: v1 [math.ca] 5 Mar 2015 arxiv:1503.01809v1 [math.ca] 5 Mar 2015 A note on a global invertibility of mappings on R n Marek Galewski July 18, 2017 Abstract We provide sufficient conditions for a mapping f : R n R n to be a global

More information

CONTROLLABILITY OF NONLINEAR DISCRETE SYSTEMS

CONTROLLABILITY OF NONLINEAR DISCRETE SYSTEMS Int. J. Appl. Math. Comput. Sci., 2002, Vol.2, No.2, 73 80 CONTROLLABILITY OF NONLINEAR DISCRETE SYSTEMS JERZY KLAMKA Institute of Automatic Control, Silesian University of Technology ul. Akademicka 6,

More information

x 2 x n r n J(x + t(x x ))(x x )dt. For warming-up we start with methods for solving a single equation of one variable.

x 2 x n r n J(x + t(x x ))(x x )dt. For warming-up we start with methods for solving a single equation of one variable. Maria Cameron 1. Fixed point methods for solving nonlinear equations We address the problem of solving an equation of the form (1) r(x) = 0, where F (x) : R n R n is a vector-function. Eq. (1) can be written

More information

Your first day at work MATH 806 (Fall 2015)

Your first day at work MATH 806 (Fall 2015) Your first day at work MATH 806 (Fall 2015) 1. Let X be a set (with no particular algebraic structure). A function d : X X R is called a metric on X (and then X is called a metric space) when d satisfies

More information

Nonlinear stabilization via a linear observability

Nonlinear stabilization via a linear observability via a linear observability Kaïs Ammari Department of Mathematics University of Monastir Joint work with Fathia Alabau-Boussouira Collocated feedback stabilization Outline 1 Introduction and main result

More information

An Accelerated Hybrid Proximal Extragradient Method for Convex Optimization and its Implications to Second-Order Methods

An Accelerated Hybrid Proximal Extragradient Method for Convex Optimization and its Implications to Second-Order Methods An Accelerated Hybrid Proximal Extragradient Method for Convex Optimization and its Implications to Second-Order Methods Renato D.C. Monteiro B. F. Svaiter May 10, 011 Revised: May 4, 01) Abstract This

More information

A strongly polynomial algorithm for linear systems having a binary solution

A strongly polynomial algorithm for linear systems having a binary solution A strongly polynomial algorithm for linear systems having a binary solution Sergei Chubanov Institute of Information Systems at the University of Siegen, Germany e-mail: sergei.chubanov@uni-siegen.de 7th

More information

PERTURBATION THEORY FOR NONLINEAR DIRICHLET PROBLEMS

PERTURBATION THEORY FOR NONLINEAR DIRICHLET PROBLEMS Annales Academiæ Scientiarum Fennicæ Mathematica Volumen 28, 2003, 207 222 PERTURBATION THEORY FOR NONLINEAR DIRICHLET PROBLEMS Fumi-Yuki Maeda and Takayori Ono Hiroshima Institute of Technology, Miyake,

More information

7 Complete metric spaces and function spaces

7 Complete metric spaces and function spaces 7 Complete metric spaces and function spaces 7.1 Completeness Let (X, d) be a metric space. Definition 7.1. A sequence (x n ) n N in X is a Cauchy sequence if for any ɛ > 0, there is N N such that n, m

More information

Viscosity approximation methods for the implicit midpoint rule of asymptotically nonexpansive mappings in Hilbert spaces

Viscosity approximation methods for the implicit midpoint rule of asymptotically nonexpansive mappings in Hilbert spaces Available online at www.tjnsa.com J. Nonlinear Sci. Appl. 9 016, 4478 4488 Research Article Viscosity approximation methods for the implicit midpoint rule of asymptotically nonexpansive mappings in Hilbert

More information

Normed & Inner Product Vector Spaces

Normed & Inner Product Vector Spaces Normed & Inner Product Vector Spaces ECE 174 Introduction to Linear & Nonlinear Optimization Ken Kreutz-Delgado ECE Department, UC San Diego Ken Kreutz-Delgado (UC San Diego) ECE 174 Fall 2016 1 / 27 Normed

More information